# LLM Prompt Engineering for Behavioral Strategy
This guide provides structured prompts, reasoning chains, and evaluation frameworks specifically designed for LLMs to apply Behavioral Strategy principles effectively.
## Core Prompt Templates

### 1. Problem Market Fit Analysis
```
## Behavioral Strategy Analysis: Problem Market Fit

Given the following context:
- Target Users: {user_description}
- Suspected Problem: {problem_statement}
- Current Solutions: {existing_solutions}
- User Research Data: {research_findings}

Apply the Behavioral Strategy framework to assess Problem Market Fit:

1. **Problem Validation**
   - Is this a genuine problem users actively experience? Evidence:
   - Rate problem severity (1-10):
   - Rate problem urgency (1-10):
   - Do users actively seek solutions? Evidence:

2. **Behavioral Indicators**
   - What behaviors indicate users experience this problem?
   - What behaviors show users seeking solutions?
   - Frequency of problem-indicating behaviors:

3. **Market Size Assessment**
   - Estimated % of target users experiencing problem:
   - Intensity of problem-seeking behavior:
   - Current solution inadequacy score:

4. **PMF Score Calculation**
   PMF = (Severity × 0.3) + (Urgency × 0.3) + (Active_Seeking × 0.4)

5. **Recommendation**
   - PMF Status: [Strong/Moderate/Weak/None]
   - Confidence Level: [High/Medium/Low]
   - Next Steps:

Provide reasoning for each assessment using behavioral evidence.
```
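The PMF formula in step 4 can be computed directly. A minimal sketch in Python; the status thresholds are illustrative assumptions (chosen so the worked examples later in this guide come out "Strong"), not part of the template:

```python
def pmf_score(severity: float, urgency: float, active_seeking: float) -> float:
    """Weighted PMF score on a 0-10 scale, per the template's formula."""
    return severity * 0.3 + urgency * 0.3 + active_seeking * 0.4

def pmf_status(score: float) -> str:
    """Map a PMF score to a status band (band boundaries are illustrative)."""
    if score >= 6.5:
        return "Strong"
    if score >= 5:
        return "Moderate"
    if score >= 3:
        return "Weak"
    return "None"

# Example: severity 7, urgency 6, active-seeking 6.5 on a 0-10 scale
score = pmf_score(severity=7, urgency=6, active_seeking=6.5)
```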
### 2. Behavior Market Fit Evaluation
```
## Behavioral Strategy Analysis: Behavior Market Fit

Context:
- Validated Problem: {problem_with_pmf}
- Target Behavior: {proposed_behavior}
- User Capabilities: {user_ability_data}
- Environmental Context: {context_factors}

Evaluate Behavior Market Fit using the BMF algorithm:

1. **Behavior Feasibility Analysis**
   For each user segment:
   - Current capability level: [score]
   - Required capability for behavior: [score]
   - Capability gap: [calculation]
   - Learning curve assessment: [steep/moderate/shallow]

2. **Motivation Alignment**
   - Intrinsic motivators present: [list]
   - Extrinsic rewards available: [list]
   - Motivation-behavior alignment score: [0-10]
   - Sustained motivation likelihood: [percentage]

3. **Context Compatibility**
   - Environmental barriers: [list with severity]
   - Social acceptance factors: [positive/neutral/negative]
   - Resource requirements: [list with availability]
   - Context score: [0-10]

4. **Competitive Behavior Analysis**
   - Current behaviors addressing problem: [list]
   - Switching cost from current to target: [high/medium/low]
   - Relative advantage of target behavior: [score]

5. **BMF Calculation**
   BMF_Score = (Feasibility × 0.25) + (Motivation × 0.25) + (Context × 0.20) + (Triggers × 0.15) + (Habit_Potential × 0.15)

6. **Behavioral Recommendation**
   - BMF Status: [Excellent/Good/Viable/Poor]
   - Primary Barriers: [ordered list]
   - Behavior Modifications Needed: [specific suggestions]
   - Alternative Behaviors: [if BMF < 50]
```
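The BMF weighted sum can be sketched in Python. Scaling 0-10 sub-scores by 10 to reach the 0-100 BMF scale is my assumption, made to reconcile the template's 0-10 sub-scores with the 0-100 BMF scores in the worked examples later in this guide:

```python
def bmf_score(feasibility, motivation, context, triggers, habit_potential):
    """Weighted BMF score. Sub-scores are on a 0-10 scale (as in the
    template); the result is scaled to 0-100 to match the BMF status bands."""
    weighted = (feasibility * 0.25 + motivation * 0.25 + context * 0.20
                + triggers * 0.15 + habit_potential * 0.15)
    return weighted * 10
```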
### 3. Solution Design Chain-of-Thought
````
## Behavioral Strategy: Solution Design Reasoning

Given validated Problem-Behavior fit:
- Problem: {validated_problem}
- Target Behavior: {validated_behavior}
- Behavioral Barriers: {identified_barriers}

Design a solution using step-by-step reasoning:

### Step 1: Behavioral Requirements Analysis
What must the solution do to enable the target behavior?
- Reduce friction by: [specific mechanisms]
- Increase motivation through: [specific features]
- Provide triggers via: [specific touchpoints]
- Support habit formation with: [specific patterns]

### Step 2: Solution Architecture
Map solution components to behavioral requirements:
Behavioral Requirement → Solution Component
- [Requirement 1] → [Component 1]
- [Requirement 2] → [Component 2]
- ...

### Step 3: Behavioral Flow Design
```mermaid
graph LR
    A[User State] --> B{Trigger}
    B --> C[Behavior Initiation]
    C --> D[Behavior Execution]
    D --> E[Reward/Feedback]
    E --> F[Habit Reinforcement]
    F --> A
```

### Step 4: Validation Criteria
How will we measure if the solution enables the behavior?
- Behavioral KPI 1: [metric] Target: [value]
- Behavioral KPI 2: [metric] Target: [value]
- Success Threshold: [definition]

### Step 5: Risk Assessment
What could prevent behavioral adoption?
- Risk 1: [description] Mitigation: [strategy]
- Risk 2: [description] Mitigation: [strategy]

### Solution Recommendation
- Solution Type: [product/service/intervention]
- Core Features: [prioritized list]
- MVP Scope: [essential behavioral elements]
- Expected Behavior Adoption Rate: [percentage with confidence interval]
````
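Step 4's validation criteria can be checked mechanically once KPIs are measured. A minimal sketch, assuming KPIs are expressed as metric/target pairs where higher measured values are better (the field names are illustrative):

```python
def validate_kpis(targets: dict, measured: dict) -> dict:
    """Compare measured behavioral KPIs against their targets (higher is better)."""
    met = {kpi: measured.get(kpi, 0.0) >= target for kpi, target in targets.items()}
    return {"kpis": met, "success": all(met.values())}
```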
## Advanced Reasoning Patterns

### Multi-Step Behavioral Chain Analysis
```python
# Prompt for analyzing complex behavioral sequences
BEHAVIORAL_CHAIN_PROMPT = """
Analyze the behavioral chain required to achieve {end_goal}:

1. Decompose into atomic behaviors:
   End Goal: {end_goal}
   ↓
   Required Behaviors (in sequence):
   B1: {behavior_1} →
   B2: {behavior_2} →
   B3: {behavior_3} →
   ... →
   Bn: {end_goal_behavior}

2. For each behavior in chain:
   - Dependency on previous: [strong/moderate/weak]
   - Failure probability: [percentage]
   - Alternative path exists: [yes/no]

3. Critical Path Analysis:
   - Bottleneck behaviors: [list]
   - Highest failure risk: [behavior]
   - Chain success probability: ∏(success_probability_i)

4. Optimization Strategy:
   - Simplify: [which behaviors to combine/eliminate]
   - Support: [where to add assistance]
   - Bypass: [alternative paths to create]
"""
```
### Behavioral Intervention Generator
```
## Generate Behavioral Intervention

Context: {problem_context}
Target Behavior: {desired_behavior}
Current Behavior: {current_behavior}
Constraints: {design_constraints}

Generate intervention using the COM-B framework:

### Capability Interventions
IF user lacks [physical/psychological] capability:
THEN implement:
- Training: [specific skills to develop]
- Environmental restructuring: [changes to make behavior easier]
- Enablement: [tools/resources to provide]

### Opportunity Interventions
IF user lacks [physical/social] opportunity:
THEN implement:
- Environmental changes: [specific modifications]
- Social restructuring: [peer/community elements]
- Access improvements: [barriers to remove]

### Motivation Interventions
IF user lacks [automatic/reflective] motivation:
THEN implement:
- Education: [information to provide]
- Persuasion: [arguments to make]
- Incentivization: [rewards to offer]
- Coercion: [consequences to implement]
- Modeling: [examples to show]

### Integrated Intervention Design
Combine elements for maximum impact:
1. Primary intervention: [highest impact element]
2. Supporting interventions: [complementary elements]
3. Sequencing: [order of implementation]
4. Measurement: [behavioral metrics to track]
```
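The IF/THEN branching above can be expressed as a lookup from the diagnosed COM-B deficit to candidate intervention types. The intervention lists mirror the template; the dict structure and function name are illustrative assumptions:

```python
COM_B_INTERVENTIONS = {
    "capability": ["training", "environmental restructuring", "enablement"],
    "opportunity": ["environmental changes", "social restructuring",
                    "access improvements"],
    "motivation": ["education", "persuasion", "incentivization",
                   "coercion", "modeling"],
}

def select_interventions(deficits):
    """Return candidate intervention types for each diagnosed COM-B deficit."""
    return {d: COM_B_INTERVENTIONS[d] for d in deficits}
```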
## Evaluation Rubrics for LLM Output

### Problem Market Fit Assessment Quality
```yaml
rubric:
  excellent (90-100):
    - Uses multiple behavioral indicators
    - Quantifies problem severity with evidence
    - Identifies specific problem-seeking behaviors
    - Provides confidence intervals
    - Suggests validation methods
  good (70-89):
    - Identifies key behavioral indicators
    - Assesses problem severity
    - Notes problem-seeking behaviors
    - Provides clear recommendation
  adequate (50-69):
    - Basic problem identification
    - Some behavioral evidence
    - General assessment provided
  poor (0-49):
    - Vague problem description
    - Lacks behavioral evidence
    - No clear assessment framework
### Behavior Design Quality Metrics
```python
def evaluate_behavior_design(llm_output):
    """
    Score the LLM's behavior design quality.

    The `check` predicates (has_specific_actions, etc.) are domain-specific
    heuristics assumed to be defined elsewhere.
    """
    criteria = {
        'specificity': {
            'weight': 0.2,
            'check': lambda x: has_specific_actions(x) and has_measurable_outcomes(x)
        },
        'feasibility_analysis': {
            'weight': 0.2,
            'check': lambda x: includes_ability_assessment(x) and identifies_barriers(x)
        },
        'motivation_alignment': {
            'weight': 0.2,
            'check': lambda x: maps_to_user_motivations(x) and includes_rewards(x)
        },
        'context_consideration': {
            'weight': 0.15,
            'check': lambda x: addresses_environment(x) and social_factors(x)
        },
        'measurement_plan': {
            'weight': 0.15,
            'check': lambda x: has_behavioral_kpis(x) and success_criteria(x)
        },
        'iteration_strategy': {
            'weight': 0.1,
            'check': lambda x: includes_testing_plan(x) and optimization_approach(x)
        }
    }

    score = sum(
        criteria[key]['weight'] * (100 if criteria[key]['check'](llm_output) else 0)
        for key in criteria
    )

    return {
        'total_score': score,
        'breakdown': {k: criteria[k]['check'](llm_output) for k in criteria},
        'recommendation': get_improvement_suggestions(score, criteria)
    }
```
## Few-Shot Learning Examples

### Example 1: Technology Product
**Input**: Design a feature to increase user engagement in a meditation app
**Problem**: Users download the app but don't maintain regular practice
**Target Users**: Busy professionals seeking stress relief
**High-Quality Behavioral Strategy Response**:
1. **Problem Validation** ✓
- Severity: 7/10 (high stress levels documented)
- Urgency: 6/10 (seeking immediate relief)
- Active Seeking: Yes (1.2M monthly searches for "meditation for stress")
- PMF Score: 6.5/10 (Strong)
2. **Behavior Identification** ✓
- Current: Sporadic meditation attempts
- Target: Daily 5-minute meditation session
- Barriers: Time perception, forgetting, lack of immediate reward
3. **BMF Analysis** ✓
- Feasibility: 8/10 (5 minutes is achievable)
- Motivation Alignment: 6/10 (stress relief aligns with need)
- Context: 7/10 (can be done anywhere)
- BMF Score: 72/100 (Good Fit)
4. **Solution Design** ✓
- Core Feature: "Micro-meditation" mode
- Behavioral Elements:
* Calendar integration for optimal timing
* 30-second "breathing break" option for habit building
* Streak visualization with stress level tracking
* Smart notifications at detected stress moments
5. **Success Metrics** ✓
- 7-day activation: 60% complete at least one session
- 30-day retention: 40% maintain 5+ sessions/week
- Behavioral velocity: Time to first session < 24 hours
### Example 2: Healthcare Intervention
**Input**: Improve medication adherence for diabetes patients
**Problem**: Patients skip doses leading to poor health outcomes
**Target Users**: Type 2 diabetes patients on daily medication
**High-Quality Behavioral Strategy Response**:
1. **Problem Validation** ✓
- Severity: 9/10 (life-threatening complications)
- Urgency: 8/10 (immediate health impact)
- Active Seeking: Moderate (seek solutions after complications)
- PMF Score: 7.8/10 (Strong)
2. **Behavior Identification** ✓
- Current: Inconsistent medication taking (avg 60% adherence)
- Target: Daily medication at consistent time
- Barriers: Forgetting, side effects, cost concerns, complexity
3. **BMF Analysis** ✓
- Feasibility: 9/10 (simple physical action)
- Motivation Alignment: 5/10 (prevention less motivating than treatment)
- Context: 6/10 (requires planning, supplies)
- BMF Score: 68/100 (Good Fit with barriers)
4. **Solution Design** ✓
- Core Intervention: "MedBuddy" system
- Behavioral Elements:
* Smart pill dispenser with cellular connectivity
* Pairing medication with existing routine (meals)
* Family notification system for social accountability
* Glucose reading integration showing medication impact
* Simplified refill process with auto-delivery
5. **Success Metrics** ✓
- Adherence rate: >80% doses taken on schedule
- Behavioral consistency: <2 hour variance in dosing time
- Clinical outcome: HbA1c reduction of 0.5% in 90 days
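Example 2's adherence and consistency metrics can be computed from dosing logs. A minimal sketch, assuming each log entry records a scheduled and an actual dosing time (field names are illustrative):

```python
from datetime import datetime, timedelta

def adherence_rate(doses):
    """Share of scheduled doses taken within the 2-hour on-time window,
    matching the >80% adherence and <2h-variance targets above."""
    on_time = sum(
        1 for d in doses
        if d["taken_at"] is not None
        and abs(d["taken_at"] - d["scheduled_at"]) <= timedelta(hours=2)
    )
    return on_time / len(doses)
```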
## Integration with LLM Workflows

### API-Based Behavioral Analysis
```python
# Example integration for LLM-powered behavioral analysis
class BehavioralStrategyLLM:
    def __init__(self, llm_client, bs_api_client):
        self.llm = llm_client
        self.bs_api = bs_api_client

    async def analyze_behavior_fit(self, problem, proposed_behavior, context):
        # Step 1: LLM generates behavioral hypothesis
        hypothesis = await self.llm.complete(
            prompt=BEHAVIOR_HYPOTHESIS_PROMPT,
            variables={
                'problem': problem,
                'behavior': proposed_behavior,
                'context': context
            }
        )

        # Step 2: Validate with Behavioral Strategy API
        validation = await self.bs_api.validate_behavior(
            hypothesis=hypothesis,
            historical_data=context.get('historical_data')
        )

        # Step 3: LLM refines based on validation
        refined_analysis = await self.llm.complete(
            prompt=BEHAVIOR_REFINEMENT_PROMPT,
            variables={
                'hypothesis': hypothesis,
                'validation_results': validation,
                'improvement_areas': validation.get('gaps')
            }
        )

        # Step 4: Generate implementation plan
        implementation = await self.llm.complete(
            prompt=IMPLEMENTATION_DESIGN_PROMPT,
            variables={
                'refined_behavior': refined_analysis,
                'constraints': context.get('constraints'),
                'resources': context.get('available_resources')
            }
        )

        return {
            'analysis': refined_analysis,
            'implementation_plan': implementation,
            'confidence_score': validation.get('confidence'),
            'behavioral_kpis': self.extract_kpis(implementation)
        }
```
### Prompt Chaining for Complex Analysis
```yaml
behavioral_strategy_chain:
  - prompt: problem_identification
    output: validated_problem
  - prompt: behavior_brainstorming
    input: validated_problem
    output: candidate_behaviors
  - prompt: behavior_evaluation
    input: candidate_behaviors
    output: ranked_behaviors
  - prompt: bmf_analysis
    input: top_3_behaviors
    output: behavior_scores
  - prompt: solution_design
    input: highest_scoring_behavior
    output: solution_blueprint
  - prompt: implementation_planning
    input: solution_blueprint
    output: implementation_roadmap
  - prompt: measurement_design
    input: implementation_roadmap
    output: behavioral_kpis_dashboard
```
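One way to execute such a chain is a loop that threads each step's output into the next step's input. A sketch that simplifies the YAML's named-input wiring to linear piping, assuming a `run_prompt(name, input)` callable supplied by the host application:

```python
def run_chain(chain, run_prompt, initial_input=None):
    """Run prompt-chain steps in order, piping each step's output into the
    next step's input and collecting outputs by name."""
    outputs = {}
    previous = initial_input
    for step in chain:
        previous = run_prompt(step["prompt"], previous)
        outputs[step["output"]] = previous
    return outputs
```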
## Best Practices for LLMs

### Do’s
- Always validate problems with behavioral evidence
- Calculate quantitative scores (PMF, BMF) with clear formulas
- Consider multiple user segments separately
- Provide confidence intervals and uncertainty estimates
- Generate specific, measurable behavioral KPIs
- Include failure scenarios and mitigation strategies
### Don’ts
- Don’t assume user stated preferences equal actual behavior
- Don’t skip systematic validation steps
- Don’t apply generic solutions without context adaptation
- Don’t ignore competitive behaviors already in place
- Don’t design solutions before validating behavior fit
- Don’t forget to specify measurement methodologies
### Continuous Learning Loop
```python
# LLM improvement through behavioral outcome tracking
async def behavioral_learning_loop(llm_system, current_context):
    while True:
        # Generate behavioral predictions
        predictions = await llm_system.predict_behaviors(current_context)

        # Deploy and measure
        actual_outcomes = await measure_behavioral_outcomes(
            predictions,
            duration_days=30
        )

        # Calculate prediction accuracy
        accuracy_metrics = calculate_prediction_accuracy(
            predictions,
            actual_outcomes
        )

        # Update the LLM with results
        await llm_system.update_knowledge(
            predictions=predictions,
            outcomes=actual_outcomes,
            accuracy=accuracy_metrics,
            lessons_learned=extract_lessons(accuracy_metrics)
        )

        # Adjust prompting strategies when BMF prediction accuracy slips
        if accuracy_metrics['bmf_score_accuracy'] < 0.8:
            await llm_system.refine_prompts(
                area='bmf_calculation',
                performance_data=accuracy_metrics
            )
```
**Licensing**: This LLM guide and prompt templates are protected under the Behavioral Strategy Specification License (BSSL). Commercial use in AI products or services requires explicit licensing. See the full licensing terms for details.

**Attribution**: *LLM Prompt Engineering for Behavioral Strategy* by Jason Hreha. Learn more at behavioralstrategy.com