Behavior Matching
Behavior change is a matching problem, not a forcing problem. The central insight of Behavior Matching is deceptively simple: rather than asking “How can we make people do X?” organizations should ask “What behavior Y naturally appeals to our target group while achieving the same outcome?”
This reframing transforms how teams approach product design, marketing, and organizational change. Pick the wrong behavior and nothing else matters. Even excellent execution cannot overcome a fundamental mismatch between the behavior and the target population’s inclinations.
The Match Not Hack Philosophy
Traditional approaches to behavior change treat people as obstacles to overcome. They deploy persuasion techniques, incentive structures, and friction reduction to push users toward predetermined behaviors. This forcing approach creates several predictable problems:
- Resistance and psychological reactance
- Constant need for motivation and intervention
- Small, temporary effects at scale (often ~1–2 percentage points in at-scale nudge-unit RCTs)
- User disengagement when external pressure stops
Behavior Matching takes the opposite approach. Instead of fighting against user psychology, it works with it. The goal is to find behaviors that users already want to perform, or would want to perform given minimal support, that also achieve organizational objectives.
This is not about manipulation or persuasion. It is about selection and alignment.
Durable behavior is identity‑constrained (and context‑constrained)
Behavior Matching is especially important for durable behaviors (the ones you want repeated over weeks and months). In Behavioral Strategy, we assume durable behaviors are constrained by:
- Identity factors that are relatively stable (personality, values, self‑concept, status concerns)
- Context realities that are stubbornly real (time, tools, social norms, physical environment)
If a behavior is a poor match on either axis, you can sometimes buy short‑run compliance, but it decays when incentives, reminders, novelty, or pressure stop.
See: Identity Fit, Personality, and Durable Behavior.
One‑off vs durable behaviors
- One‑off actions (a single conversion, a one‑time sign‑up, a default setting) can sometimes respond to prompts, incentives, and interface optimization.
- Durable behaviors (retention, adherence, routines) usually require Behavior Matching plus enablement: choose a high‑fit behavior, then design systems that make repetition feasible. See: Behavioral Strategy vs Habit Formation.
The Matching Framework
Behavior Matching follows three sequential steps. Skipping or reordering these steps leads to the forcing trap.
Step 1: Define the Desired Outcome
Start with outcomes, not behaviors. What specific result do you need to achieve? Be precise about the outcome independent of how it gets achieved.
Common mistake: Teams often conflate outcomes with behaviors. “We need users to complete onboarding” is a behavior. “Users understand our core value proposition” is an outcome. The outcome can be achieved through many different behaviors.
Questions to clarify the outcome:
- What measurable change would indicate success?
- Why does this outcome matter for the user?
- Why does this outcome matter for the business?
- How would we know if we achieved this outcome through a completely different behavior?
Step 2: Explore Multiple Behavioral Alternatives
Generate at least five candidate behaviors that could accomplish the defined outcome. This expansion phase prevents premature commitment to a single approach.
For each candidate behavior, describe:
- The specific actions involved
- The time and effort required
- The skills or resources needed
- The context where it would occur
Example for the outcome “Users understand our core value proposition”:
- Read a product overview page
- Watch a 90-second explainer video
- Complete an interactive tutorial
- Talk with a customer success representative
- Use the product immediately with guided prompts
- See a side-by-side comparison with alternatives they already use
Each behavior could achieve the same outcome but places different demands on users.
Step 3: Evaluate Using the Behavior Fit Assessment
Score each candidate behavior using the Behavior Fit Assessment. All three dimensions must pass threshold for a behavior to be viable.
| Dimension | Question | Threshold |
|---|---|---|
| Identity Fit | Does this behavior align with who they see themselves as? | ≥ 6/10 |
| Capability Fit | Can they actually perform this behavior? | ≥ 6/10 |
| Context Fit | Does their environment support this behavior? | ≥ 6/10 |
The key rule: If any dimension scores below 6, you’re forcing, not matching. Either select a different behavior or design interventions to raise the limiting dimension.
Selection criteria: Among behaviors that pass all thresholds, select the one with the highest minimum score: the behavior whose weakest dimension scores highest.
After this analysis, design interventions or products that enable the selected behavior. Not before.
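The threshold and selection rules above can be sketched in a few lines of code. This is an illustrative sketch only; the behavior names and scores are hypothetical, not values from the framework.

```python
# Illustrative sketch of the Behavior Fit Assessment selection rule.
# Behavior names and scores below are hypothetical examples.

THRESHOLD = 6  # every dimension must score at least this to be viable


def select_behavior(candidates):
    """Return the viable behavior with the highest minimum dimension score.

    candidates maps a behavior name to its dimension scores, e.g.
    {"identity": 7, "capability": 9, "context": 5}.
    """
    viable = {
        name: scores
        for name, scores in candidates.items()
        if min(scores.values()) >= THRESHOLD
    }
    if not viable:
        return None  # every candidate is a forcing play; generate new options
    # The minimum score is the bottleneck: pick the behavior whose
    # weakest dimension is strongest.
    return max(viable, key=lambda name: min(viable[name].values()))


candidates = {
    "explainer_video":  {"identity": 7, "capability": 9, "context": 5},  # fails context
    "guided_first_use": {"identity": 8, "capability": 7, "context": 8},
    "overview_page":    {"identity": 6, "capability": 9, "context": 6},
}

print(select_behavior(candidates))  # guided_first_use (min 7 beats min 6)
```

Note that the rule deliberately maximizes the minimum, not the average: a 9/9/4 behavior loses to a 7/7/7 one, because the weakest dimension is where adoption breaks.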
Why Forcing Fails
Organizations default to forcing for understandable reasons. They have already decided what behavior they want. They have built products around that behavior. They have invested in campaigns promoting that behavior. Admitting the behavior is wrong feels like admitting failure.
In practice, forcing tends to produce:
Resistance and Reactance: When people feel pressured to behave in certain ways, they often push back. Psychological reactance is the tendency to do the opposite of what we are told, especially when we feel our freedom is threatened.
Intervention Dependency: Forced behaviors require constant reinforcement. Remove the incentive, the reminder, or the friction reduction, and the behavior stops. This creates perpetual cost without building sustainable routines (or habits, where the behavior is one that can become habitual).
Small Effects: At scale, “nudge-first” tactics often produce small average effects, often around 1–2 percentage points in at-scale trials, and bias‑corrected meta‑analyses report near‑zero average effects.
Temporary Change: Forced behaviors rarely persist. Effects decay back toward baseline once the intervention ends.
Why Matching Works
Matching produces fundamentally different outcomes:
Alignment with Existing Identity: When behaviors match who users see themselves as, there’s no identity friction. Users continue the behavior because it’s consistent with who they are, not because someone is pushing them.
Self‑Sustaining Loops: Well‑matched behaviors create positive feedback cycles. The behavior produces value, which reinforces the behavior, which produces more value. No external intervention required.
Higher Leverage: Because matching changes what you ask people to do (not just how you ask), upstream behavior selection often has more leverage than downstream optimization. In some cases it changes the adoption trajectory entirely (e.g., the pivot from Burbn to Instagram).
Lasting Change: Matched behaviors persist because they fit. Users do not need reminders or incentives. The behavior becomes part of how they operate.
How to Apply Behavior Matching
Step‑by‑Step Process
1. Start with user research, not product features.
Before defining what behavior you want, understand who your users are. What do they already do? What do they want to do? What comes naturally to them?
Methods:
- Behavioral observation (watch what users actually do, not what they say they do)
- Identity interviews (explore how users see themselves, their values, their aspirations)
- Context mapping (understand the environments where users operate)
2. Define outcomes in user‑centric terms.
Translate business objectives into outcomes users would recognize and value. “Increase engagement” becomes “help users feel competent and connected.” “Drive adoption” becomes “enable users to accomplish their goals faster.”
3. Generate candidate behaviors through divergent thinking.
Push past the obvious first answer. Ask:
- How would users accomplish this outcome if our product did not exist?
- What behaviors do our most successful users already perform?
- What adjacent behaviors in other domains could transfer?
- What behaviors would feel like play rather than work?
4. Score candidates using the Behavior Fit Assessment.
For your target user population, evaluate each candidate behavior across all three dimensions:
| Behavior | Identity Fit | Capability Fit | Context Fit | Minimum | Viable? |
|---|---|---|---|---|---|
| Behavior A | __/10 | __/10 | __/10 | __ | Y/N |
| Behavior B | __/10 | __/10 | __/10 | __ | Y/N |
| Behavior C | __/10 | __/10 | __/10 | __ | Y/N |
Be honest about low scores. A behavior that scores below 6 on any dimension faces an uphill battle.
5. Select the highest‑scoring behavior that achieves the outcome.
Resist the temptation to pick the behavior that is easiest to build or measure. Pick the behavior that best fits your users. Execution challenges are solvable. Fit problems are not.
6. Design solutions that enable the selected behavior.
Only now should you think about product features, interfaces, and interventions. Your job is to make the well‑matched behavior easy and obvious. Remove friction. Provide tools. Create supporting context.
7. Validate fit before scaling.
Test whether the behavior actually matches. Watch for signs of forcing:
- High drop‑off despite good activation
- Constant need for reminders or re‑engagement
- Low retention without incentives
- Users describing the behavior as obligation, not value
If you see these signs, return to step 3 and explore alternatives.
Deeper Diagnosis: When Matched Behaviors Fail
Sometimes a behavior passes Behavior Fit Assessment screening but still doesn’t perform as expected. When this happens, use the full Behavioral State Model for granular diagnosis.
The Behavior Fit Assessment collapses eight BSM components into three dimensions. For troubleshooting, examine all eight:
| Assessment Dimension | BSM Components to Examine |
|---|---|
| Identity Fit | Personality, Perception, Social Status |
| Capability Fit | Abilities, Physical Environment |
| Context Fit | Emotions, Motivations, Social Environment, Physical Environment |
Identify which specific component is scoring low, then design targeted interventions to address it.
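The drill‑down from a weak assessment dimension to its limiting BSM component is a simple lookup. A minimal sketch, assuming hypothetical 0–10 component scores gathered during troubleshooting:

```python
# Hypothetical sketch: trace a weak Behavior Fit Assessment dimension
# back to the lowest-scoring Behavioral State Model component behind it.

DIMENSION_COMPONENTS = {
    "identity_fit":   ["personality", "perception", "social_status"],
    "capability_fit": ["abilities", "physical_environment"],
    "context_fit":    ["emotions", "motivations", "social_environment",
                       "physical_environment"],
}


def limiting_component(dimension, component_scores):
    """Return the lowest-scoring BSM component behind a weak dimension."""
    components = DIMENSION_COMPONENTS[dimension]
    return min(components, key=lambda c: component_scores[c])


scores = {  # hypothetical troubleshooting scores, 0-10
    "emotions": 7,
    "motivations": 3,
    "social_environment": 8,
    "physical_environment": 6,
}

print(limiting_component("context_fit", scores))  # motivations
```

Once the limiting component is identified, interventions can target it directly rather than the aggregate dimension.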
Examples
Instagram vs. Burbn
Burbn was a check‑in app. Users were supposed to broadcast their location to friends.
Behavior Fit Assessment scores for check‑ins (example):
- Identity Fit: 4/10. Most people do not see themselves as “check‑in people”; the behavior feels awkward
- Capability Fit: 7/10. Technically easy, but requires remembering to act
- Context Fit: 4/10. Context doesn’t naturally prompt check‑ins; value proposition unclear
Minimum score: 4. Verdict: Forcing.
Instagram pivoted to photo sharing.
Behavior Fit Assessment scores for photo sharing (example):
- Identity Fit: 8/10. Visual self‑expression aligns with identity
- Capability Fit: 9/10. Phone cameras are easy; filters solved the skill gap
- Context Fit: 8/10. Mobile context supports quick capture; social environment rewards sharing
Minimum score: 8. Verdict: Matching.
Recruiting: Matching vs. Mass Filtering
Traditional recruiting behavior: review hundreds of resumes to find qualified candidates.
Behavior Fit Assessment scores for CV filtering (example):
- Identity Fit: 3/10. Tedious; nobody identifies as a “resume reader”
- Capability Fit: 5/10. Hard to evaluate candidates from paper; high cognitive load
- Context Fit: 3/10. Draining activity; low motivation; no immediate reward
Minimum score: 3. Verdict: Forcing.
Alternative behavior: connect directly with well‑matched candidates through warm introductions and targeted outreach.
Behavior Fit Assessment scores for targeted connection (example):
- Identity Fit: 7/10. Feels strategic and human
- Capability Fit: 7/10. Requires network access, but learnable
- Context Fit: 8/10. Directly tied to the hiring goal
Minimum score: 7. Verdict: Matching.
Anti‑Patterns: Common Matching Mistakes
Premature Behavior Lock‑In
Mistake: Deciding on a behavior before exploring alternatives, then looking for ways to force it.
Example: “Users need to complete our 12‑step onboarding” becomes the fixed requirement. All effort goes into making users complete those 12 steps rather than asking whether 12 steps is the right approach.
Fix: Always start with outcomes. Ask “What are we trying to achieve?” before “How do we get users to do X?”
Scoring Based on Ideal Users
Mistake: Evaluating behaviors based on your most engaged users rather than your target population.
Example: Power users love the advanced dashboard. Scoring based on them suggests everyone will. But most users aren’t power users.
Fix: Score behaviors for your median target user, not your best users. Better yet, segment and score separately for different user types.
Ignoring Identity Fit
Mistake: Focusing only on Capability and Context (making the behavior easy) while ignoring whether users want to perform the behavior.
Example: Reducing friction to zero for a behavior users do not value. They still will not do it; they just have fewer excuses.
Fix: Check Identity Fit first. If the behavior conflicts with how users see themselves, no amount of friction reduction will help.
Optimizing One Dimension at the Expense of Others
Mistake: Improving one fit dimension while damaging another.
Example: Adding gamification to boost Context Fit (making it fun) but triggering negative Identity Fit (users feel manipulated or childish).
Fix: Evaluate interventions across all three dimensions. A gain in one area that creates a loss elsewhere often nets negative.
Confusing Initial Appeal with Durable Adoption
Mistake: Selecting behaviors that seem appealing at first but do not sustain.
Example: A reward‑based behavior that attracts users initially but loses appeal once the novelty wears off.
Fix: Score behaviors for sustained engagement, not just initial trial. Ask “Would users do this on day 100?” not just “Would users try this on day 1?”
Key Takeaways
- Behavior change is a matching problem. Finding behaviors that fit users matters more than forcing behaviors that do not.
- Start with outcomes, not behaviors. Define what you want to achieve before deciding how to achieve it.
- Generate multiple candidates. The first behavior you think of is rarely the best match.
- Use the Behavior Fit Assessment. Score Identity Fit, Capability Fit, and Context Fit. All three must score ≥ 6.
- The minimum score is the bottleneck. A behavior scoring 9/9/4 will fail. Fix the 4 or choose a different behavior.
- Fit beats force. Even excellent execution cannot overcome fundamental mismatch.
- Validate before scaling. Signs of forcing indicate a matching problem.
- When in doubt, return to the user. The answer lives with your users, not in your conference room.
Further Reading:
- Behavior Fit Assessment: The rapid evaluation tool for behavior selection
- Behavioral State Model: The full 8‑component diagnostic framework