Behavioral Strategy Synthesis: Why Behavior Selection Matters Most
This page synthesizes patterns from 15+ success and failure cases across consumer, enterprise, and public-sector settings to answer one question: Why do some behavioral initiatives scale while others burn money? The answer is counterintuitive: the behavior you select determines your success more than the quality of execution.
Evidence note: Quantitative figures on this page are examples unless linked to the Evidence Ledger or a primary source + date.
Executive Summary: The Core Finding
Across all successful cases, including Instagram, Slack, Duolingo, M-PESA, Netflix, Spotify, Zoom, Proposify, Airbnb, and Robinhood, there is a single consistent pattern:
Success comes from selecting a behavior that already fits the user’s context, motivations, and abilities. Execution quality matters, but only after fit is established.
Across all failures, including Quibi, Google+, Burbn, most gamification, corporate wellness, and Mint, there is an equally consistent anti-pattern:
Failure comes from selecting a behavior that requires users to change their environment, identity, or core motivation. No amount of optimization, incentives, or execution excellence can overcome this mismatch.
This synthesis reveals five signature insights that separate winning strategies from expensive mistakes.
Insight #1: Behavior Selection > Execution Quality
The Pattern in Successes
The most successful cases either selected a high-fit behavior from the start or hit a pivot moment where the team recognized the wrong behavior had been chosen and moved to one with higher fit:
- Instagram: Started with check-ins (low frequency, low social salience). Pivoted to photo sharing (high frequency, strong identity fit). Result: 25M users by 2012.
- Slack: Started as an internal tool for team coordination. Pivoted to persistent, searchable team messaging (matching what teams were already doing informally via email/IM). Result: 93% team retention among power users.
- YouTube: Started with dating videos (prescribed behavior). Expanded to “broadcast yourself” (any user-selected behavior). Result: Viral organic growth.
- Duolingo: Focused on micro-lessons (1–3 minutes), not full-course study. Users already had fragmented attention; provided a solution that matched their reality. Result: 100M+ active learners.
- M-PESA: Did not invent “mobile money.” Enabled phone-based remittance (users were already sending money; USSD + agent network just removed friction). Result: 80%+ adoption in Kenya.
The Pattern in Failures
Every failure case involved teams that either:
- Never validated behavior fit before building, or
- Continued optimizing the wrong behavior despite poor fit signals
- Quibi: Selected passive short-form video consumption (TV behavior) for a mobile context where users exhibit interactive, lean-in behaviors (games, stories, tap/swipe interactions). No amount of production quality rescued the mismatch. Result: ~$1.75B raised and lost.
- Google+: Selected symmetric social networking (mutual friends, personal sharing) for users oriented to asymmetric discovery (YouTube) and professional info. Ignored network effects and switching costs. Result: Platform shutdown.
- Burbn: Selected a check-in + photo hybrid for mobile. Facebook already dominated check-ins; the behavior didn’t fit without a major differentiation (which became Instagram’s photo-first pivot).
- Most gamification failures: Selected point/badge collection for tasks users didn’t want to do. No amount of extrinsic reward can overcome low intrinsic motivation or high friction. Result: Temporary spikes, rapid decline, abandoned systems.
The Insight
Behavior selection is 80% of success; execution is 20%. A mediocre team selecting the right behavior will outcompete an excellent team selecting the wrong one. The corollary also holds: brilliant execution of a misfit behavior is just expensive failure, delayed.
Insight #2: Identity Fit Often Determines Success
What Is Identity Fit?
Identity fit means: Does performing this behavior reinforce or contradict how users see themselves?
When identity fit is high, users adopt the behavior as a natural expression of who they are. When it’s low, adoption requires constant friction mitigation, incentives, or external enforcement.
Successful Identity Fits
- Instagram (photo sharing): Users see themselves as photographers, moment-capturers, curators. Sharing photos = identity expression. High intrinsic motivation; network effects self-reinforce.
- Duolingo (micro-learning): Users see themselves as language learners, even if just “dabbling.” The 3-minute behavior is humble enough to feel achievable; the streak/habit language positions learners as persistent, successful. Identity = “I’m someone who learns.”
- Spotify (Discover Weekly): Users see themselves as music enthusiasts who appreciate novelty. Listening to playlists = discovering new music = being a “sophisticated listener.” Identity aligns; behavior sustains.
- Airbnb (peer-to-peer booking): Guests see themselves as adventurers/explorers; hosts as entrepreneurs. Both identities are reinforced by the behavior. Friction is worth it because identity is strong.
- M-PESA (mobile remittance): Senders see themselves as providers/family supporters. Mobile transfer = faster, safer way to fulfill this identity. Not a new behavior; just identity-aligned enablement.
Failed Identity Fits
- Quibi (passive video): Selected behavior (lean-back TV watching) contradicted mobile identity. Mobile users see themselves as active, in-control, doing things. Passive consumption feels wrong on a device where users expect agency. Identity mismatch = unsustainable friction.
- Google+ (mutual friending): Selected behavior (symmetric friendship) contradicted Google users’ identity as discovery-oriented, curators, not primarily social sharers. Users already had symmetric friend lists (Facebook). Google+ required an identity shift they didn’t want. Network effects worked backwards.
- Corporate wellness gamification: Employees see themselves as professionals, not game-players. Badges and points feel patronizing; the behavior (tracking workouts) requires an identity shift to “gamified person.” Identity friction overcomes any extrinsic reward.
- Most habit-tracking apps: Assume users identify as “habit-formers.” But most don’t; they see themselves as busy, reactive people. Friction of tracking + identity mismatch = abandonment by week 3.
The Insight
When identity fit is high, behavior often sustains with minimal intervention. When it’s low, nudges and incentives rarely compensate for the friction. Successful strategies align behavior with how users already see themselves, or carefully design the solution to make new identity adoption feel natural, not forced.
Insight #3: Context Fit Must Match User Environment
What Is Context Fit?
Context fit means: Does the user’s environment enable this behavior to happen naturally?
Context includes:
- Physical environment: Where users are, what devices they have, what tools are nearby.
- Temporal patterns: When users have attention, how fragmented or focused their time is.
- Social environment: Who else is present, what are group norms, what do peers reinforce.
- Infrastructure: Access to networks, agents, APIs, payment systems, etc.
Successful Context Fits
- Slack (persistent messaging): Context = distributed teams, always-on devices, need for searchable history. Solution fits the context perfectly. TTFB (time to first behavior, here the first message sent) is minutes; behavior sustains because context enables daily triggering.
- Zoom (video meetings): Context = remote workers, no commute, need for synchronous collaboration. One-click join from a link matches user context (distracted, multi-tasking, needs frictionless entry). TTFB is seconds; the behavior sustained through the pandemic and beyond.
- M-PESA (mobile remittance): Context = populations without bank access but with dense agent networks and feature phones. Solution perfectly fits infrastructure and user location patterns. TTFB is minutes; behavior sustains because there is no friction at the point of use.
- Duolingo (micro-lessons): Context = commuters, lunch breaks, and waiting rooms. Time is fragmented. 3-minute lessons fit users’ actual time availability, not aspirational 30-minute study sessions. Behavior sustains because context enables daily repetition.
- Spotify (Discover Weekly): Context = users in the shower, commuting, and cooking. These are autopilot moments. One-click play removes decision friction. Delivered weekly (aligning with the work-week rhythm). Context enables habitual consumption.
- Proposify (value-first onboarding): Context = busy sales professionals who context-switch frequently. Guiding directly to “send proposal” (the core value) matches user context: they’re trial-testing because they want to send proposals, not explore templates. TTFB dropped and completion rose from 14% to >30%.
Failed Context Fits
- Quibi (lean-back video): Context = mobile devices, which are primarily interaction-driven. TV-watching behavior requires lean-back attention; the mobile context doesn’t support this. Users in actual mobile contexts (transit, multitasking) can’t sustain passive viewing. Context contradiction = a ~$2B failure.
- Google+ (friend network recreation): Context = a desktop-first era with a scattered social graph. Users already had friend lists on Facebook (mobile-friendly, larger network). Google+ required rebuilding that graph in a competing ecosystem users had little reason to switch to.
- Corporate wellness (office-focused): Context = one-size-fits-all gym memberships and office wellness rooms. Remote workers face different contexts (no gym access, different routines). Behavior doesn’t fit actual context; high churn among distributed workforces.
- Mint (budgeting): Context = the moment of spending or a weekly budget review. Mint required batched monthly data entry. Context friction (delayed feedback, manual work) kills the behavior. Users need real-time feedback at the point of purchase.
- Most habit-tracking apps: Context = an assumed dedicated morning/evening ritual. Actual context = scattered, interrupted days. The behavior requires finding the same time and place daily; most users don’t have this environmental stability.
The Insight
Context fit often determines whether a behavior can sustain at all. Even if behavior has high identity fit and users are motivated, if context doesn’t enable natural triggering and repetition, the behavior dies. Successful strategies either match existing context or deliberately redesign context (infrastructure, physical space, social norms) to enable the behavior.
Insight #4: Nudges Are Not a Strategy (Treat as Marginal Optimization)
“Nudge” is often used as a catch-all for good UX, good onboarding, and good product design. On this site we use a narrower meaning: choice-architecture tweaks (defaults, framing, reminders, simplification) that aim to shift behavior without changing the underlying feasibility or value of the behavior.
When you weight the evidence toward (1) large at-scale field RCT programs and (2) publication-bias-corrected syntheses, the expected average effect is small and may be near-zero once bias is accounted for. See: Behavioral Strategy vs Nudging and Why Nudges Fail.
Practical implication
Don’t lead with nudges. If your plan depends on “nudging people” into a behavior, you are usually trying to solve the wrong problem:
- the behavior may not fit the segment or context,
- the system may not enable the action,
- the value loop may be weak or delayed.
What to do instead
- Select a behavior that fits (Identity + Capability + Context).
- Enable the behavior (tools, workflow, infrastructure, incentives/governance where appropriate).
- Use last-mile context tweaks only as experiments with clear success criteria.
If you still test a nudge
Treat it as a falsifiable experiment (a minimal decision sketch follows the checklist):
- define the target behavior, denominator, and window
- pre-commit to a minimum effect that justifies adoption
- set rollback criteria, especially for trust/ethics
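To make the pre-commitment concrete, here is a minimal Python sketch of how such a test could be scored, assuming a simple two-arm experiment with a binary target behavior. The function name, the 2-percentage-point minimum effect, and the complaint-based rollback trigger are illustrative placeholders, not prescriptions from this page.

```python
# A minimal sketch of a pre-registered nudge test (illustrative names/thresholds).
from math import sqrt

def evaluate_nudge(control_n, control_done, treat_n, treat_done,
                   min_effect_pp=2.0, trust_complaints=0, max_complaints=0):
    """Score a two-arm nudge test against pre-committed criteria.

    Target behavior: completion within the pre-registered window.
    Denominator: every user exposed in each arm (control_n / treat_n).
    min_effect_pp: minimum lift (percentage points) that justifies adoption.
    max_complaints: rollback trigger for trust/ethics issues.
    """
    p_c = control_done / control_n
    p_t = treat_done / treat_n
    lift_pp = (p_t - p_c) * 100  # observed effect in percentage points

    # Normal-approximation 95% confidence interval for the difference.
    se = sqrt(p_c * (1 - p_c) / control_n + p_t * (1 - p_t) / treat_n)
    lower_pp = lift_pp - 1.96 * se * 100

    if trust_complaints > max_complaints:
        return "rollback", lift_pp
    # Adopt only if the conservative (lower-bound) estimate clears the bar.
    return ("adopt" if lower_pp >= min_effect_pp else "discard"), lift_pp

decision, lift = evaluate_nudge(5000, 600, 5000, 720, min_effect_pp=2.0)
print(decision, f"{lift:+.1f} pp")  # -> discard +2.4 pp (lower bound below the bar)
```

The point of the lower-bound check is that adoption is justified by a conservative estimate of the effect, not by the observed point estimate alone.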
Insight #5: Observe Behavior Before Designing Solution
The Validation Pattern in Successes
Every successful case involved deep observation of what users were already trying to do, before designers intervened:
- Instagram: Observed that early adopters (mobile users, especially women) were manually cropping photos to square formats before uploading elsewhere. Behavior signal: users want simple, beautiful photo sharing. Built Instagram around this observed behavior.
- Slack: Observed that teams were trying to organize scattered communication (email, Campfire, AIM). Behavior signal: users wanted searchable, persistent team chat. Slack built exactly that, not a “better email” or generic collaboration suite.
- YouTube: Observed that users were uploading personal videos, tutorials, music, not just dating videos. Behavior signal: people want to share any content. Expanded from prescribed to user-driven behaviors.
- Duolingo: Observed that successful language learners used micro-practice in fragmented time. Behavior signal: respect actual user time availability, not aspirational 30-minute sessions. Built for reality, not ideals.
- M-PESA: Observed that in Kenya, informal money-sending networks (hawala-style) were thriving. Behavior signal: people want to send money via trusted channels. Built on existing trust patterns, just with digital infrastructure.
- Zoom: Observed that remote and distributed workers needed frictionless video calls: no installation friction, no meeting-ID confusion. Behavior signal: users need maximum simplicity. Built for what they were trying to do (call in, join, talk).
- Spotify Discover Weekly: Observed that users spent significant time in “search/discovery” interaction. Behavior signal: people want novelty but are overwhelmed by choice. Built personalized pre-selection to remove friction.
The Validation Pattern in Failures
Every failure case involved assumption-driven design (building what designers thought users should do), not observation-driven design:
- Quibi: Assumed users wanted TV-quality short-form video on mobile. Never validated whether mobile context actually supports passive consumption. Built what seemed logical, not what users did.
- Google+: Assumed users wanted to rebuild social graphs. Never observed that users were already invested in Facebook’s larger networks and mobile experience. Assumption-driven; observation would have revealed the network lock-in problem.
- Most gamification failures: Assumed that adding game elements (badges, points) would motivate behavioral change. Never observed whether the underlying behavior was desired by target users. Assumption-driven; observation would have revealed the motivation mismatch.
- Habit-tracking apps: Assumed users would benefit from tracking. Never observed that most busy professionals don’t have stable routines for daily tracking. Assumption-driven; observation would have revealed context and time friction.
- Corporate wellness programs: Assumed that gym memberships and reminders would drive behavior. Never observed that employees’ actual barriers were time, family obligations, and home-based routines. Assumption-driven; observation would have revealed different intervention points.
The Validation Process
Successful teams used Problem Market Fit validation before designing:
- Observe what target users are already doing (behaviors, workarounds, informal solutions).
- Validate that this observed behavior is a real problem (not an edge case or wishful thinking).
- Rank candidate behaviors by frequency, importance, and user motivation (not by what’s novel or exciting to build).
- Select the behavior with highest fit to context and identity.
- Build the solution around enabling this behavior, not changing it.
The Insight
The best way to predict product success is not surveys or focus groups, but observation of what users are already doing. Survey data tells you what users say they want (often aspirational, often inaccurate). Behavioral data shows you what they actually do, at what frequency, with what effort. Behavioral strategy starts by observing existing behaviors, then removing friction from them, not inventing new ones.
The Five Signature Moves of Behavioral Strategy
Across all successful cases, five specific moves repeat:
Move 1: Pivot When Behavior Doesn’t Fit
- Instagram pivoted from check-ins to photo sharing.
- Slack pivoted from internal coordination tool to team messaging platform.
- YouTube pivoted from dating videos to user-selected content.
Key principle: When early data shows behavior-market fit is low (high TTFB, poor retention, low natural frequency), don’t double down. Pivot to a higher-fit behavior before building further.
Signal to watch: If fewer than 20% of users complete the first target behavior naturally, or if D7 retention is below 30%, behavior fit is questionable. Validate alternatives before continuing (see the sketch below).
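As a rough illustration of how these two signals could be computed from an event log, here is a minimal Python sketch. The event schema (user, event name, day offset since signup) and the definition of D7 retention (any return to the behavior on day 7 or later) are assumptions; adapt them to your own analytics.

```python
# A minimal behavior-fit sketch; field names and thresholds are illustrative.
from collections import defaultdict

def behavior_fit_signals(events, target_event="target_behavior"):
    """events: iterable of (user_id, event_name, days_since_signup)."""
    users, first_day = set(), {}
    active_days = defaultdict(set)            # user -> days the behavior occurred
    for user, event, day in events:
        users.add(user)
        if event == target_event:
            first_day.setdefault(user, day)   # first natural completion
            active_days[user].add(day)

    if not users:
        return 0.0, 0.0, ["no events observed"]
    n = len(users)
    completion_rate = len(first_day) / n      # completed the behavior at least once
    d7_retention = sum(1 for u in users
                       if any(d >= 7 for d in active_days[u])) / n

    flags = []
    if completion_rate < 0.20:
        flags.append("first completion below 20%: behavior fit is questionable")
    if d7_retention < 0.30:
        flags.append("D7 retention below 30%: validate alternative behaviors")
    return completion_rate, d7_retention, flags
```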
Move 2: Simplify Behavior to Increase Capability Fit
- Duolingo reduced language learning to 3-minute units (vs. traditional 30–60 minute lessons).
- Proposify reduced onboarding to single-action focus: “send proposal” (vs. choosing templates, configuring, exploring).
- M-PESA reduced money remittance to USSD menu taps (vs. bank account setup, account minimums, documents).
- Zoom reduced joining to one-click link (vs. remembering meeting IDs, navigating software).
Key principle: When users struggle with first behavior completion (high TTFB, low completion rate), simplify the behavior itself, not just the interface. Reduce steps, reduce decisions, reduce cognitive load.
Signal to watch: If TTFB exceeds 5–10 minutes for the target behavior, or if fewer than 30% of users complete a first instance, capability friction is too high. Redesign the behavior to be simpler or more modular (a TTFB check is sketched below).
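A minimal sketch of that TTFB check, assuming you can export per-user signup and first-completion timestamps; the 10-minute and 30% cut-offs simply echo the thresholds above and are not universal constants.

```python
# A minimal TTFB sketch; data shapes and cut-offs are illustrative.
from datetime import datetime
from statistics import median

def ttfb_check(signups, first_completions):
    """signups / first_completions: dicts of user_id -> datetime."""
    minutes = [(first_completions[u] - t).total_seconds() / 60
               for u, t in signups.items() if u in first_completions]
    completion_rate = len(minutes) / len(signups)
    median_ttfb = median(minutes) if minutes else float("inf")
    # Echoes the thresholds above: >10 min median TTFB or <30% first completion.
    too_much_friction = median_ttfb > 10 or completion_rate < 0.30
    return median_ttfb, completion_rate, too_much_friction

signup = {"u1": datetime(2024, 1, 1, 9, 0), "u2": datetime(2024, 1, 1, 9, 5)}
first = {"u1": datetime(2024, 1, 1, 9, 3)}
print(ttfb_check(signup, first))  # -> (3.0, 0.5, False): fast TTFB, 50% completion
```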
Move 3: Change Context to Increase Context Fit
- Peloton brought fitness to home context (vs. gym context).
- Google Photos leveraged cloud storage and automatic backup (matching user’s mental context: “I want my memories safe and accessible,” vs. “I want to organize files”).
- M-PESA leveraged distributed agent networks (matching user’s physical context: nearby, trusted, accessible).
- Spotify Discover Weekly leveraged weekly rhythm (matching user’s temporal context: work week, commute routine).
Key principle: When behavior has identity fit and motivation fit, but users aren’t adopting, context may be blocking. Rather than add friction-removing features, change the environment to enable natural triggering and repetition.
Signal to watch: If users report that timing, location, or tooling prevents behavior (qualitative), redesign context first before optimizing interface.
Move 4: Target a Different Actor When Direct Approach Fails
This is less common but powerful:
- Spain’s organ donation system: Instead of changing individual decision-makers, targeted hospital coordinators as the actor. Coordinators became behavioral enablers for the system.
Digital health platform redesign: Instead of expecting individual IT managers to self-train on complex integration, redesigned the behavior around webinars and support staff (shifting the actor from the individual to a supported group).
- Server training vs. patron education: In restaurant/bar settings, it’s often more effective to train servers (what menu items to recommend, how to present) than to educate patrons directly. Change the actor who influences the decision.
Key principle: When target user isn’t adopting, sometimes you need to work backward through the system to find a leverage point. Who influences the target user? Can you design behavior for that influence actor instead?
Signal to watch: If target users consistently fail to adopt despite low friction and high motivation, ask: who else is involved in this decision chain? Can we design for them instead?
Move 5: Measure via Durable Behavior, Not Engagement Metrics
- Instagram: Measured photo uploads, not logins.
- Slack: Measured message count and team retention, not DAU.
- Duolingo: Measured daily lesson completion and streaks, not app opens.
- Proposify: Measured proposal sends, not onboarding page views.
- Spotify: Measured listening time to Discover Weekly, not playlist deliveries.
Key principle: Engagement metrics (DAU, logins, page views) are cheap to game and tell you little about actual behavior adoption. Target behavior metrics (frequency, completion rate, retention) are harder to game and reveal real fit.
Signal to watch: If DAU is high but target behavior completion is low, or if retention is poor despite engagement spikes, re-examine whether measured behavior is the right behavior.
Why Failures Happened: A Taxonomy
Type 1: Conceptual Failure (Wrong Behavior Selected)
Root cause: Designers selected a behavior that doesn’t fit user context, identity, or motivation.
Examples: Quibi (passive video on mobile), Google+ (symmetric friending for asymmetric-discovery users), most gamification (points for unwanted behaviors).
Prevention: Validate behavior market fit before building. Use TTFB, first-completion rates, and retention metrics to test whether behavior is naturally desired.
Type 2: Design Failure (Right Behavior, Too Much Friction)
Root cause: Behavior is right, but solution adds friction instead of removing it.
Examples: Early habit trackers (requiring daily manual entry), Mint (batched monthly review instead of real-time), corporate wellness (gym memberships vs. home-based routines).
Prevention: Test TTFB and first-completion rates with early prototypes. If fewer than 60% of users complete the first behavior, friction is too high.
Type 3: Context Failure (Right Behavior, Wrong Environment)
Root cause: Behavior is right, but user’s actual environment doesn’t enable it.
Examples: Wellness programs for remote workers (assuming office context), financial apps (assuming moment of decision is when you’re planning budget, not when you’re spending).
Prevention: Test in actual user contexts, not controlled labs. Observe where, when, and how users attempt the behavior naturally.
Type 4: Scaling Failure (Works for Early Adopters, Not Mainstream)
Root cause: Early adopters have high motivation and fit; mainstream users don’t. As adoption spreads, behavior fit declines sharply.
Examples: Peer-coaching apps (works for motivated enthusiasts, fails for casual users), niche communities (high fit for enthusiasts, low fit for mass market).
Prevention: Measure behavior fit separately for early adopters (enthusiasts) and mainstream segments. Plan for declining fit as adoption spreads.
Type 5: Motivation Misconception (Extrinsic Rewards Crowd Out Intrinsic)
Root cause: Behavior is right, but solution adds incentives that undermine intrinsic motivation.
Examples: Gamification of learning (badges crowd out curiosity), pay-for-behavior programs (payment crowds out altruism), corporate wellness incentives (bonuses crowd out autonomy).
Prevention: Distinguish intrinsic motivation (desire to do behavior) from extrinsic (rewards for doing behavior). When intrinsic motivation exists, extrinsic rewards often backfire.
The Success Framework: A Practical Guide
Use this framework to evaluate any behavioral strategy initiative (a consolidated gate sketch follows Phase 3):
Phase 1: Behavior Selection (Problem → Behavior)
Questions to ask:
- What behavior are users already attempting (with difficulty)?
- What is the natural frequency of this behavior? (daily, weekly, episodic?)
- What is users’ current time-to-first-behavior (TTFB)? (seconds, minutes, hours?)
- What percentage of target users complete first instance naturally? (target: >20%)
- What is retention D7 and D30 without intervention? (target: >30% D7)
Red flags:
- Users rarely attempt behavior naturally.
- TTFB exceeds 10 minutes.
- Identity mismatch (behavior contradicts how users see themselves).
- Context doesn’t naturally support frequency needed.
Success signal:
- Users can articulate why they want to do this behavior.
- TTFB is seconds to minutes.
- Users perform behavior multiple times in first week.
- Behavior aligns with user’s identity and context.
Phase 2: Solution Design (Behavior → Solution)
Questions to ask:
- What are the current friction points in this behavior? (ability, motivation, environment?)
- What is the minimum viable behavior (the simplest version)?
- How can we remove 80% of friction while preserving the core behavior?
- What context or environment changes would enable natural triggering?
Red flags:
- Solution adds new steps instead of removing them.
- Relies on extrinsic incentives (badges, points) for motivation.
- Ignores actual user context or environment.
- Assumes behavior change will happen without environmental support.
Success signal:
- TTFB drops by 50%+ vs. status quo.
- Completion rate increases by 30%+ vs. baseline.
- Users report behavior feels natural, not forced.
- Retention metrics (D30, D90) show sustained engagement.
Phase 3: Validation (Solution → Market)
Questions to ask:
- Do actual users (not just enthusiasts) adopt this behavior at scale?
- Does behavior sustain beyond initial trial? (D30, D90 retention?)
- Are there unexpected context barriers we missed?
- Does the behavior create network effects or does it decay?
Red flags:
- High initial adoption but poor retention (D30 <20%).
- Behavior adoption varies wildly by segment (some adopt, others don’t).
- Context barriers emerge at scale that weren’t visible in small tests.
- Requires ongoing incentives or nudges to maintain adoption.
Success signal:
- Sustainable D30 retention >40% (higher for habit-forming behaviors).
- Behavior adoption is consistent across target segments.
- No new friction barriers emerge at scale.
- Users adopt without external incentives or nudges.
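For teams that want the framework as an explicit gate, here is a minimal consolidated sketch. It assumes the underlying metrics have already been computed (for example with helpers like those sketched earlier on this page); the metric keys and the pass/fail thresholds simply restate the targets listed in Phases 1–3 and are assumptions, not a standard API.

```python
# A minimal three-phase gate sketch; metric keys are illustrative.
def framework_gate(m):
    """m: dict of pre-computed metrics; returns a decision and the failed checks."""
    checks = {
        # Phase 1: behavior selection
        "natural first completion > 20%":  m["natural_completion"] > 0.20,
        "natural D7 retention > 30%":      m["natural_d7"] > 0.30,
        # Phase 2: solution design
        "TTFB down 50%+ vs. status quo":   m["ttfb_new"] <= 0.5 * m["ttfb_baseline"],
        "completion up 30%+ vs. baseline": m["completion_new"] >= 1.3 * m["completion_baseline"],
        # Phase 3: validation at scale
        "D30 retention > 40%":             m["d30"] > 0.40,
        "no ongoing incentives required":  not m["needs_incentives"],
    }
    failed = [name for name, ok in checks.items() if not ok]
    return ("proceed" if not failed else "revisit behavior selection", failed)
```

A failed Phase 1 check should send the team back to behavior selection, not into interface optimization; that is the whole point of ordering the gate this way.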
Common Anti-Patterns to Avoid
1. The Rational Actor Fallacy
Pattern: Designing for how people “should” behave (following stated preferences) instead of how they do behave (following actual context and motivation).
Example: Fitness app designed for “dedicated morning runners” when target market is “busy parents with fragmented time.” Behavior doesn’t match actual user context.
Fix: Observe actual behaviors first. Design for reality, not ideals.
2. The More-Is-Better Trap
Pattern: Adding features, behaviors, or incentives thinking it increases value when it actually increases friction.
Example: Gamification that adds badges, leaderboards, and daily challenges when core behavior (exercising) already has natural rewards.
Fix: Start minimal. Remove features rather than adding them. Each new feature should reduce friction by at least 30%.
3. The Early Success Bias
Pattern: Mistaking early adopter enthusiasm (high intrinsic motivation, matching context) for mainstream viability.
Example: Fitness community app works brilliantly with fitness enthusiasts but fails to generalize to casual exercisers.
Fix: Measure behavior fit separately for early adopters and target mainstream. Plan for declining fit as adoption spreads.
4. The Metric Gaming Problem
Pattern: Optimizing for easy-to-measure behaviors that don’t connect to real outcomes.
Example: Optimizing for app opens instead of target behavior completion, or for daily active users instead of behavior frequency.
Fix: Measure durable behaviors, not engagement proxies. Report Δ-B (behavior change in percentage points) with clear windows and denominators; a minimal computation sketch follows.
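As an illustration of that reporting convention, the sketch below computes Δ-B from explicit denominators and a stated window. The parameter names and example figures are hypothetical.

```python
# A minimal Δ-B sketch; names, window, and figures are illustrative.
def delta_b(eligible_before, did_before, eligible_after, did_after, window_days=30):
    """Δ-B in percentage points, reported with explicit denominators and window."""
    before = did_before / eligible_before
    after = did_after / eligible_after
    delta_pp = (after - before) * 100
    return (f"Δ-B = {delta_pp:+.1f} pp over a {window_days}-day window "
            f"(denominators: {eligible_before} before / {eligible_after} after)")

print(delta_b(10_000, 1_200, 10_000, 1_900))
# -> Δ-B = +7.0 pp over a 30-day window (denominators: 10000 before / 10000 after)
```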
5. The Context Blindness Error
Pattern: Ignoring environmental and social factors that override individual interventions.
Example: Wellness program assumes all employees have gym access and stable routines; misses that remote workers, shift workers, and caregivers face different constraints.
Fix: Test in actual user contexts. Observe environmental barriers qualitatively. Design for context, not for labs.
6. The Motivation Misconception
Pattern: Assuming extrinsic rewards (incentives, gamification) can drive intrinsically-unmotivated behaviors.
Example: Using points and badges to drive compliance training completion when users see no value in the training.
Fix: Validate intrinsic motivation first. If users don’t naturally want the behavior, no incentive system will sustain it.
The Path Forward: Behavioral Strategy as Discipline
What This Synthesis Reveals
- Behavior selection determines outcomes more than execution.
  - Right behavior + mediocre execution > wrong behavior + excellent execution.
  - Successful teams pivot early, fail cheaply, and validate before scaling.
- Fit across four dimensions predicts success.
  - Identity fit: Does the behavior align with user self-image?
  - Context fit: Does the environment enable natural triggering and frequency?
  - Capability fit: Is the behavior simple enough for users to complete?
  - Motivation fit: Do users intrinsically want to do this behavior?
- Nudges are marginal optimizations, not behavior creators.
  - Nudges add incremental optimization on top of strong fit.
  - Nudges are powerless against poor fit.
  - Systemic changes (context, ability, infrastructure) drive the majority of change.
- Observation beats assumption every time.
  - Start by watching what users are already doing.
  - Validate behavior-market fit before designing solutions.
  - Measure via durable behavior, not engagement metrics.
- Failures teach us where fit breaks down.
  - Conceptual failure: wrong behavior selected.
  - Design failure: right behavior, too much friction.
  - Context failure: right behavior, wrong environment.
  - Scaling failure: works for enthusiasts, not mainstream.
  - Motivation failure: extrinsic rewards undermine intrinsic motivation.
The Competitive Advantage
In an era where product execution is commoditized (design, engineering, and marketing have all become table stakes), behavior selection becomes the last defensible advantage.
Teams that:
- Observe users before designing
- Validate behavior market fit early
- Simplify behavior to remove friction
- Adapt context to enable natural adoption
- Measure durable outcomes
…will consistently outcompete teams that:
- Assume which behavior matters
- Build first, validate later
- Optimize interfaces instead of behaviors
- Rely on nudges and incentives
- Measure engagement proxies
This is Behavioral Strategy.