Behavioral Strategy: Research Assistant Folio

This folio orients you to our discipline, standards, and workflow so you can source, evaluate, and structure evidence and case studies that align with Behavioral Strategy as defined on this site.

1) Core POV (must internalize)

  • Fit‑first sequence: Problem → Behavior → Solution → Product (the Four‑Fit hierarchy).
  • Immediate objective: Achieve Behavior Market Fit (BMF) for a defined behavior in a defined segment, then validate Solution Market Fit and work toward Product Market Fit.
  • Field over slogans: Prioritize real‑world behaviors and contexts; lab effects are hypotheses until validated in the field.
  • Evidence discipline: Report Δ‑B (the change in behavior completion, in percentage points), Time‑to‑First‑Behavior (TTFB), and behavior retention (D30/D180) with denominators, windows, and cohort definitions.
  • Nudges/Defaults stance: Defaults are configuration; nudges often yield small absolute effects in practice (on the order of 1–2 percentage points) and do not substitute for fit or solution enablement.
  • Replication awareness: Treat broad “loss aversion” claims and “magic word” effects with skepticism; prefer large, field‑based evidence.

Key internal pages to read now:

  • Definition: /definition/
  • Discipline Charter: /charter/
  • Measurement Standards: /standards/measurement/
  • Why Nudges Fail: /evidence/why-nudges-fail/
  • Replication & Lab Limits: /evidence/replication-and-lab-limits/
  • Behavior Market Fit: /glossary/behavior-market-fit/

2) What we are collecting

We are building a practical, field‑validated evidence base:

  • Field experiments and evaluations (public sector, healthcare, enterprise, consumer) that report behavior change with clear denominators and windows.
  • Strong case studies (company pivots, design/program changes) where target behaviors, Δ‑B/TTFB, and retention are reported or reconstructible.
  • System‑level interventions whose results are often misattributed to “defaults” (e.g., organ donation) when process/infrastructure changes are the real drivers.

We prefer (in order):

  1) Pre‑registered field RCTs and quasi‑experiments with behavior endpoints.
  2) Operational program evaluations with behavior metrics (Δ‑B/TTFB/retention) and clear methods.
  3) High‑quality industry reports/S‑1s/retrospectives with extractable behavioral data.

We will explicitly label illustrative numbers when exact data are unavailable.

3) Evidence quality rubric

  • High: Field RCTs/quasi‑experiments; methods + behavior endpoints + denominators/windows; reproducible measures.
  • Working: Strong program evaluations or large‑scale observational studies with clear, behavior‑level metrics and limitations.
  • Speculative: Credible narrative/case with partial metrics; used sparingly and marked “illustrative.”

Avoid/flag: Attitudinal outcomes as endpoints; lab‑only effects; proxies (clicks/toggles) unless causally tied to a target behavior.

4) Extraction template (use consistently)

For each candidate study/case, extract the following:

Title:
Domain: [technology | healthcare | public | finance | education | enterprise]
Target behavior(s): [operationally defined]
Design & sample: [RCT/quasi/observational; n; segment; setting]
Window & denominator: [e.g., 30 days; all exposed vs eligible]
Outcomes (behavior):
  - Baseline completion: [%]  Post: [%]  Δ‑B: [pp]
  - TTFB (median): [mm:ss → mm:ss]
  - Retention (D30/D180 behavior): [% → %]
Confounders/limitations:
External validity (where it travels):
BSM limiting factors (if discernible):
Replication/related evidence:
Link(s):
Recommended Evidence Ledger confidence: [High | Working | Speculative]
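
A filled example may help calibrate the expected level of detail. Every value below is invented for illustration only; the case is hypothetical and describes no real study:

Title: Single‑session benefits recertification upload (hypothetical)
Domain: public
Target behavior(s): upload all required recertification documents in one session
Design & sample: field RCT; n = 12,400 eligible claimants; statewide portal
Window & denominator: 30 days; all eligible claimants notified
Outcomes (behavior):
  - Baseline completion: 41%  Post: 53%  Δ‑B: +12 pp
  - TTFB (median): 09:40 → 03:10
  - Retention (D30 behavior): 38% → 49%
Confounders/limitations: concurrent reminder campaign; partial rollout by county
External validity (where it travels): document‑upload flows in other benefits programs
BSM limiting factors (if discernible): device access; document readiness
Replication/related evidence: none identified yet
Link(s): [to be added]
Recommended Evidence Ledger confidence: Working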

5) Where to look (priority sources)

  • Field/Policy RCTs: J‑PAL, IPA, 3ie, AEA RCT Registry, UK What Works Centres, US & EU evaluation clearinghouses, government “nudge unit” repositories.
  • Healthcare: Cochrane Library, NICE evidence summaries, transplant journals, national transplant organizations (e.g., Spain’s ONT).
  • Economics/social science: NBER, SSRN, OSF, RePEc, Google Scholar (filter for field/real‑world evaluations).
  • Industry: S‑1 filings, public quarterly reports, product team blogs, founder/engineering retrospectives, reputable trade press.

Always prefer sources with transparent methods and behavior‑level endpoints.

6) How to add to the site (files, formats)

1) Propose a new Evidence Ledger entry in _data/evidence_ledger.yml using the existing structure (see the sketch below). Example entries to review: BS‑0003, BS‑0004.
2) If a study warrants a case page, add evidence/cases/<slug>.md using evidence/cases/_template.md as a model. Include a plain‑language summary, extraction, and links.
3) If a case belongs to a domain hub (e.g., healthcare, finance), add crosslinks at the bottom of the relevant application page.
4) Use the “Illustrative” label where numbers are inferred; tie to ledger IDs where verified.
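
For step 1, a ledger entry might look roughly like the sketch below. The field names are assumptions for illustration only; copy the actual structure from BS‑0003/BS‑0004 in _data/evidence_ledger.yml rather than from this sketch.

- id: BS-00XX                      # next free ID in the ledger
  title: "Single-session benefits recertification upload (hypothetical)"
  domain: public
  target_behavior: "upload all required recertification documents in one session"
  design: "field RCT"
  delta_b_pp: 12                   # illustrative value, not from a real study
  window_days: 30
  denominator: "all eligible claimants notified"
  confidence: Working
  sources:
    - "https://example.org/study"  # placeholder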

Minimal front matter for a case page:

---
layout: default
title: "<Short case title>"
parent: Evidence Cases
description: "One‑line summary with behavior focus"
inLanguage: "en"
dateModified: "YYYY-MM-DD"
permalink: /cases/<slug>/
---

7) Measurement & reporting standards (non‑negotiable)

Read /standards/measurement/, then apply these rules:

  • Always report denominators and windows.
  • Use Δ‑B in percentage points for behavior completion, not relative % alone.
  • Report TTFB (median) when relevant; report D30/D180 behavior retention for sustained behavior.
  • Label unsourced numbers as “illustrative”; link verified numbers to sources and (ideally) ledger IDs.
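
To make these rules concrete (numbers invented): if 41% of the eligible denominator completed the behavior at baseline and 53% completed it after the change, report:

delta_b: "+12 pp (41% → 53% of eligible; 30-day window)"
relative_lift: "≈29%"   # optional context only; never report relative lift alone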

Instrumentation glossary (from Standards):

  • exposure_shown → behavior_attempt_started → behavior_completed/abandoned (timestamps)
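
For instance (the record layout below is an assumption for illustration, not a prescribed schema; only the three event names come from the Standards page), one user's instrumented funnel might look like:

- user: u_123
  events:
    - {name: exposure_shown,           ts: "2025-01-10T09:00:05Z"}
    - {name: behavior_attempt_started, ts: "2025-01-10T09:01:12Z"}
    - {name: behavior_completed,       ts: "2025-01-10T09:04:42Z"}
  # Reading TTFB as exposure_shown → behavior_completed gives 04:37 for this user;
  # report the cohort median, and count completions against the full exposed/eligible denominator.
  # Confirm the exact TTFB definition in /standards/measurement/ before reporting.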

8) Anti‑nudge, pro‑fit guardrails

  • Never present defaults or opt‑out status as behavior change in themselves. Surface the system/process changes that enable the behavior (coordinators, scripts, capacity, path design).
  • Avoid “press‑release science”: link claims to methods; reject headline‑only evidence.
  • Prefer small, thoughtful field pilots with clear thresholds over generic “best practices.”

9) Priority hunt list (first 2–3 weeks)

Produce 10 high‑quality candidates with completed extractions, plus 3 anchor cases to publish:

  • Public sector: Service completion (e.g., benefits recertification single‑session upload; appointment adherence). Behavior: complete X in one session; Metrics: Δ‑B, TTFB.
  • Healthcare: Medication adherence workflows; screening completion; form‑factor changes (e.g., patches/sprays). Behavior: complete behavior within window; Metrics: Δ‑B, D30 adherence.
  • Technology/Consumer: Onboarding to first behavior; creation/engagement behaviors (e.g., photo post). Metrics: TTFB, Δ‑B, retention.
  • Finance: Automated savings escalations/instant withdrawal; goal‑based saving. Metrics: adoption Δ‑B, savings rate changes (pp), persistence.

For each, prefer field trials with transparent methods and behavior endpoints.

10) Deliverables & workflow

Week 1

  • Read the core pages listed above.
  • Submit a list of 10 candidate studies/cases with extraction templates completed (can be brief bullets).
  • Flag 3 potential anchors to publish; draft proposed ledger entries with recommended confidence level.

Week 2

  • Draft 2–3 case pages in evidence/cases/ (use the template). Label illustrative values and link verified numbers.
  • Add corresponding entries to _data/evidence_ledger.yml with sources.
  • Submit PR with a short summary of what to publish now vs. what to hold pending more data.

11) Examples already on site (review for style)

  • Ledger: BS‑0003 (nudge effect sizes), BS‑0004 (organ donation defaults)
  • Evidence summary: /evidence/why-nudges-fail/
  • Case style: /cases/instagram-pivot/

12) Templates available

  • Fit Scorecards: /templates/fit-scorecards.md
  • Behavior Calendar (CSV): /templates/behavior-calendar-template.csv
  • User Profile (BSM) Template (YAML): /templates/user-profile-template.yaml
  • Case template: evidence/cases/_template.md

13) Style & ethics notes

  • Use plain language; avoid jargon. Define terms when first used.
  • Disclose limitations and confounders. Do not oversell effects.
  • Respect privacy/ethics; cite responsibly; no scraping behind paywalls without permission.

Questions? Open an issue titled “RA: <short topic>” with links to sources and your extraction. Include a recommendation (publish now / needs verification / drop).