In just a decade, applied behavioral science has moved from having very few conceptual tools to an overwhelming array of frameworks and approaches.

Behavioral science frameworks help guide and standardize approaches to problem-solving. They clarify—both within the organization and to partners—the steps required for a project, as well as the potential inputs and outputs, stakeholder responsibilities, and timeframes. Many organizations also construct frameworks to consolidate and categorize the knowledge and expertise they typically bring to bear. The best of them are easy to understand and can help someone grapple with a problem and generate potential solutions.

In this article, we walk the reader through the major frameworks and tools available to applied behavioral scientists. There are distinct ‘types’ of tools designed for different purposes, and within each type of tool there are remarkable similarities. When we organize these tools by purpose, the overwhelming and messy literature becomes more manageable, understandable, and implementable.

The Process of Applying Behavioral Science

Around the world, consensus has emerged on the process of applying behavioral science in the field. While each group has its own terminology and emphasis, the core process is remarkably similar. Specifically, behavioral problem-solving follows a five-step process:

  • Step 1: Align on behaviors of interest.
  • Step 2: Understand and diagnose their causes.
  • Step 3: Design interventions to increase or decrease the targeted behavior.
  • Step 4: Assess impact and iterate.
  • Step 5: Share and scale the insights.

Change a few of the words, and this is simply a general framework for problem solving. What makes this process applied behavioral science, instead of generic problem-solving or design thinking, is three-fold:

  • It builds upon a specific body of empirical literature in behavioral science that encompasses the fields of behavioral economics, cognitive psychology, experimental economics, social psychology, anthropology, sociology and others. This literature provides empirical evidence on decision-making and behavioral mechanisms, and a framework for thinking about behavior as contextual, malleable, and social. That is essential to Step 2, “Understand and Diagnose Causes.”
  • It focuses on changing behavior, instead of merely understanding behavior or changing attitudes, beliefs, or knowledge. The discipline focuses only on the extent to which we can empirically drive behavior change. That is a core part of Step 3, the “Design” stage, and enriches Step 1, the “Align” stage.
  • Behavioral science is committed to empirical validation and testing. Behavior change is hard; many well-meaning efforts have no effect or even backfire. Hence, we need to “Assess and Iterate” and be thoughtful in our efforts to “Share and Scale.”

Other differences from a standard problem-solving approach occur in the specific techniques used at each step. For example, behavioral scientists place additional emphasis on defining and understanding key behaviors. We usually include some sort of discovery work to situate key behaviors in a broader system, rather than treating them as independent. In the assessment phase, behavioral practitioners might use prototypes, randomized controlled trials (RCTs), and other techniques and methods to better understand the effects of a new idea.

The First Tool in the Toolkit: Process Frameworks

Given this understanding of what behavioral scientists do, the first resource that a behavioral scientist has in their toolkit is a process framework for behavioral problem-solving. Almost every other tool is designed to help within one of the stages of the process.

The following are example process frameworks for behavioral problem-solving:

  • AUDAS from Busara. Designed for international development and global impact, the acronym stands for Align, Understand, Design, Assess, and Share.
  • DDDT from Ideas42. Designed for consulting, the acronym stands for Define, Diagnose, Design, Test.[1]
  • 3Bs from Irrational Labs. Focused on design and created for product development, the acronym stands for Behavior, Barriers, and Benefits.[2]
  • IDP from Matt Wallaert. Designed for product development, this tool focuses on behavioral strategy, insights, design, and impact evaluation. The acronym stands for Intervention Design Process (Wallaert, 2019).[3]
  • BASIC from OECD. This tool is focused on policy, and the acronym stands for Behavior, Analysis, Strategy, Intervention, and Change (Hansen, 2018).
  • TESTS from The Behavioral Insights Team. This tool is a general framework with a particular focus on policy. The acronym stands for Target, Explore, Solution, Trial, Scale.[4]
  • DECIDE, from Designing for Behavior Change 2nd Edition. Designed for applied research and product development, the acronym stands for Define the Problem, Explore the Context, Craft the Intervention, Implement the Solution, Determine the Impact, Evaluate Next Steps (Wendel, 2020).

The words differ and the acronyms are cute, but in the end they all provide a blueprint for behavioral problem-solving. That might lead you to wonder why we need a framework at all. Most mature organizations have process frameworks for how they solve a problem, specific to their field and the nuances of the problems they want to solve. Since our community tends to solve similar problems, our frameworks look alike.

Frameworks provide a useful starting point for planning a project and understanding the steps involved. Even with them, organizations still rely on the expertise of practitioners to guide the process and adapt as roadblocks arise. As the authors of such frameworks ourselves, we can honestly say that it doesn’t matter which framework you use, as long as you stick with it, are clear on what it does, and adapt it to your needs.

Tools to Target or Align the Behavioral Process

Now that we have a sense of the process frameworks, we’ll review the other tools in an applied behavioral science toolkit. At each step in the process, the practitioner community has developed a set of tools that help us execute on these stages effectively. Different groups of practitioners have different tools that define their ‘special contribution’ within this problem-solving framework, but the framework and many aspects of the tools are quite similar. We’ll start with the “Align” stage.

In this stage, the behavioral community draws heavily on the design field’s long-standing practice of writing a design brief. We tend to call it a ‘behavioral brief’, but the idea is the same: write down what we’re trying to accomplish (or what a client is asking us to accomplish) and any key constraints or considerations. This is done before any actual behavioral design work is conducted and is an essential tool for getting stakeholders on the same page.

Here are some specific tools that practitioners use:

  • Busara’s Applied Behavioral Science Brief includes the scope of the project, the target audience and behavior change, and key details of who does what and how on the project. For example, a recent project at Busara sought to encourage Bajan parents (the agents of change) to use local produce (the specific behavior change) in the meals they prepare for their kids (the target population) in order to reduce non-communicable diseases like diabetes and benefit the local economy (the real-world outcome). In our briefs we then add notes on planned measurement approaches (e.g., an incentive-aligned RCT) and the constraints (e.g., digital-only contact via WhatsApp).
  • Matt Wallaert’s Behavioral Strategy process, in particular his “behavioral statement”: When [population] wants to [motivation], and they [limitations], they will [behavior] (as measured by [data]).
  • Designing for Behavior Change’s Actor-Action-Outcome. This tool works by identifying the Actor (who is going to do something differently to drive that outcome), the action (what is the ‘big thing’ they do differently?), and the outcome (the observable real-world impact that the process should deliver).
  • Rajesh Nerlikar and Prodify’s Behavioral Hypothesis. This tool helps identify how helping an actor start or stop a specific action will create specific outcomes.
  • The BIT’s Target Statements, which help drive a specific policy objective.

When there are multiple possible target behaviors (or policies), groups like the OECD and BIT have prioritization tools to help identify which to pursue. The OECD’s “priority filter questionnaire” asks about each behavior in terms of its Importance, Ethics, Impact, Feasibility, Data Access, and Frequency.

Tools to Understand and Diagnose Causes

Once we understand what we want to accomplish, we start by trying to understand why the desired behavior isn’t happening already. What is stopping the person from taking a positive action of interest, or why are they currently taking a negative action? Practitioners (and developers of frameworks) sometimes jump straight to the next stage: what can we as applied behavioral scientists do to change that behavior—i.e., how do we design interventions? However, diagnosis and design are conceptually distinct stages, and we have distinct tools designed for each.

Tools to understand and diagnose causes can be divided into ‘macro models’ and ‘micro-obstacle’ tools. Macro models tend to look at behavior in a single, consistent way, assuming that all behavior follows a similar trajectory. Most of these macro models are actually long-standing models of human behavior from across the social sciences, and not unique to behavioral science at all. For example:

  • Self-Determination Theory (Deci and Ryan, 2012)
  • Health Belief Model (HBM) from the US Public Health Service.
  • Theory of Planned Behavior (TPB) (Ajzen, 1991).
  • Transtheoretical Model, aka Stages of Change (Prochaska & Velicer, 1997).

Micro-obstacle tools take a different approach by helping us find the micro-causes of a behavior via a four-step process:

  • Step 1: Gather data about what is currently occurring.
  • Step 2: Map out the specific sequence of actions.
  • Step 3: Find which step(s) are the problem.
  • Step 4: Identify the behavioral cause of that problem.

These tools then build on a key insight in behavioral science: small obstacles in a moment can get in the way of action, even if the macro-environment is conducive. Thus, they aim to understand the specific moment and situation in which things go awry.[5]

The second step, mapping the sequence, usually goes under the title of “behavioral maps.” Here are a few examples you can use:

  • Behavioral Mapping Case Study & Cheat Sheet from the Center for Advanced Hindsight at Duke University.[6]
  • Behavioral Profile from the Manoff Group.[7]
  • Target Behavioral Funnel from Ingenious Behavior.[8]
  • Key Behavior Workshops are sessions with stakeholders to understand the decision-making environment and identify candidate behaviors for interventions.
  • The term “journey map” is also used to describe this process (e.g. BIT TESTS framework), though that can be a bit confusing. The term is already used in the design community for a more general diagram that shows the sequence of stages a person goes through along a process of interest.

The step of identifying the problematic moment can be quantitative, examining data that shows where in the funnel people drop off, or qualitative, simply asking people which micro-behavior is a problem. Once one has prioritized a key moment for intervention, obstacle diagnosis tools help the practitioner determine why that moment is a problem.
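As a minimal sketch of the quantitative route, with entirely hypothetical funnel numbers, one could flag the transition with the largest drop-off:

```python
# Hypothetical funnel data: how many people completed each sequential step.
funnel = [
    ("Opened the app", 1000),
    ("Viewed the savings page", 620),
    ("Started enrollment", 540),
    ("Completed enrollment", 210),
]

# Compute the drop-off rate at each transition to flag the weakest moment.
drop_offs = []
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    drop_offs.append((step, 1 - n / prev_n))

worst_step, worst_rate = max(drop_offs, key=lambda pair: pair[1])
print(f"Largest drop-off: '{worst_step}' loses {worst_rate:.0%} of people")
```

In practice the counts would come from analytics logs or field observation; the point is simply to locate the weakest moment before diagnosing why it fails.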

Here are a few additional tools you can use:

  • COM-B: Capability + Opportunity + Motivation -> Behavior. This is the most widely used framework for identifying obstacles (Michie, Van Stralen, & West, 2011).[9]
  • CREATE: Cue, Reaction, Evaluation, Ability, Timing, Experience.[10]
  • Barriers and Levers: Busara’s framework for identifying specific types of psychological and environmental factors (barriers) that block individuals, and the resources (levers) that can encourage the behavior.
  • The OECD’s ABCD Framework: Attention, Belief Formation, Choice and Determination.[11]
  • Pressure Mapping, which is part of the Intervention Design Process from Matt Wallaert.[12]

Applying these frameworks to a target behavior allows us to identify gaps that might prevent someone from taking action. For example, the COM-B model could reveal that while a person has the Capability (they know what to do) and Opportunity (they have time to do it), they lack sufficient Motivation (the desire to take action) and the desired behavior does not occur.
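A COM-B style gap check can be sketched in a few lines. The component scores and threshold below are purely illustrative assumptions, not part of the published framework:

```python
# Hypothetical COM-B assessment for one target behavior, scored 0-1
# from interviews or survey data (these numbers are illustrative only).
com_b = {"Capability": 0.9, "Opportunity": 0.8, "Motivation": 0.3}

THRESHOLD = 0.5  # assumed cutoff for flagging a component as a gap
gaps = [component for component, score in com_b.items() if score < THRESHOLD]
print("Components to target:", gaps)  # here, only Motivation falls short
```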

These same tools that help the practitioner determine why a particular moment is a problem can also be used to audit a potential user experience and identify potential behavioral obstacles. This process, not surprisingly, is known as a “behavioral audit.”

Tools to Design Interventions

With an understanding of why there is a behavioral problem, the next set of tools helps select interventions. Obstacle diagnosis tools often come with an intervention-selection tool as part of the package, for example:

  • The Behavior Change Wheel, linked to COM-B (Michie, Van Stralen, & West, 2011).
  • Busara’s mapping of interventions to specific structural and psychological Barriers and Levers.
  • The table of CREATE Interventions, linked to CREATE (Wendel, 2020).
  • The menu of strategies within the OECD’s BASIC framework (mapped to the same ABCD obstacle model used in the prior step)[13]

What makes those tools special is that they are directly tied to specific behavioral barriers. A variety of other tools are available that more generally seek to support a new behavior but aren’t directly tied to a specific cause. These more general, less targeted, tools include:

  • EAST: Easy, Attractive, Social, Timely. From the UK BIT (BIT, 2014).
  • MAP: Motivation—Ability—Prompt, the Fogg Behavior Model from BJ Fogg (Fogg, 2019).
  • CAR: Cue—Action—Reward, a model of habit formation.
  • Hook Model: Trigger an action, provide a (variable) reward, then increase investment (Eyal, 2014).[14]

No matter which framework one uses, there will often be more intervention ideas than can be implemented in practice. Designing for Behavior Change, the BIT’s TESTS framework, and Busara’s AUDAS each provide tools to prioritize intervention ideas. These prioritization tools are also generally quite similar: they ask a series of questions to assess which interventions are likely to be most cost-effective and ethical.
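Such a prioritization pass can be approximated as a weighted scoring exercise. The questions, weights, and ratings below are hypothetical, but the shape matches what these tools ask:

```python
# Illustrative prioritization: rate each candidate intervention on a few
# of the questions such tools pose, then rank by a weighted sum.
# Weights and 1-5 ratings here are invented for the sketch.
weights = {"impact": 0.4, "feasibility": 0.3, "cost_savings": 0.2, "ethics": 0.1}

candidates = {
    "SMS reminder": {"impact": 3, "feasibility": 5, "cost_savings": 5, "ethics": 5},
    "Default enrollment": {"impact": 5, "feasibility": 2, "cost_savings": 3, "ethics": 4},
}

def score(ratings):
    """Weighted sum of an intervention's ratings."""
    return sum(weights[q] * ratings[q] for q in weights)

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print("Priority order:", ranked)
```

Real prioritization tools add qualitative judgment and explicit ethics checks on top of any such score; the arithmetic only structures the conversation.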

Tools to Assess Impact

Practically speaking, behavioral science practitioners assess impact just like any other practitioner would, with techniques ranging from randomized controlled trials (experimental) to difference-in-differences models (statistical) to baseline-endline comparisons (algebraic) to embedded observation and process tracing (qualitative).

A wealth of literature exists on these research methods, from books on the practicalities of field experiments (e.g., Gerber and Green) and A/B testing to more theoretically grounded analyses of causality (e.g., Pearl). Similarly, a wealth of tools helps practitioners with the following:

  • Tools to design and implement experiments. These are medium-specific, and good professional packages can be found for email experiments, website testing, app testing, and non-digital field work.
  • Tools to analyze the results of experiments or other causal data, often general-purpose statistical tools like R, Python, Stata, and such, and their various packages. For example, at Busara, there is an internal code repository to help with the impact assessment process, with samples for R and Stata.
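For instance, a back-of-the-envelope difference-in-differences estimate needs nothing more than pre/post outcomes for the treated and comparison groups; the rates below are invented for illustration:

```python
# Minimal difference-in-differences estimate: the treatment effect is the
# change in the treated group minus the change in the comparison group
# over the same period (illustrative enrollment rates).
treated_pre, treated_post = 0.40, 0.55
control_pre, control_post = 0.41, 0.46

did = (treated_post - treated_pre) - (control_post - control_pre)
print(f"Difference-in-differences estimate: {did:.2f}")
```

A production analysis would of course run this as a regression with standard errors in R, Python, or Stata, but the estimator itself is this subtraction.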

These techniques are rarely specialized for “behavioral interventions” versus other types of impact assessment. The field is well developed and has existed for much longer than applied behavioral science. Behavioral science practitioners rightfully draw upon these existing lessons and tools.

Tools to Iterate, Share and Scale Up

Guidance on how, practically, to share and scale up behavioral science research is largely underdeveloped: there are few ready-to-use tools for this process. Sharing and scaling is often just a command in current frameworks, as in: “you should really do this.” The BIT provides some practical guidance with its SCALE acronym: Sponsorship, Cost/benefit, Accountability, Logistics, Evidence. Overall, however, this is an area that would benefit from new thinking and codification.

A recent addition to the scholarship in this space, John List’s The Voltage Effect, offers advice and can serve as a starting point for practitioners. Some considerations when scaling include:

  • False Positives
    A false positive is an initial signal that an idea is successful, but that turns out to be incorrect when scaled up. For example, a small-scale trial of a new program might show promising results, but when the program is implemented at a larger scale, the results might not hold up. To avoid false positives, replicate successful trials multiple times to ensure that the results are robust and not driven by chance.

  • Representativeness of the Population
    The success of an idea often depends on the specific population it is designed for. For example, a program that works well for wealthy families might not work as well for lower-income households. To avoid scaling an idea that won't work for everyone, ensure that test groups at smaller scales reflect the larger population you're aiming to reach, and adjust your expectations or approach if necessary.

  • Representativeness of the Situation
    The success of an idea can depend on specific circumstances that might not be easily replicable at larger scales. For example, a program that relies heavily on a few talented individuals might not be scalable if those individuals are difficult to find or retain. To avoid this, identify the core drivers of success and make sure they can be replicated in different situations and with different people.

  • Unintended consequences and misaligned incentives
    An idea may be piloted successfully but fail to scale due to misaligned incentives and unintended consequences. Consider, for example, an organization that has a customer experience team, a collections team, and a leader tasked with increasing revenue. The customer experience team may recognize that the collections experience is quite negative for customers. In working to improve it, a behavioral scientist develops a successful intervention to reduce the number of customers going into debt collection. It may be a success for two parties, but the leader in charge of revenue may shelve the idea because it impacts the money collected via fees. To mitigate this issue, consider the unintended consequences of scaling and work to ensure alignment across the system as part of a pilot.

Conclusion

Throughout this article, we’ve discussed applied behavioral science as problem-solving. That is the dominant paradigm, and there are a wealth of tools and frameworks available for behavioral problem solving.

However, some alternatives are starting to appear. The most exciting work is likely on systemic or structural behavioral science. We can apply behavioral insights to better understand a dynamic system of interacting individuals, organizations and resources, and the opportunities for change.

The furthest along of these comes from Ruth Schmidt at the Institute of Design, and her work on “choice infrastructure” and “behavioral brittleness.” At this point, the non-problem-solving approaches to applied behavioral science are still in their infancy. We hope that soon enough our community will develop and share tools and resources like the well-established problem-solving resources described here.

References

  1. Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211.
  2. BIT. (2014). EAST: Four simple ways to apply behavioural insights. Behavioural Insight Team, London.
  3. Deci, E. L., & Ryan, R. M. (2012). Self-determination theory. Handbook of theories of social psychology, 1(20), 416-436.
  4. Eyal, N. (2014). Hooked: How to build habit-forming products. Penguin.
  5. Fogg, B. J. (2019). Tiny habits: The small changes that change everything. Eamon Dolan Books.
  6. Hansen, P. G. (2018). BASIC toolkit and ethical guidelines for policy makers: Draft for consultation. Western Cape Government-OECD Behavioural Insights Conference, Cape Town, South Africa. Available at: http://www.oecd.org/gov/regulatory-policy/BASIC-Toolkit-Draft-for-Consultation.pdf
  7. Michie, S., Van Stralen, M. M., & West, R. (2011). The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science, 6(1), 1-12.
  8. Prochaska, J. O., & Velicer, W. F. (1997). The transtheoretical model of health behavior change. American Journal of Health Promotion, 12(1), 38-48.
  9. Wallaert, M. (2019). Start at the End: How to Build Products That Create Change. Penguin.
  10. Wendel, S. (2020). Designing for behavior change: Applying Psychology and Behavioral Economics. O'Reilly Media.

  1. More information available at: https://www.ideas42.org/wp-content/uploads/2019/10/I42-1152_ChangingBehaviorPaper_3-FINAL.pdf ↩︎

  2. More information available at: https://irrationallabs.com/content/uploads/2022/07/3B-Framework-2022.pdf ↩︎

  3. Available also at: https://mattwallaert.com/startattheend/ ↩︎

  4. More information available at: https://www.bi.team/wp-content/uploads/2022/11/BIT-Handbook-How-to-run-simple-BI-projects.pdf ↩︎

  5. See the worksheet at https://www.bi.team/wp-content/uploads/2022/11/BIT-TESTS-worksheets.pdf ↩︎

  6. Access the case study at advanced-hindsight.com/blog/introducing-the-behavioral-mapping-case-study-cheat-sheet/ ↩︎

  7. Access here: thinkbigonline.org/behavior_profile_p ↩︎

  8. See the Behavioral Profile explanation at www.behavioraldesignmodels.com/about ↩︎

  9. Explore the Barrier Identification Tool at https://www.bitbarriertool.com ↩︎

  10. Free workbook available at http://behavioraltechnology.co/workbook ↩︎

  11. Framework document available at https://www.oecd.org/gov/regulatory-policy/BASIC-Toolkit-Draft-for-Consultation.pdf ↩︎

  12. Available at https://mattwallaert.com/startattheend/ ↩︎

  13. Available at: https://www.oecd.org/gov/regulatory-policy/BASIC-Toolkit-Draft-for-Consultation.pdf ↩︎

  14. Further explanation available at: https://www.nirandfar.com/how-to-manufacture-desire/ ↩︎