The BeSci Practitioners Group recently explored the intersection of AI and behavioral science, considering how artificial intelligence is reshaping the way we understand and influence human behavior.

Our panel was moderated by Scott Young, hosted by Grace Lewallen, and featured Deelan Maru (BIT); James Healy (The Behaviour Boutique); and Samuel Salzer (Behavioral AI Institute). Drawing on their experience spanning public policy, organizational consulting, and behavioral design, the panelists offered rich insights into the opportunities and risks of integrating AI into behavioral science practice. Seven key themes emerged that can help practitioners navigate this rapidly evolving space:

  1. Recognize Common Origins of AI and Behavioral Science

While AI is often treated as synonymous with technology, its foundations in social science are less widely recognized. Early AI pioneers drew directly from psychology and cognitive science, using models of perception, memory, and reasoning to guide their algorithms. Understanding this shared lineage helps us see AI as an extension of our long-standing effort to model human decision-making. By remembering this link, behavioral scientists can bring a critical perspective to AI efforts: recognizing where machines replicate the strengths of our thinking, and where they risk magnifying human error and misjudgment at scale.

  2. Use AI to Enhance (Not Replace) Human Insight

Wielded correctly, as a tool to strengthen human judgment rather than substitute for it, AI can make behavioral science more powerful. It can accelerate research, surface patterns across massive datasets, and enable rapid testing of interventions that once took months to refine. Tools that summarize literature, synthesize qualitative feedback, or model behavioral outcomes can expand the capacity of behavioral teams and free up time for creative and strategic thinking. Used uncritically, however, these same tools risk flattening nuance and oversimplifying complex motivations. AI's real promise lies in augmenting human sense-making: helping practitioners see more, test faster, and scale what works, while humans remain at the core of interpretation and application.

  3. Avoid Automating Inefficiency

There's a growing temptation to deploy AI wherever possible: to speed up reports, draft communications, or generate ideas on command. The danger is that this supercharges old habits instead of inspiring new ones. Automating an inefficient system doesn't make it better; it just makes it faster at being wrong. True innovation comes from asking what should be redesigned, so practitioners should pause to ask, "What problem are we actually solving?" before introducing AI into workflows. The goal should be to eliminate low-value tasks altogether, not just automate them. Otherwise, we risk a future in which advanced technology amplifies organizational noise rather than insight.

  4. Preserve Cognitive Friction

Behavioral science has consistently shown that while effortless experiences aid convenience, they don't always lead to the best decisions. In our relationship with AI, this insight becomes critical. When systems make decisions or generate answers instantly, they remove the small moments of friction that prompt us to reflect, question, or learn. We should therefore build AI tools that encourage users to think alongside the machine, not defer to it. This might involve prompting reflection, showing confidence levels, or explaining reasoning in human terms. As AI becomes a collaborator in decision-making, good design should preserve just enough friction to keep humans alert, engaged, and accountable for their choices.
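
To make the idea of designed friction concrete, here is a minimal sketch in Python. It assumes a hypothetical ask_model call that returns both an answer and a self-reported confidence score (neither reflects any particular product's API): the user commits to their own take before the model's answer is revealed, and confidence is surfaced in plain language.

```python
# A minimal sketch of "designed friction". All names here (ask_model, Answer)
# are hypothetical stand-ins, not any particular library's API.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str          # the model's response
    confidence: float  # self-reported confidence in [0, 1]

def ask_model(question: str) -> Answer:
    # Placeholder for a real model call; returns a canned response here.
    return Answer(text="Option B is likely the better choice.", confidence=0.72)

def answer_with_friction(question: str) -> None:
    # 1. Prompt the user to reflect before the machine answers.
    user_guess = input(f"{question}\nYour own take first: ")

    answer = ask_model(question)

    # 2. Surface confidence in plain language rather than a bare score.
    level = "fairly confident" if answer.confidence >= 0.7 else "uncertain"
    print(f"\nModel ({level}, {answer.confidence:.0%}): {answer.text}")

    # 3. Close the loop: invite comparison instead of silent deference.
    print(f"You said: {user_guess!r}. Where do you and the model disagree?")

if __name__ == "__main__":
    answer_with_friction("Should we roll out intervention A or B first?")
```

The key design choice is the ordering: eliciting the user's view before revealing the model's preserves independent judgment, a pattern designers might adapt to their own context rather than adopt wholesale.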

  5. Anticipate Emotional and Social Consequences

AI is not only a cognitive force but an emotional one. Systems designed to respond empathically or mirror human tone can shape how we feel, not just what we do. Behavioral science helps illuminate the subtle effects of these interactions, such as how constant affirmation can provide comfort while insincere responses erode trust. In everyday use, AI tools that are endlessly sycophantic risk distorting how people relate to others in the real world: they can train us to expect frictionless relationships, ones without disagreement, effort, or repair. Practitioners can help anticipate these social and emotional effects, ensuring that technologies designed to support human wellbeing don't unintentionally undermine it.

  6. Embed Behavioral Principles in AI Development

AI systems are only as human as the data, objectives, and constraints that we design into them. Behavioral science can play a central role in making those design choices more reflective of real human needs and values. Incorporating behavioral insights can improve not just what AI optimizes for, but how it interacts with people. For instance:

  • Applying decision science to improve how uncertainty and risk are communicated (illustrated in the sketch below)
  • Using behavioral design to craft interfaces that promote reflection (rather than impulse)
  • Embedding prosocial and ethical considerations early (rather than as afterthoughts)

By working directly with data scientists, engineers, and policymakers, behavioral experts can help ensure that AI development is consistently human-centered.
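
As one hedged illustration of the first bullet above, the sketch below reframes a probability as a natural frequency, a format that decision research (e.g., Gigerenzer's work on risk communication) suggests people parse more accurately than bare percentages. The function name, thresholds, and wording are illustrative assumptions, not validated copy.

```python
# A minimal sketch of natural-frequency risk framing: "about 18 out of 100
# people like you" tends to be understood better than "an 18% probability".
def frame_risk(probability: float, event: str,
               reference_class: str = "people like you") -> str:
    """Translate a raw probability into a natural-frequency statement."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")

    # Pick a denominator that keeps the numerator a readable whole number.
    denominator = 100 if probability >= 0.01 else 1000
    numerator = round(probability * denominator)

    return (f"About {numerator} out of {denominator} {reference_class} "
            f"are expected to {event}.")

print(frame_risk(0.18, "experience this side effect"))
# -> About 18 out of 100 people like you are expected to experience this side effect.
```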

  7. Build Ethical Guardrails Through Behavioral Design

Even well-intentioned AI systems can produce unintended consequences. Behavioral design offers concrete methods to anticipate and mitigate those effects. This might mean adding feedback loops to catch bias early, testing how users interpret AI-generated content, or designing transparent opt-in processes that promote informed consent. Such interventions turn abstract ethical considerations into tangible practice by recognizing that people's understanding of fairness, accountability, and agency often depends on how choices are framed and communicated. By applying behavioral thinking, teams can design systems that align with how people actually perceive information, gauge harm, and build trust.
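
To make one of these guardrails tangible, here is a minimal sketch of a bias-catching feedback loop: it flags any group whose rate of positive AI decisions falls well below the best-served group's, using the four-fifths rule as a screening heuristic. The data, threshold, and function name are invented for illustration and are no substitute for a full fairness audit.

```python
# A minimal sketch of a disparity-flagging feedback loop. The four-fifths rule
# used here is a common screening heuristic, not a complete fairness audit,
# and the sample data is invented for illustration.
from collections import defaultdict

def disparity_check(decisions: list[tuple[str, bool]],
                    threshold: float = 0.8) -> list[str]:
    """Flag groups whose positive-decision rate is below threshold x the best group's."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += approved  # True counts as 1

    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r < threshold * best]

# Invented example data: (group label, whether the system approved the case).
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparity_check(sample))  # -> ['B'], since B's rate is under 80% of A's
```

Run periodically over live decisions, a check like this turns "monitor for bias" from an abstract aspiration into a routine, inspectable step.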


Closing Thoughts

Artificial intelligence offers behavioral science an unprecedented opportunity to integrate our understanding of human behavior into the digital systems that increasingly shape us. Yet that opportunity comes with responsibility. If AI simply mirrors human thinking, then our biases, blind spots, and misjudgments will be magnified and reflected back at us.

The challenge for behavioral science practitioners is to guide and shape that reflection with care. This includes embedding evidence, empathy, and ethics into the algorithms that increasingly define our world. In doing so, we can reaffirm and reinforce what behavioral science has always been about: applying our understanding of humans to help shape better outcomes. If employed with thought and care, AI provides us with a powerful new collaborator in that effort.