Something major happened in Sydney recently. The Sydney Behavioral Science Summit was not just another academic conference; it was an ambitious attempt to unite the very best behavioral science minds in the region, from world-class researchers at leading Australian universities to practitioners in the country's most influential organizations, and connect them with Bescy's growing global network.
The energy in the room reflected exactly that ambition.
The Summit's focus on the intersection of behavioral science and AI could not have been more timely. As organizations across Australia grapple with how to harness the potential of artificial intelligence, the need for behavioral scientists at the table has never been clearer. The day's discussions produced insights that were both thought-provoking and practically useful. Speakers and participants challenged each other to move beyond simplistic notions of ‘AI adoption,’ and instead ask the harder, more important questions of ‘deep adoption’ and desired outcomes. The group explored the conditions under which AI augments, rather than replaces, human capability, the role of trust and psychological safety in driving meaningful adoption, and the risks of cognitive surrender, in a world where AI is becoming increasingly pervasive.
These are not abstract questions. They are the challenges that Australian organizations are wrestling with right now—and they are exactly the issues that behavioral science is uniquely positioned to help answer.
6 Presentations from Academics & Practitioners
The day was highlighted by six thought-provoking presentations from both academic leaders and active practitioners.
- Dr. Jason Collins of the University of Technology Sydney spoke about how companies need to build trust in AI tools, starting with their processes for disclosure. After sharing a negative example—in which a company required 337 words to explain a financial product—he argued that behavioral scientists should work alongside legal and communications teams to promote cognitive simplicity and help ensure that customers understand both the products they are buying and the AI systems they are engaging with.
- Dr. Alex Gyani of The Behavioral Insights Team (BIT) complemented Collins’ presentation by sharing BIT’s use of familiar frameworks like COM-B to help identify and overcome barriers to AI adoption within organizations. Importantly, he emphasized the distinction between “deep” (i.e., embedded) and “shallow” (i.e., ad hoc and rote) adoption of AI, with the former dependent upon organizations re-engineering work processes to utilize AI tools selectively and at scale.
- Dr. Juliette Tobias-Webb picked up on the theme of organizational transformation, sharing a positive vision of human-machine teaming. She spoke about how numerous organizations are revisiting their business processes and customer journeys with a focus on finding the right balance of human and machine input and interface. For example, she shared how Australian rules football is using AI in refereeing decisions to drive greater consistency in in-game penalty and kick calls, while still allowing for human oversight.
- Professor Ben Newell of the University of New South Wales also addressed the challenge of finding the right balance between humans and machines—while algorithms outperform humans at certain tasks, people are often suspicious of fully AI-based solutions (a phenomenon known as "algorithm aversion"). He emphasized the challenge of finding the sweet spot on the continuum between simplicity and complexity in driving adoption, and between reliance and assistance in promoting appropriate use of AI tools. Newell also spoke more broadly about the need to acknowledge and embrace uncertainty, given the enormous pace of technological change at the moment.
- Mallory Avery from Monash University explored how technology is affecting recruiting and workforce outcomes, sharing numerous studies on the matter. While AI can remove bias in initial screening—which could help level the playing field for women and racial minorities in job hunting—Avery showed that mixed patterns and outcomes remain within the workplace. Specifically, women are more likely than men to be perceived as "cheating" when they use AI at work, so they are less likely to employ these tools. Avery also confirmed that junior and entry-level employees are at greatest risk of job loss from technology, while noting that younger people, though often early adopters, may lack the context to use AI in selective or strategic ways.
- James Healy of The Behavior Boutique delivered an earnest plea that behavioral scientists play an important role as a counterbalance to corporate and political influence. After sharing numerous negative examples (of technology being employed in a misguided and manipulative manner), he spoke of the need to ‘elevate the human’ in the development and deployment of AI—and to help ensure that tools are serving the best interests of their users.
We should also note that the day concluded with a beautiful tribute to Sam Tatam, whom we sadly lost from our community in 2025. The tribute was written and read by his former Ogilvy colleague Dan Bennett. Sam was a world-leading practitioner in our industry, the author of Evolutionary Ideas, a true Aussie, and a great dad, husband, friend, and colleague.
5 Discussion Themes
As one might expect, these presentations sparked a wide variety of discussion, both across the full group and within smaller breakout groups. While it’s challenging to boil it down, here are several major themes that emerged:
The need for a tighter definition
Throughout the day, it became apparent that even experts were using the term "AI" quite loosely to describe a wide range of different technologies, applications, and challenges. Just as behavioral scientists pride themselves on defining desired changes quite specifically, we need to apply a similar mindset to the myriad issues tied to managing technological change. In practice, that means adopting terminology that distinguishes between different types of AI technologies, their applications, and the behavioral challenges each creates.
The importance of selective, strategic AI adoption
Because it is precisely the kind of behavior change challenge we're trained to address, we initially gravitated toward the question of how to increase overall AI adoption at the personal and organizational level. However, one of the most significant moments of the day came when we collectively challenged ourselves to step back from that framing.
Rather than treating adoption as the goal in itself, we began asking a more fundamental question: What outcomes are we actually trying to achieve, and how can AI help us get there?
In Nathalie Spencer’s words, we need to move beyond “adoption for adoption’s sake” and instead focus on using AI-powered tools selectively to drive better customer and commercial outcomes. To this end, Zarak Khan of Bescy offered a helpful three-tiered framework, which distinguished between strategic applications, tactical workflow efficiencies, and new capabilities ("what can I do now that I really couldn't do before?").
The centrality of trust
Many speakers circled back to the issue of trust as a foundational element for promoting engagement, among both users who may be intimidated by new technology and employees who may fear that their jobs are at risk. Encouragingly, we heard about effective strategies to promote psychological safety, build trust, and encourage AI exploration.
However, beneath the conversation about trust lay a more fundamental principle: AI is most effective, and most likely to be embraced, when it is framed and deployed as an augmenting tool rather than a replacing one. When people feel that AI is there to enhance what they can do, rather than act as a substitute for them, resistance decreases.
The key, as multiple speakers emphasized, is finding and clearly conveying the optimal balance between machine and human, both in employee roles and customer experiences. Organizations that get this balance right are not just more likely to see successful adoption—they are more likely to see AI used in the selective, strategic ways that actually deliver meaningful outcomes.
The personal element
Perhaps the most surprising insight from the Summit was the extent to which many individuals are using large language models (LLMs) to support their social and emotional needs. Participants noted that people are forming intimate relationships with AI, using it as a therapeutic tool or as a social replacement.
For practitioners, this signals a shift from viewing AI purely as a work tool to recognizing it as a companion that can influence behavior in highly personal ways.
The risk of “cognitive surrender”
However, the more personal—and potentially pervasive—use of AI also pivoted the discussion toward the broader implications of technology on critical thinking. Nathan Wang-Ly raised concerns about cognitive surrender—the tendency for individuals to give up critical thinking in favor of AI-generated outputs. Clearly, this phenomenon has profound implications for both the workforce and education. For example, if students rely on AI to perform tasks traditionally used to build “cognitive muscles,” the long-term impact on skill mastery remains unknown.
3 Major Takeaways
Many of us emerged with three inter-related takeaways:
- It’s clear that behavioral science has many constructive roles to play in helping individuals, organizations, and society manage technological change. These range from the specific (e.g., promoting clarity of disclosure information), to the strategic (e.g., helping companies transform processes to allow for “deep” AI adoption), to the existential (e.g., elevating the human and championing positive user outcomes in the face of corporate and political pressures).
- Collectively, we still have a great deal of work to do in defining our role(s), conveying our value/importance, and seizing this opportunity to positively influence AI development and deployment. Acting quickly and decisively is particularly important, given the astounding rate at which this field is developing.
- More broadly, the field of applied behavioral science appears to be at an inflection point, both in Australia and globally.
The field has moved well beyond its first wave of hype, through a period of rigorous self-examination, and has arrived at a moment of finding balance. The challenge now is not to prove that behavioral science works—that case has been made. The challenge is to communicate its value clearly and compellingly to the organizational leaders, executives, and policymakers who have the power to resource it, embed it, and act on it.
So, was the Sydney Behavioral Science Summit a success?
Our answer is a resounding yes, as we believe that bringing together the most brilliant behavioral science minds in the region, building bridges between them, and discussing pressing topics (like massive technological change) is a win-win-win.
SBSS wouldn’t have been possible without an amazing venue and we’d like to thank the team at University of Technology Sydney (most notably, Professors Adrian Camilleri & Elif Incekara Hafalir) for coordinating and hosting the event.
So, what’s next?
Clearly, we view this Summit as a beginning, not an end in itself. Bescy is committed to helping build and support a thriving behavioral science community in Australia—one that connects practitioners with researchers, gives early-career scientists a pathway into the field, and ensures that Australian voices are heard in global conversations.
So, if you are a behavioral scientist in Australia or New Zealand—whether you are leading a team inside a major organization, running your own consultancy, teaching the next generation of practitioners, or just starting out in the field—we would love to hear from you. Just reach out and let us know what interests you and how you would like to contribute.
Following the Summit, Bescy hosted a virtual panel discussion titled ‘Leveraging Technology & Maximizing Impact: Insights from the Sydney Behavioural Science Summit,’ where Nathalie Spencer (IAG), Zarak Khan (Bescy), and Nathan Wang-Ly (UNSW) shared their reflections on the event and discussed the evolving role of behavioral science in the age of AI, along with career pathways for emerging practitioners.
If you would like access to the recording of this session, please reach out to Scott Young at scott.young@bescy.org.