AI in financial services needs behavioural foundations

The focus should be on more than just outputs, writes Oxford Risk’s Greg B Davies


By Greg B Davies, head of behavioural finance at Oxford Risk

The Mills Review looks towards 2030 and beyond, asking how advanced AI – including agentic systems and behaviour-aware personalisation – could reshape retail financial services, consumer outcomes, and regulation. It signals that the FCA is treating AI as a structural shift, not a short-term technology trend.

Amid the focus on governance, explainability and accountability, there is a risk of missing the fundamental challenge: AI in financial services is, at its heart, a behavioural problem. AI is accelerating analysis. Advice is not analysis.

The FCA flags concerns about “unconscious manipulation”, reduced consumer agency, and eroded financial literacy as AI becomes more embedded in how people access and act on financial guidance. These are behavioural risks: they sit in the design of journeys, the framing of choices, and the subtle ways systems can steer decisions. Better engineering helps, but it doesn’t solve the problem on its own. Trust lives in the implementation.

The 3% problem

The average investor forgoes around 3% in returns on their total investible assets each year. Not because they hold the wrong portfolio, but because they make emotionally driven decisions at the wrong time. They disengage during volatility, overreact to short-term noise, or never invest at all because the process feels overwhelming. Most investors don’t need a new portfolio; they need a calmer relationship with the one they already have.

AI has genuine potential to close that gap, delivering timely, personalised behavioural guidance at scale. But generic AI tools, trained on transactional data and optimised for engagement, are just as capable of widening it.


Two distinct failure modes

The first is delivering the right solution in the wrong way. Financial decisions are inherently probabilistic, and consumers routinely misinterpret the statistics involved. Presenting the same risk as “a 20% increase” versus “from 5 in 100 to 6 in 100” produces very different emotional responses. A sound recommendation delivered in an emotionally mismatched way increases anxiety, reduces follow-through, and ultimately produces worse outcomes. Emotional comfort is not an optional extra; it is essential for long-term financial wellbeing.

The second failure mode is less visible, but more serious: delivering a solution that is unsuitable for that person. Composure under pressure, impulsivity, and financial confidence are dimensions of financial personality which, alongside financial circumstances, can legitimately change what the appropriate solution actually is, not just how it should be explained. AI systems that lack this understanding will systematically misjudge suitability, particularly for less experienced or behaviourally vulnerable consumers.


The data problem behind the model problem

A prior question deserves equal attention: what data are AI systems drawing on?

Transactional data show what consumers did. Interaction data show how their decision journey unfolded. Financial personality data capture stable behavioural traits that explain why people behave differently and predict stress responses. Systems that rely on the first two without the third will systematically misinterpret behaviour, especially during periods of market volatility when the stakes are highest.

A framework for behaviourally safe AI

The regulatory conversation should focus not just on what AI outputs, but on the behaviours it reliably produces over time. That requires a clear separation of roles: deterministic suitability models should anchor what is financially appropriate; AI should orchestrate how that insight is communicated, sequenced and personalised. AI should not invent suitability. It should contextualise and deliver it.

As systems become more agentic, the risk shifts from producing the wrong output to executing the wrong action, quickly, which raises the bar for bounded mandates and escalation. In production, that means deterministic anchoring wherever suitability is at stake, clear limits on what systems can do, and audit trails with clear provenance: which data informed each output, and what information reached the consumer.

It also requires moving beyond one-size-fits-all consumer protection: stronger guardrails for less engaged or more vulnerable individuals, with greater autonomy progressively unlocked as consumers demonstrate capability over time.

The UK has a genuine opportunity to lead not just on AI adoption, but on the standards that govern it. AI that can’t explain its reasoning in ways that deliver genuine fairness and accountability shouldn’t be making decisions on consumers’ behalf. AI that can, and is built on validated behavioural science from the ground up, has real potential to transform retail financial services for the better.
