Dynamic Planner’s Abhi Chatterjee: AI is a real game changer

The chief investment strategist says the advent of DeepSeek is directly challenging the established order of AI and transforming the running costs, too

Abhi Chatterjee

The recent, disruptive emergence of DeepSeek feels like it has the makings of an AI revolution.

While I continue to watch and form my opinion, it’s apparent that in a field often defined by sheer computational muscle, DeepSeek’s solution offers an elegant approach, albeit an innovation which faces critical challenges.

Day in, day out, I write code to process vast amounts of data and perform complex computations. What I truly desire is an elegant solution – a neat ‘hack’ that achieves efficiency within resource constraints. Taken at face value, DeepSeek shows us this is not just a dream; it is possible.

The Chinese AI app has found ways to make models learn more like we do: by trying things out, getting feedback and learning from mistakes. It has also created ‘experts’, so instead of throwing every available resource at every little problem, DeepSeek sends each part of the puzzle to the ‘expert’ best suited to it, which makes the models work more efficiently and use less power.
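For readers who want to see the idea in code, below is a deliberately simplified, hypothetical sketch of that ‘expert’ routing – a toy top-1 mixture-of-experts layer in Python, not DeepSeek’s actual implementation; all names and sizes are invented for illustration.

```python
# Illustrative sketch only (not DeepSeek's code): a toy 'mixture of experts'
# router. A small gating function scores the experts and sends the input to
# the single best-suited one, so only a fraction of the model's parameters
# do any work for a given token - the source of the efficiency and power savings.
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, DIM = 4, 8                                    # toy sizes, chosen arbitrarily
gate_weights = rng.normal(size=(DIM, N_EXPERTS))         # gating network
experts = [rng.normal(size=(DIM, DIM)) for _ in range(N_EXPERTS)]  # expert layers

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one input vector to its single highest-scoring expert."""
    scores = x @ gate_weights                # one score per expert
    chosen = int(np.argmax(scores))          # top-1 routing decision
    return x @ experts[chosen]               # only the chosen expert computes anything

token = rng.normal(size=DIM)
print(moe_forward(token).shape)              # (8,): one expert's worth of compute, not four
```

The design point is simply that compute – and therefore energy – scales with the expert that is actually used rather than with the full model.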

DeepSeek’s open-source release is a direct challenge to the established order of AI. By making its code freely available, it fundamentally democratises development, shattering the perception that cutting-edge models are the exclusive domain of tech giants.

This marks a profound shift towards a more open and competitive AI landscape, fostering collaboration and empowering a wider range of innovators. Such a disruption of the status quo promises to lower entry barriers, intensify competition and redistribute power, reshaping the future of the AI industry.

Challenging the status quo

The arrival of DeepSeek throws into question the orthodoxy that AI progress depends solely on vast financial resources and immense computing power. It is prompting a re-evaluation of AI development strategies, with a greater emphasis on efficiency and cost-effectiveness. The repercussions extend beyond the tech world – including to the new power couple of energy and AI.

Goldman Sachs Research estimates the global data centre market’s power usage at around 55 gigawatts (GW), which is expected to rise to 84 GW by 2027. To put this in perspective, 1 GW can power between 700,000 and a million households for a year. Energy consumption is such a core component of input costs that Google and Amazon are investing in the development of small modular nuclear reactors.
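As a rough sanity check of those figures, the back-of-the-envelope calculation below uses my own assumptions: a gigawatt of demand running for all 8,760 hours of a year, and an average household consuming roughly 9-12 MWh annually, which is where the 700,000-to-one-million range comes from.

```python
# Back-of-the-envelope check of the data centre power figures quoted above.
# Assumptions (mine, not Goldman Sachs'): continuous year-round demand and
# average household consumption of roughly 9-12 MWh per year.
HOURS_PER_YEAR = 8_760

for gw, label in [(55, "today"), (84, "2027 estimate")]:
    twh = gw * HOURS_PER_YEAR / 1_000                    # GW running all year -> TWh
    low, high = twh * 1e6 / 12, twh * 1e6 / 9            # households at 12 and 9 MWh/yr
    print(f"{label}: {gw} GW ~ {twh:,.0f} TWh/yr ~ {low/1e6:.0f}-{high/1e6:.0f} million households")
```

On those assumptions, today’s 55 GW already equates to the annual electricity use of roughly 40-55 million households, and the 2027 estimate to 60-80 million.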

In Europe, Goldman Sachs expects that power requirements from the data centre pipeline over the next 10-15 years could amount to 170 GW – a third of the region’s power consumption. Given that data centres are the backbone of large language model training, the cost of running them to support future iterations of training can be expected to add up significantly. DeepSeek could be a step in the right direction to limit those costs.

The bedrock of AI – data and computational power – is undergoing a dramatic transformation. Traditionally, processing units were designed for sequential tasks: a bottleneck for the parallel demands of deep learning’s massive datasets and complex calculations. The game-changer was the pivot to graphics processing units (GPUs), originally crafted for the simultaneous computations of graphics rendering. This unleashed an AI explosion, cementing Nvidia’s dominance, with CUDA and its optimised AI hardware becoming the industry norm.
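To make the sequential-versus-parallel point concrete, here is a small, purely illustrative Python comparison (timings will vary by machine): the same matrix multiplication performed one multiply-add at a time, the style general-purpose processors were built around, and as a single vectorised call dispatched to optimised parallel kernels – the kind of workload GPUs excel at.

```python
# Illustrative only: why parallel-friendly hardware matters for deep learning.
# The identical matrix product is computed element by element (sequential)
# and as one vectorised call (the parallel style GPUs are optimised for).
import time
import numpy as np

a = np.random.rand(200, 200)
b = np.random.rand(200, 200)

def matmul_sequential(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Triple-nested loop: one multiply-add at a time."""
    n, k, m = a.shape[0], a.shape[1], b.shape[1]
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

t0 = time.perf_counter()
slow = matmul_sequential(a, b)           # sequential: millions of tiny steps
t1 = time.perf_counter()
fast = a @ b                             # vectorised: handed to optimised kernels
t2 = time.perf_counter()
print(f"one at a time: {t1 - t0:.2f}s  vectorised: {t2 - t1:.5f}s  "
      f"same answer: {np.allclose(slow, fast)}")
```

Scale that gap up to the billions of operations in a single training step and the economics of GPU-style parallel hardware – and of any technique that needs less of it – become clear.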

Breaking down barriers

DeepSeek’s breakthrough introduces a seismic shift. By enabling efficient AI training without reliance on proprietary hardware, it challenges the status quo. Will enterprises now embrace alternative computing strategies, slashing infrastructure costs and redefining procurement? If so, Nvidia’s CUDA ecosystem, along with the software support from AMD and Intel, faces potential disruption.

If DeepSeek’s models demand fewer resources, enterprise AI investments could be radically altered. Cost barriers, particularly for large-scale AI deployment, may crumble, opening new avenues for AI adoption, especially in rapidly developing regions. Cloud providers, seeking to diversify their offerings, are poised to explore DeepSeek’s cost and efficiency advantages.

This shift is not without its complexities. Long-term AI infrastructure investments and the need for proven stability create inertia. Nevertheless, the drive for control over AI workloads may fuel cloud providers’ investment in custom chip technologies, mirroring Google’s TPU and Amazon’s Trainium.

The era of unchallenged GPU dominance is being questioned, signalling a potential reconfiguration of the entire AI computing paradigm. The ripple effect of DeepSeek’s innovations has the potential to extend far beyond the technical realm, influencing economic models, energy consumption patterns and the very structure of the AI industry.

From an investment perspective, we’ve already seen significant valuation impacts, including the fading of US exceptionalism and improved sentiment towards China, which turns out to be less dependent on US chips and therefore less vulnerable to US tariffs.

The full extent of the change is yet to be realised, but the direction of travel is clear: a more open, efficient and democratised AI future.

This article originally appeared in April’s Portfolio Adviser magazine