Is the AI hype machine losing steam?

Markets are overlooking some glaring issues with AI that could result in dire consequences, writes Philip Gottschalk

By Philip Gottschalk, portfolio manager at Trium Capital

Tech leaders have touted AI as more profound than the harnessing of fire and a seismic moment in the history of technology, and the hype is reflected in the lofty valuations of AI companies.

However, after the initial wow-factor, ChatGPT usage has plateaued, raising questions about the pace of adoption and where the technology will have deep impact. Are we in an AI bubble? Could large language models end up as mere glorified stenographers? Or might a public backlash arise if generative AI enables mass manipulation, blackmail and sophisticated cybercrime?

Is AI adoption slowing?

Consumer product innovations integrating AI have been limited so far. At CES 2024, Samsung’s new personal assistant robot equipped with a built-in projector was a main attraction, but it was still at the prototype stage without a clear roadmap for commercialisation.

There were some useful innovations aimed at vulnerable populations, such as smart wheelchairs and AI-powered assistive robots for people with mobility impairments, but these appeal only to niche markets. In entertainment, innovations such as AI-generated artwork or an AI DJ system appeared gimmicky rather than revolutionary.

Most promising were AI optimisations in the automotive space, such as a collision-avoidance platform that uses AI to detect dangerous driving, or AI-powered analysis to assess vehicle damage. But for those expecting large language models to make technology human and deliver magical consumer experiences around digital assistants, smarter search and creative tools, CES may have been underwhelming.

Another reason for slow adoption may be questions of accountability. AI confined to the virtual realm saw rapid uptake in areas such as coding, creative content, office work and marketing. However, liability and safety concerns may slow adoption wherever AI can physically affect human lives. For example, who is responsible when an AI system contributes to a mistake in surgery?

The opportunity… and the danger  

The hype around AI echoes the dot-com bubble of the late 1990s, when internet companies were assumed to hold massive profit potential. However, transformative technologies do not always guarantee lucrative consumer markets: the internet did reshape the economy, yet most dot-com era companies never justified their valuations. AI may follow a similar pattern.

The most significant monetisation opportunities could be in less visible B2B implementations that increase efficiency behind the scenes, such as research and development for drug discovery, lights-out manufacturing, or risk management. Our recent conversations with IT services companies that deploy these technologies at scale confirm this view.

AI is about to enter an innovation cycle similar to the development of the cloud fifteen years ago. According to Accenture, only 10% of companies are AI-ready. To reap the full benefits of the technology, a company needs an exploitable dataset, which often requires substantial, sometimes structural, groundwork beforehand. This could prove a gold mine for Accenture and its competitors, and a source of first-mover advantage for companies that already have leadership in IT.

A new study by Cognizant suggests that generative AI could inject up to $1 trillion into the US economy by 2032, boosting GDP by over three percentage points annually as it raises labour productivity through automation and data insights.

Political ramifications

The labour market is not the only aspect of our society to be profoundly changed. The rise of generative AI has raised concerns about mass manipulation and cognitive warfare. The elections in Taiwan at the beginning of this year provided a preview of the dangers ahead.  

There, pro-China actors allegedly used generative AI to create fake polls and fabricate documents to discredit candidates before disseminating them on social media. Fake accounts controlled by bots can instantly spread false narratives to millions, working together to game algorithms and elevate harmful content. Generative AI makes these bots more capable of original, personalised messaging that appears human, while AI-generated deepfakes of candidates extend the manipulation to visual media.

The content is designed to go viral by stoking outrage and triggering engagement. To evade fact-checking and removal on social media, disinformation campaigns have been shown to subtly weave propaganda into the truth. They reference real events and then twist the narrative in China's favour, such as framing China as a peacemaker while portraying the US as warmongering. This blending of fact and fiction manipulates public opinion while evading scrutiny.

Lessons to learn from social media

Unlike social media, AI may not enjoy a lengthy grace period before regulatory scrutiny. The hands-off approach to regulating social media was a misstep with dire consequences. Social media have precipitated alarming mental health declines, especially among teenagers.  

Rates of depression, anxiety, loneliness, and suicidal ideation have surged in tandem with the proliferation of smartphones and social media addiction. In the US, 10% of teens see friends in person once a month or less. Indeed, 42% of high school students report persistent sadness and hopelessness, while 22% have seriously considered suicide. These disturbing statistics have risen sharply in just the past few years.

Generative AI raises the perils of social media to new heights by supercharging the tools of engagement and addiction. Rather than providing a playbook, social media's trajectory sounds an urgent alarm for regulating big tech. Geopolitical factors may further propel policymakers to act as AI amasses vast datasets spanning politically sensitive areas such as education, the military, critical infrastructure, employment and product safety.

Taking all this into consideration, we believe the rapid flow of regulatory proposals and escalating litigation signals gathering headwinds for tech giants in 2024. With unease mounting over social media and uncontrolled AI, assertive government and court interventions now look like the major source of risk for big tech investors.

Will AI deliver on its potential? Possibly. Will AI deliver on its potential with its dangers mitigated? That will be a far greater challenge.