The AI Bubble “Makes AI Bubble”: AI = Deficient Technology


A Bubble

The AI Bubble is in full swing, sucking all attention and trillions of dollars into itself.

OpenAI’s CEO Sam Altman travels the world asking for $7 trillion for investment in AI and related technologies, including new chip factories, humongous data centers, land, and even nuclear power to feed AI’s insatiable appetite for energy. Yes, $7 trillion. That is more than the entire US federal budget in 2023, which at “only” $6 trillion is already too big for the USA, the biggest economy in the world (measured in GDP), to finance sustainably.

The stock value of NVIDIA, the leading designer of number-crunching chips for AI, tripled in a matter of months to $2 trillion.

That $2 trillion is more than the GDP of most countries, and NVIDIA makes only chips – in fact, NVIDIA primarily makes just one kind of chip, whose dominant use is for AI. Something is clearly out of proportion here. Microsoft has reached a market value of $3 trillion, also mostly based on its connection with OpenAI and on investors’ hopes and dreams that Microsoft’s world-conquering program to build AI data centers to control the globe will become profitable. And so we can go on. Amazon. Meta-Facebook. Google. Oracle. IBM. Their stock values all ride the AI bubble – promising trillions and trillions at the end of the rainbow.

What we see before our eyes is an investment bubble of historic proportions, all driven by the AI narrative.

The AI narrative is that AI is a wizard technology that will grow at unprecedented speed – and grow forever – making everybody extremely rich (except those who lose their jobs, of course).

AI = Deficient Technology

AI can do a lot of things, and often surprisingly so.

But the positive surprises brought by AI hide the fact that AI is, today, a completely deficient technology.

You simply cannot trust AI in a professional context for many purposes – probably even for most purposes.

Who really cares whether AI can suggest a recipe, a workout program, or a little short story? You can get that in so many other ways, including from the internet already available and your own imagination. The big promise of AI is the tantalizing narrative that it will revolutionize EVERYTHING – especially everything in technology, business, the military and so on. And in that respect – so far – AI falls far short of the mark.

Let’s just pick a few of the grotesque examples of how AI underwhelms, and even becomes dangerous if you trust it.

We could start softly with the image generator of Microsoft’s Bing. Ask it to “paint an image of French President Macron as a French king in the style of Picasso”.

Immediately, the “liberal” nanny & censorship state running the US and Microsoft kicks in and finds that your request is “offensive”.

Come on, this is clearly within the freedom of speech allowed by the US Constitution – and actually, it is only very mild irony, perhaps not even negative, just tongue-in-cheek.

But nope, AI decides that YOU are not allowed to do it.

Okay, ask Microsoft’s image creator to draw other things, and you find out at every turn and bend of the road that it draws grotesque features into every image: spooky hands, elements that don’t belong, twisted faces and so on. Sometimes you can be lucky and fix these manually in an image editor, but then what is the point of AI image creation in a professional context, if you always have to fix the obvious errors it makes?

What about a personal assistant, then? I subscribed as a test user of the AI assistant for the Excel spreadsheet, but it never worked.

It now turns out that, even though it doesn’t work, Microsoft wants to charge massively for this AI assistant feature. This is outrageous – especially given that Microsoft has for decades charged an exaggerated annual fee for its Office package without adding any significant new features whatsoever. A functional AI assistant as a free addition to Excel would only have been small compensation for all the excess money I have had to spend over the past decades on Microsoft’s passive monopoly rent from owning the Office franchise.

Then take something like AI research.

Perplexity has been hailed as the next big thing to replace Google.

Perplexity builds on OpenAI’s system backed by Microsoft, purportedly the best in the world. Perplexity is probably the best AI application of its kind, but as it turns out, that doesn’t say much. You can get lucky with it, of course.

I asked Perplexity to research the background of a new person appointed to the Russian Ministry of Foreign Affairs. It couldn’t.

Only because I kept driving Perplexity on and on with follow-up questions did it finally cough up some very useful information, in the form of a curriculum vitae for the person I was researching. I couldn’t have gotten that information any other way, but getting there with AI was not easy either, so this was only half a point scored by AI.

But then Perplexity fell completely flat. I recently asked Perplexity who presented the Arab League’s case at the ICJ hearings on Israel’s occupation of Palestine. First, Perplexity denied that any such person or information existed. When I insisted that this was false, Perplexity came up with the wrong name – a barrister at the ICJ who, as I found out, was not representing the Arab League. Where Perplexity failed completely, Google fortunately helped me quickly find the right name of the person who represented the Arab League at the ICJ: Prof. Ralph Wilde.

A friend of mine has had similar problems with Perplexity AI. He asked Perplexity for all investment-grade low-income and lower-middle-income countries. It gave a partial list. He said, “How about Indonesia?” It apologized and said, “Yes, also Indonesia.”

If a human assistant were as incompetent and inconsistent as Perplexity with OpenAI, that person would be fired. Perplexity, while occasionally giving very useful results, should also be fired as an advisor to be even halfway trusted. Use Perplexity sometimes, but don’t trust it.

What about AI in war?

Well – AI may in many instances be extremely dangerous in war, but not to the enemy, only to the army using it.

Palantir is perhaps the leading US company in AI for military and police use.

Palantir’s CEO Alex Karp boasts a high level of reliability for Palantir’s military and police products.

Palantir has military planning and execution systems which you feed with operational information (aka “intelligence”), and whoops, out pop ready-made plans and orders for your troops to follow – just let an officer sign off on them, and off they go to victory.

Well, in reality, troops guided by AI may go off, but not to victory. Palantir boasts that its military connections trust it enough to give it “access to the battlefield”. Early in the Ukraine war, Palantir’s CEO Alex Karp went to Kiev and signed a cooperation agreement with President Zelenskyi personally. Palantir then boasted about assisting Ukraine’s troops in their military endeavors in southern Ukraine – endeavors better known as Ukraine’s “counteroffensive” of 2023, which was nothing but a huge military disaster for Ukraine. And we are not speaking of just one single “mishap” of AI-supported military operations by Ukraine; we are probably talking about all of these Ukrainian operations in the south. AI designed by Palantir “assisted” Ukraine’s military forces on the ground, and this AI led to nothing but endless Ukrainian losses of lives – and defeat.

Trust AI, and you may pay with your fortune, your country – and your life.

I strongly recommend that all governments and companies, from big corporations down to small entities, NOT invest too much in AI for the foreseeable future. Perhaps China is underinvesting, but the US is definitely overinvesting. And contrary to primitive logic, gross overinvestment does not create any margin of safety – it only adds huge risks.

AI Bubble Makes AI Bubble

With fundamental and serious problems of this kind, it will take years – not months – for reliable and thus useful AI to emerge in a lot of fields.

Yes, trillions are being invested in AI. Jensen Huang, the CEO of NVIDIA, speaks of $1 trillion already invested in AI-related computing services, and of that amount soon being doubled by another $1 trillion. The resulting computing centers consume electricity at an unimaginable (and seemingly unsustainable) rate. In Ireland, cloud computing already consumes more electricity than all private households combined.

The dot-com bubble of 2000 comes to mind.

The narrative of the ever-expanding internet drove technology shares to ever higher levels. Just as the internet was real, AI is real, and like the internet, AI is also going to expand and expand.

But as we saw with the 2000 dot-com bubble, the fantasies about technological expansion soon overtook reality by several orders of magnitude. The expansion of the internet and the profitability of the technology couldn’t even remotely live up to the deluded fantasies about how much it would all be worth. That is where a bubble starts to make its own bubble.

This is where we seem to be with AI today.

AI is potentially an immensely powerful technology.

AI is also a technology that will keep expanding enormously.

But where are we actually?

How fast will this happen? And with what big setbacks along the road?

The examples above indicate beyond doubt that AI is not going to be as transformative over the next couple of years as believed.

Even a corporation like Microsoft may still get itself seriously burned.

Microsoft is executing plans, worth billions if not more than a trillion dollars, to expand its global AI cloud computing centers beyond belief.

What if private enterprises lose billions of dollars on AI investments, or even incur insane losses? Widespread disappointment with AI could soon kick in and result in a serious global backtrack on AI. Just as quickly as customers were attracted to AI and wanted to ride the AI wave so as not to miss out on the “development”, private and public customers may decide even more quickly to skip lots of huge AI programs (and AI stocks) for a significant period – to be on the safe side and not bet the farm on a failed AI venture.

If that happens, many if not most of Microsoft’s AI cloud computing centers may become worthless – not usable, and after a while obsolete and overtaken by the next chip technology. In that scenario, which is absolutely possible, even Microsoft could get itself into deep financial trouble with AI. And not only Microsoft – the whole IT and AI industry could be sucked down into an enormous AI maelstrom as well.

In the long term, in spite of booms and busts, AI will continue – but not all corporations and investors involved may survive.

*


Karsten Riise is a Master of Science (Econ) from Copenhagen Business School and has a university degree in Spanish Culture and Languages from Copenhagen University. He is the former Senior Vice President and Chief Financial Officer (CFO) of Mercedes-Benz in Denmark and Sweden.

He is a regular contributor to Global Research.


