Prediction Made By “Revolutionary” AI Goes Down in Flames

A so-called “revolutionary” AI made a prediction back in January. That prediction failed to materialize.

One thing that AI (Artificial Intelligence) salespeople and AI doomers have in common is the belief that AI is either infallible or, at the very least, far more reliable than humans could ever be. This myth gets repeated over and over again despite a growing body of evidence that the power of today's AI has been greatly exaggerated.

I’ll repeat one of my long-standing points here again, given how far off the rails so much of this debate has gotten. AI has its uses. It can translate from one language to another, summarize existing material, and help people write with better grammatical precision. It can help with the writing process, but it by no means can replace humans in tasks like writing and decision making. What’s more, it certainly isn’t going to cause humanity to go extinct (that was always a laughable talking point).

Indeed, the growing body of evidence has shown this time and time again. There were multiple attempts to replace lawyers with AI. Those attempts failed spectacularly on more than one occasion. News organizations attempted to replace their news writing staff with AI. Those efforts ended badly on multiple occasions as well.

What’s more, even investors, a major force fuelling the AI hype, have started questioning whether the benefits of AI are overblown, asking where their return on investment is.

All of this leads to a single conclusion: the so-called “revolution” in AI has, thus far, proven to be almost all hype and little substance. Sure, these are fun times for people marketing AI, but the benefits beyond that are, at best, highly questionable.

Which, of course, leads us to the latest example of why AI is more hype than substance. You might remember that back in January I did a write-up on one of the countless “AI startups”. In that article, a company known as AI Shark was one of many vowing to usher in a new revolution. They called their product “revolutionary AI-enhanced technology”.

To prove that their technology was the real deal, the company was bold enough to offer a prediction (fuelling fears that AI is replacing journalists). That prediction revolved around the upcoming console that would succeed the Nintendo Switch. The company predicted that Nintendo was, indeed, going to name its next console the Nintendo Switch 2. So far, nothing especially bold. What’s more, they predicted when this “Nintendo Switch 2” was going to be released to the public, pegging the release date at September of 2024.

Oops.

For those not familiar with the gaming landscape, September of 2024 has arrived and, so far, there is absolutely no sign that Nintendo is on the verge of releasing its next console. Yes, there are still about two and a half weeks left in the month, so it is technically possible that Nintendo could completely shock the entire gaming landscape, foil every prediction currently floating around, and release the console later this month. The thing is, that is so extremely unlikely that you could safely put money on it not happening. Most, if not all, guesses have pegged the console to be released sometime next year – something that is certainly much more realistic.

What’s funny is that this is what I wrote back in January when these bold claims first hit the media:

It’s very obvious that this is simply a guess. I could write a whole series of articles proclaiming that Nintendo is going to release the next console on different months spanning March to December. If one of them happens to be right, it doesn’t make me a super genius. It just means I happen to get a guess that was luckily accurate. Anyone else out there can take a guess at what month they think the next console is going to be released and it would probably be about as good of a guess as this AI. If the AI happens to be correct, it got lucky rather than being super accurate in its predictive technology (or whatever the marketers want to go for here). As the saying goes, your guess is as good as any.

Either way, guessing the release month of the next console this far out is pointless. Maybe it’s interesting to say which half of the year you think it’s going to happen in, but narrowing it down to a single month is just plain unproductive guesswork at this stage. Heck, for all we know, the next console could get released in 2025 instead. That’s entirely possible as well.

My comments from back in January ended up being so accurate that it’s almost spooky. Yet, despite things like this happening all the time, mainstream media still seems to be of the opinion that AI is a perfect technology and that it would be foolish to question any of it.

For instance, yesterday, the CBC asked the people behind an AI called Ask Polly to analyze the recent US presidential debate. An employee, more or less speaking on behalf of Ask Polly, gave various answers about which points of the debate resonated with which kinds of voters. The reporter conducting the interview didn’t really question any of it and, instead, simply hung on the employee’s every word as if whatever output Ask Polly produced was gospel.

In another recent example, mainstream media was writing as if the fate of journalism had been sealed and the death of the profession was all but certain. Here’s The Guardian:

For several hours a week, I write for a technology company worth billions of dollars. Alongside me are published novelists, rising academics and several other freelance journalists. The workload is flexible, the pay better than we are used to, and the assignments never run out. But what we write will never be read by anyone outside the company.

That’s because we aren’t even writing for people. We are writing for an AI.

Large language models (LLMs) such as ChatGPT have made it possible to automate huge swaths of linguistic life, from summarising any amount of text to drafting emails, essays and even entire novels. These tools appear so good at writing that they have become synonymous with the very idea of artificial intelligence.

But before they ever risk leading to a godlike superintelligence or devastating mass unemployment, they first need training. Instead of using these grandiloquent chatbots to automate us out of our livelihoods, tech companies are contracting us to help train their models.

Working for an AI company as a writer was therefore a little like being told you were going to be paid a visit by Dracula, and instead of running for the hills, you stayed in and laid the table. But our destroyer is generous, the pay sufficient to justify the alienation. If our sector was going up in smoke, we might as well get high off the fumes.

OK, AI doomer.

Obviously, the article in question doesn’t make a whole lot of sense. A big reason is that by hiring many writers, these companies are basically being reactive to the world around them, not proactive in any way. What’s more, there is always going to be a lag behind current events. As the people behind ChatGPT themselves admit, that data gap is measured in years. If you are a journalist, it takes you two years to do a writeup on a current event, and it’s not a retrospective article, then you’re doing your job wrong. I feel bad when I do a writeup on something that happened two weeks ago, let alone, say, two years ago. That alone actually tells you just how safe the profession of journalism is in the first place. Yet the article in question literally claimed in its headline that journalism, as a profession, is knocking on death’s door because AI is somehow taking over. It’s laughable and should have been relegated to the realm of satire.

A basic level of critical thinking should have prevented the headline, “‘If journalism is going up in smoke, I might as well get high off the fumes’: confessions of a chatbot helper”, from ever being written. Yet mainstream media outlets, seemingly staffed by the most gullible and technologically illiterate people, eat up every word any bullshit artist or snake oil salesman says after slapping “AI” onto whatever shoddy product they are attempting to sell. It’s kind of pathetic, really.

Ultimately, the profession of journalism is safe. If mainstream media insists on hyping up whatever scam artists want to push because they used the AI buzzword, that is their choice. I’ll be over here writing news that revolves around the real world.

Drew Wilson on Mastodon, Twitter and Facebook.
