The list of AI fails just keeps growing, and even summarizing actual journalistic content is proving difficult.
If there is one persistent and prominent myth about technology out there these days, it’s that generative AI is a mature technology: either already perfected and awaiting broad deployment, or just months away from that point, with only a few minor bugs left to iron out. This has led to two major trains of thought I see repeated over and over again.
One train of thought is that generative AI is going to bring about the next industrial revolution, and it’s only a matter of somehow getting in on the right side of AI to become enormously wealthy.
The other train of thought is that this is a disaster for humanity. Jobs will evaporate practically overnight as AI takes over task after task. Others take the AI doomer train of thought to even greater heights and argue that this will mean humans go extinct, or that AI is already becoming sentient.
The problem with both trains of thought is that they are built on a very faulty foundation. Generative AI is by no means a mature technology that can do pretty much anything and everything a human can do. We are greatly misunderstanding what many generative AI systems are actually doing today.
Generative AI – specifically text-based AI – is predicting what the next word in a sentence should be. So, say it is asked to locate a fictitious Bob’s Barbershop in your local town. The AI will rely on known information and build a sentence. First, it might predict that it makes sense to start a sentence with “Bob’s Barbershop”. Then, it will predict that the next few words will likely be “is located at”. At this point, it will rely on whatever information it has available to come up with a location. It might accurately find the actual street address and say “125 Young Street”. So, it strings together the sentence “Bob’s Barbershop is located at 125 Young Street”.
For the AI, its job is done and it is awaiting the next prompt. The sentence sounds like it was written by a human. There is, of course, just one problem here: did the AI get the location correct? The reality is that the AI couldn’t care less whether it got the location right. “Bob’s Barbershop” is, after all, probably a chain with many locations. There may not be a “Young Street” in your city. If anything, it might have grabbed that information from a Google search result. Your city may not even have a “Bob’s Barbershop” at all. The AI doesn’t care about this sort of thing. You gave it a prompt and it spat out an answer that sounds like a well-crafted sentence.
This is more in line with what generative AI actually does today. It’s not some all-knowing entity that is fully conscious. In fact, there is a reason why some critics refer to it as a glorified auto-complete. It creates sentences that sound good, but that’s the extent of its capabilities. Understanding facts and separating fact from fiction is well outside of those capabilities. This really is where most people completely misunderstand what generative AI is actually doing today.
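To make the “glorified auto-complete” point concrete, here is a minimal sketch of greedy next-word prediction in Python. The tiny hand-written probability table and the one-word context are entirely made up for illustration – a real LLM learns billions of parameters and conditions on far more context – but the core loop is the same idea: pick the most probable next word, append it, and repeat. Notice that nothing in this loop ever checks whether the finished sentence is true.

```python
# Hypothetical next-word probabilities, conditioned only on the previous word.
# These numbers are invented for this example; no real model works off a
# hand-written table like this.
NEXT_WORD_PROBS = {
    "<start>":    {"Bob's": 0.9, "The": 0.1},
    "Bob's":      {"Barbershop": 0.95, "Bakery": 0.05},
    "Barbershop": {"is": 0.8, "opened": 0.2},
    "is":         {"located": 0.7, "closed": 0.3},
    "located":    {"at": 0.99, "near": 0.01},
    "at":         {"125": 0.6, "the": 0.4},
    "125":        {"Young": 0.7, "Main": 0.3},
    "Young":      {"Street.": 0.9, "Road.": 0.1},
}

def generate(max_words=10):
    """Greedily chain together the most probable next word."""
    words = []
    current = "<start>"
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS.get(current)
        if not candidates:
            break  # no known continuation; stop generating
        # The only criterion here is probability -- there is no
        # fact-checking step anywhere in this loop.
        current = max(candidates, key=candidates.get)
        words.append(current)
    return " ".join(words)

print(generate())
# Prints: Bob's Barbershop is located at 125 Young Street.
```

Swap in a different probability table and the same loop will just as confidently print a different address – or an address for a shop that doesn’t exist. That, in a nutshell, is why “sounds right” and “is right” are two very different things for this technology.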
As a result, some people get some pretty silly ideas about what to do in response to AI. Some out there think that they just have to get out ahead of everyone else and use generative AI to, for instance, write whole news articles and fill their sites with AI content before someone else does. I’ve seen many people wring their hands about how generative AI is capable of writing hundreds of pages within minutes to create an entire magazine, but fail to mention the quality of the work (probably for obvious reasons). Because of this bad thinking, I’ve repeatedly seen some pretty hilarious epic fails over the last several months.
Back in October, a site that uses AI to generate news articles based on information shared in specific social media posts ended up falsely reporting that a DA was charged with murder. Another epic fail came from a site pushing its generative AI to predict the future. They were so bold as to predict that Nintendo’s next console would, indeed, be called the Switch 2. What’s more, the console would be released in… September of 2024. Yeah, that was one epic fail.
Google, for its part, has pushed out its AI Overviews feature, which summarizes content found on the web and generates an AI response alongside search results. Indeed, it is possible for an AI to summarize existing content, and you would think that Google would be a company more than capable of churning out an AI that is up to the task. Yet users were surprised when it started recommending that people eat rocks or use glue to keep the cheese from sliding off of pizza. Yet another fail, to say the least.
CNET also decided to replace staff with AI writers for its news section at one point. The experiment ended badly when the AI reporting wound up being highly error-prone. Gannett did something similar, but decided to completely hide the fact that it was using AI for its journalism. When errors were found all over the place, that ended up blowing up even harder in the media company’s face.
Lawyers even found themselves in hot water on top of it all. One lawyer decided to use ChatGPT to write a legal brief. This resulted in the judge asking the lawyers why their brief contained fake case law. Another service, known as DoNotPay, tried creating an AI designed specifically to handle small claims filings. Long story short, things ended so badly for them that they were forced to shut the service down.
Yet, despite fail after fail after fail from those trying to automate content writing by leaving it to an AI, the myth still persists that AI has already perfected all of this. I look at this history and remain dumbfounded as to how the myth can still persist.
At this point, I can’t pretend that adding one more case to the pile will make a difference in changing minds, but I do find it notable nevertheless. Apple is now in hot water over the AI it has incorporated into its operating systems. The AI can apparently summarize news articles for readers, meaning that it is basing its output on actual journalistic sources. Even then, like so many other AI systems out there today, it is apparently highly error-prone. From the Register:
Press freedom advocates are urging Apple to ditch an “immature” generative AI system that incorrectly summarized a BBC news notification that incorrectly related that suspected UnitedHealthcare CEO shooter Luigi Mangione had killed himself.
Uh, yeah, it should go without saying that Luigi Mangione did not, in fact, shoot and kill himself. Apparently, Reporters Without Borders is calling on Apple to discontinue the AI in the wake of this humiliating mistake:
Reporters Without Borders (RSF) said this week that Apple’s AI kerfuffle, which generated a false summary as “Luigi Mangione shoots himself,” is further evidence that artificial intelligence cannot reliably produce information for the public. Apple Intelligence, which launched in the UK on December 11, needed less than 48 hours to make the very public mistake.
“This accident highlights the inability of AI systems to systematically publish quality information, even when it is based on journalistic sources,” RSF said. “The probabilistic way in which AI systems operate automatically disqualifies them as a reliable technology for news media that can be used in solutions aimed at the general public.”
Because it isn’t reliably accurate, RSF said AI shouldn’t be allowed to be used for such purposes, and asked Apple to pull the feature from its operating systems.
It’s kind of impressive that it took less than 48 hours for Apple Intelligence to fail that hard. Indeed, this is being produced by Apple – a pretty big name in the tech world, to say the least. What’s more, this is the second tech giant to deploy a generative AI feature only to watch it fail in spectacular fashion. If that doesn’t show that generative AI large language models (LLMs) are not all that great with facts, I don’t know what will.
Still, there will always be those out there insisting that generative AI LLMs are impervious to mistakes and that we should be either excited or afraid of this. No amount of facts will dispel this silly belief, and all the fails going around today won’t change very many minds at this point. Personally, if you are looking for a real threat to journalism, look no further than the incoming US president set to take office on January 20th. There have already been plenty of instances of the next president trying to crack down on journalists and free speech – and that is likely only going to get worse once Trump takes power.