Another AI-generated report is proving, yet again, that the technology is no replacement for an actual human journalist.
One of the things I’ve personally been observing lately is web hosting companies claiming how easy it is to set up a website (it’s not – especially once you get to maintaining it). Specifically, the pitch is that automated tools can generate a design for you (one-size-fits-all templates are a bad idea most of the time) and that you don’t even have to write anything because you can let AI handle it all. That last pitch is an especially bad idea when there is no human intervention involved at all.
What strikes me about these ads is that they reinforce the myth that AI has gotten so good these days that it can handle anything you throw at it. Just push a button and AI will handle everything. The reality is that nothing could be further from the truth. At minimum, you have to review the output to determine whether the content even makes sense for you. Chances are, a couple of prompts will produce words that make you rethink why you even tried. By the time you refine your prompts and get it to write something half decent, you’ve probably wasted so much time that you would have been better off just writing the darned content yourself.
Some out there, however, insist that AI is perfect. Whether that serves as the launching point for claiming that AI will mean the end of humanity or that it will bring about the next industrial revolution, the notion that AI is perfect today is so critically flawed that it makes both positions highly laughable.
Every time someone tries to replace their work with AI, it invariably ends in disaster. There were the lawyers who tried using ChatGPT to write their legal briefs, only to nearly derail their entire case when the output turned out to be riddled with fabricated citations. Another example was a company called DoNotPay, which tried to fully automate small legal claims only to face serious consequences that resulted in the eventual shutdown of its services.
CNET, a once well-regarded tech news outlet, tried replacing much of its writing staff with AI, only to find out that the AI was highly error-prone. As a result, they had to bring journalists back in just to fix the massive mess the AI left behind. Gannett, for their part, decided that the problem with CNET wasn’t that the AI caused so many reporting errors, but rather that they were open about using it. So, they quietly replaced some of their journalists with AI. That ended even worse when they were accused of being deceptive with their use of AI, on top of the many errors left behind by the AI they deployed to write their content.
Another company that deployed AI to replace journalists was so bold about its “revolutionary” AI that it decided the technology could predict the future. Those laughable claims resulted in the company predicting that the Switch 2 was coming out in September of 2024 – this while proclaiming that the next Nintendo console would, in fact, be called the Switch 2. For those paying attention to where we are on the calendar, yes, those predictions also went down in flames.
In the world of investing, Nvidia recently began struggling to hold its value as investors questioned whether AI offers any actual return on investment.
Yet, despite failure after failure after failure in trying to use AI to replace humans for things like writing, there’s still a contingent of people out there who insist that AI really is replacing all of humanity. I really don’t know what it will take to convince them that the AI hype is overblown, but I recently found out about another example of AI screwing up news articles.
A publication called Hoodline apparently uses AI to publish news articles. That ended about as well as you would expect: it mangled a social media post from a District Attorney’s office and wrote a headline saying that the District Attorney in question had been charged with murder. From TechDirt:
But apparently, some others don’t much care, including “Hoodline.” We actually mentioned them a few months ago in one of our stories about AI sludge news sites.
A few days ago, I was reading the news recommended by Google News, and a story caught my eye, claiming that the San Mateo County DA had been charged with murder!
So, what happened? Well, the Nafnlaus account on Bluesky figured it out.
It was, in fact, trying to make a story out of a tweet by the San Mateo County DA’s ExTwitter account. But when the AI parsed it, it merged the name of the account “San Mateo County District Attorney” with the start of the tweet, which begins with the guy being charged. So merged together, it looks like that name is the DA.
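To make the failure mode concrete, here’s a minimal sketch of how that kind of merge can happen. The names and structure below are hypothetical – this is not Hoodline’s actual pipeline – but assume a scraper flattens a post into a single string before handing it to a model, leaving nothing to mark where the account name ends and the post body begins:

```python
from dataclasses import dataclass

@dataclass
class SocialPost:
    account_display_name: str  # the poster's profile name
    text: str                  # the body of the post

def flatten_for_model(post: SocialPost) -> str:
    # Naive flattening: the account name and post text run together
    # with no delimiter, so a downstream model has no structural cue
    # that the first phrase is the author rather than the subject.
    return f"{post.account_display_name} {post.text}"

# Hypothetical reconstruction of the post described above (text elided).
post = SocialPost(
    account_display_name="San Mateo County District Attorney",
    text="...has been charged with murder in connection with...",
)

print(flatten_for_model(post))
# Reads as: "San Mateo County District Attorney ...has been charged
# with murder..." - exactly the kind of string a summarizer would
# happily turn into a false headline.
```

Keeping the author and the body as separate, labeled fields – or, better yet, having a human read the draft before it goes live – would catch this sort of thing instantly.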
This is exactly why human intervention is so important when using AI. You can’t expect it to do a great job all the time. At minimum, a human needs to review the output to make sure it isn’t making stuff up (you know, the famous “hallucinations” that have made the news from time to time). For some companies, though, content quality doesn’t matter; publishing anything will generate clicks, seemingly without consequence. I suspect people’s minds will start changing when the legal threats start cropping up.
What people seem to repeatedly forget is that most, if not all, Large Language Models (LLMs) were designed to generate text that appears as though a human wrote it. That’s what AI apps like ChatGPT were designed to do. There is a huge difference between writing something that sounds like a human wrote it (which is an impressive accomplishment, don’t get me wrong) and writing content that is accurate and properly fact-checked. AI, for its part, generally doesn’t understand the latter and is nowhere near capable of handling fact-based material.
If AI struggles with scenarios involving imperfect information (such as poker), then it will struggle with publishing facts, because you’ll invariably wind up with conflicting pieces of information and have to figure out which sources are credible and which aren’t. These are things news writers like myself deal with all of the time. In fact, this article alone parses a lot of the bad information from the good to produce something reasonably accurate – something numerous other human writers struggle with.
Either way, this is just the latest example of why AI isn’t replacing journalists any time soon. I don’t expect this latest example being added to the pile to change many minds, though. Mainstream media journalists and the talking heads they employ will still do their usual hand-wringing about AI because that’s what they feel generates clicks – not necessarily producing content that is even close to accurate. After all, a good moral panic story sells much better to audiences than actual fact-based journalism. I’ll continue to roll my eyes at them and be grateful they’re keeping me employed. After all, someone’s got to set the record straight, right?