
Generative AI is kind of bad

Can we get straight to the point here?

Generative AI is kind of bad. Bad as in it’s not good, for basically 99% of use cases that people keep using it for.

Large language models work entirely on mathematical probabilities. They will never readily produce something that isn’t the washed-out average of what the majority of the human-provided training data could be summarised to.
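That next-token machinery can be sketched in a few lines. This is a toy illustration, not a real model: the distribution below is invented for the sake of example, but the decoding logic mirrors how an LLM picks each successive token from a probability distribution, which is why the output tends toward the statistical consensus of the training data.

```python
import random

# Invented next-token probabilities for the prompt "The sky is",
# standing in for what a real model computes over ~100k tokens.
next_token_probs = {
    "blue": 0.70,
    "grey": 0.15,
    "falling": 0.10,
    "a lie": 0.05,
}

def greedy(probs):
    """Greedy decoding: always take the single most probable token."""
    return max(probs, key=probs.get)

def sample(probs, rng=random):
    """Stochastic decoding: pick a token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

print(greedy(next_token_probs))  # always "blue", the washed-out average
print(sample(next_token_probs))  # usually "blue" too; rarer tokens seldom win
```

Greedy decoding always lands on the most common continuation; sampling mostly does the same, which is the point being made above.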

Generative AI art is, frankly, not art at all. Art is about so much more than the art itself: it’s about intent, process, history, and interpretation. Generative AI cannot make art because it doesn’t have the ability to have intent, to devise and iterate upon a process, or to practically understand what it’s doing.

There’s a reason that generative AI art is so often either a mangled attempt at photorealism, or a pale imitation of someone else’s work. (And not just because it’s often trained on stolen content.)

Generative AI’s attempts at writing are so often either painfully average or so wildly exaggerated as to be unreadable. Its attempts to summarise information are often flatly incorrect, because it doesn’t understand the semantics of anything it takes in or puts out.

My vague attempts to use coding assistants like GitHub Copilot and Cursor (because I’m going to try a tool before decrying how useless it is) didn’t even last half an hour, because they would only ever suggest the wrong thing. Having incorrect suggestions appear constantly, each one requiring me to stop and review it just to confirm it was still incorrect, massively slowed down doing any actual work.

Taking an existing thing and slapping an ‘AI chat’ interface onto it is not an act of innovation; it’s a tacit admission that your existing thing was too rubbish for people to be able to use it as intended.

That AI chat bot probably needs a disclaimer pointing out that it’s going to be wrong a bunch of the time anyway, because, again, generative AI does not understand the semantics of what it’s doing or saying.

Generative AI only works properly when it has a sizeable data set to train from, and that has inevitably meant either stealing content from the open web in flagrant violation of copyright law, or farming the content of willing participants and leaking personal details and corporate secrets in the process.

There is an incredible environmental cost to training AIs: the electricity, the water, and the resources needed to build the data centres and the millions of high-performance servers necessary to fill them. All of this seems to be utterly ignored, even against the backdrop of a rate of climate change that’s still far from being controlled.

And there is the human cost too. Generative AI has poisoned people, killed people, made a mockery of the legal system, negatively affected mental health and perceptions of reality, enforced discrimination, proliferated conspiracy theories, stolen jobs, and replaced human relationships in some rather unhealthy ways. It doesn’t even make you more productive.

You could argue that these issues derive from the people and not the tool, but that ignores that the makers of these tools actively obfuscate that generative AI is sycophantic and stupid by design. They’re promoted as being omniscient assistants, not digital yes-men with memory banks filled with Reddit comments.

I don’t think there’s a person in my life who isn’t already sick of generative AI being forced on them at work, on their phones and computers, by random websites and software that never needed it before now. People in multiple industries, with different levels of technical ability, have all tried these things and found them, without exception, to be useless.

They’re tired of every company with a profit motive to push and every person trying to sell themselves as a changemaker repeating the mantra that “This is the future, you’ll be left behind if you don’t use it,” like suckers who have already dumped all their savings into a grand pyramid scheme.

You know, the same thing they said about the metaverse. And NFTs. And cryptocurrencies.

They probably said the same thing about microwave ovens in the 70s. Why on earth would you cook food yourself, spending hours toiling over a recipe book and hot stove to make a lasagne, when microwaving a frozen lasagne is so much faster and more efficient?

Because, like generative AI, microwave lasagne is a washed-out average of the source material it’s based on. Like generative AI, it lacks intent, process, history, and interpretation.

Because, like generative AI, microwave food is kind of bad.

Generative AI is not the future, at least not in the form that everyone pushing it down your throat says that it is. It’s shit. It’s a net contributor to human misery and environmental collapse. Can we please stop pretending like it’s actually good for anything?

The sooner that Big Tech’s bubble of delusion bursts, the better off we’ll all be for it, and I’ll happily be one of those holding a pin.