The ethics of using AI in creative writing

By Willem Steenkamp, senior writer and editor at Flow Communications

At Flow Communications, like many companies, we grapple with the ethics of using artificial intelligence (AI) in our work, which for us means creating brilliant campaigns for our clients.

Over the past year or so, since AI really made itself felt, we’ve hashed and rehashed so many questions, especially around content generation: can we use AI to produce copy for our clients? Should we? If so, how should we? What should we tell them about using AI, if anything? And why should they pay us to use AI when they can do it themselves – for free?

Don’t get me wrong: we’re not anti-AI Luddites; we think it’s tremendously exciting. We see what generative AI – such as ChatGPT, which can produce copy to instruction – excels at, and where it usefully saves time and costs: summarising long or complicated copy, generating listicles, brainstorming ideas or doing research (within limits), for example.

Just not creative copywriting, because generative AI isn’t a writer. Rather, it’s a writer’s resource, joining their existing reference arsenal of dictionary, thesaurus, books, spellchecker and Google.

Simply put, AI cannot “write”, at least not well. In fact, it does it rather badly: bland, unimaginative and frequently overwritten, and always the same. Always. A glaring giveaway is “unwavering commitment”, often linked with “resilience” and “support”. Lists are invariably in threes, like “unwavering commitment, resilience, and support” (with the Oxford comma). Superlatives will abound. Sentences will sound smart but mean zip.

As an editor, I can tell you that writers have a kind of writing fingerprint. Edit their copy once or twice, and you’ll get to know their peculiarities and habits; their writing tics, if you will. Generative AI has them, too, and they’re not pretty. Like a mediocre scribe who thinks he’s all that, it produces turgid, repetitive output you can spot a mile off.

In a recent internal training session, we examined ChatGPT-created copy for a hypothetical invitation. In the space of only two paragraphs, we identified no fewer than 15 different verbal, punctuation and syntactical tells that revealed it had not been produced by a human. Imagine, then, how easy it is for anyone to spot AI-generated copy.


Happily, our clients are coming to the same conclusions. On a few occasions now, we’ve had clients ask us to fix copy they’ve busked themselves using AI. As dissatisfied as they are with the outcomes, they know it can’t replace human writers – not yet, at least.

As a nascent technology, AI will get better. But it’s worth remembering that AI cannot replace us: it is a simulation of human intelligence processes. As Flow Communications chief technology officer Richard Frank puts it, AI really is “a super-duper autocorrect, a mimicker of humans – and it’s excellent at that, especially when we’ve taught it poorly”.

The reality is, AI-generated content is here to stay. But just like a human writer’s, AI’s copy is fallible and must be managed and edited. Here’s the rub: considering the time you’ll have to spend editing, reworking, fact-checking and humanising any kind of creative copy that AI has generated for you in mere seconds, you might as well write it yourself.

And editing is a must. Because AI-generated copy is a mishmash of everything we’ve ever written, it also contains our biases, lies, errors, fake news, conspiracies, misogyny, racism and so on – and it can’t distinguish between good and bad, right and wrong. Imagine the consequences, such as presenting a client with a campaign that’s racist or sexist, or denies the Holocaust.

There’s one other important thing about AI and writing, and that’s that AI is crooked. It’s a thief and a con artist, and it makes you an accessory after the fact.

Let’s look first at a time-honoured literary (mal)practice: plagiarism. AI, in its light-speed cobbling together of copy for you, actively mimics, borrows and outright steals others’ intellectual property – without attribution. Sometimes it’s not quite plagiarism, but it’s like plagiarism; it’s still you presenting others’ work as your own.

Now imagine you do this with a client, in anticipation of getting paid. That’s perhaps not quite fraud – criminal deception for financial or personal gain – but it’s like fraud; it’s still being less than honest about how the creative was derived, in return for payment. Think about it: would you be OK with this if you were the client?

Let’s recap those questions we keep asking ourselves at Flow Communications.

Can we use AI for clients? Yes, we can, in the sense that it’s possible.

Should we? No. Heck, no. AI is a lousy creative writer that’s incapable of a brainwave, showing human warmth or employing flair.

What should we tell clients about using AI? The truth. Whenever we employ it, we should declare that in the interests of honesty and transparency.

Why should clients pay us to use AI for creative work? They shouldn’t, but not because they can do it badly themselves; they should simply accept no imitations when it comes to creativity.

So is it ethical to use AI in creative writing, then?

Sure, if you can be honest enough to tell a client that you don’t do your own work. But do you really want to do that?
