Editing and improving AI-generated writing takes me longer (and costs my clients more) than editing and improving even the sloppiest human-generated writing. Much like I intuitively know there’s some kind of linguistic difference between “my guy” and “my dude” but can’t quite put my finger on it, I’ve been adamant that this pattern is not just in my own head, but I haven’t been able to label exactly what the difference is.
Until now. I recently tackled two editing projects that were, on the surface, virtually identical. But one was generated by a human systematically using an AI to assist with organizing their ideas, and the other was generated by a human systematically using an AI to refine AI-generated content: let’s call the authors Human Bestie (HB) and Robot Bestie (RB), respectively.
RB didn’t have a point.
Sure, RB gave us some sentences arranged in a familiar way, with clauses that seemed to relate to each other. But I’d get to the end of a paragraph or even a subsection and not know what I had learned or come to understand more thoroughly. RB had been prompted, following SEO flagging practices, to use subheadings and keywords as guidelines, and occasionally I’d encounter some information that was independently useful. But there wasn’t ever any logic or reasoning to lead me from one point to another, or toward a more cohesive new understanding to take into my life.
An AI algorithm is trained to produce words in an order that makes sense, based on an aggregate average of all its crawled data sources; as I’ve said before, it’s a mimic. A sophisticated mimic, but it doesn’t have original thinking of its own to support with reasons (and connected elements of argument like emotional appeals and trust-building). Ultimately, its writing is always going to feel emptier than our own because it doesn’t have anything of its own to say.
The more direct problem I’m encountering is that the human inputter does have a point to make, an idea to frame, a reason to sit down and spend the water resources to interact with the AI in the first place. But as an editor (or an outside reader, like the customer seeing the final piece), I don’t know what that is. The AI is guessing at what the human wants to see written about that thought, and I’m having to guess at what the AI is guessing; at the risk of aging myself, it’s like deciphering a photocopy of a photocopy. HB is much more likely than RB to outline the thought process, even accidentally, because our logic is grounded in real-world context and emotion and interactions.
It’s really easy, as the human prompting the AI, to feel like it’s done a good job, because a lot of those words share neural space with the thought I sat down to explore. But if I take a step back and read it as someone fresh to the discussion, it doesn’t take long to see that those words aren’t building toward a larger understanding, especially if I already know the topic well enough to measure the output against.
