As I understand it, AI is trained on Pavlovian-style feedback: yes, this works (yay!), or no, it doesn’t, please try again. I know it’s more complicated than that, but in a core sense most of the training input seems to come down to yes, I’m happy with this result, or no, I’m not, please* try again.
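Very roughly, the yes/no loop I’m describing looks something like this. To be clear, this is a toy sketch of binary-reward feedback, not how real model training works, and all the names in it (the drafts, the `approve` function) are made up for illustration:

```python
import random

# Toy sketch of yes/no feedback: keep a score per candidate response,
# nudge it up on approval and down on rejection, and mostly serve up
# whichever response currently scores highest.
responses = ["draft A", "draft B", "draft C"]
scores = {r: 0.0 for r in responses}

def approve(response):
    # Stand-in for the human reader: only "draft B" ever satisfies them.
    return response == "draft B"

random.seed(0)
for _ in range(200):
    # Usually exploit the current favorite, occasionally explore
    # so every option gets tried at least sometimes.
    if random.random() < 0.1:
        choice = random.choice(responses)
    else:
        choice = max(scores, key=scores.get)
    # Binary feedback: +1 for "yes, I'm happy", -1 for "no, try again".
    scores[choice] += 1.0 if approve(choice) else -1.0

print(max(scores, key=scores.get))  # the learned favorite: "draft B"
```

The point of the sketch is the shape of the signal: the system only ever learns “this pleased the person in front of me,” which is exactly the limitation the rest of this piece is about.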
In some senses, this can also happen with writing, which is a form of communication that can likewise be framed in successful/not-successful terms. What gets tricky in both cases is determining what qualifies as happy or successful, and since writing involves more actual people, an AI is likely to struggle to help us meet those qualifications.
For most kinds of business writing, success is a matter of whether the writer has effectively connected with and convinced the reader of something. Doing that requires all kinds of understanding, trust, and flexibility that are very hard to establish with a system fundamentally seeking to please us (via a good output that doesn’t require another attempt).
If a tool is built to give us only what we most want, it’s not as likely to help us understand what our reader most wants–or how to balance those two value sets. Communication involves some meeting in the middle; complete capitulation usually doesn’t go over well even if you’ve started on a strong foundation of shared understanding.
It’s entirely likely that as AI improves, this limitation will ease a little. But for right now, part of the uncanny valley many of us experience with AI-generated text is that it’s not offering any contention. It doesn’t have its own value set or context, and is merely reflecting ours. And practicing your speech in front of a mirror is rarely as effective as practicing in front of real people.
*Look, if we’re going to anthropomorphize our interactions with machines, we can at least be polite about it.
