Inhuman Intelligence
I was heartened to read Eric Naiman’s angry essay about students handing in ChatGPT-generated papers about The Brothers Karamazov. He sees in this trend a resurfacing of the Grand Inquisitor’s ruses, a cowardly relinquishing of responsibility, be it just for having an opinion or for formulating reasoned statements. Added to that, there’s the humiliation of being expected to honour the pseudo-creations of a robotic engine: ‘Taking stock of the queasiness and rage that was overcoming me as I looked at my mounting pile of AI compositions, I understood how nauseatingly insidious the work of the machine has become.’ Some students – even at Berkeley – regard the machine’s output as a standard exceeding their own. I was struck by Naiman’s observation: ‘Eventually students who work with ChatGPT may become so adept at understanding what “good writing” looks like that they will not even need to use it: they themselves will become artificially intelligent. That won’t be an improvement, because an essay that sounds as though it were written by a computer is no better than an essay actually written by one.’