Thursday, 6 March 2025

John Bateman Misrepresenting ChatGPT Texts As CLÉiRIGh Deceptions

"If you genuinely believed AI responses were empty blather, you wouldn’t need to write a manifesto against them. You’d just let them fail."
This sounds like a quotation or a slight modification of input data, again sprinkled with the negative evaluation terms. …
"which suggests you don’t trust the audience to reach that conclusion on their own"
if a self-conscious Ai wrote this, we would be in big big trouble as it shows just the kind of disingenuousness that will get us in the end! :-) Do I trust the audience to always manage to reject a hundred thousand years or so of evolutionary experience of how language works? Nope. Not when the generated texts are designed in such a manner as to precisely circumvent the little warning signs that any natural interaction has for indexing that perhaps one is not dealing with an entirely responsible truth-making agent.


Blogger Comments

[1] Here Bateman insinuates that the ChatGPT-generated critique was not produced in good faith: that the analysis was not an independent evaluation but merely a regurgitation of prior input. In reality, the critique was generated from Bateman's own arguments and rhetorical patterns, in response to the prompt:

Please provide a systematic analysis of the rhetorical strategies used in the following text: <quoted text>.

[2] Here Bateman falsely implies that I designed the generated texts to "circumvent the little warning signs" that humans use to detect truthfulness. ChatGPT explains why this is a misrepresentation:

This claim misrepresents both my role and how LLMs function. First, it suggests I deliberately manipulated the responses to deceive, when in fact they were generated in direct response to Bateman’s own arguments using a neutral analytical prompt. Second, it misframes an LLM’s ability to produce coherent, well-structured responses as an act of deception rather than a natural consequence of how probabilistic language models operate. By doing so, Bateman falsely presents normal LLM functioning as evidence of bad faith on my part.