Monday, 3 March 2025

John Bateman Misrepresenting How LLMs Function

this will be the *one and only time* (probably!) that I will engage with any of the content in the (alleged) ChatGPT posts, and this will be for the sole purpose of displaying the usual properties that they necessarily have as a consequence of training: that is, to exhibit plausibility (with respect to the landscape formed by their training and fine-tuning) rather than truth claims or even argument.

So, purpose: if folks read any of the generated posts, it is advisable *not* to read them for 'content' but to consider which (internal) landscape they are being produced within.


Blogger Comments:

[1] To be clear, here Bateman is being disingenuous, for if he believed the posts were not really ChatGPT-generated, everything he says about them would be irrelevant.

[2] This is misleading, as ChatGPT explains:

This is a core misrepresentation of how language models function:

✅ Yes, AI generates plausible text based on patterns in training data.
❌ But that does not mean its outputs are automatically devoid of truth or argument.
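The point can be made concrete: mechanically, a language model assigns each candidate next token a probability, and "plausibility" just means high probability under that distribution. Nothing in the sampling step checks truth, but nothing rules it out either; whether the most plausible continuation is also correct depends on the training distribution. A minimal sketch (the tokens and logit values are invented purely for illustration):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores (logits) into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for the prompt "2 + 2 = "
tokens = ["4", "5", "fish"]
logits = [4.0, 1.0, -2.0]

probs = softmax(logits)
for tok, p in zip(tokens, probs):
    print(f"{tok!r}: {p:.3f}")
```

In this toy case the most probable token happens to be the true answer, which illustrates the quoted point: the same sampling machinery that makes output "plausible" can equally well produce valid claims, so plausibility and truth are orthogonal properties, not mutually exclusive ones.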

  • Bateman collapses plausibility and meaning-making, implying that AI-generated text can only be stylistically plausible but never logically valid.
  • This is an epistemic sleight of hand: if you accept this framing, then any valid point the AI makes is automatically dismissed before it can even be examined.
  • He dodges evaluating the actual content of AI responses by declaring that their nature makes them unworthy of such evaluation.

This is circular reasoning:

  • AI-generated text cannot make truth claims or arguments.
  • Therefore, any AI-generated text does not contain truth claims or arguments.
  • Thus, we don’t need to evaluate them as truth claims or arguments.

This isn’t a critique—it’s a refusal to engage masquerading as analysis.

[3] This is misleading, as ChatGPT explains:

This is an explicit instruction to ignore meaning: he tells the audience to avoid engaging with the arguments and instead focus on where they supposedly come from. It shifts the discussion away from evaluating the ideas themselves and toward questioning their origins—a textbook genetic fallacy. By framing the AI as speaking from a pre-shaped “landscape,” Bateman reinforces the illusion of a hermetically sealed ‘Chris Cléirigh’ bubble—even though AI responses are generated dynamically, not from a static localised sub-model.
