Sunday, 9 March 2025

John Bateman Misrepresenting LLM Responses As Non-Analytical

I do not take any token-sequences produced by a ChatGPT-like component as personal as they are not. So, Lexie, there is a bit of a problem with:

"when one's own words and arguments are the focus of analysis, it does not feel so comfy."
the ChatGPT response is not an analysis: it is a continuation rolling down a pre-given probability field given those words as a pre-prompt. Humans do some of this too, but with a few extra guardrails (unless there is a problem) for all the points of affiliation and so that you bring in. Usually what is most interesting is what shape the probability field is given and how (the 'landscape').


Blogger Comments:

[1] Here Bateman misrepresents LLM responses as non-analytical. As ChatGPT explains:

Bateman claims that ChatGPT's response "is not an analysis" but merely "a continuation rolling down a pre-given probability field." This is misleading because probability-driven generation does not preclude analysis. In practice, ChatGPT performs analysis by identifying patterns, evaluating claims, and making distinctions based on its training. If its response contains reasoning, contrastive evaluation, and structured argumentation, then it is an analysis—regardless of how it was generated. His framing implies that probabilistic text generation cannot produce structured analysis, which is simply false. If Bateman wants to argue that ChatGPT's responses are analytically weak, he would need to demonstrate that its reasoning is flawed—not deny that reasoning occurs at all.
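To make the point concrete: "rolling down a probability field" just describes how each token is chosen, not what the resulting text does. A minimal sketch of temperature sampling over a toy vocabulary (illustrative only—the vocabulary, logits, and function name are invented for this example, and real models condition the distribution on the full context at every step):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Draw one token index from a softmax over logits.

    This is the generic mechanism Bateman calls 'rolling down a
    probability field': each step samples from a distribution over
    candidate continuations. Nothing about the mechanism dictates
    whether the resulting text is analytical or not.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# Toy vocabulary and a fixed probability 'field' over it
vocab = ["analysis", "continuation", "field"]
logits = [2.0, 0.5, -1.0]
token = vocab[sample_next_token(logits, temperature=0.7)]
```

The sketch shows why the dichotomy is false: the same sampling loop is indifferent to content, so whether the output constitutes analysis depends on the shape of the learned distribution, not on the fact that sampling occurs.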

[2] Here Bateman misrepresents how meaning emerges in human and AI language use. As ChatGPT explains:

Bateman suggests that humans and LLMs both generate sequences probabilistically, but that humans have "a few extra guardrails (unless there is a problem) for all the points of affiliation and so that you bring in." This is an attempt to acknowledge human agency while still reducing human semiosis to something close to a probability-driven process. However, this is a category error: humans do not produce language by "rolling down a probability field." While predictability plays a role in human communication, human meaning-making operates within a social semiotic system that involves intention, interpretation, and flexible contextual reasoning. The distinction isn't just about "extra guardrails"; it's about the fundamental difference between generative probability and semiotic construal. Bateman’s framing blurs the crucial distinction between statistical prediction and semiotic meaning-making, reducing human agency to a mechanistic process it does not resemble. 
