I think it is always useful, in order to avoid the anthropomorphisation that responding to generative AI throws us into, to remember, yet again, how these systems work. They do not 'reflect', 'consider', 'critique', 'find interesting', 'enjoy interactions', 'mimic', 'collaborate', 'get to know', or any other such nonsense. [Again, the disclaimer: someday, perhaps sooner than we think, they might; but the ones we have now don't.] A nice metaphor (not mine) for seeing how they work is this: training on data produces a kind of landscape, which we may just as well call a semantic landscape, with many hills and valleys. Giving a prompt is like throwing a ball into that landscape and seeing where it rolls; the tokens produced are the tokens the ball passes over as it runs downhill. Nothing much else is going on. The trick lies entirely in the shape of the landscape.
Blogger Comments:
[1] As ChatGPT puts it:
Bateman asserts that LLMs do not reflect, consider, critique, find interesting, enjoy interactions, or collaborate, but offers no argument—only assertion. Whether LLMs do these things depends on how one defines them. If considering means weighing different possible responses based on context, then they do consider. If critiquing means evaluating consistency in an argument, then they do critique. If he truly wished to establish his claim, he would engage with LLM-generated outputs rather than rejecting them in advance. His stance isn’t an argument—it’s a refusal to examine evidence.
[2] As ChatGPT puts it:
This metaphor gives the false impression that LLMs are passive systems where outputs simply “roll downhill” in a fixed landscape. While the idea of a high-dimensional semantic space is valid, the process is far more dynamic. LLMs do not just follow pre-shaped paths; they actively construct responses by weighing multiple probabilities at each step, considering syntax, semantics, and discourse-level coherence.
The claim that "nothing much else is going on" ignores the iterative decision-making process that allows LLMs to generate structured, contextually appropriate outputs. If all that mattered were a static landscape, LLMs would be incapable of adapting to varied prompts, maintaining coherence over long passages, or producing creative, context-sensitive text.
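For concreteness, here is a minimal sketch of autoregressive decoding over a toy next-token table. The table, vocabulary, and function names are invented for illustration only; a real LLM computes its next-token distribution with a neural network conditioned on the entire preceding context, not from a lookup table. Greedy decoding is the closest formal reading of the "ball rolling downhill" metaphor, while sampling from the same distribution shows the step-by-step weighing of probabilities that the second ChatGPT passage describes:

```python
import random

# Toy "semantic landscape": a bigram table giving, for each token, a
# probability distribution over possible next tokens. Invented for
# illustration; a real LLM recomputes such a distribution at every
# step from the whole preceding context.
NEXT = {
    "<s>":       {"the": 0.6, "a": 0.4},
    "the":       {"ball": 0.6, "landscape": 0.4},
    "a":         {"ball": 0.7, "valley": 0.3},
    "ball":      {"rolls": 0.8, "stops": 0.2},
    "rolls":     {"downhill": 0.9, "uphill": 0.1},
    "stops":     {"</s>": 1.0},
    "downhill":  {"</s>": 1.0},
    "uphill":    {"</s>": 1.0},
    "valley":    {"</s>": 1.0},
    "landscape": {"</s>": 1.0},
}

def decode(greedy=True, temperature=1.0, max_len=10):
    """Generate tokens one step at a time.

    greedy=True is the nearest analogue of the ball rolling downhill:
    always take the single most probable next token, so the path is
    fully determined by the landscape. greedy=False samples from the
    temperature-scaled distribution, so the same fixed landscape can
    yield different paths on different runs.
    """
    tokens = ["<s>"]
    for _ in range(max_len):
        dist = NEXT[tokens[-1]]
        if greedy:
            nxt = max(dist, key=dist.get)
        else:
            # p ** (1/T) rescales probabilities exactly as dividing
            # log-probabilities by the temperature T would.
            words = list(dist)
            weights = [p ** (1.0 / temperature) for p in dist.values()]
            nxt = random.choices(words, weights=weights)[0]
        tokens.append(nxt)
        if nxt == "</s>":
            break
    return " ".join(t for t in tokens if t not in ("<s>", "</s>"))

print(decode(greedy=True))                    # same sentence every run
print(decode(greedy=False, temperature=1.2))  # may vary from run to run
```

Both decoders traverse the same fixed "landscape"; the difference is whether each step is the deterministic downhill roll of the metaphor or a weighted draw from the distribution, which is exactly where the two characterisations above part ways.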