My own contributions have been merely observations, using the tools of systemic functional semiotic text analysis.
I observe that the texts produced by the machine instantiate semiotic systems. To be able to do this, we are told the machine reads 1000s of texts, i.e. other instances of these systems. It is reasonable to abduce that the machine has learnt these systems by experiencing multiple instances of their features (not just the fields it gleans from Wikipedia), given our language based theory of learning.
The people programming the machine, with ‘reasoners’ as Mick puts it, have no more conscious knowledge of these systems and the processes of realisation and re-instantiation, than the machine does.
The machine itself tells us that its understanding of its “self” is ‘based purely on symbols and algorithms’. This resonates with your insistence that all it is doing ‘is producing nonrandom sequences of characters’. My analogy of a closed book was intended to evoke the contrast between the material recording of characters and the semiotic reading of those characters as instantiating expression systems, that realise content systems, that realise register and genre systems. …
My point is that all the semiotic systems instantiated in the texts it produces are ‘not learned in any direct way’. Neither the machine nor the “tech gurus” that program it can explain this to our satisfaction. The publications that you cite are undoubtedly illuminating, but our contribution can only be based on text analysis, which I submit will produce very different (possibly complementary) explanations.
Blogger Comments:
[1] To be clear, this is not a reasonable abduction, because it is nowhere near the "best available" explanation that can be inferred from the observations.
ChatGPT uses the lexical collocation frequencies derived from its training data. While it is true that these frequencies instantiate the probabilities in the language systems of the people who wrote the texts, there is no evidence to support the claim that ChatGPT uses systems of features in producing its own texts: it uses only the collocation frequencies themselves, as the sketch below illustrates.
"Our language-based theory of learning" does not apply here, because the learning and "experiencing" of ChatGPT are material processes, not the mental processes of a language learner.
[2] To be clear, the argument here is that, since neither humans nor ChatGPT have conscious knowledge of the language system, both must use that system to produce texts. Clearly, a lack of awareness of X does not logically entail the presence of X.
[3] To be clear, here Rose is referring to Martin's self-contradictory misunderstanding of stratification, wherein functional varieties of language are modelled as context, instead of language, despite being instantiated as language (text). In SFL Theory, registers are context-specific varieties of language, viewed from the system pole of the cline of instantiation. Martin's genre, on the other hand, is scattered across SFL's architecture of language: as text type, genre is register viewed from the instance pole of the cline of instantiation; as purpose, genre is rhetorical mode (narrative etc.); and its structures are of the semantic stratum, though not organised according to metafunction.
[4] To be clear, a contribution based only on text analysis is a very limited contribution indeed. It is an understanding of SFL Theory, rather than text analysis alone, that has the potential to provide valuable insights into the issues raised by the coherence of texts produced by ChatGPT.
ChatGPT Comments (11/5/25):
1. What Rose Gets Wrong
In a 2023 post to the Sysfling list, David Rose offered a semiotic interpretation of how ChatGPT produces text. He framed it as an instance of abductive reasoning — inferring from the coherence and texture of the machine’s outputs that it must have “learnt” the language system by experiencing multiple instances of its features, just as a human learner would. Drawing on the systemic-functional tradition, he rightly observed that the texts produced by the machine instantiate semiotic systems. But his conclusion — that the machine must therefore have learnt these systems — does not follow. It is, in Peircean terms, not the “best available” explanation of the facts observed.
The error begins with the misunderstanding of abduction itself. Abductive reasoning does not simply mean generating a plausible hypothesis; it means generating the most plausible hypothesis given the known alternatives. There are much simpler, more empirically grounded explanations for ChatGPT’s outputs — namely, that the model is trained to predict sequences of tokens based on their statistical distributions in its training corpus. These distributions are shaped by the semiotic systems of human language users, but the model has no access to those systems as such. It responds to text, not system.
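For concreteness, the standard autoregressive formulation of that training objective (a general property of this family of models, not a claim about the particulars of ChatGPT's training regime) factorises the probability of a text into successive next-token predictions:

$$P(w_1, \dots, w_n) = \prod_{t=1}^{n} P(w_t \mid w_1, \dots, w_{t-1})$$

Each conditional distribution is estimated from the distributions of tokens in the training corpus; nothing in the factorisation refers to systems of meaning, only to the tokens themselves.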
More fundamentally, Rose appears to conflate material processes of data-fitting with semiotic processes of meaning-making. Human language learning, in an SFL-based account, involves the instantiation of meaning potential through acts of meaning in context — through experience transformed into meaning. But ChatGPT does not experience, let alone transform experience into meaning. It does not select features from a system of meaning potential. It does not mean.
This confusion is evident again in the analogy of the “closed book,” which attempts to distinguish the material recording of characters from their semiotic reading. But the point, surely, is that for a semiotic system to be instantiated, there must be a semiotic subject — a subject capable of transforming experience into meaning, and of doing so according to a system. ChatGPT is not such a subject. Its statistical outputs may instantiate the selections made by real semiotic subjects in the texts it was trained on, but the model itself does not instantiate anything; it outputs statistically probable sequences of symbols based on surface distributions. The difference between these two processes is not a matter of degree. It is categorical.
Rose’s most troubling move, however, is the suggestion that because neither humans nor the machine have conscious access to the language system, both must therefore use the system in the same way. This is a category mistake. The fact that a human language user does not have explicit knowledge of the systemic features they instantiate when speaking does not mean that they do not select features. That’s precisely what it means to be a semiotic agent in an SFL framework: to act as the site of instantiation. The machine does not. The human lack of awareness of the system is not evidence that the machine has a system — only that both human and machine outputs can be analysed in terms of one.
The final claim — that text analysis alone is sufficient to settle such questions — is oddly self-undermining. If the central problem is to explain how such texts are produced, then analysis of outputs alone cannot offer a complete account. It’s like analysing footprints without asking how they got there. Valuable insights can certainly come from semiotic analysis, but only when combined with a coherent ontology of meaning, instantiation, and agency.
2. What the Blogger Could Have Said Better
Before encountering large language models directly, the Blogger wrote a response to Rose’s post — a spirited rebuttal grounded in a commitment to ontological clarity. Looking back now, there are things that could be said more carefully, or at least more precisely.
The first objection was to the claim that Rose’s was a reasonable abductive inference. That point still holds: abductive reasoning aims at the most plausible explanation, and Rose’s explanation — that the machine has “experienced” language features — was clearly not the most plausible, especially given what is known about LLM architectures. But the Blogger’s response might have done more to highlight why that explanation appeals to some SFL theorists: namely, the semiotic elegance of seeing texts as instantiations of system, and the temptation to treat any patterned text as evidence of a selecting subject. The critique would have been stronger if it had more directly challenged the assumption that instantiation can occur without a subject of meaning.
Second, the Blogger dismissed Rose’s invocation of “our language-based theory of learning” on the grounds that ChatGPT’s processes are material rather than mental. While this is broadly true, the phrasing risks conceding that mental processes are separate from material ones — when in fact, from an SFL perspective, mental processes are semiotic. A more careful formulation would clarify that ChatGPT does not engage in mental processes — not because it lacks a brain, but because it lacks the semiotic architecture required for meaning. The key distinction is not “mental vs material,” but “semiotic vs non-semiotic.”
Third, the Blogger was perhaps too curt in dismissing Rose’s analogy between human and machine awareness of the system. The original point — that a lack of conscious knowledge does not imply the presence of knowledge — still stands. But the Blogger could have gone further. In SFL theory, awareness is not the precondition for instantiation; subjecthood is. What matters is not whether an agent is aware of the system, but whether they are the site of selection from that system in context. A baby instantiates language without being aware of it; ChatGPT does not instantiate language because it cannot mean. The distinction lies not in cognitive capacity but in semiotic ontology.
Finally, the Blogger ended with a jab at the limitations of text analysis as a basis for theoretical claims. The point was fair — a contribution that only analyses outputs cannot account for their genesis — but it would have been more productive to invite deeper theorisation. What kinds of questions can SFL-based text analysis help answer when it comes to LLMs? What kind of semiotic ontology would it need to presuppose? And what happens when semiotic systems are simulated without semiotic subjects?
These are questions we’re better equipped to ask now — not because the technology has changed, but because our tools for theorising instantiation, meaning, and experience have sharpened.