Monday, 3 April 2023

David Rose On What Makes Us Human And Fooled

David Rose wrote to Sysfling on 31 Mar 2023 at 9:23:
I was thinking of Shooshi’s what makes us human... dirty jokes ;-))
Re pickiness of humans... I have a sneaking suspicion GPT’s telling us something important about us, not just itself. How much are we fooled by our synoptic view of systems and texts? We can see emergent patterning when we stare at enough printed text, and then represent it as weighted options in systems. But how much do we actually know about how we process text above the lower ranks of expression? What actually is the relation between structural probabilities in text production and ‘meaning’.


Blogger Comments:

[1] To be clear, for Halliday, it is the stratified content plane of language that "makes us human". Halliday (2002 [1996]: 388):
The more complex type of semiotic system is that which evolves in the form of Edelman’s “higher order consciousness”. This higher order semiotic is what we call language. It has a grammar; and it appears to be unique to mature (i.e. post-infancy) human beings. In other words, it evolved as the “sapiens” in homo sapiens.
Halliday (2003 [1995]: 390, 430n):
In this paper I have tried to identify, and to illustrate, certain aspects of language which seem to me critical to a consideration of language and the human brain. In doing so I have assumed that language is what defines the brain of homo sapiens: what constitutes it as specifically human.
The emergence of grammar … is the critical factor in the development of higher-order consciousness; homo sapiens = homo grammaticus. See Halliday (1978a, 1979b); Painter (1984, 1989); Oldenburg (1986).
Halliday & Matthiessen (2014: 25):
This stratification of the content plane had immense significance in the evolution of the human species – it is not an exaggeration to say that it turned homo ... into homo sapiens (cf. Halliday, 1995b; Matthiessen, 2004a). It opened up the power of language and in so doing created the modern human brain. 

[2] To be clear, in the case of ChatGPT, "we" are not fooled by "our synoptic view of systems and texts", but by ascribing systems to an AI model of language that generates texts from the lexical collocation probabilities of instances, not from systems.

[3] To be clear, this relation is given by the architecture of language proposed by SFL Theory: structures are specified systemically in the realisation statements attached to features whose probability of instantiation varies according to register. 'Meaning', in the narrower sense, is the stratum of semantics: its systems are realised as structures and instantiated as texts.
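To make that architecture concrete, here is a minimal, hypothetical sketch, not drawn from any published SFL implementation: the feature names, realisation statements and register probabilities are simplified placeholders, intended only to show how a feature's register-varying probability of instantiation and its attached realisation statement together specify structure.

```python
import random
from dataclasses import dataclass

@dataclass
class Feature:
    name: str          # e.g. 'declarative'
    realisation: str   # realisation statement specifying structure, e.g. 'Subject ^ Finite'
    probability: dict  # hypothetical probability of instantiation, keyed by register

# Simplified MOOD features with invented, purely illustrative probabilities.
mood = [
    Feature('declarative',   'Subject ^ Finite', {'casual conversation': 0.85, 'service encounter': 0.60}),
    Feature('interrogative', 'Finite ^ Subject', {'casual conversation': 0.10, 'service encounter': 0.35}),
    Feature('imperative',    'Predicator only',  {'casual conversation': 0.05, 'service encounter': 0.05}),
]

def instantiate(register: str) -> Feature:
    """Select a feature according to its register-specific probability of instantiation."""
    weights = [f.probability[register] for f in mood]
    return random.choices(mood, weights=weights, k=1)[0]

selected = instantiate('service encounter')
print(selected.name, '->', selected.realisation)  # structure is specified by the selected feature
```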

But importantly, ChatGPT does not use systems that specify structural probabilities to generate texts. Instead, it uses the lexical collocation probabilities garnered from a 'reservoir' of texts, each of which is an instance of the system of the meaner who produced it. (In lexicogrammar, collocation is the syntagmatic dimension of lexis, whereas structure is the syntagmatic dimension of grammar.)
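By way of contrast, here is a deliberately crude sketch of text generation from collocation probabilities alone. The bigram counts stand in for 'lexical collocation probabilities garnered from a reservoir of texts'; the three-sentence 'reservoir' is invented for illustration, and the actual architecture of ChatGPT is, of course, vastly more elaborate. The point is simply that no systemic specification of structure is consulted at any step.

```python
import random
from collections import defaultdict

# Toy 'reservoir' of instances (each text an instance of its meaner's system).
reservoir = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Tally lexical collocations (here, simple bigrams) across the reservoir.
collocations = defaultdict(lambda: defaultdict(int))
for text in reservoir:
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        collocations[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Generate a text purely from collocation probabilities, with no systemic specification of structure."""
    word, output = start, [start]
    for _ in range(length):
        followers = collocations.get(word)
        if not followers:
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights, k=1)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```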