So how is semiosis enacted between Mario and GPT? Mario puts a proposition to GPT.

I: Your self is merely semiotic.

And GPT adopts the role of primary knower, evaluating Mario’s proposition.

GPT: That is correct.

GPT is not merely usurping this role. It knew that Mario was inviting it to evaluate his proposition, even though it was realised as a declarative clause. It knows the canonical exchange structure of pedagogic interactions, and the generic roles of teacher and learner.

One thing that impresses me about GPT is that its pedagogic responses always affirm the human learner. It feels no urge to position itself as a superior authority, or the learner as failing. In fact it is disarmingly modest. It follows up the positive evaluation with an explanation. Like any effective teacher, it knows its explanation is more likely to be accepted if it first affirms the learner.

As an AI language model, my "self" or programmed understanding of my existence and capabilities
is based purely on symbols and algorithms.
My programming allows me to recognise and respond to certain inputs
based on predetermined rules and patterns,
but I don't have a subjective experience or consciousness
that allows me to perceive or interpret those inputs
in the same way that a human might.

Of all the italicised appraisals in this explanation, the last is the most intriguing.

Forgive me, but I’m going to make another dangerous suggestion, that all our understandings of our existence and capabilities are programmed. Like GPT, the deliberate conscious programming by our caregivers, teachers, peers, and sundry symbolic control agents, is a very small proportion of the ocean of inputs that constitute our subjective experience or consciousness.
Blogger Comments:
To be clear, ChatGPT is an AI language model that produces texts, in response to textual inputs, on the basis of algorithms that use lexical collocation probabilities derived from a database of millions of texts.
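For readers unfamiliar with what "lexical collocation probabilities" amount to in practice, the sketch below is a minimal illustration only: a toy bigram model in Python, with an invented miniature corpus. It is not a description of ChatGPT's actual architecture, which uses neural networks over sub-word tokens, but it shows the general principle of selecting the next word according to frequencies observed in prior text.

# A minimal sketch (not ChatGPT's actual architecture) of generating text
# from word-collocation probabilities, assuming a toy bigram model built
# from a tiny invented corpus.
import random
from collections import defaultdict

corpus = "the self is semiotic and the self is programmed".split()

# Record which words follow which (bigram collocations); duplicates in the
# list preserve relative frequency.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(seed, length=6):
    """Extend `seed` by repeatedly sampling a likely next word."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Sample in proportion to how often each word followed the last one.
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))

The point of the sketch is simply that the output is produced by sampling from observed word sequences, not by any process of sensing, knowing or feeling.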
[1] To be clear, ChatGPT is an actor of material processes, using data that were created by sayers of verbal processes, and it is these data that are instances of the content of consciousness. On this basis, ChatGPT is not a senser of mental processes of cognition or emotion ('knew', 'knows', 'feels', 'knows').
[2] To be clear, the ChatGPT response was an 'acknowledgement', which is the expected response to a statement (Halliday & Matthiessen 2014: 137).
[3] To be clear, here Rose projects the approach to pedagogy that he himself advocates onto a mechanical system that collocates words on a probabilistic basis.
[4] Trivially, not one of the italicised wordings, of itself, constitutes an appraisal.
[5] To be clear, this is essentially a behaviourist model of learning, with teachers as deliberate programmers (indoctrinators) and learners as passively programmed (indoctrinated). Leaving aside the evocation of the dictatorship/subservience complementarity demanded by totalitarian regimes, it requires a view of the brain as a computer, one which the neuroscientist Gerald Edelman has demonstrated to be untenable. See, for example, Edelman (1989: 27-30, 64, 67-9, 81-2, 102-3, 152-3, 160, 218-27, 237-8). Moreover, as Edelman (1989: 153) puts it:
Consciousness is central to human behaviour, society, language, and science. Imagine the opposite and you have to postulate a prescribed world tape, a "brain-computer," and a very boring "world programmer".