Thursday, 30 March 2023

David Rose On Individuation, Affiliation And ChatGPT

We’ve been arguing about realisation and instantiation, but judging from your discussion here, and from posts by Mick and others on the machine ‘fooling’ us into interpreting its texts as human-like, the issue is actually individuation.

It can appear to negotiate affiliation, but the machine itself is not affiliated with any scale of community. Perhaps only the people who program it and feed it text corpora are so affiliated, along with the users who focus its tasks.

I think this is crucial because we can use and grow our individuation toolset to analyse this dimension. As I think you are suggesting, our realisation and instantiation tools are inadequate on their own for describing it. That is why all I can see when I look at its texts is instantiation of systems at each stratum. I am told this is wrong, but I haven’t been given any textual evidence against it – only the authority of its designers and their community.

When the machine tells us
I don't have a subjective experience or consciousness that allows me to perceive or interpret those inputs in the same way that a human might.
...it’s talking about individuation.


Blogger Comments:

[1] To be clear, Martin has proposed two models of individuation, one in which meaning potential is individuated (derived from Bernstein), and one in which meaners are individuated (derived from his student Knight). Neither model applies to ChatGPT because its texts are not instances of a system of meaning potential individuated through ontogenesis. Instead, ChatGPT uses the collective ("unindividuated") lexical collocation probabilities derived from millions of instances (texts) to produce new instances. In Bernstein's terms, ChatGPT is not a 'repertoire of potential' but a 'reservoir of instances'. ChatGPT is thus not an individuated meaner producing texts as instances of an individuated system of potential.
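To make the contrast concrete, here is a minimal sketch in Python of what generating new instances from pooled collocation statistics looks like. It is a hedged illustration only: the corpus and function names are invented for the example, and ChatGPT itself is a neural transformer, not a bigram model. The point it illustrates is the principle at issue: new texts are produced directly from probabilities pooled across old texts, with no individuated system of potential in between.

```python
import random
from collections import defaultdict, Counter

# Toy "reservoir of instances": a pooled corpus with no per-speaker
# repertoire. (Invented data; ChatGPT uses a neural transformer,
# not bigram counts, but the principle is the same.)
corpus = [
    "the machine produces text",
    "the machine predicts the next word",
    "text is produced from other text",
]

# Count bigram collocations across the whole pooled corpus.
bigrams = defaultdict(Counter)
for text in corpus:
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def generate(seed: str, length: int = 6) -> str:
    """Produce a new 'instance' by sampling from pooled probabilities."""
    words = [seed]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        tokens, counts = zip(*followers.items())
        words.append(random.choices(tokens, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the machine predicts the next word"
```

Nothing in this sketch individuates a meaner: the probabilities belong to the collective reservoir, not to any repertoire developed through ontogenesis.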

[2] To be clear, the underlying principle of affiliation is different from that of individuation, but Martin confuses the two in his Knight-derived model. Where individuation is a hyponymic taxonomy (an elaboration of types), affiliation is a meronymic taxonomy (a composition of parts). The affiliation model does not apply to ChatGPT because it presupposes individuated meaners producing texts as instances of an individuated system, and, as demonstrated above, ChatGPT is no such meaner.
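The structural difference between the two taxonomies can be schematised in code (a hedged illustration only; the class names are invented for the example). Hyponymy corresponds to subtyping ('a teen English speaker is a kind of meaner'), whereas meronymy corresponds to composition ('a meaner is a part of a community'):

```python
# Hyponymic taxonomy (individuation): an elaboration of types,
# modelled here as subclassing ('is a kind of'). Names are invented.
class Meaner:
    pass

class EnglishSpeaker(Meaner):          # a subtype of Meaner
    pass

class TeenEnglishSpeaker(EnglishSpeaker):  # a further subtype
    pass

# Meronymic taxonomy (affiliation): a composition of parts,
# modelled here as membership ('is a part of').
class Community:
    def __init__(self) -> None:
        self.members: list[Meaner] = []

    def affiliate(self, meaner: Meaner) -> None:
        self.members.append(meaner)

speaker = TeenEnglishSpeaker()
community = Community()
community.affiliate(speaker)            # part added to a whole
print(isinstance(speaker, Meaner))      # True: a subtype relation
```

On this schema, confusing the two amounts to treating membership of a community (a part–whole relation) as if it were a subtype relation.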

[3] To be clear, it is not that "our realisation and instantiation tools are inadequate", but that they are misapplied if ChatGPT does not operate with a model of stratified systems of potential.

[4] To be clear, this is wrong because there is no evidence whatsoever that any text produced by ChatGPT is "the instantiation of systems at each stratum". The texts are generated from other instances, not a system, and only use the graphological realisations of probabilistically collocated lexical items, not a stratified model of language.

[5] To be clear, here ChatGPT is telling anyone who would listen why it is not an individuated meaner.