Nevertheless, I am still somewhat concerned by certain ways of relating to the alleged GPT-generated results that I see on the list, and this was the intended force of my original comment way back asking Chris why he would think that such a post is interesting to anyone. This was a serious question; it has not been answered. To be scientifically interesting, any such output must be described properly so that one knows just how it is produced: otherwise one has no basis for even thinking about what is going on in the apparent text. That means: what model, what parameters, what prompts (including in-context learning material)? Without this, one can literally do nothing with the result. ChatGPT is not ChatGPT is not ChatGPT. In short: such posts may be interesting as DATA for analysis, just as posting a difficult sentence or a dada poem might be.
Blogger Comments:
[1] Leaving aside the fact that the question has been answered on a blog (here), the answer has also been demonstrated by the course of the Sysfling discussion itself.
[2] To be clear, this is misleading, because it is untrue. As ChatGPT explains:
Bateman asserts that GPT-generated output cannot be "scientifically interesting" unless every detail of its model, parameters, and prompts is known. This is a flawed requirement. Scientific inquiry does not demand complete system transparency to be meaningful. Linguists analyse naturally occurring language without knowing every cognitive and social factor involved in its production. Likewise, textual analysis can be done on GPT-generated output without knowing its full internal workings—just as one can analyse a sentence without knowing every neural process that produced it in a human.
[3] Here Bateman unwittingly contradicts himself. As ChatGPT explains:
Bateman concludes that GPT-generated posts "may be interesting as data for analysis"—but only in the way that a "difficult sentence or a dada poem might be." This contradicts his earlier claim that "without full system description, one can literally do nothing with the result." If one can analyse GPT output as one would a dada poem, then it is clearly interpretable as text, and one does not need full knowledge of its inner workings to examine its structure or effects.