Wednesday 14 March 2018

Jim Martin On Lexical Density And "Reference"

The proportion of generic to specific reference as far as participant tracking is concerned is relevant, since with generic reference identity chains re-initiate more often, pushing up lexical density; I’m not sure, never having counted, but where the generic reference involves technical terms, this may push up the repetition and lexical density even further.
Also, I don’t think the stylistic prescriptive taboo in English against repetition of lexical items holds for specialised or technical terms. 
In terms of mode, it is more sensitive to measure lexical density per ranking clause, rather than depend on content/function word ratios.

Blogger Comments:

[1] To be clear, the distinction between generic and specific reference is not a distinction in the cohesive system of reference. It is an invention of Martin's (1992: 103-10) alone, and forms a system within his IDENTIFICATION network. Martin's system of IDENTIFICATION is his rebranding of Halliday's system of cohesive reference, which he misunderstands and relocates from (non-structural) lexicogrammar to (structural) discourse semantics. Martin's misunderstandings in this regard are identified in explanatory detail here.

In short, Martin confuses the textual system of referring with the experiential meanings of what is referred to, the referents, as shown by his taking the (unstructured) participant as the unit of this system.  This confusion also shows up in the distinction between generic and specific reference, where what is presented as a distinction in reference is actually a distinction in the referents.  Martin (1992: 103):
Generic reference is selected when the whole of some experiential class of participants is at stake rather than a specific manifestation of that class.
[2] Martin's 'participant tracking' derives from the notion of participant identification, introduced by the Hartford stratificationalists (Martin 1992: 95).  It is Martin's attempt to integrate their ideas with Halliday's notion of cohesive reference that leads to the confusion of reference with referent in his model.  Martin's IDENTIFICATION is concerned with chains of instantial participants, and this naturally leads to confusion between reference chains of participants and lexical strings (documented here); the latter derive from Martin's misunderstandings of Halliday's lexical cohesion (as demonstrated here).

[3] This is a bare assertion, unsupported by argument or data.  The claim is that chains of a class of participant (e.g. 'human') restart in a text more often than chains of a specific manifestation of a class (e.g. 'Donald Trump').  Readers are invited to test this claim for themselves.

[4] This is another bare assertion, unsupported by argument or data.  The claim is that the more often reference chains restart, the higher the lexical density.  The former does not necessarily entail the latter, since the number of lexical items per ranking clause can be entirely independent of how often a chain of participants restarts.
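To make the independence concrete, here is a minimal sketch in Python. The two texts and all of their counts are invented for illustration; the point is only that the density calculation contains no term for chain restarts, so restart frequency cannot, by itself, determine density:

```python
def lexical_density(lexical_items: int, ranking_clauses: int) -> float:
    """Lexical items per ranking clause (Halliday & Matthiessen 2014: 727)."""
    return lexical_items / ranking_clauses

# Invented, hand-annotated counts for two hypothetical texts: the same
# lexical items and ranking clauses, but very different restart counts.
texts = {
    "text A": {"lexical_items": 24, "ranking_clauses": 6, "chain_restarts": 1},
    "text B": {"lexical_items": 24, "ranking_clauses": 6, "chain_restarts": 5},
}

for name, t in texts.items():
    density = lexical_density(t["lexical_items"], t["ranking_clauses"])
    print(f"{name}: density {density:.1f}, chain restarts {t['chain_restarts']}")
# Both texts score 4.0 lexical items per ranking clause despite the
# fivefold difference in how often their identity chains re-initiate.
```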

More to the point, the critical factor in increasing lexical density is ideational metaphor.  Halliday (2008: 163):
This [ideational metaphor] is a designed, or at least semi-designed, extension of the “experiential” way of looking at phenomena. It suits the “crystalline”, written mode of being; and in particular, as already said, it suits the elaborated discourses of organised knowledge, because it is good to think with — it enables you to build well-ordered conceptual structures and to spin tangled skeins of reasoning. High lexical density is the price to be paid.
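To see the mechanism at work, consider a worked sketch. The congruent/metaphorical pair below is invented for illustration (it is not Halliday's own example), and the clause boundaries and lexical-item annotations are mine:

```python
# Two wordings of the same content. The congruent version spreads the
# meaning over two ranking clauses; the metaphorical version packs the
# processes and qualities into nominal groups within a single clause.
versions = {
    "congruent":
        # "The cast acted brilliantly || so the audience applauded for a long time"
        {"ranking_clauses": 2,
         "lexical_items": ["cast", "acted", "brilliantly",
                           "audience", "applauded", "long", "time"]},
    "metaphorical":
        # "The cast's brilliant acting drew lengthy applause from the audience"
        {"ranking_clauses": 1,
         "lexical_items": ["cast", "brilliant", "acting", "drew",
                           "lengthy", "applause", "audience"]},
}

for name, v in versions.items():
    density = len(v["lexical_items"]) / v["ranking_clauses"]
    print(f"{name}: {density:.1f} lexical items per ranking clause")
# congruent: 3.5; metaphorical: 7.0. The lexical items survive the
# rewording, but the clause count halves, so density doubles, which is
# consistent with Halliday's point above.
```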

[5] This is speculation based on a non sequitur.  See [4].

[6] This is another bare assertion, unsupported by argument or data.  The claim is that the stylistic taboo against repeating lexical items does not apply to specialised or technical terms.

[7] To be clear, Martin misconstrues mode as a dimension of register, rather than as a dimension of context that is realised in texts, registers and language as a whole (i.e. across the cline of instantiation).

[8] This is a false dichotomy.  Lexical density is a measure of the number of lexical items per ranking clause, and nothing else.  Halliday & Matthiessen (2014: 727):
To measure lexical density, simply divide the number of lexical items by the number of ranking clauses.
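For illustration, a minimal sketch of the calculation in Python. The ranking clauses are segmented by hand, and the function-word stoplist below is only a crude, assumed proxy for the lexical/grammatical distinction, which properly requires grammatical analysis rather than a word list:

```python
# Lexical density as lexical items divided by ranking clauses
# (Halliday & Matthiessen 2014: 727). Clause segmentation is supplied
# by hand; a small stoplist crudely stands in for grammatical items.
GRAMMATICAL = {
    "the", "a", "an", "of", "to", "in", "on", "at", "for", "by", "with",
    "and", "or", "but", "so", "is", "are", "was", "were", "be", "been",
    "it", "this", "that", "he", "she", "they", "not", "as",
}

def lexical_density(ranking_clauses: list[str]) -> float:
    lexical = sum(
        1
        for clause in ranking_clauses
        for word in clause.lower().split()
        if word.strip(".,;:!?\"'") not in GRAMMATICAL
    )
    return lexical / len(ranking_clauses)

clauses = [
    "the cast acted brilliantly",
    "so the audience applauded for a long time",
]
print(f"{lexical_density(clauses):.2f}")  # 3.50: 7 lexical items, 2 clauses
```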
