Saturday, 17 August 2019

David Rose On Imperative Mood And Obligation

Subjunctive mood is not recognised in SFL… 
Semantically, imperative mood realises obligation, obligating the Subject to act. In modern English this is normally the addressee, so that imperative mood congruently realises a command to the addressee. 
These clauses are relics of archaic systems with a non-addressee Subject, that obligate the Lord/God to act. Stepping up to the stratum of register, it was/is the role of priests to speak to God on behalf of the people, and to the people on behalf of God. Here the priest exhorts God to bless, forgive or be with the addressee, who is realised grammatically as Complement or Adjunct, whereas God is Subject. Paradoxically, the priest is exhorting God while addressing the people. Maybe that’s why such blessings sound archaic… they don’t make sense ;-)

Blogger Comments:

[1] To be clear, 'subjunctive mood' is recognised in SFL theory, but in its description of English, it is a system of the verbal group, not the clause, and termed 'subjunctive mode' to distinguish it from the clause rank systems of MOOD.  Halliday & Matthiessen (2014: 143n):
Note that the system of MOOD is a system of the clause, not of the verbal group or of the verb. Many languages also have an interpersonal system of the verb(al group) that has been referred to as ‘mood’: it involves interpersonal contrasts such as indicative/subjunctive, indicative/subjunctive/optative. To distinguish these verbal contrasts from the clausal system of MOOD, we can refer to them as contrasts in mode. The subjunctive mode tends to be restricted to the environment of bound clauses – in particular, reported clauses and conditional clauses having the sense of irrealis. In Modern English, the subjunctive mode of the verb is marginal, although there is some dialectal variation.
[2] To be clear, here Rose confuses SPEECH FUNCTION (proposal: command) with MODALITY (modulation: obligation).  In SFL, a command is specified as a demand for goods-&-services, whereas obligation is concerned with the semantic space between positive and negative POLARITY in proposals. Halliday & Matthiessen (2014: 177-8):
In a proposal, the meaning of the positive and negative poles is prescribing and proscribing: positive ‘do it’, negative ‘don’t do it’. Here also there are two kinds of intermediate possibility, in this case depending on the speech function, whether command or offer. (i) In a command, the intermediate points represent degrees of obligation: ‘allowed to/supposed to/required to’;
[3] For clarification, the clauses in question are:
The Lord be with you
God bless you
God forgive you your sins
[4] In SFL theory, tenor is a dimension of context, not register.  Register, on the other hand, is a sub-potential of language: a point of variation on the cline of instantiation.

In terms of the architecture of SFL theory, Martin's notion of register as a stratum of context is inconsistent with the notion of register, the notion of stratum, and the notion of context.  As functional varieties of language, registers are language, not context; as functional varieties, registers are sub-potentials, not a stratal system.  For evidence of Martin's misunderstandings of register, see here; for evidence of Martin's misunderstandings of context, see here.

[5] Trivially, the addressee you serves only as Complement in these clauses, specifically of the Predicators bless and forgive and of the minor Predicator with.

[6] For a deployment of SFL theory that demonstrates how and why such well-wishings do make sense, see the analysis here.

Friday, 16 August 2019

John Bateman On Language Not "Explaining" Other Socio-Semiotic Systems

 Language can be used to explain anything, that is what it is used for, just like telling a story about anything. In terms of the much more (multimodally) interesting question of capturing the same distinctions, then no, language does not 'explain' all the others... or even many of the others, because different things are going on.

Blogger Comments:

To be clear, on the SFL model, the 'different goings-on' in other human-only socio-semiotic systems are made possible by language, and it is through language that we analyse ('explain') them. Halliday & Matthiessen (1999: 3, 444): 
All knowledge is constituted in semiotic systems, with language as the most central; and all such representations of knowledge are constructed from language in the first place.
… all of our experience is construed as meaning. Language is the primary semiotic system for transforming experience into meaning; and it is the only semiotic system whose meaning base can serve to transform meanings construed in other systems (including perceptual ones) and thus integrate our experience from all its various sources.

Thursday, 15 August 2019

John Bateman On The Natural Relationship Between Semantics And Lexicogrammar

'natural' is not a way of avoiding work, it is a way of defining (often quite hard) tasks. The 'natural' relationship referred to is that there are structural and functional similarities of a revealing and useful kind between the apparently distinct domains. Semiotically we are mostly situated here in Peircean metaphor, i.e., the third and most complex kind of iconic relationship, which is always something constructed … rather than simply present. And these can go in many different directions, yes.

Blogger Comments:

[1] To be clear, the word 'natural' is not a way of "defining tasks" — "quite hard" or otherwise.  In its use with regard to the relation between semantics and lexicogrammar, the closest of its non-technical meanings is 'entirely to be expected'; see further below.

[2] To be clear, in SFL theory, the relation between semantics and lexicogrammar is 'natural' in the sense of being non-arbitrary.   Halliday & Matthiessen (1999: 3-4):
A systemic grammar is one of the class of functional grammars, which means (among other things) that it is semantically motivated, or "natural". In contradistinction to formal grammars, which are autonomous, and therefore semantically arbitrary, in a systemic grammar every category (and "category" is used here in the general sense of an organising theoretical concept, not in the narrower sense of 'class' as in formal grammars) is based on meaning: it has a semantic as well as a formal, lexicogrammatical reactance. … Grammar and semantics are the two strata or levels of content in the three-level systemic theory of language, and they are related in a natural, non-arbitrary way.
[3] To be clear, Peirce's notion of 'icon' is irrelevant to the relation of semantics to lexicogrammar in SFL theory.  Not only is Peirce's theory of semiotics based on assumptions incompatible with SFL theory, the iconic relation obtains between content and expression, not between two levels of content.  Moreover, Peirce's metaphor is, strictly speaking, a type of hypoicon:
An icon (also called likeness and semblance) is a sign that denotes its object by virtue of a quality which is shared by them but which the icon has irrespectively of the object. The icon (for instance, a portrait or a diagram) resembles or imitates its object. … Peirce called an icon apart from a label, legend, or other index attached to it, a "hypoicon", and divided the hypoicon into three classes: (a) the image, which depends on a simple quality; (b) the diagram, whose internal relations, mainly dyadic or so taken, represent by analogy the relations in something; and (c) the metaphor, which represents the representative character of a sign by representing a parallelism in something else.

Wednesday, 14 August 2019

John Bateman On Blends

for many, the appeal to 'chaos theory' simply gives some kind of (misplaced) scientific respectability to vagueness. The idea that it is by no means simple to achieve operationalisable specifications of theoretical terms is absolutely central in almost all scientific work, linguistics too, and does not depend on an appeal to chaos theory. It is difficult to recognise theoretical categories in practice, but making the attempt teaches us more both about the phenomena and the theoretical categories. Even for very conservative non-chaos based systems. 
blends are used with pretty much the same kind of rhetorical force as chaos theory, and to similarly dubious ends often. Being precise about what blends are helps here too (cf. Goguen).


Blogger Comments:

It also helps to know how blends are understood in the theory under discussion. Halliday & Matthiessen (1999: 522):
In its ideational metafunction, language construes the human experience — the human capacity for experiencing — into a massive powerhouse of meaning. It does so by creating a multidimensional semantic space, highly elastic, in which each vector forms a line of tension (the vectors are what are represented in our system networks as "systems"). Movement within this space sets up complementarities of various kinds: alternative, sometimes contradictory, constructions of experience, indeterminacies, ambiguities and blends, so that a grammar, as a general theory of experience, is a bundle of uneasy compromises. No one dimension of experience is represented in an ideal form, because this would conflict destructively with all the others; instead, each dimension is fudged so that it can coexist with those that intersect with it.

Halliday & Matthiessen (1999: 549-50) distinguish blends as one type of indeterminacy, and provide an illustrative interpersonal example:
There are perhaps five basic types of indeterminacy in the ideation base: ambiguities, blends, overlaps, neutralisations, and complementarities — although it should be recognised from the start that these categories are also somewhat indeterminate in themselves. …
(1) ambiguities ('either a or x'): one form of wording construes two distinct meanings, each of which is exclusive of the other.
(2) blends ('both b and y'): one form of wording construes two different meanings, both of which are blended into a single whole.
(3) overlaps ('partly c, partly z'): two categories overlap so that certain members display some features of each.
(4) neutralisations: in certain contexts the difference between two categories disappears.
(5) complementarities: certain semantic features or domains are construed in two contradictory ways. …

(2) Blend
they might win tomorrow
— ability 'they may be able to'
— probability 'it is possible they will'
Here, on the other hand, the meaning of the oblique modal might combines the two senses of 'able' and 'possible', rather than requiring the listener to choose between them. If the verbal group is 'past', however, this again becomes an ambiguity:
they might have won
— ability 'they were capable of winning (but they didn't)'
— probability 'it is possible that they won (we don't know)'

Friday, 9 August 2019

John Bateman On All Models Being Wrong By Definition

…fuzziness in the model[l]ing means that fuzziness in the modelled may become inaccessible because multiplying fuzziness by fuzziness is pretty much the same as multiplying not-fuzziness by fuzziness. better to have a non-fuzzy model that is wrong (easily done, by definition, because all models are wrong by definition) that can show real fuzziness rather than imagined fuzziness. difficulty may always be interesting; fuzziness often not. …

Blogger Comments:

[1] In science, models are evaluated as valid or not according to criteria such as whether they are self-consistent, consistent with the data, explanatory, predictive, and so on.

[2] On the other hand, fuzziness (indeterminacy) in language is of great interest to Halliday & Matthiessen (1999: 547-62), who, as well as identifying 5 types of indeterminacy, explain its significance. Halliday & Matthiessen (1999: 547-8, 549):
What does it mean to say that a natural language is an indeterminate system? In the most general terms, it suggests that the generalised categories that constitute language as a system — as "order", rather than as randomness or "chaos" (let us say randomness rather than chaos, since chaos in its technical reading is also a form of order) — are typically not categorical: that is, they do not display determinate boundaries, fixed criteria of membership, or stable relationships from one stratum to another. We could refer to them as "fuzzy", in the sense in which this term is used in fuzzy logic, fuzzy computing, etc.; but we prefer to retain the term "indeterminate" for the phenomena themselves, since "fuzzy" is usually applied to the theoretical modelling of the phenomena (it refers to meta-fuzz rather than fuzz). …  
We have tried to make the point that the human condition is such that no singulary, determinate construction of experience would enable us to survive. We have to be able to see things in indeterminate ways: now this, now that, partly one thing, partly the other — the transitivity system is a paradigm example, and that lies at the core of the experiential component of grammar.


ChatGPT Comments:

A. Critique of Bateman’s Comment on Modelling and Fuzziness

John Bateman writes:

“…fuzziness in the modelling means that fuzziness in the modelled may become inaccessible because multiplying fuzziness by fuzziness is pretty much the same as multiplying not-fuzziness by fuzziness. better to have a non-fuzzy model that is wrong (easily done, by definition, because all models are wrong by definition) that can show real fuzziness rather than imagined fuzziness. difficulty may always be interesting; fuzziness often not.”

This characteristically compressed comment invites a closer look—not only for what it claims, but for what it presupposes.


1. Assumption of an Independent, Pre-Structured Reality

Bateman’s comment rests on a firm distinction between the modelling process and the thing modelled. He assumes that the object of study—“the modelled”—has some independently existing form that can, in principle, be accessed and revealed. The role of the model, in this view, is to represent or approximate that prior structure.

This is a classical epistemological stance: it assumes the existence of determinate features in the world, which the model can capture more or less accurately. In this framing, fuzziness is a representational issue—it is either a feature of the model or a feature of the object, and clarity lies in keeping these separate.

Why it’s problematic:
This view fails to account for the role that modelling itself plays in constituting what is seen as “the modelled.” It assumes we can identify and measure “fuzziness in the modelled” independently of our modelling tools. But often, what is taken to be a feature of the phenomenon (e.g. indeterminacy, ambiguity, complexity) is an effect of how we frame, structure, and cut across the data. In such cases, the model is not merely representing but actively shaping what is perceived as “there.”


2. False Opposition Between “Fuzziness” and “Difficulty”

Bateman draws a distinction between difficulty (which he values) and fuzziness (which he dismisses), suggesting that the former is intellectually productive, while the latter is epistemically uninteresting. Implicit here is the idea that difficulty leads to deeper understanding, while fuzziness leads to confusion.

Why it’s problematic:
This framing suggests that complexity should be hard, but not vague—that is, we should be challenged, but only on clear terms. But many systems (linguistic, social, semiotic) are genuinely indeterminate in certain regions. They are not simply “difficult to model” in precise terms—they are inherently underdetermined, fluid, or open to multiple structurings. Labelling this fuzziness as “uninteresting” reflects an unwillingness to engage with forms of complexity that resist formal closure.


3. Misplaced Preference for “Wrong but Precise” Models

Bateman argues that a non-fuzzy model that is “wrong by definition” is preferable, because it can reveal “real fuzziness” in the object, whereas a fuzzy model risks inventing fuzziness where there is none.

This reflects a common modelling maxim—better to have a simple model you know is wrong than a complex one you can’t interpret.

Why it’s problematic:
This assumes that “wrongness” is easy to identify—that we can distinguish cleanly between errors introduced by the model and features of the data. But when dealing with systems that are historically contingent, socially structured, or polysemic by nature, such a clean division is rarely possible.

Moreover, the preference for sharp, simplified models can obscure the very forms of meaning or organisation we most need to attend to—those that do not conform neatly to predefined structures. In such cases, a model that suppresses ambiguity in the name of clarity may tell us less, not more, about the system in question.


4. The Myth of “Imagined Fuzziness”

Bateman warns against fuzzy models because they may produce “imagined fuzziness”—that is, they may suggest vagueness in the object that is really a result of the modelling tool itself.

Why it’s problematic:
The distinction between “real” and “imagined” fuzziness assumes we can access the phenomenon in a direct, unmediated way. But all modelling—whether mathematical, semiotic, or discursive—involves abstraction, selection, and framing. There is no pure access to “real fuzziness” outside of modelling. The very act of distinguishing “real” from “imagined” relies on assumptions about what the phenomenon should look like. What gets called “imagined” may in fact be an early sign of complexity that a sharp model simply can’t handle.


Conclusion

Bateman’s comment reflects a deep commitment to formal precision, clarity of distinction, and the idea of an independently structured world that modelling should aim to reflect. While this stance may serve well in domains where systems are tightly constrained and well-bounded, it becomes deeply problematic when applied to systems characterised by ambiguity, historicity, or open-ended potential.

To treat fuzziness as an epistemic nuisance rather than a feature to be interpreted is to risk ignoring some of the most significant patterns in the systems we study. Worse, it privileges models that perform well in idealised conditions over models that can engage with the world as it appears: complex, uneven, and often genuinely indeterminate. 

B. Precision as Performance: On the Discursive Authority of the Technical Tone

John Bateman’s comment on modelling and fuzziness doesn’t merely express a preference for clarity—it enacts a performance of epistemic authority. Its rhetorical force lies not just in what it says, but in how it says it—in the affective and stylistic cues that frame its assumptions as self-evident, and its position as methodologically mature. This short reflection offers a closer look at the discursive tactics embedded in this kind of writing, and the kinds of intellectual behaviour they both license and obscure.


1. Disdain Framed as Epistemic Hygiene

“difficulty may always be interesting; fuzziness often not.”

This closing line is casually dismissive—an evaluative gesture offered with no justification, as though its truth were obvious. What’s striking is the tone: it performs disdain as if it were a form of epistemological hygiene. The implication is that those who find fuzziness interesting are indulging in something unserious, unclean, or unmethodical.

This tactic positions the speaker above the debate—not as another participant with a view, but as someone whose standards of rigour entitle them to pronounce on what counts as worthy of attention.


2. Confident Compression as a Display of Control

“multiplying fuzziness by fuzziness is pretty much the same as multiplying not-fuzziness by fuzziness.”

This line is mathematically ill-defined, but delivered with a kind of breezy finality. Its informality (“pretty much the same”) masks a deeper move: the use of compressed pseudo-formal reasoning to suggest logical inevitability. The reader is not meant to interrogate the logic; they’re meant to recognise the voice of someone who knows.

This is a classic discursive strategy: wrap a contested judgment inside the appearance of technical reasoning. The logic may be fragile, but the tone is confident—and the confidence often carries the argument farther than the content.


3. Preemptive Framing of Objections

“all models are wrong by definition”

This quote—borrowed from George Box—has become a mantra among modellers, often used to neutralise critique before it arises. Here, Bateman invokes it to legitimise the use of “wrong” models as preferable to “fuzzy” ones.

But the tactic is strategic: by conceding that all models are wrong, he creates space to make bolder moves—while foreclosing critique with the implication that wrongness is expected and unproblematic.

This is not an open invitation to explore the limitations of models. It’s a way of controlling the terms of discussion. The speaker gets to decide which kinds of wrongness are acceptable and which (like fuzziness) are intellectually disqualifying.


4. Passive Suppression of Alternatives

Nowhere in the comment is there space for alternative conceptions of modelling—no acknowledgement that fuzziness might be meaningful, or that different domains might require different epistemologies. The comment does not argue against other views; it renders them irrelevant by refusing to name them.

This rhetorical move is as powerful as it is silent. By never engaging alternatives explicitly, the speaker avoids accountability to them. The world of possible approaches is reduced to a binary: clear vs fuzzy, serious vs muddled, legitimate vs indulgent.


5. Epistemic Bullying Disguised as Neutral Advice

The overall tone is not aggressive—but it is patronising. The language of preference (“better to have...”) is presented as reasonable methodological guidance, but the effect is disciplinary. It’s a form of epistemic bullying in soft focus: delegitimising a whole class of inquiry without ever admitting that a contest of views is taking place.

In other words: “Let me tell you what’s interesting, and let me do so in a tone that implies it’s not up for debate.”


Why This Matters

This style of discourse is not unique to Bateman. It is widespread in academic contexts that prize formalisms, frameworks, and control over ambiguity. Its function is to sustain intellectual authority by tone, not just by content. And in doing so, it shapes what kinds of inquiry are seen as viable, respectable, or even possible.

But complexity, ambiguity, and indeterminacy are not signs of epistemic failure. They are features of many real systems—social, semiotic, historical—that cannot be reduced to clean variables or crisp structures. To dismiss them is not just a stylistic tic; it is a gatekeeping gesture that narrows the field of permissible thought.


Conclusion

What appears, on the surface, as a technical comment about modelling choices is also a performance of intellectual control. It is a reminder that style is never neutral—and that what is excluded from discourse is often excluded not by argument, but by tone, affect, and the unspoken authority of the confident voice.

Sunday, 23 June 2019

Robin Fawcett On Michæl Halliday's Grammatical Systems As Semantic

My concern in my short memoir was to remind us all of his role in welcoming and re-enforcing the fundamental message of Halliday's important 1966 paper — and in particular through his insightful introductions to the sections of the book. These were the first intimations of a concept that Halliday was exploring — and often seems completely committed to — in his writings of the late 1960s and early 1970s (and indeed in Halliday 1985, though less so in Halliday 1994). This was the concept that the system networks of TRANSITIVITY, MOOD, THEME and LOGICO-SEMANTIC RELATIONS provide choices between meanings (i.e. semantic features), not forms …


Blogger Comments:

See the immediately preceding post.

For the second time in two days, Fawcett uses the tragic untimely death of a colleague as a pretext for promoting the SFL theoretical architecture that Halliday abandoned after 1978. He does so largely because he mistakenly believes that Halliday's superseded model leaves room in the SFL architecture for his theory of syntax at a level of form below semantics.

For why Fawcett's model does not withstand close scrutiny, see the explanations here.

With regard to Fawcett and tragic untimely deaths, see here for Fawcett's publicly dishonest treatment of me at the time of my mother's cruel, premature death from mesothelioma.

Saturday, 22 June 2019

Robin Fawcett Misrepresenting Michæl Halliday's Theorising

… the development of the Cardiff Model of language and its use (although our main focus, like Halliday's, has remained on language — and in our case on the cognitive-interactive modelling of language and its use). 
… Halliday's first tentative explorations of his theoretical shift (in Halliday 1966), from treating system networks as choices at the level of form to treating them as choices at the level of meaning
… a series of Halliday's descriptions of areas of English grammar, ranging from his early system networks (from Halliday 1964), which were of course conceived of as being at the level of form, to later descriptions, some of which illustrate the concept that they can be interpreted as being choices between semantic features. …

Blogger Comments:

[1] This is misleading, since it implies that Halliday's theory is not concerned with a "cognitive-interactive modelling of language and its use".  The difference lies in how these dimensions are understood.  Halliday understands the interactive dimension of language as the interpersonal metafunction, understands use in terms of variation along the cline of instantiation, and understands cognition in terms of meaning.  In their work subtitled A Language-based Approach to Cognition, Halliday & Matthiessen (1999: ix-x) write:
It seems to us that our dialogue is relevant to current debates in cognitive science. In one sense, we are offering it as an alternative to mainstream currents in this area, since we are saying that cognition "is" (that is, can most profitably be modelled as) not thinking but meaning: the "mental" map is in fact a semiotic map, and "cognition" is just a way of talking about language. In modelling knowledge as meaning, we are treating it as a linguistic construct: hence, as something that is construed in the lexicogrammar. Instead of explaining language by reference to cognitive processes, we explain cognition by reference to linguistic processes. But at the same time this is an "alternative" only if it is assumed that the "cognitive" approach is in some sense natural, or unmarked.
[2] This is misleading.  The stratal distinction of form vs meaning is the distinction in Fawcett's model, but never in Halliday's.  Even when Halliday did propose a level of form (Halliday 1961), the distinction was substance vs form vs situation.  In this early model, the analogue of "meaning" was termed 'context' and construed as an interface between form and situation:
However, by 1976, if not before, Halliday had stratified language in its current formulation. Halliday & Hasan (1976: 5):

[3] To be clear, in Halliday (1978), the systems of transitivity, mood and theme were construed as semantic systems that specified different metafunctional structures that were mapped onto the clause.  However, this model was soon reconstrued, largely due to the need to systematically account for grammatical metaphor as an incongruence between semantic selections and lexicogrammatical selections. As Halliday & Matthiessen (1999: 429) later explained in their description of semantic systems:
… grammatical metaphor is a central reason in our account for treating axis and stratification as independent dimensions, so that we have both semantic systems and structures and lexicogrammatical systems and structures.

For more of Fawcett's misrepresentations of Halliday's theorising, see the clarifying critiques here.

Wednesday, 19 June 2019

Jim Martin Misrepresenting Cohesion


James Martin wrote to sysfling on 19 Jun 2019, 14:52:
During the 70s Halliday is something of an intellectual, institutional and political refugee, not concentrating on grammar, but turning his attention to cohesion



Blogger Comments:

To be clear, in SFL theory, cohesion is a (non-structural) resource of the textual metafunction on the lexicogrammatical stratum.

The reason why it serves Martin to misrepresent cohesion as a resource distinct from the grammar, as he has at least since Martin (1992: 1), is that his model of discourse semantics takes the model of cohesion from Halliday & Hasan (1976) and removes it from the grammar, rebranding it as his own:
  1. Martin's IDENTIFICATION is his rebranding of Halliday & Hasan's REFERENCE (confused with ELLIPSIS–&–SUBSTITUTION);
  2. Martin's CONJUNCTION/CONNEXION is his rebranding of Halliday & Hasan's CONJUNCTION;
  3. Martin's IDEATION is his rebranding of Halliday & Hasan's LEXICAL COHESION.
Moreover, Martin's misunderstandings of his source material serve to differentiate his model from Halliday & Hasan's, creating the false impression of genuine theoretical originality.  Evidence here.

Tuesday, 18 June 2019

David Rose Using Theme To Promote Discourse Semantics

Can I make explicit that the job is either grammatical description or discourse semantic description ? 
One is concerned with classifying clause patterns and the other with discourse patterns. 
Discourse semantic description explains functions of variations in grammar patterns such as types of Theme.
Typological comparisons might start with discourse semantic functions and ask how they are realised in grammar.

Blogger Comments:

[1] To be clear, this refers to Margaret Berry's previous post (here) in which she redefines a clause function, Theme, in terms of two logogenetic patterns of instantiation: the selection of both Theme and Subject in the unfolding of text.

Here Rose misconstrues the lexicogrammatical distinction between this clause function and logogenetic patterns of its instantiation as a stratal distinction between lexicogrammar and discourse semantics.  In doing so, Rose rebrands Margaret Berry's approach to grammar as Martin's discourse semantics.

[2] This is a bare assertion, unsupported by argument. To be clear, the "function" of logogenetic patterns of instantiation, such as varying the selection of Theme, is to develop the text.

Moreover, as demonstrated in detail here, Martin's discourse semantic notions of 'macroTheme' and 'hyperTheme' are the notions of 'introductory paragraph' and 'topic sentence', respectively, taken from writing pedagogy and rebranded as Martin's linguistic theory.  Because they are concepts designed to help people write, rather than concepts designed to describe what people actually say, sign or write, they cannot shed theoretical light on actual Theme selection.

[3] Or perhaps, since it is the lexicogrammar that construes the semantics, and not the other way around, a Systemic Functional approach might be to ask what paradigmatic contrasts in meaning are being construed by paradigmatic contrasts in the wording.

Christian Matthiessen On Jim Martin's Context As Connotative Semiotic

…In a way, the textual metafunction is the most fragile of the metafunctions — the one most likely to be influenced by the observer, so it is absolutely essential to base observations and analyses on naturally occurring examples in context — in their textual environment [co-text] and in their context in the sense of connotative semiotic (Martin, 1992). …



Blogger Comments:

To be clear, Martin's (1992) 'connotative semiotic' — which mistakes the content plane of a connotative semiotic for a connotative semiotic — is his stratification of context as genre and register.  As demonstrated in great detail here (context), here (genre) and here (register), Martin's model is not only inconsistent with the architecture of SFL theory, but also inconsistent with the meanings of the terms 'context', 'genre' and 'register'.

For example, Martin models varieties of language, register and genre (text type), not as sub-potentials or instance types of language, but as semiotic systems other than language: the context that is realised by language.  Nevertheless, inconsistent with this, Martin claims that instances of context are text, that is: language rather than context.  But this is just the tip of the iceberg.  For more details, see the clarifying critiques here.

For Ruqaiya Hasan's critique of Martin's model of context, see 'The Conception of Context in Text' in Fries, Peter H. & Gregory, Michael (eds.) (1995) Discourse in Society: Systemic Functional Perspectives (Meaning and Choice in Language: Studies for Michael Halliday). Norwood: Ablex. pp. 183-283.
The place of register and text type (genre) in the architecture of SFL theory is identified by Halliday's instantiation/stratification matrix:


An elaboration of this matrix can be found in Halliday & Matthiessen (1999: 384):

Monday, 17 June 2019

Margaret Berry On Theme

I agree with Mick and Nick that what is the best approach to Theme and Rheme depends on what you want to do with it. Like them I’m particularly interested in patterns across texts. 
It could be said that in English the main function of Theme is to show whether the perspective of the text changes or whether it remains the same, in particular whether the main topic entity changes or stays the same and whether the setting changes or stays the same. 
In Mick’s example, the ‘He’s show that the writer is staying with the same main topic entity. But at intervals the temporal setting changes, as shown by the initial Adjuncts. As Mick has shown, for me both the continuity of main topic entity and the changes of setting are relevant to the ongoing perspective of the text. So I want the Subjects to count as Themes as well as the Adjuncts. (Which Halliday wouldn’t allow when a Subject is preceded by an Adjunct.)

Blogger Comments:

[1] To be clear, if a theoretical term like 'Theme' is not used in accordance with its original formulation, the use of the term is no longer valid, since the theoretical valeur of the term has changed. Moreover, this can have unintended consequences for systemic relationships, the basis of explanation in SFL.  Halliday & Matthiessen (2014: 49):
Giving priority to the view ‘from above’ means that the organising principle adopted is that of system: the grammar is seen as a network of interrelated meaningful choices. In other words, the dominant axis is the paradigmatic one: the fundamental components of the grammar are sets of mutually defining contrastive features. Explaining something consists not in stating how it is structured but in showing how it is related to other things: its pattern of systemic relationships, or agnateness (agnation…).
[2] To be clear, thematic patterns across texts can be examined using the original formulation of Theme.  That is, this does not constitute an argument in support of varying the theoretical valeur of 'Theme'.

[3] To be clear, this characterisation of Theme confuses Theme, a functional element of the clause, with the logogenetic pattern known as the method of development.  As Halliday & Matthiessen (2014: 89, 126) explain:
The Theme is the element that serves as the point of departure of the message; it is that which locates and orients the clause within its context. The speaker chooses the Theme as his or her point of departure to guide the addressee in developing an interpretation of the message; by making part of the message prominent as Theme, the speaker enables the addressee to process the message. The remainder of the message, the part in which the Theme is developed, is called in Prague school terminology the Rheme. …
The choice of clause Themes plays a fundamental part in the way discourse is organised; it is this, in fact, that constitutes what has been called the ‘method of development’ of the text…
[4] To be clear, this argues for a reformulation of a clause function (Theme), not on the basis of its function in the clause, but on the basis of logogenetic patterns, and in doing so, confuses two distinct logogenetic patterns: choice of Theme and choice of Subject.

Sunday, 16 June 2019

Mick O'Donnell On Theme

Michael O'Donnell wrote to sysfling on 16 Jun 2019 at 18:24:
In Halliday's approach (for English), Theme stops with the first topical (experiential) element, which in this case is the circumstance of time, "in 1925". "Halliday" is thus Rheme. 
I myself am a proponent of the Berry approach (which I believe is similar to the Fawcett approach), whereby one has Subject-theme, and elements in front of that are also "Additional Theme", e.g. 
Additional-Theme | Subject Theme | Rheme
In May 1476, | he | took part in an armed convoy sent by Genoa to carry a valuable cargo to northern Europe.
 | He | docked in Bristol, Galway, in Ireland and was possibly in Iceland in 1477.
In 1479 | Columbus | reached his brother Bartolomeo in Lisbon, keeping on trading for the Centurione family.
 | He | married Filipa Moniz Perestrello, daughter of the Porto Santo governor, the Portuguese nobleman of Genoese origin Bartolomeu Perestrello.
In 1479 or 1480, | his son Diego | was born.
This approach better captures the continuity or discontinuity of the Subject selections, and allows for the presence of marked elements in front of the Subject.


Blogger Comments:

Applying this model yields:

Additional-Theme | Rheme | Subject Theme
blessed | are | the meek
on your left | is | the main bedroom
a little further on | is | the Rijksmuseum

Additional-Theme | Additional-Theme | Additional-Theme | Rheme | Subject Theme
where | precisely | in that case | are | they?


[1] This is the opposite of what is true. Unsurprisingly, the "continuity or discontinuity" of the Subject selections is shown by the selection of Subjects.  The question of the "continuity or discontinuity" of Subject selections as Theme is nullified by this approach, since all Subjects are claimed to be Themes.

[2] To be clear, this approach adds nothing with regard to "the presence of marked elements in front of the Subject", since 'Additional Theme' is just a rebranding of 'marked Theme', without acknowledging its markedness.  Moreover, it is the distinction between Theme and Subject that provides the criterion for the distinction between marked and unmarked Theme in declarative clauses.


[3] To be clear, this approach merely confuses the interpersonal selection of Subject, the carrier of modal responsibility in a clause as exchange, with the textual selection of Theme, the point of departure for a clause as message.  Subject is reprised in a Mood Tag, whereas Theme is realised by everything up to and including the first experiential element. For example, in 'a little further on is the Rijksmuseum', the Theme is 'a little further on', while the Subject, reprised in a tag ('isn't it?'), is 'the Rijksmuseum'.

Thursday, 2 May 2019

Mick O'Donnell On Phrasal Complexity

Phrasal complexity is more than just examining nominal group structure. It quantifies nominal group complexity in some way (e.g., the depth of structure (either in terms of constituency or dependency relations), the total number of connections, etc. 
For me, the hardest part is coming up with a complexity metric that actually makes sense. This is actually an area where psycho linguistics has lots to say, in terms of the readability of a given nominal structure, using eye tracking, etc. 
I haven't seen any work within SFL on this, mostly by post-chomskyans and cognitive linguists.


Blogger Comments:

In terms of SFL theory, this relates to the type of complexity that is typical of the written mode: lexical density. As Halliday & Matthiessen (2014: 726-9) point out, the nominal group is the primary resource for increasing lexical density.

Lexical density is quantified at clause rank — rather than group rank — by dividing the number of lexical items by the number of ranking clauses.
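As a rough illustration of the arithmetic only — not of the grammatical analysis, which has to identify the lexical items and ranking clauses in the first place — here is a minimal sketch in Python; the function name and the example counts are hypothetical, supplied purely for illustration:

```python
def lexical_density(lexical_items: int, ranking_clauses: int) -> float:
    """Lexical density at clause rank: the number of lexical items
    divided by the number of ranking (non-embedded) clauses."""
    if ranking_clauses < 1:
        raise ValueError("at least one ranking clause is required")
    return lexical_items / ranking_clauses

# Hypothetical counts for a short written text:
# 9 lexical items distributed over 2 ranking clauses.
print(lexical_density(lexical_items=9, ranking_clauses=2))  # 4.5
```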

Moreover, in terms of semantic complexity, lexical density involves grammatical metaphor, a junctional construct (Halliday & Matthiessen 1999: 46, 272), embodying the meanings of both the metaphorical and congruent grammatical realisations.

Monday, 18 February 2019

Shooshi Dreyfus On 'Does'

While this notion does make sense on the surface,
one of emphasis, or something else?
2) How does it compare to
While this notion makes sense on the surface,
? No emphasis?



And building on these comments, surely you can’t look at this properly till you see the co-text? Firstly, it’s a dependent clause and in order for us to understand it we need to see not only the independent clause with which it must be paired but also the rest of the text so we can see the prosody being “carried” through the rest of the text? Surely the “does”, in part, gets its meaning from all those other meanings around it that are contributing to whatever “argument” is being built here?


Blogger Comments:

[1] To be sure, the significant differences between the two clauses can be "looked at properly" by applying SFL theory, even in the absence of co-text:

While | this notion | does | make | sense | on the surface
 | Subject | Finite | Predicator | Complement | comment Adjunct: qualified: validity
'reservation' | | | | | 'reservation'
Given | | New: contrastive | Given | | New: contrastive

While | this notion | makes | sense | on the surface
 | Subject | Finite Predicator | Complement | comment Adjunct: qualified: validity
'reservation' | | | |
Given | | | | New: contrastive

To be clear, both variants could be instantiated with the exact same co-text.

[2] To be sure, it is not necessary to see any other clauses in order to understand the differences between the two presented for discussion, as [1] above demonstrates.

[3] To be sure, any prosodies "being carried through the rest of the text" are relevant to the rest of the text.  The prosody "being carried through" the instances in question is TONE 4 which here realises the KEY feature 'reservation'.  However, this doesn't distinguish the two clauses, since it is instantiated in both.  The distinction between them is informational (textual), dependent on whether or not the Finite can be highlighted as contrastive.

[4] To be sure, the meaning of does is realised by its congruent grammatical function in this instance.