Halliday and Multimodal Semiotics

Kay O’Halloran

Halliday’s (1978) social semiotic theory provides the basis for the study of semiotic resources other than language (e.g. images, architecture, music, mathematical symbolism, gesture, clothing, etc.) and, significantly, the interaction of semiotic resources in a field known as multimodal analysis or multimodality (e.g. Jewitt, 2009; Machin, 2007; O’Halloran, 2011). Indeed, Halliday’s view of culture as ‘a set of [inter-related] semiotic systems’ (Halliday & Hasan, 1985: 4) is the major platform for research in multimodal studies today, as evidenced by foundational works in the field (Kress & van Leeuwen, 2006 [1996]; O’Toole, 2011 [1994]).

Halliday’s social semiotic theory provides a framework for moving beyond ‘running commentaries’ about multimodal phenomena (see Bateman, 2008) to empirical validation of claims, because the theory is concerned with the underlying design (or ‘grammar’) of semiotic resources and their relations with each other, specified as inter-related semantic systems which are seen to fulfil four functions: to construe our experience of the world (experiential meaning); to create logical relations between experiential meanings (logical meaning); to enact social relations (interpersonal meaning); and to organise meanings into coherent messages in text (textual meaning). In this way, the Hallidayan framework accounts for multiple strands of meaning, with semiotic resources and their underlying systems as the tools for meaning-creation.

The ability to model intra-semiotic meaning (within a single semiotic resource) and inter-semiotic meaning (across different semiotic resources) within a common framework afforded by Hallidayan theory allows for the investigation of semantic shifts and metaphorical expansions of meaning which occur as semiotic resources interact and combine (e.g. in mathematics (O’Halloran, 2005) and science (Lemke, 1998)). We may also see how semantic clusterings vary according to context and culture, and how these patterns are reinforced (e.g. through technologies such as Microsoft PowerPoint) and subverted (e.g. in works of art) (van Leeuwen, Djonov, & O’Halloran, forthcoming). We can also track semiotic change across individuals and cultures (e.g. Halliday’s (2006) account of the semantic shift arising from the scientific view of the world).

Significantly, Halliday’s theory lends itself to computational approaches (Halliday, 2005; O’Donnell & Bateman, 2005), which are currently being developed to advance the theory and practice of multimodal analysis. For example, software for the analysis, search and retrieval of multimodal semantic patterns is being developed in the Multimodal Analysis Lab at the Interactive & Digital Media Institute (IDMI) at the National University of Singapore [1] to move beyond page-based methods of multimodal transcription and analysis (O’Halloran, Tan, Smith, & Podlasov, 2011; Smith, Tan, Podlasov, & O’Halloran, 2011). The software can be used to analyse text, images, sound and video (hypertext is to be included in the next software development phase) by annotating the media files using choices from system networks, coded as time-stamped annotations and visual overlays. The analysis is stored in a database for later search and retrieval. A crucial aspect of the design of this software, informed by systemic functional theory, is the capacity to integrate the full range of semiotic analyses, across ranks, strata and meta-functions, within an empirically derived, holistic view of communication. The multimodal analyst can develop, test and apply different theoretical approaches and methodologies to code the analysis; and automated tools (e.g. shot detection, facial recognition, optical flow) provide further support, extending human capacities for perception and analysis.
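
To make this architecture concrete, the following is a minimal sketch in Python of how time-stamped annotations recording choices from system networks might be represented and queried. The class and method names are hypothetical illustrations, not the Multimodal Analysis Lab software’s actual code or schema.

    # Minimal sketch (hypothetical names, not the Lab's implementation) of
    # time-stamped annotations coding choices from system networks.
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        start: float         # offset into the media file, in seconds
        end: float
        metafunction: str    # "experiential", "logical", "interpersonal", "textual"
        system: str          # system network the choice is drawn from, e.g. "GAZE"
        choice: str          # the selected option, e.g. "disengaged"

    class AnnotationStore:
        """Toy in-memory stand-in for the software's annotation database."""

        def __init__(self):
            self.annotations: list[Annotation] = []

        def add(self, annotation: Annotation) -> None:
            self.annotations.append(annotation)

        def find(self, **criteria) -> list[Annotation]:
            # Retrieve annotations matching the given attribute values,
            # e.g. find(system="GAZE", choice="disengaged").
            return [a for a in self.annotations
                    if all(getattr(a, k) == v for k, v in criteria.items())]

On this model, a query such as find(system="GAZE", choice="disengaged") would return every time-stamped stretch of video coded for that choice, which is the kind of search and retrieval drawn on in the case study below.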

In one case study, the software is used to investigate bias in news reporting of the climate change debate (Boykoff, 2011) in a video clip from ‘Happening Now’, a Fox News Corporation programme aired on 25 November 2009 [2] (O’Halloran, Podlasov, Chua, & E, in press 2012). In this video, Jon Scott interviews Dr Kevin Trenberth, a Distinguished Senior Scientist in the Climate Analysis Section at the National Center for Atmospheric Research in Colorado, and Mr Myron Ebell, Director of Energy and Global Warming Policy at the Competitive Enterprise Institute in Washington DC. The interview took place immediately after the Climatic Research Unit email controversy, which involved the hacking of a server at the Climatic Research Unit at the University of East Anglia on 20 November 2009.

In the screenshot of the analysis in Figure 1, we see a close-up shot of interviewer Jon Scott looking directly at the audience, wearing professional attire (with contrasting red tie) against a studio background. An electronic distortion-guitar sound plays from the start of the video for about two seconds before fading out; then Jon Scott begins with the single word ‘Hackers’ from his first sentence, “Hackers broke into the email accounts of several prominent scientists who were working on climate change”. The word ‘hackers’ is significant in terms of its grammatical functions, which have been coded in the coloured strips on the bottom right-hand side of Figure 1. That is, ‘hackers’ is significant in terms of textual meaning (it is the point of departure for what follows), interpersonal meaning (it is the subject) and experiential meaning (it is the agent for the action). It is perhaps for these reasons that the word is also mapped as a single tone unit (Halliday & Greaves, 2008). The stage is set for a dramatic recount of events.
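
As an illustration of this triple coding, the following sketch records the three metafunctional choices for ‘Hackers’ described above. The labels follow standard systemic functional terminology, but the dictionary layout itself is an assumption for illustration, not the software’s schema.

    # Illustrative (assumed) representation of the metafunctional coding
    # of 'Hackers' discussed above.
    hackers = {
        "token": "Hackers",
        "single_tone_unit": True,  # cf. Halliday & Greaves (2008)
        "textual": {"system": "THEME", "choice": "Theme"},              # point of departure
        "interpersonal": {"system": "MOOD", "choice": "Subject"},
        "experiential": {"system": "TRANSITIVITY", "choice": "Actor"},  # agent of 'broke into'
    }

    # The one token realises a choice in all three metafunctional systems
    # at once, which is why it is so heavily foregrounded.
    for metafunction in ("textual", "interpersonal", "experiential"):
        print(metafunction, "->", hackers[metafunction]["choice"])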

Figure 1. Multimodal Annotation of Fox News ‘Happening Now’

Figure 2. Multimodal Choices: Scientists and Media and Business Professionals. (a) Dr Trenberth; (b) Mr Myron Ebell

In Figure 2(a), we see that Dr Trenberth is framed by the Skype logo and a laptop (which rotates) on the left, his gaze is disengaged (he looks around constantly during the interview) and the background is an untidy bookshelf. Dr Trenberth’s contributions to the interview are undermined by such multimodal choices, particularly when he has to respond to questions from Jon Scott such as “That must feel pretty outrageous, huh?” in reference to the hacking of his emails. However, the multimodal choices for Mr Myron Ebell are similar to those for interviewer Jon Scott, except that the background is Capitol Hill in Washington DC and he is smiling (a smile which persists for much of the interview), as shown in Figure 2(b). From this brief overview, it is apparent that the multimodal choices for the media professional and the business person differ from those for the climate scientist, particularly in the realm of the interpersonal (e.g. engagement with the viewer), often in ways which work against the interests of the scientist.
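
Summarised as the kind of structured data the annotation software stores, the contrast looks something like the following sketch; the field names are illustrative assumptions, while the values are those described above.

    # Sketch tabulating the contrasting multimodal choices described above;
    # field names are assumptions, values are taken from the discussion.
    participants = {
        "Jon Scott": {
            "gaze": "direct, at the audience",
            "background": "studio",
            "other": "professional attire with contrasting red tie",
        },
        "Dr Kevin Trenberth": {
            "gaze": "disengaged, looking around",
            "background": "untidy bookshelf",
            "other": "framed by Skype logo and rotating laptop",
        },
        "Mr Myron Ebell": {
            "gaze": "direct, smiling",
            "background": "Capitol Hill, Washington DC",
            "other": "otherwise similar to the interviewer",
        },
    }

    # Reading across one system at a time makes the interpersonal asymmetry
    # between the scientist and the other two participants easy to see.
    for name, choices in participants.items():
        print(f"{name}: gaze = {choices['gaze']}")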

The search results with the matching annotation units can be highlighted in the software, providing a visual overview which makes it possible to detect patterns in complex texts simply from viewing the annotations. However, many patterns are too complex for the human eye to process. Therefore, the software includes an export function which permits the data to be imported into third-party software designed for large-scale data visualisation. For example, in Figure 3, the time-stamped annotations from the Fox News video have been imported into Matlab [3], ready for analysis of recurring patterns across the different speakers (coded red, pink and black for Jon Scott, Dr Trenberth and Mr Myron Ebell respectively) for the linguistic choices, camera angle, gaze and framing. In this way, the software produces data for further analysis to identify patterns and trends in multimodal phenomena (Podlasov, Tan, & O’Halloran, accepted for publication; Tan, Podlasov, & O’Halloran, forthcoming).
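
The export step might look something like the following sketch, which writes time-stamped annotations out as CSV for import into Matlab or another visualisation package. The column layout, file name and example rows are assumptions for illustration, not the software’s actual export format; only the speaker colour coding (red, pink, black) comes from Figure 3.

    # Sketch of exporting time-stamped annotations to CSV for third-party
    # visualisation tools; the format is an illustrative assumption.
    import csv

    SPEAKER_COLOURS = {          # colour coding as in Figure 3
        "Jon Scott": "red",
        "Dr Kevin Trenberth": "pink",
        "Mr Myron Ebell": "black",
    }

    rows = [
        # (start_s, end_s, speaker, system, choice) -- example values only
        (0.0, 2.1, "Jon Scott", "CAMERA ANGLE", "frontal"),
        (0.0, 2.1, "Jon Scott", "GAZE", "direct"),
    ]

    with open("fox_news_annotations.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["start", "end", "speaker", "colour", "system", "choice"])
        for start, end, speaker, system, choice in rows:
            writer.writerow([start, end, speaker, SPEAKER_COLOURS[speaker],
                             system, choice])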


Figure 3. Time-Stamped Multimodal Data (in Matlab)


Saussure, at the beginning of the twentieth century, foresaw the need for a holistic approach to the study of language and other sign systems, advocating a ‘science that studies the life of signs within society’ (Saussure, 1974 [1916]: 16). Halliday’s social semiotic theory is a comprehensive response to that challenge, providing powerful theoretical and descriptive resources for the study of meaning and communication. By undertaking the sustained systemic analyses afforded by Halliday’s social semiotic theory, we can begin to see, and thus understand, how semiotic resources combine to create distinct semantic patterns and clusters in what Lotman (2005) calls ‘the semiosphere’: our world of abstract meaning known as society and culture.

Websites

1. See ‘Multimodal Digital Semiotics’, a report on the Multimodal Analysis Lab, Interactive & Digital Media Institute (IDMI), National University of Singapore, in Semiotix XN-4 (2011):

https://semioticon.com/semiotix/2011/02/multimodal-digital-semiotics/

2. http://video.foxnews.com/v/3945521/illegal-act

3. http://www.mathworks.com/products/matlab/index.html

References

Bateman, J. (2008). Multimodality and Genre: A Foundation for the Systematic Analysis of Multimodal Documents. Hampshire: Palgrave Macmillan.

Boykoff, M. T. (2011). Who Speaks for the Climate? Making sense of Media Reporting on Climate Change. New York: Cambridge University Press.

Halliday, M. A. K. (1978). Language as Social Semiotic: The Social Interpretation of Language and Meaning. London: Edward Arnold.

Halliday, M. A. K. (2005). Computational and Quantitative Studies: Collected Works of M. A. K. Halliday (Volume 6). London & New York: Continuum.

Halliday, M. A. K. (2006). The Language of Science: Collected Works of M. A. K. Halliday (Volume 5). London & New York: Continuum.

Halliday, M. A. K. (2009). Collected Works of M.A.K. Halliday (10 Volumes). London & New York: Continuum.

Halliday, M. A. K., & Greaves, W. S. (2008). Intonation in the Grammar of English. London: Equinox.

Halliday, M. A. K., & Hasan, R. (1985). Language, Context, and Text: Aspects of Language in a Social-Semiotic Perspective. Geelong, Victoria: Deakin University Press [Republished by Oxford University Press, 1989].

Halliday, M. A. K., & Matthiessen, C. M. I. M. (2004). An Introduction to Functional Grammar (3rd ed., revised by C. M. I. M. Matthiessen). London: Arnold.

Jewitt, C. (Ed.). (2009). Handbook of Multimodal Analysis. London: Routledge.

Kress, G., & van Leeuwen, T. (2006 [1996]). Reading Images: The Grammar of Visual Design (2nd ed.). London: Routledge.

Lemke, J. L. (1998). Multiplying Meaning: Visual and Verbal Semiotics in Scientific Text. In J. R. Martin & R. Veel (Eds.), Reading Science: Critical and Functional Perspectives on Discourses of Science (pp. 87-113). London: Routledge.

Lotman, Y. (2005). On the Semiosphere. Sign System Studies, 33(1), 201-229.

Machin, D. (2007). Introduction to Multimodal Analysis. London & New York: Hodder Arnold.

Martin, J. R., & Rose, D. (2007). Working with Discourse: Meaning Beyond the Clause (2nd ed.). London: Continuum.

O’Donnell, M., & Bateman, J. (2005). SFL in Computational Contexts: A Contemporary History. In R. Hasan, C. M. I. M. Matthiessen & J. Webster (Eds.), Continuing Discourse on Language: Volume 1 (pp. 343-382). London: Equinox.

O’Halloran, K. L. (2005). Mathematical Discourse: Language, Symbolism and Visual Images. London & New York: Continuum.

O’Halloran, K. L. (2011). Multimodal Discourse Analysis. In K. Hyland & B. Paltridge (Eds.), Companion to Discourse Analysis (pp. 120-137). London: Continuum.

O’Halloran, K. L., Podlasov, A., Chua, A., & E, M. K. L. (in press 2012). Interactive Software for Multimodal Analysis. Visual Communication, Special Issue: Methodologies.

O’Halloran, K. L., Tan, S., Smith, B. A., & Podlasov, A. (2011). Multimodal Analysis within an Interactive Software Environment: Critical Discourse Perspectives. Critical Discourse Studies, 8(2), 109-125.

O’Toole, M. (2011 [1994]). The Language of Displayed Art (2nd ed.). London & New York: Routledge.

Podlasov, A., Tan, S., & O’Halloran, K. L. (accepted for publication). Interactive State-Transition Diagrams for Visualization of Multimodal Annotation. Intelligent Data Analysis.

Saussure, F. de (1974 [1916]). Course in General Linguistics. (trans. Wade Baskin). London: Fontana/Collins.

Smith, B. A., Tan, S., Podlasov, A., & O’Halloran, K. L. (2011). Analyzing Multimodality in an Interactive Digital Environment: Software as Metasemiotic Tool. Social Semiotics, 21(3), 353-375.

Tan, S., Podlasov, A., & O’Halloran, K. L. (forthcoming). Re-Mediated Reality and Multimodality: Graphic Tools for Visualizing Patterns in Representations of On-line Business News.

van Leeuwen, T., Djonov, E., & O’Halloran, K. L. (forthcoming). “David Byrne Really Does Love PowerPoint”: Art as Research on Semiotics and Semiotic Technology.
