Conceptual Blending

Conceptual Blending Theory (CBT), also known as Conceptual Integration Theory or Blending Theory, is a methodological framework for analyzing how composite meanings are created. Initially, the framework was used primarily to account for linguistic phenomena, especially metaphorical expressions, but its application was soon extended to the analysis of non-linguistic data. Nowadays, CBT researchers hold that conceptual blending is a basic cognitive process central to human thought, manifesting itself in numerous aspects of human activity.

There are at least two significant sources of inspiration for Conceptual Blending Theory: Mental Spaces Theory (MST), developed by Gilles Fauconnier, and Conceptual Metaphor Theory (CMT), devised by George Lakoff and Mark Johnson. Although both theories proved highly influential, CBT is characterized by greater theoretical complexity and sophistication as well as a wider scope of practical application.

The process of conceptual blending involves a network of mental spaces, in which semantic content is transferred in various directions. A mental space (Fauconnier’s term) is “a partial and temporary representational structure which speakers construct when thinking or talking about perceived, imagined, past, present or future situation” (Grady, Oakley and Coulson 2007: 421). Conceptual blending requires at least four mental spaces. Two of them are input spaces, which contribute semantic content to the process. There are correspondences (i.e. conceptual associations) between certain elements in the input spaces, due to which the structures are perceived or thought of as similar in some respects. Another mental space involved in the process of blending is the generic space, embracing an abstract schematic representation of the elements shared by both input spaces. The last space is the blended space, which contains the “output” of the process, i.e. the emergent structure. The emergent structure is a combination of semantic content selectively projected from the input spaces and integrated into a coherent conceptual construction. The blend, however, is not a mere sum of elements derived from the input spaces; on the contrary, the emergent structure typically attracts additional components, which are not necessarily recruited from any of the inputs.

CBT aspires to be a universal methodological framework for analyzing different types of semiotic data. Apart from its undeniable value for the study of language, the theory is used to account for visual signs, elements of computer interfaces, cartoons, cultural and religious phenomena, and much more.

Imagination and creativity play an essential role in all types of human semiotic activity. Language and all other semiotic systems can be used creatively, and novel meanings are often far from what might be considered standard. Yet, while the significance of imagination and creativity cannot be overestimated, describing them in a systematic and methodologically strict manner is an extremely complex and challenging task. In the field of linguistics, structural approaches, especially syntax-centered generative grammar, are in a relatively comfortable position, as they can afford to ignore, to a large extent, some complexities of meaning (including conceptual metaphor and metonymy). Meaning-centered approaches, however, especially the relatively recent school known as cognitive linguistics, view these complicated semantic phenomena as central issues in the study of language.

Conceptual Blending Theory (CBT), also known as Conceptual Integration Theory or simply Blending Theory (BT), is a methodological framework developed by Gilles Fauconnier and Mark Turner in order to capture and account for the creative aspect of the way humans manipulate semantic content. It is debatable whether the theory is (as its creators argue) a model of the mental processes underlying the creation of meaning or a convenient framework for describing complex semantic structures without deep psychological commitments. As rightly noted by Gibbs (2000), CBT researchers have had some problems with proposing robust falsifiable hypotheses amenable to empirical testing, so it is far from obvious whether the theory can be verified or falsified. Nonetheless, even if the theory is taken as a handy toolkit for describing semiotic structures (rather than a comprehensive theory of cognitive processing), it allows for capturing many intricacies of the data in a coherent, systematic, and psychologically plausible fashion.
The primary focus of the theory is composite semantic structures, i.e. conceptual constructions made up of elements from two or more input structures. The main assumption behind the model is that in many cases the production and interpretation of such composite meanings are dynamic, heuristic and, to a large extent, unpredictable rather than static, algorithmic and almost entirely computable on the basis of input elements. Even though CBT originated in the field of linguistics and its first practical applications were limited to linguistic phenomena, nowadays the theory is successfully used to describe virtually all types of semiotic data, including advertisements (Joy, Sherry, and Deschenes 2009), visual arts (Warchoł 2018), linguistic humour (Jabłońska-Hood 2015), cartoons (Abdel-Raheem 2019; Kwon 2019), elements of religious and magic rituals (Sørensen 2006), mathematical concepts (Woźny 2018), music theory (Arndt 2017; Kaliakatsos-Papakostas and Cambouropoulos 2019), and more. In fact, its proponents view conceptual blending as one of the central cognitive processes governing human thought and imagination.

Sources of inspiration

Evans and Green (2006) mention two important sources of inspiration for Conceptual Blending Theory. The first is Mental Spaces Theory (MST), created in the mid-1980s by the French linguist Gilles Fauconnier. A mental space is a temporary packet of conceptualization constructed for the purpose of local understanding. Mental spaces are usually created dynamically and are not complete representations of global and general world knowledge. Instead, they selectively recruit semantic content relevant to the interpretation of a specific set of data (e.g. a linguistic expression). Mental spaces should be distinguished from frames, which are used in the cognitive sciences to describe more stable conceptual structures organizing global knowledge about the world. Frames are more static and structured repositories of knowledge, from which semantic content can be recruited by mental spaces.

The second source of inspiration is Conceptual Metaphor Theory (CMT), advanced in the early 1980s by the American linguist George Lakoff and the philosopher Mark Johnson. CMT marked a breakthrough in the treatment of metaphor and soon became a milestone in the development of cognitive linguistics. Lakoff and Johnson argued that, contrary to the popular view, metaphors are not purely linguistic entities used almost exclusively for artistic and rhetorical purposes; instead, they are primarily conceptual phenomena constituting a key part of our everyday conceptualization of the world. In other words, metaphor was defined as a manner of understanding one concept in terms of another rather than a manner of talking about one thing in terms of another. Accordingly, metaphors are present in language in the form of metaphorical expressions, i.e. linguistic representations of metaphors proper, which are conceptual structures.

Both Mental Spaces Theory and Conceptual Metaphor Theory were extremely important developments in cognitive linguistics, but as time showed, they were not devoid of flaws and shortcomings. Conceptual Blending Theory is designed to remedy these defects and provide researchers with a more powerful explanatory tool. On the other hand, CBT should not be considered a replacement for the two theories above, since their scopes are not identical. Despite large areas of overlap, the three theories should be thought of as complementary rather than mutually exclusive.

Networks of mental spaces

Conceptual blending involves a network of at least four mental spaces. Two (or more) of them are input spaces contributing the initial semantic material to the process. The content of the input spaces is linked by correspondences, i.e. mental associations between elements perceived as related in one way or another. It must be borne in mind that since conceptual blending is a mental process, correspondences are of a purely conceptual nature and need not have anything to do with physical and “objective” relationships between real-world entities or phenomena. In fact, the perception (or, to be more precise, conceptualization) of correspondences is to a large extent subjective as well as culturally and contextually determined.

Another mental space included in the network is the so-called generic space. This space embraces all the elements shared by the input spaces. Since in principle the content of the inputs differs at the default level of specificity, similarities can be noticed only when some fine-grained details of both input spaces are filtered out. Thus, the content of the generic space is a structure representing abstracted commonalities inherent in both inputs but evident only at a higher level of generality. The construction in the generic space is mapped onto the input structures and guides the emergence of the cross-input correspondences described in the previous paragraph.

The outcome of conceptual blending is the emergent structure (also called the blend), contained in the blended space. The blend is a coherent composite semantic structure, a combination of elements selectively recruited from the inputs. Since the process of conceptual blending is not mere addition, the blend is not a simple sum of components derived from the other spaces involved in the process, but typically attracts additional semantic content.

A good illustration of how conceptual blending works in practice is the metaphorical expression in (1), analyzed by Grady, Oakley and Coulson (2007).

  1. This surgeon is a butcher.

One of the disadvantages of Conceptual Metaphor Theory is that while it successfully accounts for some conceptualized correspondences and similarities between the concepts ‘surgeon’ and ‘butcher’ involved in (1), it fails to explain the crucial inference about the surgeon, i.e. that the surgeon is incompetent. This aspect of the metaphor is captured more adequately in the conceptual blending account.

There are two input spaces: one is structured by the frame of surgery, the other by the frame of butchery. The content of the two spaces is connected by a number of correspondences, e.g. ‘surgeon’ is associated with ‘butcher,’ ‘patient’ is associated with ‘cow,’ ‘scalpel’ is associated with ‘cleaver,’ ‘operating theater’ is associated with ‘abattoir,’ etc. The generic space embraces an abstract generalized scenario which, after fine-grained details are filtered out, can be found in both the surgery and the butchery space. This scenario can be summarized in the following way:

There is a person (agent) doing something to an entity (patient). The action performed by the agent involves cutting parts of the patient with a sharp tool. The purpose and the outcome of this action are not specified. The agent performs the action in a place designed for this purpose, wears an appropriate outfit, uses appropriate tools, etc.

The blended space includes a combination of elements from the inputs. The most important part of the blend is what Grady, Oakley and Coulson call the means-end relationship, i.e. the relation between the aim of the action and the methods used to achieve this aim. The means-end relationship is present in both inputs and in the generic space, but in the blended space it is elaborated with elements from both input spaces so as to form an integrated whole. More specifically, the aim is derived from the surgery space (curing a patient), while the means are recruited from the butchery space (killing an animal); as a result, the blend features a person trying to cure a patient with the methods used by a butcher. It is precisely this aspect of the blend that leads to the central inference of (1), i.e. that the surgeon is incompetent.
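
The selective projection behind (1) can be pictured with a minimal data-structure sketch (Python). The space contents and attribute names below are illustrative choices based on the correspondences listed above, not part of the theory’s formal apparatus.

    # A toy model of the four-space network behind "This surgeon is a butcher."
    # Each mental space is represented as a plain dictionary of roles.

    surgery = {"agent": "surgeon", "undergoer": "patient", "tool": "scalpel",
               "place": "operating theater", "goal": "curing the patient"}

    butchery = {"agent": "butcher", "undergoer": "cow", "tool": "cleaver",
                "place": "abattoir", "goal": "killing an animal"}

    # Cross-space correspondences: pairs of counterpart elements in the two inputs.
    correspondences = [(surgery[role], butchery[role]) for role in surgery]

    # Generic space: shared schematic roles with the specifics filtered out.
    generic = {"agent": "person", "undergoer": "entity", "tool": "sharp instrument",
               "place": "dedicated workspace", "goal": "unspecified"}

    # Blended space: selective projection -- the aim comes from the surgery input,
    # the means from the butchery input; the inference is emergent content
    # absent from both inputs.
    blend = {"agent": "surgeon-as-butcher",
             "goal": surgery["goal"],
             "tool": butchery["tool"],
             "inference": "the surgeon is incompetent"}

    print(correspondences)
    print(blend)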

Key processes in conceptual blending

Integration of input elements in the blended space is facilitated by three processes: composition, completion and elaboration. These processes are also partly responsible for the emergence of additional semantic content absent from the input spaces (like the inference concerning the surgeon’s incompetence in (1)). Composition involves joining corresponding components from the input spaces without providing additional semantic content. In the surgeon-as-butcher example, composition integrates, among other things, two distinct means-end relationships (the surgeon’s and the butcher’s) into a novel structure (the surgeon’s aims pursued with the butcher’s means).

Nonetheless, it is often the case that composition alone is not enough to build a coherent emergent structure. Quite frequently “gaps” in the structure need to be filled with background knowledge related to the content of the input spaces. This process is called completion. In (1) one is able to conceive of the conflict between the aims of surgery and the means of butchery only when additional knowledge of the world is evoked, e.g. the facts that humans can be ill, that some illnesses can be treated surgically, that some animals are killed for meat, that there are people who treat patients, that there are people who kill animals for meat, etc. This additional knowledge leads to the assumption that the actions performed by butchers and surgeons are not compatible and, consequently, that a surgeon acting like a butcher (which is the structure produced in the process of composition) is incompetent.

The third process, elaboration, enriches the blend by providing finer details when the blend is “executed.” Once the blend is fairly well established, it may evoke a scenario which develops in many unconventional and unpredictable ways governed by the internal logic of the blend (though not necessarily by the logic of the inputs). For instance, in (1) the blend builds a scenario of a surgical operation performed by means of butchery. This basic scenario can be further enhanced and developed; one may, for example, imagine a grotesque situation in which the surgeon is operating in an abattoir, using a cleaver rather than a scalpel, putting the patient’s organs on display in a butcher’s shop, etc.

Semantic structures from the input spaces can be projected into the blended space in various ways. If two or more input elements are mapped onto one element in the blended space, fusion takes place. One instance of fusion can be found in (1), where two agents from the input spaces, the surgeon and the butcher, are merged into one surgeon-as-butcher agent in the blend. Yet the process is not obligatory, and distinct input components may retain their autonomous status in the blend. A good illustration of the difference between fused and unfused components is the philosophical debate in (2), as analyzed by Fauconnier and Turner (e.g. 2007).

  2. I claim that reason is a self-developing capacity. Kant disagrees with me on this point. He says it’s innate, but I answer that that’s begging the question, to which he counters, in Critique of Pure Reason, that only innate ideas have power. But I say to that, what about neuronal group selection? He gives no answer.

In this passage a modern philosopher is conducting an imaginary philosophical debate with Immanuel Kant. The input spaces include the two thinkers (along with the frames organizing additional knowledge about them), and the blended space is an imaginary space where the debate takes place. In this case, the two interlocutors are not fused – each of them is mapped separately, and they remain distinct human beings in the emergent structure. Other elements, however, do undergo fusion; for example, the different historical periods (the 18th century and modern times) are fused into one period of time in which the philosophers can have a conversation, German and English are fused into one unspecified language which is used for the debate, etc.

Sometimes the incompatibility of the fused structures is so great that, in order to form a coherent whole, certain aspects of these structures must be ignored. The process of suppressing potentially conflicting aspects of input elements that would prevent successful fusion is known as accommodation. For instance, in the philosophical debate example the modern thinker is aware of the fact that Kant was a philosopher who developed his own philosophical system, but Kant was never aware of the modern philosopher’s existence. This would create a grave problem in the blend, since in a real debate each interlocutor must be aware of the partner’s existence. Therefore, in order to form a seamless combination of all input elements, Kant’s ignorance of the modern philosopher is suppressed and prevented from entering the blend.

Conceptual blending frequently involves compression and decompression of so-called Vital Relations. Vital relations is an umbrella term for a number of notions fundamental to our understanding of the world. It is somewhat debatable whether all of them are inherently relational in nature, but it is certainly true that they are “rooted in fundamental human neurobiology and shared social experience” (Fauconnier and Turner 2002, xiii). Vital relations include (but are probably not limited to): cause and effect, time and space, change, identity, representation, intentionality, analogy and disanalogy, property, similarity, uniqueness, role, category, and parthood (Fauconnier and Turner 2002, chap. 6).

Vital Relations and Optimality Principles

Compression of a vital relation involves “scaling down” the relation from an input space in the blended space. For instance, in the blended space the large distance between two elements in an input space may be rendered as a much smaller distance, a long process from an input space may be rendered as a short process, etc. One example of compression is presenting the history of the Earth on a 24-hour clock. In this representation the Earth comes into existence at 00:00 and humans appear at about 23:59. The representation involves compression of a vital relation, since a long period of time (4.54 billion years) is rendered in the blended space as a much shorter period of time (24 hours). Analogously, decompression takes place when a small distance is rendered as a long distance, a short process is rendered as a long process, etc. One example of decompression is the popular analogy comparing the size of an atom to a football stadium in order to demonstrate that, after such “supersizing,” the nucleus of the atom would be the size of a pea. This analogy decompresses space, since it renders an extremely small size as a much larger one.
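
The arithmetic behind the 24-hour-clock compression is a simple linear rescaling. The sketch below (Python) uses the 4.54-billion-year figure from the text; the date of roughly 300,000 years ago for anatomically modern humans is an assumed, approximate value added for illustration.

    # Linear compression of geological time onto a 24-hour clock.
    EARTH_AGE_YEARS = 4.54e9          # age of the Earth (from the text)
    DAY_SECONDS = 24 * 60 * 60

    def clock_time(years_ago: float) -> str:
        """Map a moment 'years_ago' onto the compressed 24-hour clock."""
        elapsed_fraction = (EARTH_AGE_YEARS - years_ago) / EARTH_AGE_YEARS
        seconds = elapsed_fraction * DAY_SECONDS
        h, rem = divmod(int(round(seconds)), 3600)
        m, s = divmod(rem, 60)
        return f"{h:02d}:{m:02d}:{s:02d}"

    # Humans (assumed ~300,000 years ago) appear only a few seconds before midnight.
    print(clock_time(300_000))        # -> 23:59:54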

The creative potential of conceptual blending is seemingly unlimited. It appears that it is not possible to impose stringent restrictions on the content that can be combined in the blend or on the way it can be combined. Some blends, however, are “better” than others, which means that they achieve a higher degree of conceptual integrity and coherence, are interpreted more easily and produce a more immediate cognitive effect. Technically speaking, such blends are characterized by higher optimality, which is determined by so-called Optimality Principles. The following list of principles has been provided by Fauconnier and Turner:

  • Intensifying Vital Relations – Compress what is diffuse by scaling a single vital conceptual relation or transforming vital conceptual relations into others.
  • Maximizing Vital Relations – Create human scale in the blend by maximizing vital relations.
  • Integration – The blend must constitute a tightly integrated scene that can be manipulated as a unit. More generally, every space in the blend structure should have integration.
  • Topology – For any input space and any element in that space projected into the blend, it is optimal for the relations of the element in the blend to match the relations of its counterpart.
  • Web – Manipulating the blend as a unit must maintain the web of appropriate connections to the input spaces easily and without additional surveillance or computation.
  • Unpacking – The blend alone must enable the understander to unpack the blend to reconstruct the inputs, the cross-space mapping, the generic space, and the network of connections between all these spaces.
  • Relevance – All things being equal, if an element appears in the blend, there will be pressure to find significance for this element. Significance will include relevant links to other spaces and relevant functions in running the blend. (Fauconnier and Turner 2007: 393)

Grady, Oakley and Coulson propose one additional principle:

  • Metonymic Tightening – Relations between elements from the same input should become as close as possible in the blend. (Grady, Oakley and Coulson 2007: 428)

Optimality is a matter of degree. It is not the case that some blends are optimal and others are not; instead, some blends are more optimal than others. In theory, it is possible to envision a fully optimal blend, i.e. a blend which satisfies each and every Optimality Principle to the maximum degree, but in practice there is a dynamic tension between the principles – high compatibility with one often entails low or zero compatibility with, or even outright violation of, another. Judging the optimality of a blend involves an overall, holistic assessment against the entire set of principles rather than a mechanical verification of individual rules.

Typology of integration networks

It is difficult to propose a fixed and comprehensive classification of conceptual blending networks. Fauconnier and Turner (2007) suggest four types of networks, but at the same time they note that the types are focal points on a continuum. In other words, the types are categories which overlap and gradually merge into one another rather than constituting clear-cut distinctions.

The first type is a simplex network. Simplex networks include two input spaces; one of them contains a frame of roles and the other a set of values (thus, only one input is structured by a frame). In the blend, roles and values are juxtaposed and compressed into a unique entity. An example of a simplex network can be found in (3).

  3. Rupert is my friend.

Here, one input space includes the role ‘my friend’ and the other the value ‘Rupert.’ The emergent structure combines ‘my friend’ with ‘Rupert’ giving rise to a unique concept ‘my friend Rupert.’

Another type is a mirror network. In this kind of network all spaces, including the generic space and the blend, are organized by the same frame. A good illustration is Fauconnier and Turner’s (2007) example of a boat race in (4).

  4. As we went to press, Rich Wilson and Bill Biewenga were barely maintaining a 4.5 day lead over the ghost of the clipper Northern Light.

The passage comes from a news report in a sailing magazine and describes a race that took place in 1993. In (4) the position of Wilson and Biewenga’s catamaran Great American II is compared to the position of Northern Light, a clipper which set the record for the route in 1853. In the blend, the two events from different years are combined into one event, so that the interpreter is able to compare the progress made simultaneously by both ships. The organizing frame for all spaces is the BOAT RACE frame – both inputs feature a regatta, and therefore a general scenario of this event enters the generic space. The blend involves boat racing as well, but the details of the event are selectively projected from the inputs, and the years 1853 and 1993 are combined so that the blend features one event rather than two different races.

The third type is a single-scope network. In this case, both input spaces are structured by frames, but the two frames are different and only one of them is used to organize the blend. Typically, the main goal of a single-scope network is to construe one input (called the focus input) in terms of the other (called the framing input). An example of a single-scope network can be found in (5) (adapted from Fauconnier and Turner 2002).

  5. Murdoch has delivered a knock-out punch to Iacocca.

In this network one input space embraces the frame of two competing CEOs, Murdoch and Iacocca, while the other contains the frame of a boxing match. The blend is produced by applying the semantic structure of a boxing match (the framing input) to the semantic structure of business competition (the focus input). In other words, this single-scope network employs the boxing frame to organize the elements from the business competition frame.

The last type of conceptual integration network distinguished by Fauconnier and Turner is a double-scope network. As in a single-scope network, the input spaces hold different frames, but in this case both frames are used to create the blend. A double-scope network underlies the expression in (6), provided by Fauconnier and Turner (2002), in which the popular idiom to dig one’s own grave is used.

  6. You’re digging your own financial grave.

The expression may be used in reference to a business person who makes financial commitments that he or she is not able to fulfill. In such a case, one input frame defines a person taking out a loan which is likely to cause bankruptcy. The other input frame specifies a situation of digging a grave. The blend is a combination of both frames – the basic details of the emergent structure are projected from the grave-digging space, but the cause-and-effect relationship is mapped from the loan input and is not compatible with the former frame. More specifically, in the blend digging a grave causes death rather than being a response to it (as in the original grave-digging frame). The source of this counter-intuitive causal aspect of the emergent structure is the loan frame, in which it is the act of taking the loan that causes financial failure.
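
The typology can be restated compactly in terms of which frames organize which spaces. The sketch below (Python, with labels chosen for this summary) is merely a convenient tabulation of the four network types described above.

    # Fauconnier and Turner's four network types, summarized by frame distribution.
    network_types = {
        "simplex":      {"inputs": "a frame of roles + a set of values",
                         "blend organized by": "the single role frame",
                         "example": "Rupert is my friend."},
        "mirror":       {"inputs": "the same frame in both inputs",
                         "blend organized by": "that shared frame",
                         "example": "the 1853/1993 boat race"},
        "single-scope": {"inputs": "two different frames (focus + framing)",
                         "blend organized by": "the framing input only",
                         "example": "Murdoch has delivered a knock-out punch to Iacocca."},
        "double-scope": {"inputs": "two different frames",
                         "blend organized by": "both input frames",
                         "example": "You're digging your own financial grave."},
    }

    for name, spec in network_types.items():
        print(f"{name:13} | blend organized by {spec['blend organized by']}")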

Multiple inputs

The basic process of conceptual blending comprises four spaces, but the network may be more complex, typically due to a greater number of input spaces. A classic example of such an extended network is the one that leads to the emergence of the figure of the Grim Reaper (as analyzed by Fauconnier and Turner 2007).

This network involves five input spaces. Two of them are used to form the minor blend of a personified agent: the death input (with ‘death’ as an abstract force putting an end to human life) and the agency input (with a frame defining all aspects of human-like agency: animacy, consciousness, volition, etc.). Since ‘death’ is a non-human entity and cannot be ascribed typically human attributes like consciousness and volition, the content of both input spaces is blended into an emergent structure ‘death-as-agent’ in order to create a human-like agent. Nonetheless, neither the two inputs nor the minor blend provides the causal aspect of the final Grim Reaper blend – it must be remembered that in the death input space death is the result of a certain action or event and does not cause dying. The causal aspect is provided by the third input, namely the killer space (with ‘killer’ causing the death of an individual). Blending the semantic material from the three inputs gives rise to an intermediate blend featuring ‘death-as-killer.’

This intermediate emergent structure becomes an input for yet another blend and is combined with elements of the fourth input space containing the REAPING frame (since reapers are prototypical agents as well, a significant number of correspondences is immediately established: ‘death-as-killer’ is ‘reaper,’ ‘weapon’ is ‘scythe,’ ‘humans’ are ‘plants,’ etc.). The new blend features death-as-killer with an attribute of a reaper, i.e. the scythe.

The network so far, however, fails to account for certain elements of the Grim Reaper blend: why is the reaper a skeleton and why does it wear a robe and a cowl? These components are derived from the fifth input space, which includes the HUMAN DEATH frame. This frame structures encyclopedic knowledge of the death of human beings, including culture-specific aspects of funerary rituals, to name just one example. One element of the frame is the information that after death the human body turns into a skeleton. In this way, a skeleton is easily associated with death and becomes a natural choice for the Grim Reaper’s body. Another piece of information captured by the HUMAN DEATH frame is that in Western culture funerals were often conducted by monks wearing robes and cowls (at least in the historical period in which the blend was formed) and that mourners wear black clothes. These pieces of information are recruited to the blend to specify the outfit of the Grim Reaper, i.e. a black robe with a cowl. Eventually, this complex network gives rise to the conventional personification of death in Western culture and accounts for all the salient attributes of the Grim Reaper.
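
The stepwise character of this network – intermediate blends serving as inputs to later blends – can be pictured with a minimal sketch (Python). The blend function and the space contents are illustrative simplifications of selective projection, not a formal model.

    # A toy rendering of the multi-input Grim Reaper network.
    def blend(*inputs, emergent=None):
        """Merge selectively projected content from the inputs and add emergent content."""
        result = {}
        for space in inputs:
            result.update(space)       # later inputs may override earlier ones
        if emergent:
            result.update(emergent)    # content not recruited from any input
        return result

    death       = {"event": "end of human life", "causal role": "effect, not cause"}
    agency      = {"animacy": True, "volition": True, "consciousness": True}
    killer      = {"causal role": "causes death"}            # supplies the missing causal aspect
    reaping     = {"tool": "scythe", "harvested": "humans-as-plants"}
    human_death = {"body": "skeleton", "outfit": "black robe with cowl"}

    death_as_agent  = blend(death, agency)                   # minor blend
    death_as_killer = blend(death_as_agent, killer)          # intermediate blend
    grim_reaper     = blend(death_as_killer, reaping, human_death,
                            emergent={"figure": "the Grim Reaper"})

    print(grim_reaper)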

The figure of the Grim Reaper is a good example of the explanatory potential of Conceptual Blending Theory. While the theory originated in linguistics, over the past two decades it has become a universal methodological framework for analyzing virtually all kinds of compound semantic structures. Moreover, CBT is capable of providing a methodologically strict and systematic account of a number of problematic issues (like some types of metaphorical inference that cannot be handled by CMT) and puzzling complexities of meaning (like the Grim Reaper blend).

Bibliography

Abdel-Raheem, Ahmed. 2019. Pictorial Framing in Moral Politics: A Corpus-Based Experimental Study. Routledge Studies in Multimodality 28. New York: Routledge.

Arndt, Matthew. 2017. The Musical Thought and Spiritual Lives of Heinrich Schenker and Arnold Schoenberg. Ashgate Studies in Theory and Analysis of Music after 1900. London-New York: Routledge.

Evans, Vyvyan, Benjamin K. Bergen, and Jörg Zinken, eds. 2007. The Cognitive Linguistics Reader. London-Oakville: Equinox Publishing.

Evans, Vyvyan, and Melanie Green. 2006. Cognitive Linguistics: An Introduction. Mahwah, N.J.: Routledge.

Fauconnier, Gilles. 1985. Mental Spaces. Cambridge: MIT Press.

Fauconnier, Gilles, and Mark Turner. 2002. The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York: Basic Books.
———. 2007. “Conceptual Integration Networks.” In The Cognitive Linguistics Reader, edited by Vyvyan Evans, Benjamin Bergen, and Jörg Zinken, 360–419. London-Oakville: Equinox.

Gibbs, Raymond W. 2000. “Making Good Psychology out of Blending Theory.” Cognitive Linguistics 11 (3/4): 347–58.

Grady, Joseph, Todd Oakley, and Seana Coulson. 2007. “Blending and Metaphor.” In The Cognitive Linguistics Reader, edited by Vyvyan Evans, Benjamin Bergen, and Jörg Zinken, 420–40. London-Oakville: Equinox.

Jabłońska-Hood, Joanna. 2015. A Conceptual Blending Theory of Humour: Selected British Comedy Productions in Focus. Frankfurt am Main-New York: Peter Lang Edition.

Joy, Annamma, John F. Sherry, and Jonathan Deschenes. 2009. “Conceptual Blending in Advertising.” Journal of Business Research 62 (1): 39–49. https://doi.org/10.1016/j.jbusres.2007.11.015.

Kaliakatsos-Papakostas, Maximos, and Emilios Cambouropoulos. 2019. “Conceptual Blending of High-Level Features and Data-Driven Salience Computation in Melodic Generation.” Cognitive Systems Research 58 (December): 55–70. https://doi.org/10.1016/j.cogsys.2019.05.003.

Kwon, Iksoo. 2019. “Conceptual Mappings in Political Cartoons: A Comparative Study of the Case of Nuclear Crises in US–North Korean Relations.” Journal of Pragmatics 143: 10–27. https://doi.org/10.1016/j.pragma.2019.01.021.

Oakley, Todd. 1998. “Conceptual Blending, Narrative Discourse, and Rhetoric.” Cognitive Linguistics 9: 321–60.

Sørensen, Jesper. 2006. A Cognitive Theory of Magic. Lanham: AltaMira Press.

Turner, Mark, and Gilles Fauconnier. 1998. “Conceptual Integration Networks.” Cognitive Science 22: 133–87.
———. 2003. “Metaphor, Metonymy, and Binding.” In Metaphor and Metonymy at the Crossroads, edited by Antonio Barcelona.

Warchoł, Adam T. 2018. Conceptual Blending and the Arts: An Analysis of Michał Batory’s Posters. Newcastle upon Tyne: Cambridge Scholars Publishing.

Woźny, Jacek. 2018. How We Understand Mathematics. New York: Springer. https://link.springer.com/book/10.1007/978-3-319-77688-0.

Website
http://markturner.org/blending.html

Author

Hubert Kowalewski is an assistant professor at Maria Curie-Skłodowska University in Lublin, Poland. His main areas of professional interest include cognitive semiotics, cognitive linguistics, conceptual metaphor and metonymy, the methodology of linguistics, and the philosophy of science. In 2016 he published the monograph Motivating the Symbolic: Towards a Cognitive Theory of the Linguistic Sign, in which he proposed an approach to the study of motivation in language from the perspective of Charles Peirce’s semiotics.