Semiotic Machine
The most common understanding of machine is associated with an entity conceived for producing a desired artifact as many times as desired. Therefore, semiotic machine calls for an understanding of such a production entity as applied to semiotic “artifacts”: signs, messages, significations. Another understanding is that of sign process (semiosis), i.e., if some sign entity is provided as input, the semiotic machine would generate interpretations according to a given context. We cannot escape the cultural understanding, or the more specific semiotic understanding, without ignoring interpretations based on how people conceive of and use machines, or become involved in particular sign processes. A historical perspective on how the notion of machine as such emerged has to be associated with the earliest forms of semiotic activity. However, the semiotic machine can, at the same time, challenge definitions grounded in culture and advance new understandings. Accordingly, a methodological perspective is not just an option, but also an epistemological necessity. Within the matrix that unites time and method, our current understanding of the semiotic machine should not preclude new understandings, as these become increasingly possible. We are not trying to guess the future, based on knowledge accumulated so far, but rather to indicate that this knowledge might guide new understandings. To write about the semiotic machine is to write about semiotics as it relates to the human being involved in expression, communication, and signification.
Preliminaries
A semiotic machine, no matter how it is embodied or expressed, has to reflect the various understandings of what the knowledge domain of semiotics is. It also has to reflect what methods and means support further acquiring knowledge of semiotics. Moreover, it has to express ways in which knowledge of semiotics is tested, improved, and evaluated. Given the scope of the endeavor of defining the semiotic machine, the methodological approach must be anchored in the living experience of semiotics. Accordingly, the cultural-historic perspective, which is the backbone of any encyclopedic endeavor, is very much like a geological survey for a foundation conceived from a dynamic perspective. The various layers could shed light on a simple aspect of the subject: At which moment in the evolution of semiotics does it make sense to make the association (in whatever form) to tools and to what would become the notion of a machine? Reciprocally, we would have to explain how the various understandings of the notions tool and machine are pertinent to whatever was the practice of semiotics at a certain juncture.
Yet another reference cannot be ignored: The reductionist-deterministic view, celebrated in what is known as the Cartesian Revolution. Since that particular juncture in our understanding of the world, the reduction of semiotic processes to machine descriptions is no longer a matter of associations (literal or figurative), but a normative dimension implicitly or explicitly expressed in semiotic theories. Given this very intricate relation, we will have to systematize the variety of angles from which various understandings of the compound expression semiotic machine can be defined.
In our days, such understandings cover a multitude of aspects, ranging from the desire to build machines that can perform particular semiotic operations to a new understanding of the living, in view of our acquired knowledge of genetics, molecular biology, and information biology. That the computer—a particular form of machine and an underlying element of a civilization defined primarily as one of information processing—could be and has been considered a semiotic machine deserves further consideration.
Cultural-historic perspective
Whether the implicit semiotics of the earliest forms of human interaction (pre-language), or the more identifiable semiotics of the most rudimentary representations (in found objects, artifacts, or notations), as well as the semiotics implicit or explicit in tool-making and tool usage, conjures even the thought of a device associated with producing it is a matter of conjecture. Let us agree that a mold, the most rudimentary medium for reproduction of any form of expression, is a tool that contributes to the change from the unique (such as footprints) to the shared and repeatable. For all practical purposes, such a mold is a semiotic machine to the extent that it is deployed to stabilize the nature of human interaction (Haarman 1990; Nadin 1998: 81-88). Sameness in expression (regardless of whether we refer to images, objects, or alphabets, for example) is conducive to and supportive of sameness in action. The timeframe referred to is in the order of 50,000 years, during which language and writing emerged.
The awareness of distinctions between what is represented and why a certain representation (to give but one example, the concreteness of hieroglyphic signs, around 3100 BCE up to around 400 CE) better serves a certain purpose (contracts, teaching and learning, memory) is expressed in the tools utilized for reaching the respective goal. This semiotic awareness is initially implicit in the act of using signs. When semiotic means, in their most rudimentary form, become part of what we call learning, semiotic awareness becomes explicit: how to generate signs better adapted to the task at hand. While we do not suggest that at that time there is an awareness of the machine—a concept to emerge well after writing is acknowledged—there is definitely an understanding, through the use of tools, of how to transcend differences in order to achieve sameness, based on which a more effective pragmatics is possible.
The emphasis is on tool-based operations that make something possible, that enable, that assist. When the words that eventually lead to machine appear, as an expression of the pragmatics they will embody, such words—as in the Ionic Greek machos, or machama in the Doric Greek—will refer to way (of doing something), assist, be able. They are an extension of the tools deployed in a variety of human activities. Eventually, the Greek words were assimilated into the Latin (machina), and from there, to our days, into many cultures and languages. As testimony of that particular time makes plentifully clear, the emphasis on the use of means—what today is called media—is on making sameness possible, and ensuring that learning is facilitated.
Epistemological perspective
Pragmatics—the same factor that leads to dealing with representations, as well as with experienced reality—leads also to the progressive awareness of what eventually becomes semiotics (in its many variations). That is, we focus precisely on what individuals and groups do, i.e., on their practical activity (Nadin 1998). This unfolds predominantly in the physical-object (e.g., the lever extends the arm) and direct-action domains. It also extends over a relatively long time: anthropologists count ca. 10,000 years between the first rudimentary tools and the initial use of representations (Gombrich 1954) in the realm of sign-based activities (without an underlying concept of sign, of course). The evolutionary advantage of any form of mediation—the “something,” material or non-material, between the subject of activity and the individual(s) involved in an activity—is not self-evident. Therefore, the process through which sign-based practice expands is also relatively slow. But in each representation, those generating and using it express knowledge. This knowledge is mainly short-lived and pertinent to the circumstances. But this does not change the fundamental fact that what we call epistemological motivation is dominant among many other factors, such as communication intent, initial social instinct, and sexuality.
The dominant epistemological motivation is also confirmed by the need to share. This is a major factor in the progressive increase in the efficiency of human activity, and thus, also of evolutionary impact. If indeed knowledge acquisition drives, in very limited ways, the semiotic animal (zoon semiotikon, cf. Hausdorff 1897:7), it follows that the sign gains the status of a conceptual tool. Moreover, every tool, as an expression of knowledge pertinent to the action in which it is utilized, is a machine avant la lettre, at least in the sense of the very initial understanding expressed in the words from which our concept derives. The assistance provided by a conceptual tool, its way of aiding in the action, is easy to assess, even in retrospect, if we consider how imagery, sound (rhythms, in particular), tactility, smell, and taste partake in the “semiotics in action” of our early ancestors. Each semiotic instance is one of knowledge—explicit or implicit—and of interaction, including the interactions that result from sharing, stabilizing, comparing, learning, and teaching. The quipu of the Incas (Ascher and Ascher 1981) or the Ishango bone (Zaslavsky 1979; Bogoshi et al. 1987:294) cannot be compared to Napier’s bones (a calculating device using rods, cf. Napier 1617). Neither can the primitive semiotic machine embodied by a mold, or by the bamboo slips (dating back to 2200 BCE) used for record keeping in China (cf. People’s Daily Online, 2005) be compared to the computer. Still, they have in common the epistemological status of the practical human activity that made them possible at a certain moment in time. They testify to the knowledge of the persons using them. The connection between the ontological and the epistemological dimensions of human existence justifies the attention we give to the prehistory of the semiotic machine.
Gnoseological perspective
The abstraction of knowledge and the ways of acquiring knowledge are not the same as knowledge, as such, involved in our practical activity. The difference is more evident when the knowledge is generated not only in direct interaction with the surrounding world, but also from the mediated semiotic effort per se. In the process of deriving knowledge from representations, human beings not only become aware of their own abilities, they also affect these abilities. They witness their own change, since working with signs affects their own cognitive condition. The fact that human beings are existentially their own signs leads to a genetically enforced cognitive and neuro-anatomical condition that makes the semiotic component part of the thinking identity of the species (homo sapiens). But to think is to process, and in hindsight, a machine is nothing more than an embodied processing function, or several such functions, somehow coordinated. Among the first sign-tools, the lever, like the wheel, enables those conceiving them to perform some operations otherwise close to impossible (e.g., lifting heavy objects), and also to reproduce such operations for the same or similar purposes, in different locations, using different levers or wheels. The lever as a sign stands not for similar pieces of wood, but for similar actions (i.e., of leveraging, involving an extended “arm”). The entire history of early semiotics (Plato, Aristotle, the Stoics, cf. Borsche 1994) is one of repeated confirmations of the practical nature of the sign-focused experience. Water, fire, and wind afford the energy that drives elementary tools as these turn into semiotic devices, too; they are the “elements” making up the world, and the subject of all those changes brought about since ancient times to the living environment. The studies of signs in the Middle Ages (Augustine, Boethius, Anselm of Canterbury, Roger Bacon, William of Ockham) accommodate a conception of the sign in which signification and how it is produced take center stage (cf. Borsche 1994; Engels 1962; Fuchs 1999; Howell 1987; Jackson 1969; Jolivet 1969). When, relatively late in time (1673), machine comes to mean “a device for applying mechanical power,” and “appliance” (for military purposes), the semiotics is embodied in various parts (e.g., levers, wheels, pulleys) synthesized in an entity that never existed before. It was produced with the help of a form of thinking impossible without the underlying semiotics of representations. Again, the many meaning variations—around the same time, machine even defines the components of a sexual act—are ultimately a testimony to the gnoseological effort, and also to what it actually afforded in terms of new knowledge and new practical experiences.
The pendulum is a machine that compresses knowledge on gravity, the close cosmos (day and night cycle), numbers, levers, wheels, transmissions, and friction, among many other aspects. It is also a semiosis (sign process) that embodies a particular characteristic of the abstraction of time, i.e., duration, interval. The pendulum serves many functions. It can be programmed (even in its most primitive form), and it can even learn, as the most ambitious clocks of the time show. Still, there are many layers of discontinuity between such very early machines and our new understanding of the machine. Moreover, a fundamental gap, represented by a conception of the world as ambitious as that expressed in Descartes’ Method (1637), as opposed to the animistic view of the world expressed by Aristotle and his followers, marks the change from an intuitive empirical understanding to a systematic gnoseological approach defined as rationality.
A beginning and an end
Amply documented, the Cartesian Revolution can be summarized as
- a method—reductionism—for dealing with complexity;
- a conception—determinism embodied in the cause-and-effect sequence;
- a unifying view—the machine as a prototype for the living. In this respect, Julien Offray de La Mettrie (1748) is even more radical than Descartes.
These aspects need to be understood in their unity. They seem to be as far removed as possible from semiotics; and upon superficial examination, they might appear irrelevant to it. Indeed, in projecting an understanding of the world that corresponds to an advanced model of physical reality, Descartes deals with knowledge, and its acquisition, from a deterministic perspective. The reader of his work is eventually confronted with what Descartes (1684) called mathesis universalis (in the “Fourth Rule for the Direction of the Mind”; from the Greek mathesis: science, and the Latin universalis: universal), which “explains everything,” involving in the procedure not only numbers, but also shapes, sounds, and any object whatsoever. The philosopher and mathematician let us know that he hoped “that posterity will judge me [Descartes] kindly.” This continues to be the case, even as science reaches the limits of his encompassing conception of all there is, and criticism of the Method increases.
Indeed, when things become complex, reduction to constitutive parts helps. Again indeed, many sequences of a clear-cut cause followed by an effect confirm his conception. Moreover, the machine metaphor successfully guided humankind into the Industrial Revolution, and into the civilization that benefited from the “machine of literacy” (Nadin 1998:231-239). But within this encompassing model, semiotics was either integrated in the mathesis universalis or in logic; or it was reduced to linguistics. And the implicit understanding of the semiotic machine, as an instantiation of knowledge acquisition and dissemination, was subjected to the exigencies of mechanical functioning as opposed to living processes.
This is by no means the place to restate the various forms of criticism to which reductionism and determinism are exposed in our days. This is, however, the place where one can and should realize that the notion of machine since Descartes is very convincing with respect to functions related to the physical, but void of the fundamental characteristics of living processes. Semiotics-based human activities are representative of the entire being, not only of its physical substratum. In fighting for the emancipation of philosophy and science from the force vitale that explained the living, at least since Aristotle, Descartes and the scientists who followed him adopted a view of the world based on a rather limiting form of rationality. The limited understanding of causality was acceptable in a context of minimal interactions. After Descartes, signs could not be more than or different from what the senses conveyed to a mind—he did not know how the brain works; his drawings point to the “pineal gland.” And the mind would operate like the machines of his time. In this respect, the Cartesian perspective is a beginning, anchored in the world of perceptions and apparent causality.
Hence, Descartes’ mind could not conceive of comprehensive sign-based processes reflecting the complexity of human interactions. The sign processes in the Cartesian tradition cannot be other than those we associate with the rudimentary machines of his time. This is why, in examining semiotics and its epistemological condition, we must realize that the entire development of a theory and practice of signs shaped by Descartes is unavoidably reductionist and deterministic; and the semiotic machine associated with it is accordingly limited in scope. This statement does not exclude the various attempts, known from the history of science and philosophy, in particular the history of semiotics, to render the Cartesian view relative, or even to attempt alternate views (reference is made here to developments such as quantum mechanics, genomics, and to views advanced by Leibniz, Locke, and Peirce, to name only three semioticians).
In the Encyclopedic tradition, acknowledgment of the Cartesian perspective is a necessary condition for understanding the successive definitions of semiotics, machine, and semiotic machine. Within the same line of thinking, we need to take note of the elimination of the final cause (causa finalis) from among those pursued in the rationalism inspired by Aristotle’s work. While the analytic dimension of semiotics is marginally affected by the elimination of a teleological dimension of the sign (the possible causations), the generative dimension becomes rather limited. Purpose is removed from the realm of the possible to that of the contingent. The machine, in its physical embodiment, accepts the future only in the form of failure. The breakdown of any part of the machine brings the whole to a stop; that is, the future state affects the machine’s current state as a potential action, not as an effective factor. In this respect, the Cartesian view is an end. While we can indeed explain, to a satisfactory degree at least, the physical world as one determined by its past, the living is determined by its future, as well. Diversity in the living is never the exclusive result of deterministic processes. Non-determinism explains the implicit creativity of the living as a never-ending process of producing identities that never existed before (cf. Elsasser 1998:91-95).
All these considerations are meant to guide the reader in further examining the many different understandings of semiotic machine within the variety of semiotic endeavors leading to current semiotics.
Historic perspective
Along the diachronic axis of semiotic doctrine, the focus continuously changes from the sign in its generality (reflecting the variety of sensory perception) to the sign of language. The most impressive progress was actually made in linguistics, to the detriment of any other domain involving or facilitating sign processes. For this entire development, it makes sense to point out that the syncretic semiotic machine becomes a linguistic machine. Ferdinand de Saussure’s admirable work in linguistics guided him towards the observation that the sign might be a concept of an abstraction higher than the abstractions he used in dealing with language. He introduced semiology (at the end of the 19th century and the beginning of the 20th), taking a decisive step best defined in his own sparse words. It might not have crossed Saussure’s mind that there could be a science whose knowledge domain would transcend the various kinds of signs on whose basis the human being engages in practical experiences. But he was aware that, at least from his linguistic perspective, the language system of signs was dominant (Saussure 1983:15-16). Today, cognitive scientists are hard at work in dealing with semiotic matters, even when they are not explicitly identified as such.
The paradoxical nature of the relation between the two sides of a coin, one being the signifier and the other the signified, leads to an unexpected view, not necessarily beyond the Cartesian model, but definitely challenging it. The arbitrariness of the signs and their mutual formal relations—making up a language—are sources of change in the system. In some unexpected ways, this two-sided relation can be associated with a machine yet to be defined—Turing’s theoretical construct of later years, a hypothetical computer with an infinitely long memory tape. But we do not want to add to Saussurean mythology. The scientific condition of linguistic elaborations, for which he argued in a context in which language was mainly a subject of history-based analysis, justifies the thought expressed above. Furthermore, the many contributions that his initial ideas prompted (the famous Prague School of Functional Structuralism, the Tartu School, Russian Formalism, among others) justify a posteriori the suggestion made in relation to the Turing machine. It should come as no surprise that this aspect will eventually lead to a “cultural machine,” or “text machine,” endowed with self-control functions (inspired by Norbert Wiener’s cybernetics). Yuri Lotman (Tartu School) paid quite a bit of attention to modalities of cultural productions, i.e., generative procedures (Lotman 1990). Indeed, when using the metaphor of the machine after Descartes, we no longer relate to assistance, means, or enabling procedures, but to generative processes. More than anyone else, Noam Chomsky, definitely not inclined to acknowledge any intellectual affiliation with semiology or semiotics, gave the notion of generative procedure a more effective embodiment (Chomsky 1959).
To rewrite the history of semiotics from the perspective of the semiotic machine might afford some surprises. One is the realization that Saussure’s paradoxical metaphor is in nuce equivalent to a Turing machine. Another is that generative thought, extended from the sign to vast sign systems (such as culture, or text), suggests that, epistemologically, the machine metaphor remains a powerful representation that can assist us in a constructivist understanding of such complex systems. But in the end, the historic account of variations changes the focus from the semiotic machine as such to the variety of embodiments manifested over time, and frequently practiced without questioning the premises on which such embodiments were based. In retrospect, the tradition of semiology reveals that its implicit dualistic structure leads to a synchronic perspective, and therefore the semiological machine is of limited dynamics. Without bunching together what remains distinct in many ways, neither Hjelmslev (1968:175-227) nor Greimas (1966), nor the French school (Barthes et al 1964) transcended this model in their elaborations. One semiotician, Roman Jakobson (1979:3-18), with a tent set up both on the continent of synchronic semiology and on that of dynamic semiotics, realized the need to bridge the two.
We can only suggest that, in order to deal with the implications of the semiotic machine that emerges from Peirce’s semiotics, every effort should be made not to repeat the error of making his ideas less complex in the hope that they thus become more palatable. Morris (1938) was the first to trivialize Peirce; and since the time of his elaborations, many scientists (some of undisputed reputation) worked on a version that resembles the original as much as articles in the Reader’s Digest resemble those from which they were derived. The triadic-trichotomic sign definition (and structure) makes references to the icon, or symbol (the representamen domain) absurd. There is no such thing in Peirce. A semiotic procedure, described in detail, is used to generate the ten classes of signs (cf. Peirce 1931, 2:264, MS 540-17). Accordingly, a semiotic machine of triadic-trichotomic resolution is actually available in the Peircean text. Formal descriptions of the procedure have been given (Marty 1990; Richmond 2005; Nadin 1978, 1981; Farias and Queiroz 2003:165-184), thus providing all that is necessary for actually constructing such a semiotic machine. Parallel to this line of thinking, there are dimensions of the Peircean system, in particular, Peirce’s phaneroscopic categories, and moreover his diagrammatic thinking elaborations, conducive to different types of machines. And there are various articles inspired by the early attempts to build actual machines, such as inference engines or logical machines, with respect to which Peirce (1871:307-308, see also Ketner 1975) articulated a position of principle in 1871 impossible to ignore in our age of infatuation with machines.
In some ways, with Peirce’s semiotics we reach the core of the subject, with the still vague realization that the age of computation—i.e., of the dominance of a certain machine—is the age of the semiotic engine. Of equal interest, although of less notoriety among semioticians, is the contribution of George Boole (1854; cf. Boole 1958:24-39). In a chapter dedicated to the notion of the sign in general, he started with what he perceived to be an undisputed statement: “That Language is an instrument of human reason, and not merely a medium for the expression of thought, is a truth generally admitted.” It is a system “adapted to an end or purpose,” he wrote, suggesting the systematic approach to signs, regardless of whether we regard them as “representative of things and of their relations, or as representative of the conceptions and operations of human intellect.” The formal equivalence between these two conceptions points to a “deep foundation” exemplified, as he put it, in the “unnumbered tongues and dialects of the earth,” against the reassuring background of the “laws of the mind itself,” (cf. Boole, 1958:24-25).
His definition is constitutive of the mind as the semiotic machine: A sign is an arbitrary mark, having a fixed interpretation, and susceptible to combinations with other signs in subjection to fixed laws dependent upon their mutual interpretation. The three classes Boole defined make the operational nature of his semiotics even more evident.
Class I: Appellative or descriptive signs, expressing either the name of a thing or some quality or circumstance belonging to it.
Class II: Signs of those mental operations whereby we collect parts into a whole, or separate a whole into its parts.
Class III: Signs by which relations are expressed, and by which we form propositions.
Not unlike the mind, any machine modeled on Boole’s Propositions (which are rules) turns out to be a semiotic machine operating in a universe of clear-cut distinctions between True and False (conveniently symbolized by 1 and 0). As we know by now, computers are the unity between a language consisting of only two letters and the logic describing the relation between any statements in this very precise, but minimally expressive, language. It is, no doubt, yet again a case of reductionism, from natural language to one of the strictest mathematical formalisms. But it is also the threshold between the materially embodied machines of the Cartesian viewpoint and the first immaterial machine. This machine processes not things, but information, representing “in some form or capacity” (to allude to Peirce’s sign definition) things, or even, as our knowledge advances, information about a lower level of information and so forth (ad infinitum).
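To make Boole’s point concrete, here is a minimal sketch in Python, in today’s notation rather than Boole’s own: appellative signs are bound to the two letters 0 and 1, the operations of combination are fixed by law, and the interpretation of any proposition built from them is exhausted by its truth table. The particular operations and the sample proposition are illustrative choices, not taken from Boole’s text.

from itertools import product

# Class I signs: names ("x", "y") bound to the two letters 0 and 1.
# Class II signs: fixed operations for collecting and separating.
def AND(x, y): return x & y
def OR(x, y): return x | y
def NOT(x): return 1 - x

# Class III: a proposition formed from the signs above; its interpretation
# is fully determined by the laws governing the operations.
def proposition(x, y):
    return OR(AND(x, y), NOT(x))

# The machine's universe of clear-cut distinctions: the complete truth table.
for x, y in product((0, 1), repeat=2):
    print(f"x={x}, y={y} -> {proposition(x, y)}")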
At this juncture, it becomes evident that the four letters of the DNA alphabet (A, C, G, and T, standing for Adenine, Cytosine, Guanine, and Thymine, respectively; Watson and Crick 1953:737-738) represent yet another modality to describe processes, in this case, the intriguing genetic code, and to model the “fabrication” of entities, in the realm of the living, with known or desired characteristics. Descartes abolished the teleological dimension. The genetic engine—yet another embodiment of a particular semiotic engine (coupled to a knowledge domain expressed in the four letters of the genetic alphabet and the generative rules that guarantee the coherence of the genetic semantics)—while not explicitly affirming a final causation, cannot exclude it either. Many other specialized semiotic engines are articulated, as more generative mechanisms, such as the ones characteristic of unfolding stem cells, are discovered and put to practical use.
Accordingly, we have an interesting question to address: If semiotics is a universal science (THE universal science, a statement that, of course, irritates mathematicians), shouldn’t the semiotic engine be universal? Or can we consider the variety of semiotic engines, corresponding to particular semiotic descriptions, as part of an open-ended set of machines, each embodying the particular knowledge to be deployed in a particular field? The latter is not a trivial question, to be addressed lightly. The circumstances—i.e., the state of computation and knowledge today—should not prevent an answer that transcends the opportunistic inclination to justify the current paradigm. The methodological aspects to follow will serve as a guide as we further investigate the subject.
Computers are semiotic machines driven by semiotic engines
There are machines that are cranked manually; others are activated by falling water, steam, or gravity; others are activated by electricity. There are biological machines, where processing is the result of biological processes. Given the laws of thermodynamics, machines are not reversible. Processing takes energy; reverse processing would contradict the second law of thermodynamics. Together with the expectation of processing, embodied in the machine, comes the expectation of automation—processing that takes place on its own, without the participation of the human being. By no accident, the most abstract machine—the mathematical machine—is expressed in automata theory. An automaton is a mathematical machine that accepts an input, has a set of inner states, and produces an output. For all practical (and theoretical) purposes, this machine is reversible on account of cognitive energy: that is, it can work in both directions. In proving the equivalence between automata and sign processes (in Peirce’s definition, since all other known definitions are particular cases), a methodological foundation for the entire discussion regarding the semiotic machine has been established (cf. Nadin 1977, 1978, 1981). In a summary of the proof, we can establish that Peirce’s definition can be formally expressed as S = S(R, O, I, o, i), which is equivalent to A = A(X, Y, Q, α, β),
in which S stands for sign processes resulting from the open relation among objects, representamina, and the interpretant process; A stands for automata processes; X and Y, respectively, for the signs of input and output; and Q for the set of states. The transition function α and the output function β describe how states change and how output is generated from a certain input. Every automaton is a generative semiotics. Once the equivalence was proven, it henceforth justified the introduction of a notion many times quoted, but never really understood: The computer is a semiotic engine.
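Read as a finite transducer, A = A(X, Y, Q, α, β) can be sketched in a few lines of Python; the states, alphabets, and tables below are illustrative assumptions, not part of the cited proof. The sketch shows only the structural point made above: the same input sign receives different output signs, i.e., rudimentary interpretations, depending on the state the machine has reached.

from typing import Dict, List, Tuple

class Automaton:
    """A = A(X, Y, Q, alpha, beta): input signs X, output signs Y, states Q,
    a transition function alpha and an output function beta."""
    def __init__(self, alpha: Dict[Tuple[str, str], str],
                 beta: Dict[Tuple[str, str], str], start: str):
        self.alpha, self.beta, self.start = alpha, beta, start

    def run(self, inputs: List[str]) -> List[str]:
        state, outputs = self.start, []
        for x in inputs:                    # each input sign is mapped to an output sign
            outputs.append(self.beta[(state, x)])
            state = self.alpha[(state, x)]  # the state records the history so far
        return outputs

# Illustrative tables: the sign "a" is rendered differently in q0 and in q1.
alpha = {("q0", "a"): "q1", ("q1", "a"): "q0", ("q0", "b"): "q0", ("q1", "b"): "q1"}
beta = {("q0", "a"): "A1", ("q1", "a"): "A2", ("q0", "b"): "B", ("q1", "b"): "B"}
print(Automaton(alpha, beta, "q0").run(["a", "b", "a", "a"]))  # ['A1', 'B', 'A2', 'A1']

Nothing in such a sketch captures the open-endedness of the interpretant process; that limitation belongs to the initial limitations of the equivalence taken up again further below.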
A generative semiotics, which is the same as describing a machine that can output sentences and texts, as well as semiotically meaningful visual and acoustic sequences or configurations (Nadin 1982:79-88), can be conceived as a formal description of a variety of alphabets and syntactic and semantic rules. The validity of its output is always pragmatic, i.e., in reference to the human being’s practical performance. If a physician, well versed in the semiotic identifiers of an illness, as expressed in medical classifications, can perform effective pattern recognition, we have as output the semiotically relevant entity called a diagnosis. Alternative examples: the legal diagnosis (performed by officers of the justice system), the weather forecast (generated by meteorologists), and evaluation of the political situation (done by more and more professionals, ranging from journalists to various types of advisors and pollsters). It is by no means surprising that all kinds of analytical performance (such as literary or art criticism, real estate appraisal, military operations, mechanical diagnostics of cars and very complicated machinery, etc.) fit within the same procedure. The more complex operations of generative semiotics—such as how to convey a message using multimedia, how to generate a story, or what it takes to make a good game, for one player or for massively distributed situations—also belong to the functioning of the semiotic engine. Synthetic semiotics—e.g., synthesizing new materials while working with chemical symbolism and symbolic processing methods, or synthesizing life from the inanimate, if at all possible—also falls within the scope of the subject. In the final analysis, generative semiotics is the “engineering” of a “semiotic machine” for a given purpose.
After this broad image of what the discussion of the semiotic engine encompasses, it is time, for the sake of the implicit goal of any encyclopedic attempt, to focus on the characteristic ways in which computation can be understood as the concrete functioning of a semiotic engine.
Problem solving. Problem generation
Computation—which means processing of semiotic entities—comes in many forms: digital, analog, algorithmic, non-algorithmic, serial (von Neumann’s paradigm), parallel, interactive, numeric, symbolic, centralized, distributed—the list is open. To leave these distinctions to scientists and engineers and to focus exclusively on the outcome of computation is probably appropriate, as long as one positions himself or herself in the now established role of user. It should be remarked from the outset that 80% of what is defined as computation concerns users. Word-processing is the user application that takes the lead; but desktop publishing (which involves text, layout, and computer graphics), database applications (from pre-programmed tax return calculations to keeping records such as addresses, recipes, financial information, to advanced datamining), and more recently networking (e-mail, Websites, Web-publication, remote teaching, cooperative projects, and so much more) make up an increasing complementary set of applications. Some of these applications assume a user different from the one limited to word-processing, but in the end still not a computation professional. Such a professional translates questions (from trivial to scientific) into programs or procedures. Embedded computation, or ubiquitous computing, effectively overwrites the role of the user, and extends the significance of the semiotic machine into the realm of the artificial.
Again, one would be better off leaving a comprehensive evaluation of these particular applications in the hands of those who invented them, since, for better or worse, all that users have to say is that one or the other program still does not work as well as expected, or that the price-performance ratio is in some cases better than in others. Computation users are merely the most cost-efficient quality control agents (“debuggers”) of a very interesting science and technology that the term computation denotes, but by no means describes. Ideally, computation is an expression of knowledge, in the forms of algorithms, processing procedures, interactions, programs, etc., subjected to a wide variety of tests. It embodies the positivist expectation of validity, effectively erasing the distinction between science and humanities. It claims universality and is, together with its twin sibling genetics, constitutive of an epistemological horizon of unprecedented characteristics.
As has been established so far, a semiotic engine drives the computer. Boole’s contribution to this was already highlighted. If the assertion that the computer is ultimately a semiotic engine, or machine, is to be of any consequence both to semiotics and to computer science, the initial limitations of the proof of equivalence between the most general sign description and the automaton need to be overcome. Moreover, the consequences of such a statement should become clear, if indeed there are consequences to be expected beyond giving semiotics that much needed boost of credibility, without which its future relevance outside academic endeavors remains, as always, doubtful. Let us address these two requirements, not only for the sake of addressing them—intellectual goals often end up becoming relevant in themselves, but of no consequence for anything else—but foremost because, if they can be clearly pursued, neither semiotics nor computer science will remain the same. This assertion is a tall order and poses many challenges to those interested in and willing to pursue its consequences.
Indeed, the semiotic machine as problem-solver gives the correct answers to questions of well-defined relevance: the red light means “Stop;” a company’s brand carries information about its various dimensions (e.g., local or global, trustworthy, market acceptability); a text unfolds around its narrative focus. When we solve problems, we are often after a rationalist justification. But there is also the problem-generation component to semiotics, enlisting empirical testing and triggering behavioral change. In this latter domain, the focus is on generating new ways of thinking and new values expressed in behavior. Algorithmic computation is problem-solving. Interactive computation is based on empirical ways of acquiring and expressing knowledge. The semiotic engine, as the unity between the two, handily transcends current computer implementations that are not yet capable of unifying the two modes of acquiring and expressing knowledge presented above.
Computation is knowledge
Regardless of the type of computation considered, there is one characteristic that they all share: the outcome is an expression of something that could not be explicitly identified before the process took place. All the ingredients in the process—digital alphabet, Boolean logic, data, instructions, memory management, process and user interface—can be described in detail, and still the outcome cannot be predicted. (Otherwise, we would not go through the effort of producing it.) What matters is the process. Therefore, to compute means to design a type of process fundamentally different from those we are familiar with from physics, chemistry, biology, and other sciences. Computation can unfold on virtual or on real machines, in machine-based time or in (almost) real time, in single or multiprocessing sequences, on sequential or parallel machines, in neural networks or in a genetic medium (DNA or genetic computation). What counts is its inherent dynamic condition, as well as the fact that knowledge is generated at the intersection between the semiotics leading to human cognition and the semiotics underlying machine-based cognitive functions.
This knowledge can be of various kinds, like human knowledge itself. To be more specific: word-processing is the knowledge of all the elements involved in generating and disseminating texts. It is primarily a comprehensive theory of all the variables involved in the human or machine experience of generating texts in a context of acknowledged rules that embody grammar, syntax, etymology, linguistics, as well as rules for structuring and presenting ideas in written form. This theory, still in the making, is embodied in particular programs that allow for spell checking, for instance, or stylistic refinement, or for various visual forms of structuring (through layout rules, for instance). Its use is neither more nor less than the test of the text knowledge embodied in the model of a specific computational word-processing implementation. As people use this knowledge, they test it beyond everything a particular person or group (developers) could even imagine.
However, at this moment in the development of computation—a relatively young discipline, whose main products are still rudimentary—knowledge generated in computation processes is predominantly acknowledged outside the process, i.e., in the interaction between human beings and the machines supporting these processes. In other words, like the abacus, the computer does not know right from wrong, and even less, significant from insignificant, meaningful from meaningless.
Instead of revisiting the formal descriptions of the various types of computation known so far (many more will come, if we consider the extraordinary multiplication of means and methods dedicated to computation), and inferring from such descriptions to sign processes (in Peirce’s sense, or in some alternative fundamental concepts of semiosis) and vice versa, let us take an alternative path. Under the assumption that computation is knowledge pertinent to a new moment in the evolution of the species, and in the knowledge that there are no known cognitive processes whose underlying principle is not semiotic, it follows that the statement, “The computer is a semiotic machine,” does not need to be formally further pursued, since it is the necessary consequence of the condition of computation. Granted, the assertion might be weakened if someone could come up with a type of computation that is not knowledge-based, but even if one could produce such an example, it would not automatically exclude semiotic processes, but rather prompt more adequate definitions of what we call semiosis.
The point we are trying to make is far from trivial. Many scientists, technologists, and semioticians consider computers a technology, and what happens in a computer, a matter of moving electrons, heat dissipation, and electromagnetism, i.e., physical processes. They are not totally wrong. After all, computation as process does not happen in a vacuum (after the disappearance of vacuum tubes, this sentence holds true even in the literal sense), but with the participation of matter (organic or inorganic), or better yet, at the meeting point between matter and human cognitive capabilities.
In one of his famous statements (probably quoted as frequently as his theory of relativity), Einstein declared: “It would be possible to describe everything scientifically, but it would make no sense. It would be without meaning, as if you described a Beethoven symphony as a variation of wave pressure.” Up to a certain point, Einstein was right. Indeed, electronics—the science and technology of all that made computers possible—is a necessary but not sufficient condition for computation. All the circuits can be perfectly designed and produced, the power supply in good order, and the input and output devices correctly integrated, and still there would be no computation at this stage. Something else, of a higher order (if we agree to accept that abstraction is of a higher order than the concreteness of matter), makes the function of computation possible. Alternatively, a situation in which we have no machine whatsoever, but in which we conceive a program and execute it mentally or on paper (granted, slowly, step-by-step, with many intermediate steps), can be seen as computation, insofar as it is part of a cognitive process involving a representation, a logic, data, and instructions applied to them. “No machine whatsoever” does not mean that the biological machine—to use the old machine metaphor—which we humans are, is not the substratum of the process. The Turing machine is an example. The demonstration (Nadin 1977) that the mathematical category describing it is equivalent to the mathematical category describing sign processes only confirms why one can claim that the engine of the Turing computation is semiotic. On this account, let it be noticed that Turing did not reduce the human being to a machine. He wrote: “We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions” (Turing 1936:230-265). This is a fundamental position, very little noticed in the computing community, and almost never discussed by the semiotic community.
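Turing’s remark can be illustrated with a machine of literally a handful of conditions. The example below, a common textbook exercise rather than Turing’s own, adds 1 to a binary number written on a tape; executed mentally, on paper, or in Python, it is the same sequence of sign substitutions.

def turing_increment(digits: str) -> str:
    """A three-rule Turing machine that adds 1 to a binary number."""
    rules = {  # (state, symbol read) -> (symbol to write, head move, next state)
        ("carry", "1"): ("0", -1, "carry"),  # propagate the carry to the left
        ("carry", "0"): ("1", 0, "halt"),    # absorb the carry
        ("carry", " "): ("1", 0, "halt"),    # extend the number by one digit
    }
    tape = [" "] + list(digits)              # one blank cell to the left of the number
    head, state = len(tape) - 1, "carry"     # start at the rightmost digit
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).strip()

print(turing_increment("1011"))  # -> 1100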
As we focus on the semiotic machine, our subject is computation, not only as a technological process, but as semiotic process unifying the algorithmic and the interactive. The qualifier semiotic means that a sequence of interpretations is generated in each and every computation. By this, we understand that much more than permutations, and even more than tractability—i.e., whether one transcends the time limitations by which humans live (finite intervals) in order to compute—need to be considered.
If computation, regardless of its nature (algorithmic or interactive), is not reducible to electric, or quantum, or DNA processes, but involves semiotic entities, the question is: What are they? A short answer would be: The same entities that make cognitive processes possible. Somewhere along the line, we end up at the one and only culprit of semiotics: the sign. Thus we close the infamous circle: The sign as an underlying element of thinking = The sign as a product of thinking, which Boole alluded to while describing language. Computation has it easier. Bits and bytes (which are only strung-together bits) are processed, but not necessarily defined, through computation; rather, they are defined beforehand, as a condition of computation.
As a measure of information, the bit describes quantities. As a unity among what is represented, the representational means, and the infinite process of interpretation, the sign emerges as individuals constitute themselves through whatever they do. The bit itself was generated in such an experience of generating, transmitting, and receiving information. As a sign, the bit can be seen at the syntactic level as the string of letters b, i, t, or as whatever the syntax of the information it embodies is; at the semantic level, as the univocally defined unit of information pertinent to the simplest imaginable choice (heads or tails); at the pragmatic level, as the relation between the information it describes, the many ways in which it can be expressed, and the infinity of actions it can trigger, or, alternatively, inhibit. Insistence on clarifying concepts at this juncture stems not from a pedantic instinct typical of the spirit of the Encyclopaedia, but from a pragmatic necessity: If the relevance of semiotics to computation is to be established, then it is obvious that one more analytical tool will be exactly what other analytical tools are, i.e., perhaps an instrument of validation, a method for evaluating, or, at best, an optimizing procedure. There is nothing against such possibilities, which semioticians took advantage of, producing lectures, articles, even books about analytical semiotics. However, the nature of computation is such that semiotics belongs to its premises, and, accordingly, a legitimate semiotic approach can and should be part of the computation, not only of its validation after it was finished.
In more detail, what this means is nothing other than the rethinking of computation in semiotic terms, and their effective integration in the means and methods through which knowledge is computationally expressed. That involves transcending the quantitative level of the bit and integrating qualitative signs, with the implicit understanding that quality is not reducible to quantity. This major understanding is far from being trivial, especially in a context of technological innovation within which some aspects of qualitative distinctions were successfully translated into quantitative distinctions. Case in point: music. Thus Einstein’s assertion on representing Beethoven digitally comes back to haunt us. Indeed, the high generality of the bit, as opposed to the concreteness of wave pressure differences, explains the perfect digital rendition of a Beethoven symphony, without, of course, making it identical or equivalent to a live performance (in a studio or before an audience). We can even imagine an automated performance, by virtual musicians, directed by a virtual conductor, faithful to Beethoven’s musical text to any extreme we can think of. But that again is Beethoven as quantity, measurable and controllable, while a performance, with its implicit deviations, results as a living product and ceases in this definition once the performance has taken place. This is not an elaboration on music, or on the arts. It is an elaboration on what happens when the semiotic engine human being is replaced, or even complemented, by a semiotic engine of a different nature. Feigenbaum’s account of the calculations he performed in his mind, which resulted in valid outcomes different from those of computation by powerful computers, is but one example of how the means of representation are not a passive constituent of the semiotic processes in which they are used.
Semiotics brings to computation the awareness of the fact that sign processes depend on the nature of the signs, that they are constitutive of new realities, and as such, not unlike notation systems (e.g., numbers, letters, colors, shapes), they are present not only in the input (what goes into a sign process, what goes into computation), but also in the output. A digital rendition of a Beethoven symphony could be as fascinating as any other we can think of, provided that it can make possible the closure through which representamina are integrated in an interpretant process. Circumstances for this to happen are provided. We experience a fundamentally new pragmatic framework, i.e., of semiotically driven human experiences. Indeed, the species moved another notch away from its natural condition to its human, i.e., semiotic, condition. To elaborate on this, as the many aspects of the semiotic engine are described, would be presumptuous. We are actually trying to determine how computation can be grounded not only in electronics, logic, algorithms, mathematics, etc., but can also integrate the enormous semiotic experience that the species has acquired so far.
Computation as semiosis
To nobody’s surprise, semiotic considerations in respect to computation were first articulated in respect to so-called man-machine interaction (Nadin 1983). Considerable experience originating from past challenges posed by all kinds of artifacts used by individuals was brought to the table. Even line editors—precursors of the current interfaces—were subjected to semiotic scrutiny. Commands had to be abbreviated, made as clear and univocal as possible, presented in legible form, and according to cognitive principles pertinent to the human processing of words. But this is prehistory. Iconic interface was a definite semiotic statement, inspired, as we know, by trivialized semiotic terminology. To its fame, and to its shame, semiotics contributed to the desktop metaphor—a huge step forward in making new forms of computation available to a large number of users, but also a dead-end street in which computation has remained stuck to our day.
Much more interesting was the attempt to enlarge the notion of computation itself to include varieties of signs drawn from the elements making up the elusive domains of the visual, the aural, and multimedia. In the virtual realm, much more than in the pseudo-3D realm, all kinds of semiotic devices found their usefulness in, or contributed to, the periodic moments of confusion that mire computation. To a lesser extent, semiotic considerations were present in neural networking, biocomputing, and molecular and quantum computation, to name a few fields. But it remains to be seen whether this situation will eventually change. In some areas, extremely intricate semiotic considerations, though rarely identified as such, are a dominant component. Datamining, the magic formula of the networked computation dedicated to the use of information leading to more individualized forms of interaction (dissemination of the new, e-commerce, healthcare, culture, etc.), is, after all, the embodiment of abductions, in the strictest sense of Peirce’s definition, carried out by the semiotic engine. Almost all known inference engines deployed today encode semiotic elements, although at times, those who designed them are driven more by semiotic intuition than by semiotic knowledge.
To start a search on the Web today is literally to start sign processes, to either watch how these unfold or to affect their unfolding by controlling the syntax level, the semantics involved—still the dominant dimension of any Web activity—or the pragmatics in cooperative projects, remote learning, and interactive publishing. These forms of computation as semiosis will continue to attract more and more people. Their efficiency can be improved only if more methodical, and more professional, semiotic elements are integrated and fine-tuned in their use.
Semiotic awareness and the semiotic machine. The future.
The semiotic community has shown interest in the subject of the semiotic machine especially in view of the hope that it will give currency to research seen as more extraneous than significant in the current context. Since the early 1990s, several authors have dealt with various aspects of machine semiosis (Andersen, Nake, Sieckenius de Souza). Others (Zemanek, Nake, Andersen, R.W. Floyd) focused on semiotic aspects of programming, beginning with formal languages and program evaluation and ending with automatic programming. Referring to Jean Petitot-Cocorda and Per Aage Brandt, but avoiding Peirce, Albertina Laurenci focused on intelligent systems (Agile Software Development). Gabriele Gramelsberger tried to define the sign (as a digital particle), while examining the building blocks of virtual worlds. Coming from the computer science community, Gomes, Gudwin and Queiroz (2003:69-79, 2005) have tried to sketch an introduction to computational semiotics. As they see it, computational semiotics “seeks inspiration in semiotics.” But to grasp what they have in mind, one has to realize that the notion of semiotics might not be automatically accepted within the semiotic community: “a tradition in the philosophy of mind dealing with concepts of representation and communication from a more technical perspective.” Their contribution is helpful in identifying the work of Dmitri Pospelov—a Russian scientist specialized in intelligent control theory—and Eugene Pendergraft—author of a so-called “self-knowing” machine (the Autognome). They also make reference to Gerd Döben-Henisch (1996), who tried his hand at defining a semiotic machine as a “device able to reconstruct the common structures of human experience in terms of sign processing.” Döben-Henisch worked on a knowledge robot (Knowbot; 2002:59-79), an agent-based implementation of his ideas concerning semiotic machines. Jack A. Shulman (1996), a very active computer professional, goes as far as to present the idea rhetorically: “Imagine a machine which can think like a human. A semiotic machine.” He provides some details for what he calls “implementational protocols of thought” (conveniently abbreviated as IP) and defines “four fundamental mechanisms used by the mind”—called a Cognitive Abstraction Inference Induction machine (CAII)—each being a basic pre-semiotic process. Shulman states that “implementation protocols and semiotic processes are two sides of the same coin,” which is more than an allusion to Saussure’s distinction between the signifiant and signifié.
While such elaborations, from non-semioticians, are indicative of the level at which semiotics permeates other sciences, more significant ideas are offered for debate by philosophers, such as Lauro Frederico Barbosa da Silveira, and by historians of semiotic ideas, such as Winfried Nöth. Da Silveira is focused on Peirce’s philosophy as a broad, unified conception, impossible to understand unless taken in its totality. Learning is what a semiotic machine would have to perform in order to “progressively” modify its way of functioning. Such a machine would have to be endowed with a “generalizing capacity.” It is clear that such in-depth surveys will have to guide future attempts dedicated to understanding how the implicit notion of semiotic machine changes over time. Nöth (2002:5-21) takes up this challenge and proceeds acrimoniously in his overview of more recent, but by no means exhaustively reported, concepts pertinent to the subject. The scholarly quality of the overview is unimpeachable. One need not agree with all assertions, especially those relative to semiosis, in order to profit from the vast body of work referenced. Jonathan Swift’s Academy of Lagado, in Gulliver’s Travels, caught Peirce’s attention, as well. The machine described could be used even by “the most ignorant person, at a reasonable charge” to “write books on philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study.” Nöth is alarmed by this prospect, but does not think that the writings on the subject of the semiotic machine might qualify as such work. This is mentioned here because, in dealing with the subject of the semiotic machine, the community of semioticians can benefit from the awareness of interactivity, which is not a characteristic of the machine.
As we know, semioses, regardless of their nature, are dynamic sign processes. Through semioses, minds interact and thus become identified in a course of action (pragmatics) that defines their characteristics (cf. Nadin 1991). As we move towards evolutionary computation and evolvable hardware, we need to make sure that the semiotic engine on which these are by nature based is designed with the requirements of semiotic processes, as we know them from human interaction, in mind. It is beyond dispute that new classes of such semiotic processes might evolve. However, being sign-based, they will reflect the epistemological nature of the sign and thus replicate semiotic awareness. Indeed, a semiotic engine is not purely and simply an engine, but one with a certain self-awareness. The bits processed are bits that know where they are and to which string they belong. More precisely, the operation to which they belong, which is a semiosis, is not mechanical but semiotic; that is, the mechanism of self-interpretation is embedded in the process. When representations of digital circuits are placed at the level of the chromosome, as is now being done in evolvable hardware, a foundation is laid for computation that involves and facilitates self-awareness.
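To make the image of “bits that know where they are and to which string they belong” concrete, the following Python sketch renders a toy chromosome encoding for an evolvable circuit in which each gene carries, besides its value, a description of its own locus and of the genome to which it belongs. It is an illustration written for this entry, not an implementation drawn from any of the systems cited above; all names (Gene, Genome, LOGIC_GATES) are hypothetical, and the sketch stands for self-describing representations only, not for a semiotic engine as such.

```python
# Illustrative sketch only: a toy chromosome for an evolvable circuit
# in which every gene carries metadata about its own position and about
# the genome it belongs to. All identifiers are hypothetical.

import random
from dataclasses import dataclass, field

LOGIC_GATES = ["AND", "OR", "XOR", "NAND"]  # primitive circuit elements


@dataclass
class Gene:
    gate: str        # which logic gate this gene encodes
    locus: int       # the gene "knows where it is" in the chromosome
    genome_id: str   # ...and "to which string it belongs"

    def interpret(self) -> str:
        # A minimal act of self-description: the gene reports its own
        # position and membership, not just its value.
        return f"{self.gate} at locus {self.locus} of genome {self.genome_id}"


@dataclass
class Genome:
    genome_id: str
    genes: list = field(default_factory=list)

    @classmethod
    def random(cls, genome_id: str, length: int) -> "Genome":
        g = cls(genome_id)
        g.genes = [Gene(random.choice(LOGIC_GATES), i, genome_id)
                   for i in range(length)]
        return g

    def mutate(self, rate: float = 0.1) -> None:
        # Point mutation: a gene's value may change, but its
        # self-description (locus, genome membership) stays consistent.
        for gene in self.genes:
            if random.random() < rate:
                gene.gate = random.choice(LOGIC_GATES)


if __name__ == "__main__":
    genome = Genome.random("circuit-0", length=5)
    genome.mutate()
    for gene in genome.genes:
        print(gene.interpret())
```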
Bibliography
Andersen, Peter Bogh (1986). “Semiotics and informatics: computers as media.” In Information Technology and Information Use. Towards a Unified View of Information and Information Technology (P. Ingwersen, L. Kajberg, A. Mark Pejtersen, eds.). London:Taylor Graham (64-97)
Andersen, Peter Bogh (1995). “The force dynamics of interactive systems. Towards a computer semiotics.” In Semiotica 103, 1/2. Berlin:Walter de Gruyter (5-45)
Andersen, Peter Bogh, P. Hasle, and P. A. Brandt (1997). “Machine semiosis.” In Posner, R., K. Robering, T.A. Sebeok, eds., Semiotics: A Handbook About the Sign-Theoretic Foundations of Nature and Culture, Vol. 1. Berlin:Walter de Gruyter (548-570)
Aristotle (350 BCE). On Interpretation. See also E.M. Edghill, trans. http://classics.mit.edu/Aristotle/interpretation.html
Augustine of Hippo. On Christian Doctrine, I.2. See also De Doctrina Christiana (Oxford Early Christian Texts, 1996). R.P.H. Green (Trans.). New York:Oxford University Press.
Barthes, Roland (1964). Eléments de sémiologie. Paris:Seuil.
Bogoshi, J., K. Naidoo and J. Webb (1987). “The oldest mathematical artifact.” In Mathematical Gazette, 71:458.
Boole, George (1854). “Of signs in general, and of the signs appropriate to the science of logic in particular; also of the Laws to which that class of signs are subject.” Chapter in An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities. New York: Macmillan (reprinted with corrections New York: Dover Publications, 1958).
Borsche, Tilman (1994). “Zeichentheorie im Übergang von den Stoikern zu Augustin.” In Allgemeine Zeitschrift für Philosophie, 19.
Chomsky, Noam (1957). Syntactic Structures. The Hague:Mouton.
Descartes, René (1637). Discourse on the Method of Rightly Conducting the Reason and Seeking Truth in the Sciences.
Descartes, René (1684). Regulae ad directionem ingenii (Rules for the Direction of the Mind). See also Giovanni Crapulli (ed., 1966). The Hague: Martinus Nijhoff.
Döben-Henisch, G. (1996). “Semiotic Machines – Theory, Implementation, Semiotic Relevance.” In 8th International Semiotic Congress of the German and the Netherlands Semiotic Societies, Amsterdam.
Döben-Henisch, G., L. Erasmus, J. Hasebrook (2002). “Knowledge Robots for Knowledge Workers: Self-Learning Agents Connecting Information and Skills.” In: Jain, L.C., Zhengxin Chen, N. Ichalkaranje, eds. Intelligent Agents and Their Applications (Studies in Fuzziness and Soft Computing) Vol. 98. New York:Springer.
Elsasser, Walter (1998). Reflections on a Theory of Organisms. Baltimore: The Johns Hopkins University Press.
Engels, J. (1962). “La doctrine du signe chez saint Augustin.” In F.L. Cross, ed. Studia Patristica. Berlin: Akademie Verlag (366-373)
Farias, P. and J. Queiroz (2003). “On diagrams for Peirce’s 10, 28, and 66 classes of signs.” In Semiotica, Vol. 147 (1/4). Berlin:Walter de Gruyter.
Floyd, R.W. (1967). “Assigning meaning to programs.” In J.T. Schwartz ed. Proceedings of the Symposium in Applied Mathematics, AMS (19-32)
Fuchs, Michael (1999). Zeichen und Wissen. Das Verhältnis der Zeichentheorie zur Theorie des Wissens und der Wissenschaften im dreizehnten Jahrhundert. Münster:Aschendorff.
Gombrich, E. H. (1954). The Story of Art. Garden City, NY:Phaidon.
Gomes, A., Gudwin, R. & Queiroz, J. (2003). “Towards Meaning Processes in Computers from Peircean Semiotics.” In: S.E.E.D. Journal (Semiotics, Evolution, Energy, and Development) 2 (69-79)
Gudwin, R. and J. Queiroz (2005). “Towards an Introduction to Computational Semiotics.” In: C. Thompson and H. Hexmoor, eds. Proceedings of the International Conference on Integration of Knowledge Intensive Multi-Agent Systems. KIMAS ’05: Modeling, Evolution, and Engineering. IEEE Systems, Man and Cybernetic Society. Vol.1, (393-398)
Greimas, A. J. (1966). Sémantique structurale. Paris: Larousse.
Haarmann, Harald (1990). Universalgeschichte der Schrift. Frankfurt:Campus Verlag.
Hausdorff, Felix (writing as Paul Mongré) (1897). Sant’Ilario. Gedanken aus der Landschaft Zarathustras.
Hjelmslev, L. (1966). “La structure fondamentale du langage.” In Prolégomènes à une théorie du langage. Paris: Minuit.
Howell, Kenneth (1987). “Two aspects of Roger Bacon’s semiotic theory in De signis.” In Semiotica 63 (73-81)
Jackson, B. Darrell (1969). “The Theory of Signs in St. Augustine’s De doctrina Christiana.” In Revue des Études Augustiniennes 15 (9-49)
Jakobson, Roman (1979). “Coup d’œil sur le développement de la sémiotique.” In S. Chatman, U. Eco, and J.M. Klinkenberg, eds. A semiotic landscape/Panorama sémiotique. Actes du premier congrès de l’Association internationale de sémiotique, Milan, juin 1974. The Hague/Paris/New York:Mouton (3-18)
Jolivet, Jean (1969). Arts du langage et théologie chez Abélard. Paris: Vrin.
La Mettrie, Julien Offray de (1748). L’Homme Machine. Leyden:Elie Luzac.
Leibniz, G.W. (1989). Philosophical Essays. Indianapolis:Hackett.
Leibniz, G.W. (1995). Discours de métaphysique. Paris:Gallimard.
Locke, John (1690). An Essay Concerning Human Understanding.
Lotman, Yuri (1990). Universe of the Mind. A Semiotic Theory of Culture, Ann Shukman, trans. London: I.B. Tauris. (His initial work on the subject dates back to 1973.)
Marty, Robert (1990). L’algèbre des signes. Amsterdam:John Benjamins.
Morris, Charles (1938). “Foundations of the Theory of Signs.” In International Encyclopedia of Unified Science, Vol. 1:2. Chicago:University of Chicago Press.
Nadin, Mihai (1977). “Sign and fuzzy automata.” In: Semiosis, Heft 1, 5. Baden-Baden: Agis Verlag.
Nadin, Mihai (1978). “Zeichen und Wert” (Sign and Value). In Grundlagenstudien aus Kybernetik und Geisteswissenschaft, 19:1, March.
Nadin, Mihai (1981). Zeichen und Wert (Sign and Value). Tübingen: Günther Narr Verlag.
Nadin, Mihai (1982). “Consistency, completeness, and the meaning of sign theories: The semiotic field.” In The American Journal of Semiotics, 1:3 (79-88)
Nadin, Mihai (beginning 1983). In a number of lectures, beginning as early as 1983, Nadin was the first to describe the computer as “the semiotic machine par excellence.” For example: Interactivity: on computer art and design. William A. Kern Institute Professorship in Communication, Rochester Institute of Technology, NY, March 16, 1983; The semiotic model of artificial intelligence – a shift in paradigm. Colloquium on Artificial Intelligence, Rochester Institute of Technology, NY, May 12, 1984.
Nadin, Mihai (1991). Mind—Anticipation and Chaos. (Milestones in Thought and Research). Zurich/Stuttgart: Belser Presse.
Nadin, Mihai (1998). The Civilization of Illiteracy. Dresden: Dresden University Press.
Nadin, Mihai (2005). “The Timeliness and Futureness of Programs.” In C. Pias, ed. The Futures of the Computer. Zurich/Berlin: Diaphanes Verlag (29-46)
Nake, Frieder (1994). “Human-computer interaction: signs and signals interfacing.” In Languages of Design, 2 (193-205)
Nake, Frieder, ed. (1994). Zeichen und Gebrauchswert. Beiträge zur Maschinisierung von Kopfarbeit. Bremen: Universität Bremen, FB Mathematik/Informatik, Bericht Nr. 6/94.
Nake, Frieder (1997). “Der semiotische Charakter der informatischen Gegenstände.” In U. Bayer, ed. Festschrift für Elisabeth Walther zum 75. Geburtstag. Baden-Baden:Agis Verlag.
Napier, John (1617). Rabdologiae.
Nöth, Winfried (2002). “Semiotic Machines.” In: Cybernetics and Human Knowing, Vol. 9, Nr. 1 (5-21). Essex, UK:Imprint Academic.
Peirce, C.S. (1931-1935). Collected Papers of Charles Sanders Peirce, Vols. 1–6, Charles Hartshorne and Paul Weiss, eds. Cambridge, MA: Harvard University Press.
Peirce, C.S. (1871). Obituary of Charles Babbage, in The Nation, 13 (9 November 1871) (307-308)
Peirce, C.S. (1975). K.L. Ketner and J.E. Cook, eds. Contributions to “The Nation”. Vol 1 (1869-1893). Lubbock:Texas Technological University Press.
People’s Daily On Line (2005) http://english.people.com.cn/200503/25/eng20050325_178268.html
Plato (360 BCE). Cratylus. See also Jowett, Benjamin, trans. http://classics.mit.edu/Plato/cratylus.html
Pendergraft, E.P. (1993). The Future’s Voice – Intelligence Based on Pragmatic Logic. Internal Report, Creative Intelligence Incorporated, Jasper, WY.
Pospelov, D.A. and Y.I. Yeimov (1977) “Semiotic Models in Planning Problems of Artificial Intellect Systems.” In Engineering Cybernetics, 5 (37-43)
Pospelov, D.A. (1991). Situational Control: Theory and Practice. (Unpublished translation of the Russian original, Nauka Publishers, Moscow, 1986.)
Richmond, Gary (2005) “Outline of trikonic |>*k: Diagrammatic Trichotomic.” In Lecture Notes in Computer Science. New York/Berlin: Springer Verlag. (See also http://faculty.lagcc.cuny.edu/garyrichmond/reserach/research_documents.htm)
Saussure, Ferdinand de (1983). Cours de linguistique générale (1910-1911) d’après les cahiers d’Émile Constantin (Third Course of Lectures on General Linguistics (1910/1911) from the notebooks of Émile Constantin). E. Komatsu, ed. and R. Harris, trans. Oxford: Pergamon.
Shulman, Jack (1996). The Cognitive Abstraction Inference Induction Machine. http://www.acsa2000.net/shulman1.html
Souza, Clarisse Sieckenius de (1993). “The semiotic engineering of user interface languages.” In International Journal of Man-Machine Studies, No. 39 (753-773)
Souza, Clarisse Sieckenius de (1997). “Supporting end-user programming with explanatory discourse.” In Proceedings of the ISAS. Gaithersburg MD: NIST.
Souza, Clarisse Sieckenius de (2005). “Semiotic engineering: bringing designers and users together at interaction time.” In Interacting with Computers, 17(3) (317-341)
Turing, Alan M. (1936). “On computable numbers, with an application to the Entscheidungsproblem.” In: Proceedings of the London Mathematical Society, Ser. 2, Vol. 42 (1936-7); corrections, Ibid., Vol. 43 (1937) (544-546)
Watson, James D. and Francis H. Crick (1953). “Molecular structure of nucleic acids.” In: Nature 171 (737-738)
Zaslavsky, Claudia (1979). Africa Counts: Number and Pattern in African Culture. Chicago: Lawrence Hill Books.
Zemanek, Heinz (1966). “Semiotics and programming languages.” In Communications of the ACM, 9 (139-143).