Three books to ponder
Information: The New Language of Science
By Hans Christian von Baeyer. A Phoenix Paperback (Orion Books), Great Britain, 2004.
The notion of information is confidently taken for granted in most semiotic writings. On the one hand, its common use in everyday language gives it an appearance of conceptual transparency. On the other hand, it is endowed with an aura of scientificity that radiates from information technologies and theoretical physics. However, the mathematician (and semiotician) René Thom, among others, pointed out more than two decades ago the daunting polysemy of this term, listing as many as 17 meanings which are diversely used in discourse. These multiple meanings are not easy to reconcile within a single semantic frame. Hans Christian von Baeyer, a physicist and author of noted scientific writings, tackles anew the issue of the elusive definition of information in a book that retraces the quest for the grail of scientific knowledge, an ultimate building block, from Democritus on. His claim is that information is becoming the new cornerstone of all science, as energy did in the middle of the nineteenth century, but in a more challenging, and probably more productive, way. We do not know what information ultimately is, but we can measure it in bits and qubits, represent it in mathematical equations, and even market it. The author points out that science rarely starts with well-defined notions. The conceptual dynamic of an elementary idea, often a mere metaphor, suffices to trigger the fruitful construction of models and experiments that lead to robust theories and stunning applications. Such was the early notion of the atom and its grip on human creative imagination in the successive forms of ultimate grains of reality, minuscule spheres with protruding hooks, or miniature solar systems, all mistaken representations but necessary steps towards a more abstract knowledge and its exponential means of efficient manipulation.
While following von Baeyer's retracing of the emergence of the notion of information and its avatars in contemporary physics, semioticians cannot help being struck by the epistemological proximity of this development to the saga of the concept of sign, itself intimately linked to the dawn of information theory. The final chapters introduce current issues and debates, and open vistas to the future in the speculative mode. At a time when information is becoming "the new language of science", critically assessing our nineteenth-century images and models of the sign should be in order.
Today we live in the information age. Wherever we look it surrounds us, and, with the help of ever more efficient devices from the internet through to mobile phones, we are producing, exchanging and harnessing more of it than ever before. But information does far more than define our modern age: at a fundamental level it defines the material world itself, for it is through its mediating role that we gain all of our knowledge, and everything derives its function, existence and meaning from it. In twenty-five short chapters, von Baeyer takes us from the birth of the concept of information and its basic language, the bit, which encodes it in zeroes or ones like the heads OR tails result of a coin toss, through to the coal-face of contemporary physics and beyond in quantum mechanics, quantum computing and qubits, the quantum equivalent of the bit, where information is encoded in the form of zeroes AND ones, as if a tossed coin came up heads and tails at once. Along the way, he illuminates such diverse issues as Morse code; game theory and probability; genetics and heredity; Einstein and general relativity; black holes; randomness; abstraction; the impossibility of true objectivity; and the role of philosophy in modern physics, deftly unpicking the many strands that knit information so tightly into the fabric of the universe, and explaining why it has the power to become the most fundamental concept in physics. This is a snappily written and utterly absorbing work, which, with its deceptively simple presentation, gives an incredible insight into a new language of science and a new way of understanding.
On Intelligence
By Jeff Hawkins with Sandra Blakeslee. Times Books, New York, 2004.
Semiotics, and semioticians (with a few, very rare exceptions), have been notoriously absent from the AI (artificial intelligence) projects that dominated the second half of the previous century and captured a disproportionate share of the available research resources. The assumptions underlying such projects were those of the Chomskyan paradigm and owed little, if anything at all, to the knowledge of actual human brain processes. In the 1980s the MIT establishment did not believe that you had to study real brains to understand intelligence and build intelligent machines. This approach has now proved to be fundamentally mistaken, as its limited achievements did not meet the expectations of its sponsors and promoters. A similar, albeit more nuanced, critique can apply to connectionism. The time has come to get back to the drawing board. Computer scientist and entrepreneur Jeff Hawkins, the co-founder of Palm Computing in 1992 and founder of the Redwood Neuroscience Institute in 2002, now heralds a renaissance of interest in intelligent systems based on a new theoretical understanding of how the human cortex works. On Intelligence, written in collaboration with journalist Sandra Blakeslee, attempts to take stock of the current knowledge in the neurosciences and develop a theoretical understanding of the cortex's operations in order to eventually devise computer simulations grounded on this knowledge. Hawkins's main inspiration comes from the insights of Vernon Mountcastle regarding the organization and dynamics of the neocortex, which are systematically expounded in his 1998 book, Perceptual Neuroscience: The Cerebral Cortex, but were first introduced in a seminal article in 1978. The assumption is that brain processes, once they are understood, can be described in mathematical language, and thus provide the means of creating appropriate algorithms. Whether this approach will be fruitful remains to be seen.
In the meantime, the book should engage readers beyond computer scientists (and investors). Semioticians could greatly benefit from pondering Hawkins's distillation of the most recent discoveries in the cognitive neurosciences. Readers familiar with semiotics will immediately perceive the relevance of this provocative book to their research interests. Some may simply reductively translate his discourse into various semiotic jargons, others might find there an opportunity to think anew the intellectual puzzles which gave rise to semiotic theories in the first place, at a time when the human brain was nothing but a black box. Operationalizing semiosis in terms of actual brain processes would be a step in a promising direction.
Jeff Hawkins, the high-tech success story behind PalmPilots and the Redwood Neuroscience Institute, does a lot of thinking about thinking. In On Intelligence Hawkins juxtaposes his two loves--computers and brains--to examine the real future of artificial intelligence. In doing so, he unites two fields of study that have been moving uneasily toward one another for at least two decades. Most people think that computers are getting smarter, and that maybe someday, they'll be as smart as we humans are. But Hawkins explains why the way we build computers today won't take us down that path. He shows, using nicely accessible examples, that our brains are memory-driven systems that use our five senses and our perception of time, space, and consciousness in a way that's totally unlike the relatively simple structures of even the most complex computer chip. Readers who gobbled up Ray Kurzweil's The Age of Spiritual Machines and Steven Johnson's Mind Wide Open will find more intriguing food for thought here. Hawkins does a good job of outlining current brain research for a general audience, and his enthusiasm for brains is surprisingly contagious. --Therese Littleton
From Publishers Weekly
Hawkins designed the technical innovations that make handheld computers like the Palm Pilot ubiquitous. But he also has a lifelong passion for the mysteries of the brain, and he's convinced that artificial intelligence theorists are misguided in focusing on the limits of computational power rather than on the nature of human thought. He "pops the hood" of the neocortex and carefully articulates a theory of consciousness and intelligence that offers radical options for future researchers.
Varieties of Meaning. The 2002 Jean Nicod Lectures
By Ruth Garrett Millikan. A Bradford Book (The MIT Press), 2004.
How do the diverse uses of the word "meaning" in common language relate to each other? How do these senses relate to the definitions proposed by philosophers and linguists? How can we come to grips with this semantic fluidity in relation to the phenomena it tries to capture? How do embodied language and thought interact? This is the polymorphous challenge that Ruth Millikan set for herself in the Jean Nicod Lectures she was invited to deliver in Paris in 2002. Varieties of Meaning is the book she derived from this prestigious occasion. Mindful of the complexity of the problem she addresses, the author proceeds in a dialogical mood, articulating her thought in relation to other thinkers who, in recent times, have left their mark on this debate, such as Chomsky, Fodor, Dretske, Dennett, Dawkins, Gould and others. She also draws on her own impressive contributions to naturalistic philosophy (e.g. Language, Thought, and Other Biological Categories, 1984), based on a notion of the sign that was elaborated in the tradition of Charles Morris. This new book, in which discussions of signs (their kinds, origins, and functions) are salient, is an important, updated account of human semiotic competency. Interestingly, however, the word "semiotics" is to be found nowhere in the book, perhaps because its meaning has been adulterated or diluted in unpalatable discourses with which the author does not want to be associated. Nevertheless, semioticians of all stripes would benefit from pondering this rigorously argued work, which offers provocative insights on signs, language, meaning and representation.
Many different things are said to have meaning: people mean to do various things; tools and other artifacts are meant for various things; people mean various things by using words and sentences; natural signs mean things; representations in people's minds also presumably mean things. In Varieties of Meaning, Ruth Garrett Millikan argues that these different kinds of meaning can be understood only in relation to each other.
What does meaning in the sense of purpose (when something is said to be meant for something) have to do with meaning in the sense of representing or signifying? Millikan argues that explicit human purposes and explicit human intentions are represented purposes. They do not merely represent purposes; they possess the purposes that they represent. She argues further that things that signify, intentional signs such as sentences, are distinguished from natural signs by having purpose essentially; therefore, unlike natural signs, intentional signs can misrepresent or be false.
Part I discusses "Purposes and Cross-Purposes" -- what purposes are, the purposes of people, of their behaviors, of their body parts, of their artifacts, and of the signs they use. Part II then describes a previously unrecognized kind of natural sign, "locally recurrent" natural signs, and several varieties of intentional signs, and discusses the ways in which representations themselves are represented. Part III offers a novel interpretation of the way language is understood and of the relation between semantics and pragmatics. Part IV discusses perception and thought, exploring stages in the development of inner representations, from the simplest organisms whose behavior is governed by perception-action cycles to the perceptions and intentional attitudes of humans.