Simulation

Simulation is a technique for acquiring and generating (new) knowledge, facts, or theories, which is applied in the context of science and technology. It represents a (scientific and/or technical) procedure for producing results, forecasts, possible scenarios, etc. by experimentally computing complex models in the representational/virtual domain (of a computer). Simulation can be compared to a “Gedankenexperiment” (i.e., a test of a hypothesis that can be performed only in the mind): new knowledge is constructed by making use only of one’s own internal representations of the world, one’s mind, and one’s capacity for thinking. This is what happens whenever one is making plans, developing a strategy, searching for a solution in one’s own mind, etc.

What makes simulation so appealing as an alternative tool for knowledge acquisition are the following features: first, no further direct (“empirical”) contact with reality is necessary in order to perform this process of knowledge acquisition/production. Second, this process of testing a hypothesis or developing new knowledge happens completely detached from the human mind – it is performed automatically on a computer.

This is achieved in several steps: first, one has to establish an empirical model or theory describing a certain aspect of reality. In a second step, this scientific model or theoretical/mathematical description is transformed into an algorithmic form. In a third step, this algorithm is implemented as a computer program on a concrete computer. Finally, by executing this program with specific sets of parameters, the dynamics of the represented aspect of reality is simulated and becomes explicit (under the assumption that the underlying model and the algorithmic representation are sound). In this way, the space of possible theories and dynamics can be explored in a relatively economical manner.

In other words, in such a virtual experiment the comparatively static theoretical descriptions become dynamic, and the observer can follow the (virtual) dynamics of the tested model/hypothesis. If the outcome of the simulation experiment is combined with a sophisticated graphical representation of the results, the process of simulation not only brings about new results, but also increases the explanatory value of a model/theory describing a certain aspect of reality.

What is simulation?

Simulation is a technique for acquiring and generating (new) knowledge, facts, or theories. It is applied in the context of science and technology and represents a (scientific and/or technical) procedure for producing results, forecasts, possible scenarios, etc. by experimentally computing complex models in the representational/virtual domain (of a computer). Simulation can be compared to a “Gedankenexperiment” (i.e., a test of a hypothesis that can be performed merely in the mind): new knowledge is constructed or existing knowledge is modified by making use only of one’s own internal representations of the world, one’s mind, and one’s capacity for thinking. This is what happens whenever one is making plans, developing a strategy, searching for a solution in one’s own mind, etc.

What makes simulation such an interesting alternative for knowledge acquisition are the following features: first, no further direct (“empirical”) contact with reality is necessary in order to perform this process of knowledge acquisition. Second, this process of testing a hypothesis or developing new knowledge happens completely detached from a human mind – it is performed automatically on a computer.

This is achieved in several steps: first, one has to establish an empirical model or theory describing a certain aspect of reality; such a theory is the outcome of the classical empirical approach in the sciences. In a second step, this scientific model or theoretical/mathematical description is transformed into an algorithmic form, i.e., the empirical model is formalized so that its logical structure becomes explicit.

In a third step, this algorithm is implemented as a computer program on a concrete computer. Finally, by executing this program with specific sets of parameters, the dynamics of the represented aspect of reality is simulated and becomes explicit (under the assumption that the underlying model and the algorithmic representation are sound). In this way, the space of possible theories and dynamics can be explored in a relatively economical manner.
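The following minimal sketch in Python makes these steps concrete; the model (logistic population growth), the function names, and all numerical values are illustrative assumptions, not taken from the text. An empirical model is expressed as an algorithm, implemented as a program, and executed with different parameter sets so that its dynamics becomes explicit.

    # Minimal sketch: from model to algorithm to executable simulation.
    # The logistic growth model dN/dt = r*N*(1 - N/K) stands in for an
    # empirically established theory; all values are illustrative assumptions.

    def logistic_step(n, r, k, dt=0.1):
        """One (Euler) step of the algorithmic form of the model."""
        return n + dt * r * n * (1.0 - n / k)

    def simulate(n0, r, k, steps=200):
        """Execute the program: iterate the algorithm and record the dynamics."""
        trajectory = [n0]
        for _ in range(steps):
            trajectory.append(logistic_step(trajectory[-1], r, k))
        return trajectory

    # Run the program with specific sets of parameters and observe the
    # resulting (virtual) dynamics of the represented aspect of reality.
    for r in (0.1, 0.5, 1.0):
        final_value = simulate(n0=10.0, r=r, k=1000.0)[-1]
        print(f"growth rate r={r}: population after 200 steps = {final_value:.1f}")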

Why simulation? What is the goal of simulation?

One of the main reasons for using the method of simulation and for constructing computational models is the lack of full access to the complex phenomenon one is interested in (be it the phenomenon of cognition, biochemical processes, astrophysical processes, etc.). The methods of the empirical approach very soon reach their limits when they are confronted with such highly complex phenomena. This is due to the exponentially increasing space of possible theoretical accounts and variations of theories. Hence, an alternative method is necessary for exploring this huge theory space in an economical manner.

This lack of access to the phenomenon of interest can lie in the spatial and/or the temporal domain; moreover, it can be caused by one or more of the following factors: a lack of theoretical and/or background knowledge, of methods, of resolution in the measuring devices, a lack of time and funding for complex and costly empirical research, etc. In any case, simulation acts as a means for overcoming one or more of these limitations by transferring parts of empirical research into the virtual domain. In other words, the task of simulation consists in bridging exactly this lack of information by operating in the representational domain.

In most cases a simulation experiment aims at legitimating and testing empirical models or theories. One tries to go beyond the classical experiment by exploring the space of possible theories (or of variations of a theory) in the virtual domain in order to reach its limits and possible cases of falsification. Cognition is a good example of the necessity of the method of simulation, because it belongs to the class of processes with the highest known complexity. Hence, the space of possible theories dealing with cognition (or even with only one aspect thereof) is necessarily of extremely high complexity and dimensionality. This implies that the task of evaluating these theories in all their variations and details is of central interest. Simulation turns out to be a comparatively economical means for such an undertaking of evaluating a huge theory space.

Beyond that, another goal consists in utilizing theories stemming from simulation experiments as points of crystallization for (speculatively and synthetically) developing completely new perspectives or theories on a phenomenon. Hence, simulation can be understood as a method covering four aspects which are central to the process of theory development: (a) the aspect of verifying (and legitimating) a theory/model, (b) the aspect of constructing, producing, and synthesizing new knowledge/theories, (c) the aspect of interpretation and understanding (i.e., increasing the explanatory value), as well as (d) the aspect of exploring theory spaces in a virtual domain.

How do simulation models work?

The goal of every scientific theory or model is to construct mechanisms accounting for phenomena which can be observed in the “directly accessible world.” In the classical scientific disciplines this has been achieved by following the standard empirical approach: the goal of this cyclic process consists in finding an epistemological closure between a certain environmental aspect one is interested in (e.g., a certain cognitive behavior) and its theoretical description. This goal is achieved by mutually adapting the theories under construction and the object of investigation to each other until a kind of consistency between these two domains is reached. This is done by applying the classical means of (empirical) theory construction: forming a hypothesis out of previous results and in the context of background knowledge/assumptions, making predictions, and manipulating reality in an experiment (following the instructions given by a set of methods). This experimental action causes a change/reaction: the investigated (aspect of the) environment follows its intrinsic dynamics and exhibits a certain behavior. By making use of measuring devices, this behavior is reduced to a particular set of environmental states. These states are detected by gauges and transformed into numerical/quantitative values. In the following steps these values are interpreted in the framework of the original theory. These so-called “results” confirm or falsify the original predictions; statistical and inductive methods are used for constructing alternative hypotheses or adaptations of the original theory. From there the feedback process starts anew. This cycle (the “empirical loop”) represents the classical empirical approach to a particular phenomenon. It is repeated until a fit between the theory and the investigated reality is achieved.

If we are confronted with highly complex phenomena, however, this approach very soon reaches its limits. The introduction of the method of simulation implies an extension of the classical feedback loop of theory development: a kind of deductive-inductive mirror-loop is established when simulation methods are used in the process of theory construction. Here the starting point is the theory, model, or hypothesis (in many cases the result of the empirical loop), which gets implemented as an algorithm. The execution of this algorithm/simulation on a computer leads to results which confirm or falsify the original theory. In turn, the theory has to be adapted or completely changed. The resulting revised theory acts as an alternative starting point for a new loop of a modified virtual (simulation) and/or empirical experiment. In both cases the criterion of epistemological closure and functional fitness between theory, experimental results, and simulation results has to be satisfied. As can be seen in many models from the fields of cognitive science, theoretical physics, or artificial life, a fertile cooperation between these two modes of knowledge acquisition and theory development has been established.
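The following schematic sketch in Python shows the structure of such a deductive-inductive loop; the toy model, the “observed” value, and the simple revision rule are illustrative assumptions, not the method of any particular discipline. A hypothesis is implemented as an algorithm, executed, compared with an empirical result, and revised until an acceptable fit is reached.

    # Schematic sketch of the deductive-inductive loop; all values are assumed.

    def simulate_theory(parameter, steps=50):
        """Stand-in for executing a theory that has been implemented as an algorithm."""
        value = 1.0
        for _ in range(steps):
            value *= (1.0 + parameter)   # toy exponential-growth "theory"
        return value

    observed_outcome = 11.0   # assumed empirical measurement
    parameter = 0.01          # initial hypothesis about the model parameter

    for iteration in range(1000):
        predicted = simulate_theory(parameter)   # results of the simulation
        error = predicted - observed_outcome     # confirmation or falsification
        if abs(error) < 0.01:                    # fit between theory and data reached
            break
        parameter -= 0.0001 * error              # revise the theory and start the loop anew

    print(f"revised parameter: {parameter:.4f} after {iteration} iterations "
          f"(predicted outcome: {predicted:.2f})")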

This alternative approach to theory construction has nowadays become a standard procedure in many scientific disciplines and areas of technology development. Think, for instance, of the construction of airplanes or large bridges: empirical theories about statics, aerodynamics, materials, etc. are the starting point for such projects. They are formalized and joined together with the plans of the architects or engineers – this results in computer models which can be easily verified, adapted, expanded, or abandoned. The power of simulation models lies in the fact that these variations of the projects and models never have to be physically realized. They remain completely in the virtual domain, and the engineers, scientists, architects, etc. can explore all possibilities and variations in a relatively economical manner without ever having to leave their desk (computer) and build the real thing or physical models thereof. Similar developments can be found in the natural sciences as well: the areas of cognitive science, Artificial Life, neural networks, computational molecular biology, computational physics, etc. use the same technique in order to explore huge theory spaces.

Examples of simulation

The Finite Element Method (FEM)

The Finite Element Method is a numerical analysis tool that allows engineers to calculate and visualize how physical systems will react when exposed to different conditions (in the virtual domain). In this technique, systems are modeled and described by mathematical (differential) equations. The problem consists in finding solutions to these equations: this is rather unproblematic for simple systems or objects; for complex, strongly interacting systems or physical structures, however, finding a solution is almost impossible.

FEM addresses this difficulty by dividing a complex system into smaller objects, so that the solution for each object can be represented by an equation much simpler than that required for the entire system. The smaller elements that make up the larger object are connected at discrete joints called nodes. For each element, approximate stiffness equations are derived relating the displacements of the nodes to the node forces between elements, and a computer is used to solve the simultaneous equations that relate these node forces and displacements. Since the basic principle of subdividing the structure into simple elements can be applied to structures of all forms and complexities, there is no logical limit to the type of structure that can be analyzed if the program is written in the appropriate form. By applying this technique it is not only possible to explore the static effects of mechanical changes or influences on the structure under investigation, but also to observe its dynamical development over time (in the virtual domain). If these results are combined with a well-chosen graphical representation, such a simulation not only helps to make predictions for engineering purposes, but also increases the explanatory value and enhances the understanding of the processes occurring in a situation of mechanical deformation.
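As a minimal illustration of this principle, the following Python sketch assembles and solves the node equations for a one-dimensional elastic bar subdivided into a few simple elements; the structure, the material constants, and the load are illustrative assumptions, not part of the original text.

    # Minimal finite element sketch: a uniform elastic bar, fixed at one end and
    # pulled at the other, is subdivided into simple elements connected at nodes.
    # All geometry, material, and load values are illustrative assumptions.
    import numpy as np

    n_elements = 4
    n_nodes = n_elements + 1
    length, area, youngs_modulus = 1.0, 1e-4, 210e9         # bar geometry and material
    k_elem = youngs_modulus * area / (length / n_elements)  # stiffness of one element

    # Assemble the global stiffness matrix from the simple element equations
    # relating nodal displacements to node forces.
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elements):
        K[e:e + 2, e:e + 2] += k_elem * np.array([[1.0, -1.0],
                                                  [-1.0, 1.0]])

    # Node forces: a single axial load applied at the free end of the bar.
    f = np.zeros(n_nodes)
    f[-1] = 1000.0   # Newtons

    # Boundary condition: the first node is fixed, so the simultaneous equations
    # are solved for the remaining nodes only.
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

    print("nodal displacements [m]:", u)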

[Videos: finite_elements1.mpeg and finite_elements2.mpeg. Source: Ohio Supercomputer Center (OSC), http://www.osc.edu/]

[Figures: Making1.jpg and Making2.jpg; video: MovieMiraLab Speech Expre.mpg. Source: MiraLab, University of Geneva, http://miralabwww.unige.ch]

Further link: http://www.digitalmankind.com/

Computational Neuroscience

Computational neuroscience (connectionism, Parallel Distributed Processing; Rumelhart and McClelland 1986 [the classical text]; McLeod et al. 1998) tries to simulate the processes taking place in a natural nervous system (or a part thereof) on an abstract level. Neurons are modeled as so-called units which integrate their inputs (from other units). These units are connected to each other in a network-like structure via so-called weights (which have a function similar to that of the synapses in a natural nervous system). Each unit has a certain level of activation at a certain point in time; it is the result of the sum of the weighted inputs from the other units. Each unit computes its activation in parallel at each point in time.

On the level of the whole system this results in a dynamics which is referred to as spreading activation: patterns of activation can be interpreted as representing knowledge or as being responsible for controlling behavioral output. Through such a neurally inspired form of computation it is possible to achieve a better understanding of what is happening in the natural nervous system and, thus, of cognition.
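A minimal Python sketch of such a network is given below; the network size, the random weights, and the update rule are illustrative assumptions, not a model from the cited literature. It shows how units integrate their weighted inputs and how activation spreads through the system when all units are updated in parallel.

    # Minimal connectionist sketch: units, weights, and parallel update of the
    # activations ("spreading activation"). All values are illustrative.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(seed=0)
    n_units = 6
    weights = rng.normal(0.0, 1.0, size=(n_units, n_units))  # "synaptic" weights
    np.fill_diagonal(weights, 0.0)                            # no self-connections

    activation = np.zeros(n_units)
    activation[0] = 1.0   # activate one unit and let the activation spread

    for t in range(10):
        net_input = weights @ activation   # each unit sums its weighted inputs
        activation = sigmoid(net_input)    # all units update in parallel
        print(f"t={t}: " + " ".join(f"{a:.2f}" for a in activation))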

What are the advantages of simulation?

Recent developments in various fields of the natural sciences, as well as epistemological arguments, show that simulation is not only an extension of the classical empirical approach, but an autonomous method of theory development in its own right. The following points are of interest in this context:

Synthesizing. The classical empirical approach follows a rather analytical strategy: the phenomenon of interest is observed and – in a second step – analyzed. This is achieved by sometimes highly sophisticated methods of deconstructing complexity and by taking things apart in order to study their isolated behaviors and interactions. By contrast, most simulation experiments follow a synthetic mode of theory development: the goal is to build complex structures and behaviors from entities with simple functionalities. This has an important implication concerning the explanatory value: such a “synthetic experiment” makes it possible to achieve a better understanding of a complex phenomenon which would have been extremely difficult to understand in an analytic approach (“understanding by synthesizing”).

Introducing dynamics. Whereas the original empirical theory describes the phenomenon under investigation in a rather static manner, the execution of the algorithm representing a certain class of, say, cognitive processes brings about a new quality: the static concept in the theory or model becomes dynamic. It is as if the theory “wakes up” and becomes “alive” through the execution of the algorithm. By applying simulation techniques it becomes possible to observe and follow the temporal and spatial dynamics, as well as the implications, of the rather static description of the rules (in the empirical theory) governing the (virtual) processes. As simulation brings about a dynamic representation of the originally static description in the empirical theory, the explanatory value is increased.

Conceptual level. The focus of most computational models is on the conceptual level of description, which does not get lost in the jungle of the “dirty micro details of reality”; this implies a higher level of cognitive accessibility. Thereby the computational (information processing and simulation) paradigm often acts as a unifying element between the sometimes highly diverse and even contradictory approaches, methods, and concepts of the different disciplines participating in interdisciplinary theories and models. The level of description of artificial neural networks is a good example of how the gap between results from neuroscience and concepts from philosophy of mind can be bridged in order to achieve a more interdisciplinary perspective (e.g., Clark 2001).

Exploration of theory spaces. By shifting experiments into the virtual domain, the space of possible theories (variations of theories, different configurations of parameters, etc.) can be explored by simulating these configurations/models. The results give a first hint as to the plausibility of the (virtually) tested theory. If such a theory turns out to be plausible, it is worth testing empirically (see the sketch below).

Economical theory development. Classical empirical research is mostly concerned with conducting endless variations of (empirical) experiments. Under the assumption that the underlying theory as well as the computational/simulation model are sound, it is – in many cases – more economical to shift these experiments into the virtual domain and to conduct virtual/simulation experiments. This speeds up theory development through a virtual exploration of the theory space.
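The following Python sketch indicates what such an economical, virtual exploration of a theory space can look like; the toy model, its two parameters, and the “plausible range” are purely illustrative assumptions. A grid of parameter configurations is simulated, and only the configurations whose outcomes fall within an assumed empirically plausible range are marked as candidates for subsequent empirical testing.

    # Sketch of exploring a small "theory space": vary two free parameters of a
    # toy model on a grid and flag configurations worth testing empirically.
    # The model and all numerical values are illustrative assumptions.
    import itertools

    def simulate_model(growth_rate, decay_rate, steps=100):
        """Toy model standing in for an implemented theory."""
        x = 1.0
        for _ in range(steps):
            x += growth_rate * x - decay_rate * x * x
        return x

    plausible_range = (5.0, 50.0)   # assumed constraint from prior empirical knowledge

    for growth_rate, decay_rate in itertools.product((0.05, 0.1, 0.2), (0.001, 0.01, 0.1)):
        outcome = simulate_model(growth_rate, decay_rate)
        plausible = plausible_range[0] <= outcome <= plausible_range[1]
        note = "worth testing empirically" if plausible else "discarded in the virtual domain"
        print(f"growth={growth_rate:.2f} decay={decay_rate:.3f} -> outcome={outcome:8.1f}  ({note})")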

Conclusion

Simulation has become an alternative and autonomous methodological approach to knowledge acquisition and theory development over the last decades. One of its main characteristics is that it shifts experiments from the physical/material into the virtual domain. This brings a number of advantages concerning the economics of theory development as well as an increase in explanatory value. One has to keep in mind, however, that even the best simulation (model) is only as good as its empirical foundation. Furthermore, the degree of theory-ladenness as well as the “epistemological distance” between the object and its theoretical description increases when the method of simulation is applied.

Bibliography

  • Albrecht, R. (ed.) (1998). Systems: Theory and Practice. Vienna: Springer.
  • Clark, A. (2001). Mindware: An Introduction to the Philosophy of Cognitive Science. New York: Oxford University Press.
  • McLeod, P., Plunkett, K., and Rolls, E.T. (1998). Introduction to Connectionist Modelling of Cognitive Processes. Oxford/New York: Oxford University Press.
  • Oreskes, N., Shrader-Frechette, K., and Belitz, K. (1994). “Verification, validation, and confirmation of numerical models in the earth sciences.” Science 263, 641–646.
  • Rumelhart, D.E. and McClelland, J.L. (eds.) (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols. I & II. Cambridge, MA: MIT Press.
  • Zeigler, B., Praehofer, H., and Kim, T.G. (2000). Theory of Modeling and Simulation (2nd ed.). San Diego: Academic Press.


Author

Markus F. Peschl
Dept. for Philosophy of Science and Social Studies of Science
University of Vienna
Sensengasse 8/10
A-1090 Wien / Austria
Franz-Markus.Peschl@univie.ac.at

Markus F. Peschl has been Professor of Philosophy of Science and Cognitive Science at the University of Vienna, Austria, since 1989. For post-doctoral research he spent two years at the University of California, San Diego (UCSD, Departments of Cognitive Science and Philosophy). His main research interest lies in the question of knowledge representation in natural and artificial cognitive (neural) systems as well as in science. Furthermore, his research focuses on the philosophical, epistemological, anthropological, and cognitive foundations of science and of the human person. M. Peschl has published 5 books and more than 45 papers in international journals and books. For further information see: www.univie.ac.at/Wissenschaftstheorie/PESCHL/.