
Nr. 1/06

How People Communicate about New Technologies and Ideas:
Modeling Decision Coordination to Simulate the Diffusion of Innovations

Timo Smieszek

Diploma Thesis

Supervisors:
Dr. Peter de Haan
Michel Müller

January 2006
I dedicate this diploma thesis to

my parents Heinz and Renate Smieszek


who gave so much love to me and
who invested a lot of hard-earned money in my education

and to my beloved Annabelle


who accompanied me the last five years and
who had the patience to accept my
scientific and political obsessions all the time.

Abstract

The breakthrough of an innovation depends on whether the members of a society adopt it in large numbers or not. Individuals decide on this partly based on a rational evaluation of the decision situation and partly based on the influence of others.
Frank Schweitzer (2004) introduced an individual-based socio-physical model for the spatial and temporal coordination of decision-making. Individuals in his model decide solely based on the information they receive from other individuals; a rational comparison of the alternatives is not part of the model. Communication is modeled via a communication field analogous to fields in physics.
The purpose of this thesis is to investigate whether the delineated approach is a reasonable basis for simulating the fate of an innovation in a society. To this end, the model is enhanced with two new features: First, five agent groups with different strengths of influence and degrees of susceptibility to opposing opinions are introduced. Second, elements of rational decision-making are implemented by a decision module, which compares the qualities of the innovation and the existing alternative. Both enhancements are evaluated with respect to the corresponding model behavior under different parameter settings.
This study shows that the enhanced model is capable of reproducing known and assumed characteristics of innovation diffusion processes, such as a sigmoid course of adoption as well as spatial patterns of adopters and non-adopters. Additionally, the theoretically assumed high importance of the early adopters' characteristics for the dynamics of the whole system is confirmed. However, the model is rather abstract, as some of its parameter values do not have an equivalent in the real world. This causes difficulties with respect to the validation and calibration of the model and the interpretation of its results.
Accordingly, a simple network approach relying on direct interaction is recommended
for simulating the diffusion of innovations instead of the investigated socio-physical
model based on indirect communication.

iii
Zusammenfassung

The market breakthrough of innovations depends on whether the members of a society adopt such an innovation in large numbers or not. Individuals decide on this partly on the basis of rational considerations and partly on the basis of the influence of others.
Frank Schweitzer (2004) developed an individual-based, socio-physical model describing the spatial and temporal coordination of decision processes. In his model, individuals decide solely on the basis of information they receive from other individuals; a rational comparison of the alternatives is not part of his model. Communication does not take place directly between individuals but is represented indirectly by a communication field – comparable to the notion of a field in physics.
The aim of this diploma thesis is to investigate whether the model approach just outlined is suited to simulating the further fate of an innovation in a society. For this purpose, the model is extended by two new features: First, five different groups of individuals are introduced, which differ in the strength of their influence on others and in their susceptibility to the opinions of others. Second, aspects of rational, deliberative decision-making are integrated by introducing a program module which compares the properties of the innovation and of the existing alternative. Both extensions are evaluated with regard to the resulting model behavior under different parameter settings.
This thesis shows that the extended model is able to reproduce known or assumed characteristics of innovation diffusion processes. Such characteristics include the sigmoid shape of the curve showing the total number of innovation users, and spatial patterns of adopters and non-adopters. Furthermore, the theoretically much-emphasized outstanding importance of the so-called early adopters for the dynamics of the overall system is confirmed. Nevertheless, the model is rather abstract, since some of its parameter values have no direct equivalent in the real world. This leads to difficulties for the validation and calibration of the model and complicates the interpretation of the model and its results.
Therefore, a simple network approach based on direct interaction is recommended for simulating the diffusion of innovations. Such an approach could achieve results comparable to those of the investigated socio-physical model while avoiding its difficulties.

Preface

Three quarters of a year ago, I asked Peter de Haan for a demanding simulation task, asked for a sandbox game – and I got it. I had to learn that it is more challenging than I thought to develop a simple and understandable simulation tool which is able to mirror real-world processes and which is good and accurate at the same time. I had to face the problem that quite a lot of data is necessary for a case validation of the phenomenon I was to model. Furthermore, psychological and sociological theory offered more questions than precise answers – at least from a modeler's perspective.

I perceive my work as close to the aims stated in the Leitbild of the ETH Department of Environmental Sciences. It is the approach of my work to treat a complex system with the integrated knowledge of various disciplines such as psychology, physics, mathematics, and computer science. However, fulfilling the claim of this ETH department will remain a hard task and an ongoing inquiry for a long time. It is a lot of work to provide interdisciplinary research at an acceptable level also from the perspective of each single discipline. Furthermore, there is still a lack of a common language and a common perspective amongst those disciplines, but such an understanding is crucial for the integration of the knowledge coming from the different disciplines.

The insights I gained while I was working on my project would not have been possible
without the help, the support, and the continuous encouragement of other people.

First, I thank Peter de Haan and Michel Müller for their way of supervising my thesis, which sets an example for other supervisors at ETH. Peter helped me to learn a new programming language as time-efficiently as possible. He also was and is a good advisor with respect to many other questions. Michel rendered assistance whenever open questions had to be discussed.

Furthermore, I thank Philippe Peter for his fruitful comments on my work in progress. His critical attitude towards my work helped me to eliminate weaknesses and to improve the overall quality of my thesis. He is not only a good friend of mine, but also the best team counterpart I have ever had.

I thank my partner, Annabelle Puchta, for translating essential French literature into German. Without her effort, neither the section on the importance of social imitation nor the section about stigmergy would have become as complete and adequate as they are now. I am grateful for Brigitte Kintzi's help; she corrected the language of many parts of this thesis, which was quite a lot of work in a busy time.

Finally, I thank Robert Bügl, Anja Peters, and Peter Loukopoulos who were good advisors
with respect to questions concerning statistics, and all the people who spoiled me with
tea, chocolate, and other nibbles.

Table of Contents

1 Introduction .................................................................................................................................................3
2 Research Objective....................................................................................................................................7
3 Theoretical Foundations........................................................................................................................ 8
3.1 Psychological and Sociological Foundations .......................................................................... 8
3.1.1 Conceptions of Man................................................................................................................ 8
3.1.2 Society and Human Beings..................................................................................................9
3.1.3 Reconciling Herd Behavior and Rationality.................................................................20
3.2 Complex Systems, Synergetics, and Stigmergy: A Conceptual Framework........... 22
3.2.1 Holism versus Reductionism............................................................................................. 22
3.2.2 The Conceptual Framework of Synergetics ................................................................ 23
3.2.3 The Concept of Stigmergy..................................................................................................26
3.3 Multi-Agent Modeling as a Method.........................................................................................28
3.3.1 Defining the Notion ‘Model’ and the Importance of Models .............................28
3.3.2 The Method of Agent-Based Modeling ........................................................................30
3.3.3 Quality Criteria for Models.................................................................................................34
4 Towards a Socioeconomic Model of Decision Making ........................................................... 37
4.1 Spatio-Temporal Coordination of Decisions......................................................................... 37
4.1.1 Concept and Basic Assumptions ..................................................................................... 37
4.1.2 The Model..................................................................................................................................39
4.1.3 Simulation Results and Model Behavior ...................................................................... 45
4.2 Modeling the Diffusion of Innovations: An Idealistic Scenario ................................... 47
4.2.1 Enhancement of the Decision Module ......................................................................... 47
4.2.2 Behavior of the Enhanced Model .................................................................................... 53
4.2.3 Simplifying the Model......................................................................................................... 66
4.3 Modeling the Diffusion of Innovations: Reconsidering Preferences and Utility.. 69
4.3.1 Enhancement of the Model.............................................................................................. 69
4.3.2 Model Behavior under Certain Case Scenarios ......................................................... 73
5 General Discussion.................................................................................................................................79
5.1 A Strategy for a Model Validation by Case Studies............................................................79
5.2 Strengths and Weaknesses of the Model ...............................................................................81
6 Concluding Remarks and Outlook.................................................................................................. 89
7 References..................................................................................................................................................92

Table of Figures ................................................................................................................................................101
Table of Tables..................................................................................................................................................101
Appendix ............................................................................................................................................................102

1 Introduction

Most of today's so-called environmental problems are caused by actions of humankind in a broad sense. Many of these result from the aggregated influences of millions of per se tiny and unimportant activities of the millions of individuals constituting our societies.[1]

One of the missions of science is to give advice to decision makers on the probable consequences of their decisions. For instance, if a government wants to implement a specific recycling scheme, it is worthwhile estimating the potential success in advance based on proven behavioral theories and empirical findings. Knowledge about the determinants of people's travel-mode choice is decisive for the planning of public transport enterprises and transportation policy strategies. Policymakers aiming at the furtherance of renewable energy production need to know under which conditions private investors purchase photovoltaic cells or buy shares in wind parks. Finally, knowing the factors determining the selection of new cars can help to design interventions to decrease the average fuel consumption of the car fleet.

For more than one hundred years, researchers have been interested in the question why some innovations succeed quickly while others fail (e.g. Tarde, 1890/2001). In the field of environmental policy, examples of both are observable: success stories as well as failures. In order to give advice for future policy measures, it is necessary to investigate and explain why, for example, the furtherance of renewable energy production in Germany was quite successful (Erneuerbare Energie, 2006), while fuel-efficient cars such as Volkswagen's Lupo 3L TDI or Audi's A2 1.2 TDI failed completely (Niedrigenergiefahrzeug, 2005). Even with respect to the same type of innovation, both success and failure can be observed: While the battery-recycling scheme in Switzerland is doing well, the schemes in Germany or the Netherlands lead to rather low recycling rates (Hansmann, Bernasconi, Smieszek, Loukopoulos & Scholz, in press).

As indicated, it is mainly the interplay of many individual decisions that drives the
development of the passenger transportation sector. Whether selecting the travel mode
or deciding on a new car type to buy – all decisions are relevant for the evolution of this
sector. Against the background of climate change, the finiteness of fossil fuels, and the dependence on unstable regions of the world, a more efficient use of energy is necessary. To reduce – or at least stabilize – the transport-related energy consumption,[2] strategies are required which aim at both the change of travel-mode patterns and the introduction of energy-efficient technologies.

[1] Stern (2000) points out that an individual-based, psychological approach to the environmental sciences makes sense in all cases in which a mass of individuals determines the fate of the human-environment system. However, it should be kept in mind that there are also cases in which only a few key agents control the system's dynamics. In such cases, another approach might be better for deriving intervention strategies.
This diploma thesis is embedded into the NSSI[3] project "Technology breakthrough modeling", which intends to explain "how long it will take for new technologies [in the automotive sector] to penetrate markets and become effective" (Environmental Modeling and Decision Making, 2005). Currently, the models developed and applied by this research group focus on the individuals' behavior, disregarding inter-individual influences on decision-making (Müller & de Haan, 2006; DG-ENV, 2002). However, it is known from the research on the diffusion of innovations that innovations[4] spread in particular by communication among the individuals in a population (Rogers, 2003). Many theories in social psychology and sociology about attitude formation and decision-making rest upon the assumption that people mutually influence each other's decision-making processes (e.g. Janis, 1982; Nowak, Szamrej, and Latané, 1990; Sarup, 1992; Turner, 1990; see also Lazarsfeld, Berelson, and Gaudet, 1968). Consequently, the aim of this thesis is to develop a simulation model for the spatial and temporal coordination of the decision-making between the individuals constituting a population. The starting point is a socio-physical model developed by Schweitzer (2004) describing the mutual influence by means of a communication field analogous to physical fields.
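To make the field idea more tangible, the following toy sketch lets a single agent deposit information into a grid field that then decays and spreads to neighboring cells. This is only a generic illustration of indirect, field-mediated communication, not Schweitzer's actual equations; the grid size and the decay and diffusion rates are arbitrary assumptions.

```python
# Generic sketch of a "communication field" on a grid: an agent deposits her
# opinion locally; the field decays and diffuses over time, so the information
# reaches other agents indirectly. Rates below are illustrative assumptions.

SIZE = 20

def step_field(field, deposits, decay=0.1, diffusion=0.2):
    """One update: add deposits, apply exponential decay, then 4-neighbor diffusion."""
    f = [[(field[i][j] + deposits[i][j]) * (1.0 - decay)
          for j in range(SIZE)] for i in range(SIZE)]
    new = [[0.0] * SIZE for _ in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            # discrete Laplacian on a periodic grid
            lap = (f[(i - 1) % SIZE][j] + f[(i + 1) % SIZE][j] +
                   f[i][(j - 1) % SIZE] + f[i][(j + 1) % SIZE] - 4 * f[i][j])
            new[i][j] = f[i][j] + diffusion * lap
    return new

field = [[0.0] * SIZE for _ in range(SIZE)]
deposits = [[0.0] * SIZE for _ in range(SIZE)]
deposits[10][10] = 1.0          # a single agent announcing her opinion
for _ in range(5):
    field = step_field(field, deposits)
# The signal is strongest at the agent's cell and fades with distance and age.
```

Other agents would then read the field value at their own location instead of talking to the depositing agent directly – the essence of indirect communication.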

There are two major lines of modeling the diffusion of innovations: either macroscopically, based on differential equations, or microscopically, by using social network approaches. An example of the former is the work of Bass (1969), who developed one of the first mathematical models describing the adoption of innovations, based on models known from epidemiology. The basic assumption of his model is that the number of newly purchased innovative products depends on the number of previous buyers: The more adopters exist, the likelier it is to meet one. One of the major drawbacks of this approach is the assumption of a homogeneous and uniformly mixed population. Accordingly, there are no spatial effects in Bass' model.
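The mechanism Bass assumes can be written as the differential equation dN/dt = (p + qN/m)(m - N), where N is the cumulative number of adopters, m the market potential, p the innovation coefficient, and q the imitation coefficient. A minimal numerical sketch, with purely illustrative parameter values, shows the resulting sigmoid course of adoption:

```python
# Sketch of the Bass (1969) diffusion model, dN/dt = (p + q*N/m) * (m - N),
# integrated with a simple Euler scheme. Parameter values are illustrative only.

def bass_adopters(p=0.03, q=0.38, m=1000, steps=50, dt=1.0):
    """Return the cumulative number of adopters N(t) for each time step."""
    n = 0.0
    trajectory = []
    for _ in range(steps):
        n += dt * (p + q * n / m) * (m - n)  # innovation plus imitation pressure
        trajectory.append(n)
    return trajectory

curve = bass_adopters()
# The curve rises slowly, accelerates, then saturates near m: the sigmoid shape.
```

Note how the model has no notion of space or network structure: every adopter exerts the same pull on every non-adopter, which is exactly the homogeneity assumption criticized above.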
Network models such as the model of Valente (2005) address the shortcomings of the macroscopic models based on diffusion equations. Valente embeds every individual in a context of associated persons. The individuals follow a simple decision rule, according to which they adopt if 50 percent of their associated persons have adopted, too. The disadvantage of this approach is that even small networks are often very complicated, and the complexity grows vastly with each new individual added (for an illustration, see Figure 1).

[2] which accounted for 28.1% of the total primary energy consumption of the OECD countries in 2000 (Müller & de Haan, 2006).
[3] Chair of Environmental Sciences: Natural- and Social Science Interface; ETH Zurich.
[4] Innovations can be both new ideas, approaches, or philosophies and new products.
Therefore, there is a need for a model that describes the inter-individual coordination of decision-making and that is able to resolve spatial and temporal heterogeneity while being simple and illustrative at the same time.

Figure 1: Example of a Real Social Network: Women's Friendships in Cameroon.[5]
The small circles stand for individuals, whereas the arrows stand for the influence of one agent on the decision of another agent. The case study investigated the influence of the social network structure on women's decisions about which kind of contraception to choose.

The aim of this study is to derive a new model describing the diffusion of innovations
based on the spatial and temporal coordination of decision-making. In the subsequent,
second chapter, the precise research objective of this study is given in detail.

A computer model simulating the spread of new ideas in a population has to integrate
knowledge about complex systems theories and findings from various disciplines.
Therefore, the third chapter of this thesis will introduce important concepts and
methods necessary for the development of an individual-based innovation-adoption
model. First, the psychological and sociological foundations describing how decision-
making takes place in a social context are delineated in section 3.1. Subsequently, a
conceptual framework for the investigation of complex systems is sketched. The third
section of this chapter explains the strengths and weaknesses of multi-agent systems
and provides insights into the power and limitations of scientific modeling in general.

[5] From Network Models and Methods for Studying the Diffusion of Innovations (p. 102), by T. W. Valente, 2005, Cambridge: Cambridge University Press. Copyright 2005 by Cambridge University Press. Reprinted with permission.

The fourth chapter comprises a stepwise derivation of the simulation model for the spatial and temporal coordination of decision-making: It begins with an explanation of Schweitzer's (2004) original model. Thereafter, the model is enhanced in two steps. Essential characteristics of the adapted model and its parameters are described, and the resulting model behavior is presented.

The fifth chapter is dedicated to the discussion of the derived model and the modeling results. Strengths and weaknesses of the model are evaluated and compared to those of other approaches. In the last chapter, conclusions about this research are drawn and an outlook is given.

2 Research Objective

The main objective of this study is to investigate whether the socio-physical approach of Schweitzer (2004) to model the spatio-temporal coordination of decisions is a reasonable basis for simulating the introduction of innovations. Upon this foundation, a simple, modular model consisting of an inter-individual communication module and an individual decision module shall be developed.

The model shall be in accordance with widely accepted theories on decision-making in a social context and shall be able to reproduce macroscopic phenomena described in the literature on diffusion research. In particular, the model shall

1. be able to reproduce the sigmoidal course of adoption[6] typical for ideal innovations[7] (Tarde, 1890/2001; Rogers, 2003),


2. be able to resolve spatial effects, and
3. offer an interface between the two extreme concepts of decision-making, the one
based on social influence, and the other based on rational reasoning.

The model has to be applicable in the field of car purchase and has to tie in with the
concepts, models and theoretical frameworks used by the NSSI project “Technology
breakthrough modeling”. Furthermore, an investigation of the model’s behavior under
different parameter settings and underlying assumptions is required. Due to the time
constraints of a diploma thesis and the limited data available, the study cannot provide
a case validation. Nevertheless, it is a claim of this thesis that the derived model is
testable within case studies.

[6] The total number of adopters plotted as a function of time has a sigmoidal shape.
[7] Ideal innovations are innovations which are clearly better than the precedent solutions/ideas/products for all individuals of the population under consideration.

3 Theoretical Foundations

3.1 Psychological and Sociological Foundations

The question of what determines the behavior of human beings is a key matter of the social sciences. In order to answer this question, a decision on the underlying conception of man has to be made first. Economics, psychology, and sociology give (partially) different answers to this question, based on differing ideas of what a human being is like.

In this section, some psychological and sociological foundations concerning how people come to decisions in a social (and thus real-world) context will be depicted: The first subsection introduces different conceptions of man, such as the homo economicus, the homo sociologicus, and the homo socioeconomicus, as well as their implications for decision-making and the idea of rationality. The second subsection presents some basic theories and models about the influence of society and related human beings on an individual's decision-making. Finally, a metatheoretical approach to integrating the findings on rational and heuristic decision-making is shown.

3.1.1 Conceptions of Man

The predominant paradigm in economics is the rational actor paradigm or the idea of a
homo economicus. Wächter (1999) defines rational action as “the identification of the
most efficient means of achieving the actors’ goals under certain external constraints”
(p. 56). In order to achieve the best option, the decision-maker has to evaluate the
different possibilities. Normally, economics assumes complete information and infinite
time for this evaluation task. As the homo economicus has complete information and
enough time to evaluate all of her options, she does not need the input of any other
agent to come to her decisions.
However, even the agents under the rational actor paradigm are not free of context; yet their surroundings only constrain the set of options and do not influence the decisions themselves. Under this paradigm, the individuals act independently under given external conditions (Wächter, 1999).
In his article on rational choice, Dragan Miljkovic (2005) shows theoretically that most
assumptions of the rational choice theory are inappropriate. Humans do not decide as
rationally as economics normally assumes.

Psychology has developed more appropriate models of human decision-making that consider this imperfectness. Nevertheless, what links the economic approach with the principal approaches in psychology is the focus on individuals. The rational actor paradigm, but also the work on heuristics, is interlinked with the idea of methodological individualism. Wächter (1999) cites a characterization by Elster, who describes methodological individualism as a "doctrine, that all social phenomena (their structure and their change) are in principle explicable only in terms of individuals – their properties, goals and beliefs" (p. 83; see also the subsection on reductionism in the following chapter). All of these purely individual-based approaches neglect the importance of interaction for human behavior.

By contrast, the importance of society for a person's behavior is emphasized in many counterconcepts from sociology. The homo sociologicus[8] is a person whose behavior is determined predominantly by her social environment. The homo sociologicus is framed by cultural and institutional norms, which restrict the opportunities she perceives and alter the likelihood of different types of behavior (Barley and Tolbert, 1997). Just as the isolated, fully rational individual in economics is an exaggeration, there are also oversocialized views coming from sociology.

Observations in the real world, as well as good results for both the individualistic and the social approach in special cases, suggest that an intermediate form between a solitary utilitarian and a socially embedded imitator is appropriate. To some extent, behavior is influenced by institutions, culture, and (peer) groups. However, humans are not slaves of their environment and are capable of processing information in a meaningful way. This idea of a homo socioeconomicus is in good accordance with Sherif's metatheory that "neither individuals nor groups are completely self-sustaining, autonomous systems and that the corresponding level of analysis for each system can present no more than an incomplete picture" (Sarup, 1992, p. 61).

3.1.2 Society and Human Beings

3.1.2.1 Gabriel Tarde’s Laws of Imitation

The French lawyer and judge Gabriel Tarde was one of the predecessors of modern sociology and social psychology. His contribution "Les Lois de l'Imitation" at the end of the 19th century laid the foundation for a modern understanding of mutual influence between individuals. Tarde's main principle says that every similarity evolves from repetition and imitation (Tarde, 1890/2001).
The latter is the key concept in his work: Wherever social relations between creatures exist, there is also imitation. In fact, Tarde perceives social relations and the process of imitation as mutually related, because imitation acts as a social bond ("lien social", p. 46).[9] Individuals or groups mimic each other even without a very close spatial or temporal relationship (Antoine, 2001).

[8] The notion 'homo sociologicus' in this text is not necessarily equal to the definition introduced by Lord Ralf Dahrendorf.

Gabriel Tarde recognized the mutual character of imitation and thus of human relations as follows:

"Il n'y a plus d'homme que l'on imite en tout; et celui que l'on imite le plus est lui-même imitateur à certains égards de quelques-uns de ses copistes. L'imitation, de la sorte, s'est donc mutualisée et spécialisée en se généralisant" (Tarde, 1890/2001, p. 290).[10]

Furthermore, he uncovered that societies organize themselves by means of the accordance and opposition of beliefs. There is a competition of wishes and needs; the beliefs of different people can endorse or limit each other.

Tarde was also one of the first scientists who reflected on the mechanisms underlying the diffusion of innovations. He discovered that only a small share of the conceived, realized, and offered innovations prevail, while the vast majority fails. In his work, he anticipates the homo socioeconomicus, since he identified two reasons why an innovation is chosen: The first cause is a logical one. An innovation is chosen because it is perceived as the most useful and the truest of all; it is selected because it is in accordance with the established goals and principles of the individual. But there are also causes beyond the logical ones, such as imitation.

He realized that before a good innovation 'enlightens' a nation, it has to 'shine' in a single brain ("il faut qu'elle luise d'abord dans un cerveau isolé", Tarde, 1890, p. 210). Hence, inventors and innovators[11] are necessary to introduce an innovation – but the innovation has to spread by imitation. Tarde believed that progress is always achieved by examples and imitation.

In Tarde's opinion, humans are connected via dogmas.

[10] Free translation: There is no human being anymore that we imitate completely; and the person we imitate most of all is himself, to a certain extent, an imitator of some of his copyists. Hence, imitation has become mutualized and specialized by generalizing.
[11] Tarde calls them "philosophers".
Lastly, he observed that the social networks of humans became more widespread, among other things due to the invention of trains. Tarde concluded that the process of imitation grew stronger, more immediate, and more pervasive – and so did the diffusion of innovations.

3.1.2.2 The Influence of Advertising and Media versus Personal Contacts

For a start, it makes sense to discuss the effects of the media in general and advertisement in particular in a single section, as both have structural similarities: Both media and advertisement act as impersonal communicators on a mass audience. The communication flow is unidirectional from the medium to the audience, and thus interaction and mutuality are very limited. Finally, advertisement is in most cases subordinate to the media, as promotion is normally done via the media.

The predominant theories about the effect that the media have on their audience's opinions and attitudes oscillated several times between the assumption of almost 'no effect' and omnipotence (Kunczik and Zipfel, 2001, pp. 287-293):
In a first phase, between the beginning of the 20th century and the 1940s, it was the common belief that the media were omnipotent. The prevailing model was the Stimulus-Response Model, which states a direct and unaltered answer by the receiver to a stimulus sent by a medium. A related concept is the Hypodermic-Needle Concept, which suggests that people are pervaded by media information similarly to a body after the injection of a substance (cf. also Rogers, 2003; see Figure 2).

Figure 2: Hypodermic-Needle Concept (S: Sender – the media; R: Receiver – the people).

In the following period, lasting until the middle of the 1960s, the confidence in the power of the media was dampened heavily. It was acknowledged that the characters of humans differed vastly and that these differences led to different perceptions of the media's content. Research findings suggested that the media rather amplified existing opinions than changed them.
Between approximately 1965 and 1980, the idea of strong media was rediscovered. This process was closely related to the high penetration of western societies by television sets. The research on media effects tried to integrate the idea of strong media with the concept of a strong audience (p. 293).

Nowadays, it is widely accepted that “interpersonal channels are more effective in per-
suading an individual to accept a new idea” (Rogers, 2003, p. 18) than other impersonal
channels such as the media. Lazarsfeld et al. (1968) mention two reasons why personal relationships are potentially more influential than mass media: their (accumulated) reach is larger, as they reach persons who are not reached by the media, and they have ‘psychological’ advantages. One of these psychological advantages is the immediate gratification of complying with the opinion of a close person rather than with opinions offered in the media.

Lazarsfeld et al. (1968) did research on the changes of opinion during an electoral cam-
paign. They found out that the share of persons who reported friends or family as
reasons for changing their own opinion was disproportionately high (see also Barton,
1968). McCombs and Shaw (1972) emphasize that “most of what people know comes to
them ‘second’ or ‘third’ hand from the mass media” (p. 176) which demonstrates an
indirect and not a direct relevance of the media.

An accepted approach to describing the media’s influence on people’s cognitions and their behavior is the Agenda-Setting Approach. This approach states that the main function of the media is to bring up new topics and thoughts for discussion. It integrates the findings that media neither influence opinions directly nor induce changes of opinion in a relevant share of the population (Kunczik and Zipfel, 2001, p. 355; see also Smieszek and Mieg, 2003). Indirect influence takes place via the importance attributed to a certain
topic. McCombs and Shaw (1972) point out that “readers learn not only about a given
issue, but also how much importance to attach to that issue from the amount of
information in a news story and its position” (p. 176).

3.1.2.3 Attitudes and Opinions in Groups

In the course of the history of social psychology and sociology, a couple of theories
describing attitude and opinion formation in groups have been developed. In the following, a short introduction is given to some of the theories most important in the context of the research questions.

The Groupthink Theory

Janis defines groupthink as “a mode of thinking that people engage in when they are
deeply involved in a cohesive in-group, when the members’ strivings for unanimity
override their motivation to realistically appraise alternative courses of action” (Griffin, 1991, p. 220). He did his research mainly about political decisions made by groups of
relevant decision makers and their political counselors. Picture-perfect examples for
groupthink phenomena are the failed Bay of Pigs invasion under the administrations of
Dwight D. Eisenhower and John F. Kennedy (Janis, 1982) or the campaign to vaccinate all US-Americans against the swine flu under President Gerald R. Ford (Kolata, 1999).

The subtlety with which groupthink often works is interesting: Janis emphasizes that
a group leader normally does not “deliberately try to get the group to tell him what he
wants to hear” (Janis, 1982, p. 3). The other members of the group are not turned into
“sycophants” either (Janis, 1982, p. 3). Nonetheless, due to delicate constraints reinforced by the group leader or other participants, the group members are prevented from being critical and from honestly articulating doubts. This subtle in-group pressure prevents decisions from being checked against reality and weakens the intellectual strength of the group members.

Groupthink does not only have effects towards the inside but also towards the outside of the group. The in-group cohesion is likely to increase, but at the same time the separation from non-members will rise. The results are often “irrational and
dehumanizing actions against out-groups” (Janis, 1982, p. 9).

Zimbardo (1992) points out under which circumstances a person is likely to conform to a group’s opinion: firstly, such behavior is likely if the corresponding task is complex and ambiguous; secondly, if the group exhibits strong cohesion; thirdly, if the other members of the group are perceived as competent by a person, but the respective person does not feel competent herself; finally, if the group members know or can observe how the other group members act.

Even though the development of the theory of groupthink was based on observations of
a rather special type of formal and institutionalized groups consisting of political leaders
and their counselors, the basic insights can be transferred to general situations of
group-influenced decision-making. We have learned that groups tend toward compliance and toward separation from differing opinions. We have learned that underestimation of one’s own mental capabilities and overestimation of others’ knowledge lead to uncritical followership. Lastly, it has to be recognized that social pressure can lead to compliant
behavior as well.

The Social Judgement Theory

This theory of Sherif et al. (cf. Griffin, 1991) explains under which conditions a change of attitude or opinion is likely and under which conditions it is not. To this end, they introduce the notion of a ‘cognitive map’. All statements and arguments one receives are located on the person’s cognitive map; i.e. an argument falls either into one’s ‘latitude of acceptance’, one’s ‘latitude of noncommitment’, or one’s ‘latitude of rejection’ (Griffin, 1991).

The fundamental idea behind this theory is that the degree of discrepancy between a
person’s original and a new attitude determines whether a change of attitude or
opinion is likely to happen. If a message falls within a person’s latitude of acceptance,
she is likely to change her opinion. The more the new information differs from a person’s original belief (while still lying within the latitude of acceptance), the more persuasive it
is. If a persuasive argument falls within the latitude of rejection, a kind of boomerang
effect occurs as the opposite of what is intended happens: The recipient of the
argument aligns her attitude or opinion away from the opinion intended by the
argument (Griffin, 1991).

Related research was able to reveal the following interrelations (cf. Griffin, 1991): (1) Arguments from highly trustworthy senders have a positive impact on the extent of the latitude of acceptance. (2) Vague arguments are placed in the latitude of acceptance more readily than crisply formulated ones. (3) Dogmatic persons tend to have an extended latitude of rejection. (4) A maximum of influence is achieved if a message from the edge of the audience’s latitude of acceptance is chosen: if it is slightly beyond this latitude, the information will be rejected; if it is closer to the centre of the latitude of acceptance, the pressure to change one’s opinion is smaller. (5) Persuasion is a gradual process.
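
The three latitudes can be made concrete in a small opinion-update rule. The following is a minimal sketch on a [0, 1] opinion scale; all threshold and shift values are illustrative assumptions, not part of the theory:

```python
def update_opinion(own, message, accept=0.3, reject=0.7, shift=0.5):
    """Sketch of a social-judgement update on a [0, 1] opinion scale.
    Discrepancies up to `accept` fall into the latitude of acceptance,
    those of `reject` or more into the latitude of rejection, and the
    range in between is the latitude of noncommitment. All numeric
    values are illustrative assumptions, not part of the theory."""
    d = message - own
    if abs(d) <= accept:          # assimilation: move toward the message
        new = own + shift * d
    elif abs(d) >= reject:        # boomerang effect: move away from it
        new = own - 0.2 * shift * d
    else:                         # noncommitment: no change
        new = own
    return max(0.0, min(1.0, new))
```

Within the latitude of acceptance, the absolute shift grows with the discrepancy, mirroring the observation that a message near the edge of this latitude is maximally persuasive.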

The Theory of Social Impact

The metatheory of social impact was developed by Bibb Latané, who posited – based on empirical findings – that a group of individuals exerts influence on a given individual in proportion to three factors: The first factor is called ‘strength’ and stands for
for the group member’s credibility and persuasiveness. Some persons such as the so-
called opinion leaders might exert a strong influence on others, whereas an unsettled
person is likely to have only little influence on what other individuals think and do. A
second factor is the ‘immediacy’, which describes the directness of a person’s influence
on another person and which is correlated with two persons’ social distance. The last factor is simply the number of other people influencing a given individual (Lewenstein,
Nowak, and Latané, 1992). In this context, ‘social impact’ is defined as “any influence on
individual feelings, thoughts, or behavior that is exerted by the real, implied, or
imagined presence or actions of others” (Nowak et al., 1990).
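
The three factors can be combined in a simple multiplicative form. The sketch below follows the spirit of Latané’s ‘psychosocial law’, in which each additional source adds less marginal impact; the exact functional form and the exponent are illustrative assumptions:

```python
def social_impact(sources, exponent=0.5):
    """Total impact on a target individual from a list of
    (strength, immediacy) pairs. The mean strength-times-immediacy is
    scaled by the number of sources raised to an exponent below one, so
    each additional person adds less marginal impact (in the spirit of
    Latane's 'psychosocial law'); the functional form and the exponent
    0.5 are illustrative assumptions."""
    n = len(sources)
    if n == 0:
        return 0.0
    mean_si = sum(s * i for s, i in sources) / n
    return mean_si * n ** exponent

# four equally strong, equally close sources have twice the impact of
# one such source (not four times): diminishing marginal impact
```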

It is meaningful to assume that this theory holds individually for every member of a group – and thus the influence processes are mutual. Communication is not unidirectional but a “two-way process of convergence”, as Rogers (2003, p. 6) calls it. The group in question need not be a formalized and institutionalized group such as a society. Instead, the social impact theory incorporates a rather general
understanding of ‘group’ because beliefs and thoughts are discussed within a large
peer-community consisting of relatives, friends, neighbors, coworkers, etc (Nowak et al.,
1990).

Nowak et al. (1990) allude to the fact that not all of these changes of attitude or opinion reflect true conversion. Some individuals playact in public and feign compliance towards the other group members, but they do not accept that opinion entirely. Interrelations as described in the subsection on the social judgement theory can lead to such a pretended compliance, which might result in behavior that is inconsistent with stated beliefs. Nevertheless, changes of attitude and opinion normally go in the direction of greater agreement with the predominant attitudes and opinions in the peer-group. Therefore, by simply aggregating these effects to a macro level, one should expect groups to move towards a homogeneous opinion distribution adopting the most common viewpoint (Nowak et al., 1990) – but this is not the case!
Processes of social influence do not necessarily generate uniformity of opinion. It is a
trivial insight that for most of the topics there is a broad variety of views and attitudes
within society. Minorities can maintain their deviant opinions and sometimes even
become predominant in the course of opinion formation.
Further, these processes do not converge to the mean of the initial distribution of the individuals’ opinions and attitudes (Nowak et al., 1990). One reason might be that many tasks only make sense with a bimodal distribution: there are no such things as ‘a little bit of war’ or ‘a little bit pregnant’. Moreover, even if there is a continuous decision space for a problem, often only certain discrete specifications make sense (either the one extreme or the other). Finally, it can be argued that considering clearly separated alternatives helps to prevent information overload: according to information theory, the amount of information related to a choice set with evenly distributed attribute levels is higher than that of a similar set with non-uniformly distributed attribute levels (cf. Lurie, 2004).

3.1.2.4 Culture, Institution and Society

Not only social psychology hints at the importance of the mutual influence of social individuals; sociology, too, offers theoretical frameworks – Giddens’ (1995) theory of structuration or Archer’s (1996) work on the relationship between culture and agency – to describe phenomena that go beyond the sphere of influence of solitary agents.
Institutions can be defined as structures of social order, governing the actions of two or
more individuals. Institutions are usually long-lasting and transcend an individual’s life
and her intentions. The individuals’ behavior is governed by the creation, maintenance,
and enforcement of rules (Institution, 2005).

The institutional theory states that, on the one hand, organizations including the
persons who constitute them are embedded in a net of norms, rules and values. On the
other hand, exactly these norms, rules and values are continuously generated by
applying and altering them on an everyday basis (Barley and Tolbert, 1997). “Institutions,
therefore, represent constraints on the options that individuals and collectives are likely
to exercise, albeit constraints that are open to modification over time” (Barley and
Tolbert, 1997, p. 94). Through their every action, human beings contribute to the creation of group norms, which later recursively guide their own behavior. As enforcement and
alteration of a rule happens in the same process, it is difficult to separate the emergence
of norms from their pure enforcement (Turner, 1990).

Reno, Cialdini, and Kallgren (1993) distinguish between ‘descriptive norms’ and
‘injunctive norms’: The notion ‘descriptive norms’ specifies the behavioral reality or
“what most people do in a particular situation” (Reno et al., 1993, p. 104; see also
Hansmann and Scholz, 2002). The notion ‘injunctive norms’ signifies what people favor
and oppose within the respective culture. Accordingly, actions that are perceived as
being approved of by society follow injunctive norms (Reno et al., 1993). These two types
of norms are interlinked by the recursive maintenance of a social rule system described
above.

The importance of cultural and institutional norms for the spreading of new ideas,
products, or manners is illustrated by Rogers (2003): There was a campaign to introduce
the boiling of water as a standard cooking practice in a place called Los Molinas in order
to increase the hygiene in that region. The campaign failed because this process
innovation was in opposition to the cultural habits of the villagers: Individuals who were well-integrated in the village’s community did not adapt to the suggested new customs. Only people standing apart from the local community risked disobeying the common norms on food preparation.

The socio-physicist Wolfgang Weidlich (1991) tends to describe social interaction in terms of a ‘social field’. Within this conception, all of the long-lasting and continuously maintained components of the social field can be interpreted as “cultural, political, religious, social and economic components” (p. 10). All of the short-term actions that do not lead to a lasting change of the overall field’s structure represent day-by-day interactions. Such day-by-day interactions might lead to enduring alterations of the field’s texture – but they do not have to. This representation of social forces is a valuable conception for reasoning about the interdependence of individuals, groups and societal institutions.

3.1.2.5 Communication Patterns

In this subsection, two essential types of communication patterns will be depicted: first,
the importance of homophily and heterophily and second, the relation between opinion
leaders and followers.

Homophilous versus Heterophilous Relations

According to Rogers and Bhowmik (1970), one of the fundamental principles of human
communication is the fact that an exchange of information is most likely between two
humans who are homophilous, i.e. similar in their attitudes, behavior, or social status.
The homophily principle means that, in a free-choice situation in which an individual can interact with any other individual she wants to, there is a significant affinity towards receivers who are similar to the sender.
Rogers and Bhowmik (1970) mention plenty of examples which confirm this principle: Influence on an agent’s political opinion during an election campaign was mainly exerted by peers of the same age and social status. In a case study about the diffusion of hybrid seed corn, Iowa farmers tended to communicate rather with farmers of a comparable disposition than with others. Lastly, communication patterns in formalized organizations such as companies are likely to be rather horizontal (i.e. between members of the same status) than vertical.

Communication via homophilous channels is not only more likely than via heterophilous ones, but also more effective (Rogers and Bhowmik, 1970). On the one hand, this can be explained by the mechanisms depicted in the subsection on the social judgement theory: If two persons are similar in their attitudes, opinions and beliefs, their statements are likely to fall mutually within the other’s latitude of acceptance. On the other hand, there is a kind of inherent trust in persons of one’s own status and age cohort that does not extend as strongly to persons of a different status.
Nevertheless, heterophilous bridges between different homophilous islands in the social space have been identified to be important for the spread of new ideas and practices (Liu and Duff, 1972; see Figure 3). Without such so-called ‘weak ties’ between different peer-groups, all of the societal subsystems would remain constant.
Figure 3: The Strength of Weak Ties or the Importance of Heterophilous Links (the figure shows two social space units connected by heterophilous links)

The separation between homophily and heterophily can be operationalized by a measurement of social distance. Social distance can be measured either subjectively by “the degree to which a source or receiver perceives the dyad as similar or dissimilar in attributes, or by quantifying the observable similarities and dissimilarities” (Rogers and Bhowmik, 1970, p. 527). Collins and Guetzkow (1964) mention that communication is most likely (and hence homophilous) between (1) persons in close physical proximity, (2) coworkers, and (3) persons of similar socioeconomic status.
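
The second, objective operationalization can be sketched as a weighted dissimilarity over observable attributes, with the homophily principle expressed as an interaction probability that decays with social distance. The attribute names, weights, and the exponential decay are illustrative assumptions:

```python
import math

def social_distance(a, b, weights=None):
    """Objective social distance as a weighted Euclidean dissimilarity
    over observable attributes; attribute set and weights are
    illustrative assumptions."""
    keys = a.keys() & b.keys()
    weights = weights or {k: 1.0 for k in keys}
    return math.sqrt(sum(weights[k] * (a[k] - b[k]) ** 2 for k in keys))

def interaction_probability(a, b, scale=1.0):
    """Homophily principle as a sketch: the chance that a talks to b
    decays exponentially with their social distance (the decay form
    is itself an assumption)."""
    return math.exp(-social_distance(a, b) / scale)

# attributes normalized to [0, 1]; the names and values are hypothetical
farmer_a = {"age": 0.4, "status": 0.5}
farmer_b = {"age": 0.4, "status": 0.5}
stranger = {"age": 0.9, "status": 0.1}
```

Identical peers interact with probability 1.0; the more dissimilar a pair is, the smaller the chance that a communication channel forms between them.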

Opinion Leaders and Followers

Lazarsfeld et al. (1968) found out that networks of personal relations show a segmentation into opinion leaders and followers: While followers were rather geared to the opinion formation of the system’s opinion leaders, the opinion leaders oriented themselves more on the information spread by the mass media. These observations led to the development of the two-step flow theory: In a first step, the opinion leaders receive information from the media. The opinion leaders process, interpret and evaluate these pieces of information. In the second step, the processed and evaluated information is communicated by the opinion leaders to the followers (see Figure 4).

Figure 4: Scheme of the Two-Step Flow Theory (S: sender, i.e. the media; O: opinion leader; F: followers)

Opinion leaders serve as a social model whose actions are reproduced by the followers.
Rogers (2003) underlines the importance of the opinion leaders for the formation of
opinion patterns and the diffusion of innovations: Only if the opinion leaders are innovative towards a certain idea does the entire social system have a chance to change. In a system in which the opinion leaders’ norms are in opposition to the innovation in question, a change of the system is rather unlikely.
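
The two steps can be sketched as a minimal simulation: opinion leaders evaluate the media signal, and followers copy their leader. The parameters and the plain copy rule are illustrative assumptions, not part of the theory:

```python
import random

def two_step_flow(n_leaders, followers_per_leader, media_signal,
                  adoption_prob, seed=0):
    """Minimal sketch of the two-step flow. Step one: each opinion
    leader evaluates the media signal and adopts it with some
    probability. Step two: each follower copies the state of 'their'
    leader. Parameters and the copy rule are illustrative assumptions."""
    rng = random.Random(seed)
    # step 1: leaders receive, evaluate, and possibly adopt the signal
    leaders = [media_signal if rng.random() < adoption_prob else 0
               for _ in range(n_leaders)]
    # step 2: followers reproduce the processed signal of their leader
    followers = [state for state in leaders
                 for _ in range(followers_per_leader)]
    return leaders, followers
```

If no leader adopts, the signal never reaches the followers, which mirrors the claim above that a system whose opinion leaders oppose an innovation is unlikely to change.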
Opinion leadership is not a feature of a person that is guaranteed for all times and every
situation. If an opinion leader is much more innovative than the average population, the
formerly homophilous relationship between an opinion leader and her followers might
turn into a more heterophilous one (Rogers and Bhowmik, 1970). This can be explained
by the mechanisms described in the subsection on the social judgement theory: If a new
idea is too far away from what is perceived as normal in a society, it is likely to fall into
the latitude of rejection of many people. Thus, radical innovations often fail or have to
be introduced stepwise as even opinion leaders reject these innovations or transform
from opinion leaders to deviants in the public perception.

3.1.3 Reconciling Herd Behavior and Rationality

Everett Rogers (2003) defines ‘diffusion’ as “the process in which an innovation is communicated through certain channels over time among the members of a social system” (p. 5). Modern research on the diffusion of innovations is based on the assumption that individuals depend largely on others’ behavior in their decisions. As innovations carry an intrinsic uncertainty, the assumption of herd behavior as the driving mechanism for the adoption of innovations seems to be meaningful (Deroïan, 2005; Teraji, 2003).
However, research on the diffusion of innovations has proven that innovations which
are perceived to have a greater relative advantage are adopted faster than innovations
with a smaller relative advantage (Rogers, 2003). This finding argues for partial
rationality in the individuals’ behavior since the quality of innovations is assessed
somehow.
Additionally, measures of contextual effects demonstrate that the behavior of individuals is significantly influenced by both the prevalence of a given attitude in the peer-group and the individual’s own attitude (Barton, 1968).

As described in the beginning of this chapter, a pure homo sociologicus would stick to
pure herd behavior. In contrast, a pure homo economicus would use a clearly superior
innovation immediately; time lags in the course of adoption would not appear. In this
section, an approach to reconcile both views resulting in a homo socioeconomicus will
be sketched. This approach is based on considerations from the elaboration likelihood model by Cacioppo and Petty (1979).

The elaboration likelihood model [ELM] offers two ways of persuasive influence: a
‘central route’ and a ‘peripheral route’. The ‘central route’ stands for an exhaustive
examination of the different aspects of a decision task. The ‘peripheral route’ stands for
a rather heuristic elaboration of the information perceived. This is in contrast to models that assume one single mechanism for the influence of the environment on individuals’ decision-making.
The key variable in the ELM is the involvement of a person. The involvement indicates
the degree to which an individual is willing to think about a decision task deeply. A high
involvement means that people use the central route. Low involvement leads to the use
of decision heuristics (Cacioppo and Petty, 1979; Griffin, 1991).

The concept of a ‘central route’ opposing a ‘peripheral route’ corresponds quite well with the integrated idea that humans sometimes behave on the homo economicus side
and sometimes on the homo sociologicus side. The ‘central route’ stands for relatively rational, conscious, and reasoned behavior. Thus, an individual using the ‘central route’
considers the cost-value ratio of her decisions.
In contrast, people showing herd behavior deal with the decision task via the ‘peripheral
route’. Peripheral persuasion is in accordance with the social impact theory: To the
extent that persons are not highly involved in a decision task, the strength, immediacy,
and number of neighboring people are likely to influence them (Nowak et al., 1990).

Copying the behavior of others is a meaningful heuristic under the assumption that
“what everyone else is doing is rational because their decisions may reflect information
that they have and we do not” (Teraji, 2003, p. 662).

The variables determining the involvement of an actor are crucial to understand the
decision-making processes in society. Schenk (2002) gives some important operationa-
lizations of ‘involvement’:
1. Involvement as a trait; that means that there are persons whose character makes
them show more involvement than others do.
2. Stimulus-dependent involvement; that means that the involvement depends on the
characteristics of the innovative idea or product. If a high financial risk is related to a
certain product (for instance a new car), this product is likely to be a high-involvement product. In contrast, simple everyday decisions are considered low-involvement decisions.
3. Involvement as stimulus salience: Within this concept, involvement is interpreted as
personally perceived importance of an object or an issue. This operationalization is
closest to the understanding of involvement by Cacioppo and Petty (1979).

A new interpretation which relates ‘involvement’ to ‘uncertainty’ may help to explain the behavior of humans somewhere between rationality and herd behavior. It is
a meaningful assumption that people who have the same attitude towards a certain
decision as their environment do not have any incentive and reason to think about new
ideas thoroughly. Things have always been good as they are and the other people in the
community think the same. However, if there is a dissonance between the own opinion
and parts of the peer-group, a stimulus to revise one’s own opinion emerges. In the
framework of this interpretation, disequilibrium in a group leads from the peripheral
route to the central route. The emergence of different opinions in a group evokes
attention, as the individual group member cannot trust any longer in the knowledge
that she and her peer-group have perceived as true so far. Consequently, people are
more likely to rethink their attitudes and widen their choice set.
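
This interpretation can be sketched as a decision rule in which the share of disagreeing peers serves as a proxy for involvement; the threshold value and both route heuristics are illustrative assumptions:

```python
def decide(own_choice, utilities, peer_choices, threshold=0.5):
    """Sketch of the involvement interpretation above. The share of
    peers who disagree with the agent's current choice serves as a
    proxy for involvement: high dissonance triggers the central route
    (a rational comparison of utilities), low dissonance the peripheral
    route (herd behavior). Threshold and proxies are illustrative
    assumptions, not part of the elaboration likelihood model itself."""
    dissonance = (sum(1 for c in peer_choices if c != own_choice)
                  / len(peer_choices))
    if dissonance >= threshold:
        # central route: pick the alternative with the highest utility
        return max(utilities, key=utilities.get)
    # peripheral route: copy the most common choice in the peer-group
    return max(set(peer_choices), key=peer_choices.count)
```

An agent surrounded by agreement simply stays with the majority, while an agent facing widespread disagreement is pushed onto the central route and re-evaluates the alternatives.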

12 In the sense of social space.
3.2 Complex Systems, Synergetics, and Stigmergy:
A Conceptual Framework

3.2.1 Holism versus Reductionism

There are two contrary views within the philosophy of science, describing the
explainability of macro-phenomena by smaller entities and the interdependence of
phenomena on different scales: the holistic and the reductionistic view.

Holism states that an element is what it is because of its environment, because of the
context in which it is embedded. “According to holism, the properties and qualities of a
complex, organizational level define the wholeness of this level. These qualities have an
existence of their own and it is neither necessary nor possible to derive them from lower
level structures” (Weidlich, 1991, p. 9).

On the other hand, there is reductionism, which states that the nature of complex
entities can be condensed to the properties of their elements. Within this philosophical
framework, society can be described by observing humans, a human being can be explained by conducting research on cells, the functioning of a cell can be elucidated by investigating proteins and so forth (Gallagher and Appenzeller, 1999). The idealistic
goal of reductionism is one single science merging all the different disciplines.

Both views have shortcomings, which show that neither reductionism nor holism can be considered as ‘true’ or doubtlessly meaningful.
Two major problems of reductionism are information overload and oversimplification (Gallagher and Appenzeller, 1999): In order to explain a human being by the cells it is built of, it would be necessary to consider all of its myriad cells at once. The same holds for social systems, as with increasing group size the complexity of the whole system increases rapidly (Nowak et al., 1990). Moreover, extrapolating the macroscopic behavior from known microscopic properties leads to the so-called “fallacy of aggregation” (Barton, 1968). Barton calls the classical reductionistic approaches in the social sciences a “sociological meatgrinder, tearing the individual from his social context and guaranteeing that nobody in the study interacts with anyone else” (1968, p. 1).

The holistic view ignores that there are phenomena which have been explained successfully by understanding the elements they are built of. After all, macroscopic systems do rely on their constituting elements – solely on their constituting elements. Even if the interactions leading to the macroscopic phenomena are highly complex, these phenomena emerge from the actions of the constituting elements. Everything else is beyond the abilities and the scope of science.

The research done on complex systems and concepts such as synergetics or stigmergy
may – at least in some cases – form a bridge between the reductionistic and the holistic
view. If the researcher knows the essential properties of the elements of a complex
system and their rules for interaction, she can explain and describe the macro-behavior
in terms of general rules and probabilistic statements.

3.2.2 The Conceptual Framework of Synergetics

A short definition of the field of synergetics is given by its founder Hermann Haken:

“Synergetics deals with complex systems that are composed of many individual parts (components, elements) that interact with each other and are able to produce spatial, temporal or functional structures by selforganization. In particular, synergetics searches for general principles governing selforganization irrespective of the individual parts of the systems that may belong to a variety of disciplines […]” (Haken, 2003).

Key concepts related to the framework of synergetics are ‘mutuality’, ‘feedback’, and
‘universality’ (for an illustration see Figure 5).
Feedback takes place in a bidirectional process: In one direction, the macroscopic levels are composed of and created by the individual elements of the microscopic levels.
In the opposite direction, the individual elements are embedded into the macroscopic
context that constrains their abilities to act (Nowak et al., 1990; Weidlich, 1991). The
phenomenon of “higher-level pattern[s] arising out of parallel complex interactions
between local agents” (Johnson, 2001, p. 19) is called ‘emergence’.
Mutuality of interactions between individual elements can happen either directly or
indirectly: Individual elements can influence each other by direct correspondence or via

13 Consider e.g. the advances chemistry has made because of the insight gained by nuclear physicists (cf. Lyons, 2005).
14 The Greek word ‘synergia’ (συνεργία) denotes ‘collaboration’.

a so-called order parameter. Nevertheless, mutuality – at least that of some kinds of individual elements – is an important factor in synergetics (Johnson, 2001).


A last important idea within synergetics is the claim of universality. As Weidlich (1991) states, the mathematical concepts used to describe “statistical multi-component systems” (predominantly in physics) are universal. The tools developed in the field of
synergetics may therefore be regarded as interdisciplinary or diagonal (Haag, 2000).

Order parameters (e.g. communication fields, packets of information)
System elements (individual agents)

Figure 5: Process Scheme of Synergetics; after Schweitzer, 2003

The universality of this approach can be demonstrated by examples taken from various
disciplines in which the idea of mutually coupled micro-macro links provides a good
explanation for the system’s behavior and structures.

The classic example for micro-macro coupling is slime mold (Dictyostelium discoideum). Slime mold cells act under normal (i.e. advantageous) environmental conditions as autonomous and non-interacting individuals. However, if the environmental conditions deteriorate, the autonomous organisms organize themselves and behave as well as appear like a single organism. Early attempts to explain these observations relied on so-called ‘pacemaker cells’ governing the organization of the joint slime mold cells, but it was not possible to identify such a hierarchical system at all. Later, Keller and Segel were able to show that the emergence of this quasi-organism is due to mutual interactions that lead to non-hierarchic self-organization (Keller and Segel, quoted in Johnson, 2001).

15 In a wider sense, an order parameter is a part of the environment.
One of the first approaches in physics to explain the macroscopic behavior by mutual
influence of system elements was done by Ernst Ising in the 1920s. Ising (1925) designed
a model based on Wilhelm Lenz’ idea to explain macroscopic ferromagnetic
observations by a mathematical model of the magnetic behavior of atoms. In his model,
the atoms could adopt only two possible states: ‘up’ or ‘down’. Atoms influence each
other mutually via an order parameter. Furthermore, if two neighboring atoms are
arranged in the same direction, this is energetically optimal. More advanced models
based on Ising’s work were able to give major insights into the micro-macro link of
ferromagnetism (Kobe, n.d.; Opel, 2005).
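The core of Ising’s model is simple enough to sketch in a few lines. The following Metropolis-style simulation is illustrative only; the lattice size, temperature parameter, and number of sweeps are arbitrary choices, not values taken from Ising’s paper:

```python
import math
import random

def metropolis_sweep(spins, size, beta, rng):
    """One Metropolis sweep over a size x size lattice with periodic boundaries."""
    for _ in range(size * size):
        i, j = rng.randrange(size), rng.randrange(size)
        # Sum of the four nearest-neighbor spins ('up' = +1, 'down' = -1).
        neighbors = (spins[(i + 1) % size][j] + spins[(i - 1) % size][j]
                     + spins[i][(j + 1) % size] + spins[i][(j - 1) % size])
        # Energy cost of flipping spin (i, j); parallel neighbors are optimal.
        delta_e = 2 * spins[i][j] * neighbors
        if delta_e <= 0 or rng.random() < math.exp(-beta * delta_e):
            spins[i][j] *= -1

def magnetization(spins):
    """Macroscopic order parameter: the mean spin of the whole lattice."""
    return sum(sum(row) for row in spins) / len(spins) ** 2

rng = random.Random(42)
size = 16
spins = [[rng.choice([-1, 1]) for _ in range(size)] for _ in range(size)]
for _ in range(100):
    metropolis_sweep(spins, size, beta=1.0, rng=rng)
```

Below the critical temperature (large beta), ordered domains emerge from the purely local flip rule and the magnetization drifts away from zero; above it, disorder prevails — the micro-macro link in miniature.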

Another example of the benefits that can be derived from the synergetics approach is
the discipline of economics. Ormerod characterizes economies as “systems in which the
macroscopically observable quantities emerge from the effects of interactions amongst
the individual constituents of the system” (Ormerod, 2005, p. 721). Modeling the
macroeconomic outcomes arising from interacting economic subjects can give insights
beyond conventional economics. The latter relies on rational agents capable of processing
all relevant information and on the assumption that this information is available to
them. As synergetics goes beyond these rather unrealistic assumptions, it can give
insight into phenomena for which the methods of conventional economics fail.

As a last example, the sociological theory of structuration shall be depicted: Anthony
Giddens, the founder of the theory of structuration, is one of the sociologists who tried
to overcome the artificial schism of his discipline into micro- and macrosociology. He
stated that human actions are recursive, which is in good accordance with the concepts
of synergetics (Giddens, 1995; see also Turner, 1990). Barley and Tolbert (1997) explain
that institutions embody constraints on the actions exercised by human beings; but in
the opposing direction, the social actions of these humans also shape a society’s
institutions.

Such complex systems can have certain interesting inherent properties, such as phase
transitions or mesostructures.
A phase transition means that there is an abrupt change in a system’s behavior due to a
rather slight change of a parameter. Examples of systems exhibiting phase
transitions are water (with its well-known three phases) or ferromagnetic systems. In
physics, phase transitions are very often related to changes in temperature: below a
certain critical temperature, ferromagnetism can be observed – above this critical
temperature, the ordered states disappear (Opel, 2005).

16
The emerged macrostructures and order parameters

The other interesting property is the existence of mesostructures such as domains
(which might even out at a more macroscopic level). We know this phenomenon from
ferromagnetism as well, because there are local magnetic domains with different
directions. These ordered mesostructures neutralize each other on a macroscopic level,
but mesoscopic order and self-organization do exist (Opel, 2005). Transferred to the
realm of the social sciences, this phenomenon corresponds, for instance, to local islands
of opinion or behavior, which might neutralize each other on a macro-scale.

3.2.3 The Concept of Stigmergy

From a contemporary point of view, stigmergy can be conceived as a sub-domain of
synergetics. The concept was introduced in the late 1950s by the French entomologist
Pierre-Paul Grassé to describe the communication taking place in termite colonies
(Grassé, 1959; Bonabeau, 1999):

“La coordination des tâches, la régulation des constructions ne dépendent pas
directement des ouvriers, mais des constructions elles-mêmes. L’ouvrier ne
dirige pas son travail, il est guidé par lui. C’est à cette stimulation d’un type
particulier que nous donnons le nom de STIGMERGIE (stigma, piqûre; ergon,
travail, œuvre = œuvre stimulante).” (Grassé, 1959, p. 65)

As indicated by Grassé, stigmergy comes from the two ancient Greek words ‘stigma’
(στίγμα) and ‘ergon’ (ἔργον), where ‘stigma’ connotes ‘spot’ or ‘mark’ and ‘ergon’
denotes ‘work’ or the ‘product of labor’. These two little words point out the small
difference which makes stigmergy a sub-domain of synergetics and not a synonym:
While synergetics is interested in all “complex systems that are composed of many
individual parts […] that interact with each other” (Haken, 2003), stigmergy explicitly
explores indirect interaction via changes of the environment. Thus, if the environmental
effects on the individual’s behavior are not dominant or if the environment is not
changed persistently by the individuals, the governing principle is not a stigmergic one
(Small, n.d.). The process scheme of a stigmergic system is depicted in Figure 6.
The persistence of such changes is important since the changes have to last long
enough in order to affect the agents’ behavior in at least the near future (i.e. the next
time step in a simulation) (Holland and Melhuish, 1999).
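This persistence requirement can be sketched with a minimal, pheromone-like medium. All names and the decay constant below are illustrative assumptions, not taken from Grassé or the cited authors: marks deposited in a shared environment decay each time step, and only traces that survive into later steps can still guide behavior.

```python
DECAY = 0.8       # fraction of a mark surviving one time step (assumed value)
THRESHOLD = 1e-6  # below this, a trace is too weak to stimulate further work

def deposit(field, pos, amount=1.0):
    """An agent persistently alters the environment at position pos."""
    field[pos] = field.get(pos, 0.0) + amount

def evaporate(field):
    """The environment forgets: every mark decays; vanished traces are dropped."""
    for pos in list(field):
        field[pos] *= DECAY
        if field[pos] < THRESHOLD:
            del field[pos]

field = {}
deposit(field, (2, 3))
for _ in range(3):
    evaporate(field)
# the mark still persists after three steps, attenuated to DECAY ** 3
```

If DECAY were close to zero, no change would outlive a single time step, and — in the sense discussed above — the governing principle would no longer be a stigmergic one.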
Furthermore, stigmergic systems can be closed or open: If the environment is exclusively
created and altered by the elements of the system in consideration, one calls such a
system closed. If there is additionally an external or externally altered environment, one
calls such a system an ‘open’ stigmergic system. (Small, n.d.)

17
Free translation: The coordination of tasks and the regulation of constructions do not directly depend on
the worker, but on the constructions themselves. The worker does not control his work, but is controlled by
it. To this special kind of stimulation, we give the name STIGMERGY.

A concept closely related to stigmergy is the self-consistency principle. Its central idea
is that each element contributes to a common field, which, in return, influences the
elements. Processes underlying the self-consistency principle run towards a stable
state. (Weidlich, 1991)

(Figure shows a chain of stimuli and responses passed between individuals;
A = Stimulus, R = Response)
Figure 6: Scheme of a Stigmergic Process; after Grassé, 1959

Usually interaction processes in social systems occur directly. If one transforms these
direct interactions into indirect ones via a medium such as an opinion or a
communication field, such a social system then turns into a stigmergic one. In this
transformation, it is possible to apply the self-consistency principle to such social
systems. Due to the self-consistency principle, a historical component is brought into
the formulation of the social system, as a field normally is dependent on previous points
in time (Weidlich, 1991). Additionally, the system becomes easily describable by means of
the instruments and tools of physics.
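The self-consistency principle itself can be sketched as a fixed-point iteration. The linear response function below is an arbitrary illustration, not Weidlich’s formulation: every element contributes to a common field, the field feeds back into each contribution, and the iteration runs towards a stable, self-consistent state.

```python
def contribution(field_value):
    """Assumed response of a single element to the common field
    (any contracting map converges to a self-consistent fixed point)."""
    return 0.5 * field_value + 1.0

def self_consistent_field(n_elements=10, steps=50):
    """Iterate field -> contributions -> field until the state is stable."""
    field = 0.0
    for _ in range(steps):
        field = sum(contribution(field) for _ in range(n_elements)) / n_elements
    return field

# Fixed point of f = 0.5 * f + 1 is f = 2: the field is then
# consistent with the very contributions that create it.
```

The same iterate-until-consistent scheme underlies the Hartree–Fock procedure mentioned in footnote 18, where the iterated quantity is the electrons’ wavefunction rather than a scalar field.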

18
This principle is used – for example – in chemistry to compute wavefunctions. In an iterative procedure,
the wavefunctions of the electrons are altered until they are self-consistent, i.e. consistent
with the field they create. (see Hartree-Fock-Methode, 2005)

3.3 Multi-Agent Modeling as a Method

In the previous chapter, some basic concepts, ideas and notions with respect to complex
systems have been explained. Although a theoretical framework and a method are two
different things, the realm of complex systems and the method of Multi-Agent Modeling
are complementary ideas, like two sides of a coin. In this section, Multi-Agent
Modeling shall be put into context: In the first place, the term ‘model’ is defined and
some explanations on modeling in general are given. Thereafter, the potential as well as
the limitations of Multi-Agent Systems [MAS] are discussed and basic concepts are
introduced. Lastly, a brief review of some fundamental thoughts about quality criteria
for models is given.

3.3.1 Defining the Notion ‘Model’ and the Importance of Models

3.3.1.1 A Definition of ‘Model’

In a very general interpretation, thinking in models is as old as humankind (Müller,
1983). A general definition of the term ‘model’ is given by Haag (2000), who defines it as
a “material or ideal (re-) production of an object by means of analogies realized by a
cognitive subject” (p. 4). As a key feature of models, the Brockhaus Encyclopedia
additionally emphasizes the accentuation of aspects considered important and the
neglect of aspects considered secondary – in natural as well as in social sciences
(Modell, 1999).
In the 19th century, there was a main tendency to substitute the model thinking related
to things [Substanzdenken] by one related to structures and functions [Strukturdenken].
This tendency coincides with developments in the discipline of physics as physicists like
Lagrange and Laplace made substantial progress in the field of mechanical dynamics.
The cognition of reality became more and more abstract as the experimental and
mathematical methods of mechanical dynamics were more and more employed in
other branches of physics (Müller, 1983).
As a consequence thereof, the Austrian physicist and philosopher Ernst Mach concluded
that models were not images of reality but constructs. He called theories and models
‘auxiliary notions’ [“Hülfsvorstellungen” (p. 54)] for the representation of matters of fact
and exemplified that trigonometric functions can be used to represent waves, but waves
are not trigonometric functions (Müller, 1983). Currently, this world outlook is named
systematic neo-pragmatism, i.e. models come into existence by the synthesis of
formerly unconnected observed elements. Accordingly, the model’s relation to reality
emerges by using it (Mosler and Tobias, submitted).
The process of modeling can be visualized as an encoding process: Observed or assumed
processes which belong to the sphere of ‘reality’ are translated into variables, parameters,
and their associated (mathematical) relations. That is, natural or social systems
are represented by an abstract, formal system. In a next step, deductions derived from
the formal system are decoded to the sphere of ‘reality’ in order to gain understanding
and to produce predictions on its behavior (Haag, 2000).
Mosler and Tobias (submitted) point out that the basic goal of recognition processes –
and thus of modeling in particular – is to illuminate phenomena of an infinite complex
reality by employing comprehensible rules.
As a consequence of the advances in the field of computer technology, an enhanced
type of mathematical model, the computer simulation, has become more and more
relevant during the last 20 years as a third area of methodology complementary to
formal theories and empirical studies (Schweitzer, 2003). While some of these computer
simulations are the same as the macroscopic differential equation approach described
above – just more elaborated and more complicated – a new class of models was
developed: individual-based systems that rely on microscopic behavior for the
understanding and prediction of macroscopic properties.

3.3.1.2 Strengths and Limitations of Scientific Models in General

Three inherent advantages of mathematical models compared to verbal models are:


precision, explicitness, and faultless deduction (Mosler and Tobias, submitted). In order
to work, computer systems demand explicit formulations for every detail, exactness,
and the absence of any discrepancies. Results are computed without interpretations and
completely value-free. The intrinsic characteristics described lead to the uncovering of
shortcomings of both data and theory. The consequences of this are not purely
advantageous, as gaps in the data set often have to be closed by doubtful assumptions.
Another strength of mathematical and verbal models is their capability of
synthesizing detached research findings. For instance, climate models assemble a
vast amount of different and formerly unrelated research findings coming from physics,
oceanography, chemistry, etc. An example from the social sciences is the work of
Mosler and Brucks (2003), who integrated isolated findings from the research on
dilemmas of the commons into a general model.
A further limitation of models is the relationship between the model and the modeler
who designed it: The perception of modeling as an encoding process (see above) is
closely tied to a “sender-receiver conception […], in which nature under investigation
sends syntactic and objectiveable signals to scientific observers” (Haag, 2000, p. 10).
Thus, the shape of a model is determined by the perception of the observer and by the
observer’s (modeler’s) decisions about the inclusion or non-inclusion of certain elements
(Stachowiak, 1983, pp. 116 & pp. 129). As a consequence of this, a modeler is always a part
of her model. Various validation techniques may help to justify and objectify models.
However, subjective elements will remain.
The importance of the modeler’s decisions also implies that a model can only represent
phenomena and processes, which are incorporated in the model. Although this
statement is trivial, it has to be emphasized. For the realm of computer simulations,
Wooldridge (2000) states: “Computers are not good at knowing what to do: every
action a computer performs must be explicitly anticipated, planned for, and coded by
the programmer” (p. 27). Accordingly, the model itself limits its own predictive and
explanatory power.
Models also face a general validation problem, due to their implicit hypotheses: Accor-
ding to Popper’s ‘Critical Rationalism’, a model is invalidated by a wrong result or a
failed prediction, but true verification of models is impossible (Reichert, 1994; see also
the subsection on “Quality Criteria for Models”). This problem becomes even worse in the
case of models for long-term prediction of real world systems, because the generally
accepted means of quasi verification by a large number of failed attempts to falsify a
hypothesis or an entire model is not feasible. Beyond this drawback, the predictability of
such systems today is mostly perceived as inherently limited as stated by Kadtke and
Kravtsov (1996): “Most scientists will agree now (as did the most acute minds long ago)
that long-term mathematical prediction of complicated physical systems is in practice
unachievable” (p. 3).
A last proposition shall be made on the notion of isomorphism and the explanatory
power deduced from it: According to Ashby, ‘isomorphism’ stands for the phenomenon
of two systems (in this case the reality and its related model) differing just in the type of
its elements but not in their number and interconnections. The current scientific
paradigm assumes that things themselves cannot be recognized in an ontological
sense. But, according to Müller (1983), isomorphic representations of things are possible
and all knowledge gained from isomorphic representations also holds in the realm of
reality. From this point of view, the more isomorphic reality and model are, the
closer predictions and derived concepts are to reality.

3.3.2 The Method of Agent-Based Modeling

Agent-based modeling is an approach in mathematical modeling – some even say a new
paradigm (e.g. Schweitzer, 2003). MAS generate macroscopic properties by functioning
according to rules defined for microscopic behavior. Hence, Multi-Agent Modeling
[MAM] is a method perfectly complementary to the concepts and ideas taken from the
fields of complex systems theory and synergetics (confer to section 3.2).
As these fields and the related methodology of MAM are relatively new, their set of
notions, the boundaries differentiating them from other approaches, and their
systematics of subsidiary methodological approaches have not yet been consolidated. A minimalistic
definition is given by Weiss (2000), who defines MAM as “the study, construction, and
application of multiagent systems, that is systems in which several interacting,
intelligent agents pursue some set of goals or perform some set of tasks” (p. 1).

3.3.2.1 The Strengths of Multi-Agent Systems

Classical models based on a set of differential equations exhibit two basic characteris-
tics: They are deterministic in their behavior and the quantities they describe are
continuous. The description of a system’s behavior in a deterministic and continuous
manner is reasonable for a lot of tasks and has advanced science to an immense
degree: since Isaac Newton and Gottfried W. Leibniz developed the differential calculus,
the capacity of physics and other disciplines to describe the world has grown
vastly.
Nevertheless, there are phenomena that cannot be captured by a deterministic and
continuous approach. One of the first studies relying on particle-based, stochastic
ideas in the field of chemical kinetics was done by Kramers (1941). Tracking kinetic
problems by means of a probabilistic, particle-based model helped to explain
phenomena which rely heavily on statistical deviations from a fictive mean value
(Gillespie, 1977; McQuarrie, 1967). The same holds, for example, for social systems, which
are constituted by individuals (counted in discrete integer amounts, like the molecules in
the examples from chemistry) that partially behave in a stochastic way as well.
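The contrast between the two descriptions can be made concrete with the simplest kinetic example, a decay reaction simulated particle by particle in the spirit of Gillespie’s stochastic simulation idea. All parameter values below are arbitrary illustrations:

```python
import random

def stochastic_decay(n0, rate, t_end, rng):
    """Simulate A -> 0 one event at a time; the population is a discrete integer."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate(rate * n)  # waiting time until the next decay event
        if t > t_end:
            break
        n -= 1
    return n

rng = random.Random(0)
runs = [stochastic_decay(1000, rate=1.0, t_end=1.0, rng=rng) for _ in range(200)]
mean_n = sum(runs) / len(runs)
# individual runs are integers that scatter around the deterministic,
# continuous prediction n0 * exp(-rate * t_end) ≈ 368
```

The differential-equation description delivers only the smooth mean curve; the particle-based runs additionally exhibit the statistical deviations around it that the text refers to.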
Another problem, which can be tackled successfully by using MAS, is called the ‘fallacy
of aggregation’. This erroneous belief occurs “when we aggregate purely individual
relationships and assume that collectives will behave accordingly” (Barton, 1968, p. 8).
Mathematical models based on differential equations often assume implicitly uniformly
mixed conditions, but very often the local context, in which a basic element of a system
is embedded, determines its behavior. Such heterogeneous phenomena are difficult or
impossible to describe by differential equations, but easy to capture by programming a MAS.
The ability of MAS to deal with individuality, heterogeneity, and individual path
dependence opens up a space of possibilities for better forecasting and a better
understanding of macroscopic entities that emerge from joint microscopic behavior. If it is
possible to describe the patterns of interaction of individual system elements and if it is
possible to describe the system elements and their various relations adequately (see
isomorphism), programming an MAS is rather easy. In contrast, describing a system by
differential equations in terms of a fixed set of macroscopic state variables and their
mathematical interrelations is harder, as the stated interrelations are unobservable
constructs of reality.

19
According to McQuarrie (1967).
Using MAS to investigate the behavior of systems built of interacting elements can give
new explanations to the question why those macroscopic phenomena arise and not
only to the question which macroscopic relationships exist. Ecological and social
systems show more ‘individuality’ on the microscopic level compared to physical or
chemical systems. Thus, the impact of the MAS method on ecological and social theory
might be even larger than it is and was in physics or chemistry (Hogeweg and Hesper,
1990; Wainwright and Mulligan, 2004). One example of this elevated explanatory
power can be taken from economics: While classical economic models often fail to
predict the purchases of customers accurately, individual-based models could improve
predictions, as they allow behavioral differences within their population. De Haan
(2005) hypothesizes that “there is no average customer, [and] assuming average market
responses might lead to erroneous results.”

3.3.2.2 Weaknesses of Multi-Agent Systems

Although, in general, the modeling of individual-based interaction and action rules is
easier than defining adequate and justifiable differential equations, there are some
disadvantages that make it in some cases impossible to set up a meaningful MAS. If the
system should be able to simulate the behavior of real-world systems, a lot of data on
the initial state of the system as a whole as well as of each individual agent are
necessary. For many social and ecological systems, the respective data are not available.
Furthermore, small deviations in the design of the interaction rules can have severe
consequences for the outcome of simulation runs. Weiss (2000) points out that “to
build a multiagent system in which the agents »do what they should do« turns out to be
particularly difficult in the light of basic system characteristics […]” (p. 5). On the one
hand, finding appropriate interaction rules in an iterative “trial-and-error”-process can
be of epistemological value. On the other hand, phenomenological laws on a macroscopic
scale may be more robust in such cases than imperfect MAS.
Finally, as an MAS can be seen as a tool complementary to the concepts ‘complex
systems’ and ‘synergetics’, its inherent limitations concerning long-term predictability
also holds for the corresponding method of MAM. Path dependence, high degrees of
non-linearity, and the uncertainties regarding initial values necessarily limit the
predictability of complex systems (for further explanation read e.g. Lyons, 2005; Ormerod,
2005).
Consequently, Multi-Agent Systems are a brilliant tool to enhance the understanding of
emergent behavior of macroscopic systems built of microscopic elements. However,
their capability of forecasting future system states is rather limited. If forecasting is
possible at all, it occurs in a probabilistic sense: it is possible to indicate the likelihood
of a certain future state, but not to describe with certainty what the future will be like.

3.3.2.3 Basic Notions and Concepts of Multi-Agent Systems

Complexity and Uncertainty


The idea of using statistical methods to forecast system states originally comes from
statistical physics. The complexity of real world systems both allows and requires the
use of probabilistic methods to describe system states (Kadanoff, 2000).

Agency and Autonomy


According to Wooldridge (2000), there exists no universally accepted definition of the
notion ‘agent’. Nevertheless, he offers the following minimal consensus on the concept
of agency: “An agent is a computer system that is situated in some environment, and
that is capable of autonomous action in this environment in order to meet its design
objectives” (p. 29).
The notion ‘autonomy’ is fuzzy as well. In the following, ‘autonomy’ will indicate that
the autonomous agent is able to process information and to act according to predefined
rules without the intervention of the user or other parts of the system during the
duration of the simulation.

Decentralized Decision Making


Complex systems are characterized by decentralized decision-making (Wainwright and
Mulligan, 2004). Discovering the importance of decentralized decision making for the
emergence of new macroscopic phenomena was one of the major steps towards a new
paradigm and further insight into the behavior of complex systems. A well-known
example for the importance of decentralized decision-making is the swarm-behavior of
slime mold already mentioned in section 3.2.

Local, Incomplete Knowledge


The conceptual counterpart to decentralized decision-making is local (and thus
incomplete) knowledge. Not only is there no Leviathan in complex systems; on top of
that, the individual and autonomous decision makers do not possess information on the

33
state of the whole system. Weiss (2003) denotes “incomplete information” (p. 3) and
“restricted […] capabilities” (p. 3) as major characteristics of agents in MAS. Every agent
is bound to its immediate spatial and temporal environment and to the limitations of its
sensors that perceive this environment. Local knowledge distinguishes MAM from the
classical approaches, as this feature allows the introduction and persistence of spatial
heterogeneity (Hogeweg and Hesper, 1990).

Intelligence
There is a debate about what intelligence means in the context of MAS and whether
intelligence is necessary for agents or not. Johnson (2001) uses the notion ‘adaptation’
for what other authors call ‘intelligence’ and offers a good analogy to illustrate the
concept’s importance: “Emergent complexity without adaptation is like the intricate
crystals formed by a snowflake: it’s a beautiful pattern, but it has no function” (p. 20).
Wooldridge (2000) ties ‘intelligence’ to the idea of ‘flexibility’. In his sense, an
intelligent agent is an agent able to act and react autonomously and flexibly.

Group Formation
The notion ‘group’ can be defined in various ways. Scholz and Binder (2003) define
‘group’ as an “amount of people, who interact and form a unity in a defined period of
time for a defined task and in a certain place. There is no formal requirement to enter a
group.” (p. 10). Antoine, conversely, characterizes Tarde’s concept of groups as bound to
imitation. In his sense, a group is an amount of people who mutually imitate one
another (Antoine, 2001).
Both definitions have in common that membership is linked to interaction and not to
formal requirements. As MAS are based on mutual interaction and action is bound to
local knowledge, MAM is the method to investigate and explain group formation (e.g.
CIRAD, 2001; Weiss, 2000). Group formation takes place on a mesoscopic level.
Therefore, MAS are well suited to describe the importance of group formation for the
microscopic as well as the macroscopic level.

3.3.3 Quality Criteria for Models

In the following, two criteria for measuring the quality of models will be discussed: first,
the validity of a model and second, the parsimony principle, which is related to the
simplicity of the model.

Validity

As mentioned above, all gain in insight is tied to the elimination of incorrect hypotheses
and theories. Thus, all scientific knowledge is transient, assumptive knowledge.
According to Popper (1995), it is also more likely to take a wrong theory for true than to
take a true theory for wrong. This falsificationist view was dominant in the second half
of the 20th century (Scholz and Tietje, 2002) and is still a powerful way of thinking in
most branches of science.
Although knowledge is transient, science and society in general have to rely on
truths or what is perceived as true. Popper (1995) compares the establishment of the
truth in science with the same process in court: The transition from the search of truth
to the assignment of truth is strongly related to a decision. Thus, ‘proven knowledge’ in
science is always a matter of conventions.
As described in the subsection about the limitations of scientific models, validation of
long-term models and complex models has to take place in a pragmatic way (cf. Scholz
and Tietje, 2002, p. 333, Mosler and Tobias, submitted, pp. 22, Haag, 2000, p. 5).
Many authors mention two aspects of validity, which should be fulfilled by a model: (1) a
good fit between the results of the model and measurements and (2) the possibility to
relate the model to real systems in a meaningful manner. The first aspect means the
absence of statistically significant deviations of the model data from the measured data
(Reichert, 1994). This criterion is easy to test in the case of rather simple systems, in the
case of systems for which experiments are feasible and for short-term behavior. Thus,
climate models, for example, cannot be tested according to this strict criterion by
definition. A meaningful and necessary alternative is to validate the elements of the
model and to validate the model against the insufficient data available. The second aspect is
a pragmatic one: A model has to fulfill the requirements it is designed for. This view has
to do with understandability, interpretability and similarity between the real world
system and the model (Mosler and Tobias, submitted). The related concept is the
concept of isomorphism and validation takes place mainly by face validation (cf. Scholz
and Tietje, 2002).
In general, models are considered better if they rely on ‘proven
knowledge’. In natural sciences, models mainly based on fundamental laws are valued
as the best kind of models. Analogous to the natural sciences, a good social science model
should rely on multiply tested theories. The next-best category is phenomenological
models. The weakest models in terms of validity are pure data models (Baccini and
Bader, 1996).

Parsimony

The foundations of the parsimony principle are attributed to William of Ockham, who
lived in 14th-century England. The main idea behind the parsimony principle is that if
there are two possible explanations for an issue and there is no way to judge which one
is correct, the simpler explanation is more likely to be the correct one.
Simplicity as a quality criterion for mathematical modeling is a common value in science
(Reichert, 1994). Although there might be cases where complexity at the level of
modeling (not as a result of modeling!) is wanted and meaningful, in general a better
understanding of the systems investigated is achieved by applying the parsimony
principle (e.g. Schweitzer, 2003).

Integrating the two quality criteria

When evaluating the quality of a model one has to consider both aspects mentioned.
Sometimes a good quality regarding one aspect may be in contradiction to a good
quality regarding the other one. Baccini and Bader (1996) suggested the following
concept to evaluate a model on both aspects:

Quality = [ m · S_F(x_mess, p) ]^(−1)                                (1)

Whereas m denotes the number of model parameters, S_F denotes the deviation between
the model and the measured data, x_mess denotes a vector containing the measured
data, and p denotes the vector of the model parameters.
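One plausible reading of Baccini and Bader’s criterion is Quality = [m · S_F(x_mess, p)]^(−1), i.e. quality falls with both the number of parameters and the deviation from the data. Under that assumed reading, the parsimony/fit trade-off can be evaluated directly; all numbers below are illustrative:

```python
def model_quality(n_params, deviation):
    """Assumed reading of eq. (1): the inverse of (number of parameters
    times model-data deviation); fewer parameters and a closer fit score higher."""
    return 1.0 / (n_params * deviation)

# A parsimonious 3-parameter model that fits the data twice as closely as a
# 6-parameter alternative scores four times better under this reading.
q_small = model_quality(3, 0.5)
q_large = model_quality(6, 1.0)
```

The point of the criterion survives any reasonable reading: adding parameters must buy a proportionally better fit, otherwise the model’s quality decreases.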

20
The parsimony principle is often also called “Ockham’s razor”.
4 Towards a Socioeconomic Model of Decision Making

4.1 Spatio-Temporal Coordination of Decisions

As mentioned in the introduction, Frank Schweitzer (2004) has developed an Ising-like
model to describe the spatial and temporal coordination of decisions in a MAS. In this
section, his model will be introduced: First, the concept and the basic assumptions
underlying Schweitzer’s approach are explained. Thereafter, the two core modules of
the model – the communication and the decision module – are introduced. Finally,
typical simulation results will be shown and the model behavior will be explained.

4.1.1 Concept and Basic Assumptions

Schweitzer’s model can be seen as a closed stigmergic system with a binary space of
opinions. Every agent has two variables describing her current state: one for her opinion
or attitude and one for her location in a social space. The social space is two-dimensional
and Euclidean, and it represents the social distance or social proximity
between two agents (see e.g. Lewenstein et al., 1992).

Figure 7: The Functional Principle of Schweitzer’s (2004) Model

The functional principle of the model is illustrated in Figure 7. It shows one social space
(horizontal plane) with many agents holding distinct opinions (blue and red). These
agents mark that part (one of the two components) of a communication field (vertical
plane) which corresponds to their current opinion, like people pinning information on a
pin-board. Later, this information is available to them and their surrounding agents and
influences their future decisions, which in turn will mark the communication field.
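This pin-board mechanism can be sketched in code. The following is a toy illustration, not Schweitzer’s actual equations: the grid size, the decay factor, and the simple ‘adopt the locally stronger component’ rule are all assumptions, and the finite-velocity spatial spreading of information is omitted.

```python
import random

GRID, DECAY = 10, 0.9  # illustrative social-space size and field memory

def step(agents, field):
    """One time step: write opinions to the field, age it, read it back."""
    # 1. Every agent marks the field component matching her current opinion.
    for (x, y), opinion in agents.items():
        field[opinion][x][y] += 1.0
    # 2. The pinned information ages: both field components decay.
    for component in field.values():
        for row in component:
            for y in range(GRID):
                row[y] *= DECAY
    # 3. Each agent adopts the opinion whose local field component is stronger.
    for (x, y) in agents:
        if field[0][x][y] != field[1][x][y]:
            agents[(x, y)] = 0 if field[0][x][y] > field[1][x][y] else 1

rng = random.Random(3)
agents = {(rng.randrange(GRID), rng.randrange(GRID)): rng.choice([0, 1])
          for _ in range(40)}
field = {0: [[0.0] * GRID for _ in range(GRID)],
         1: [[0.0] * GRID for _ in range(GRID)]}
for _ in range(20):
    step(agents, field)
```

Even this reduced sketch exhibits the closed stigmergic loop described above: agents only ever read and write the field, never each other’s states directly.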

The Conception of Man in Schweitzer’s Approach

The model presented in Schweitzer (2004) is based on a pure homo sociologicus as
depicted in section 3.1. The agents make their individual decisions based only on the
local information which is generated by themselves and by other agents. Behavior is
solely determined by other agents’ behavior and by what the respective agent has done
in past time steps. There is no module dealing with the quality or content of the
alternatives in consideration: Schweitzer’s model comes to the same results, no matter
whether the decision is to buy a hybrid car or not, to participate in a recycling scheme,
to buy skimmed milk instead of full-cream milk, or to vote for the socialist party instead
of the conservatives. The maximization of private utility is only considered implicitly.

Latané’s Theory of Social Impact, Path Dependence, and the Importance of Time

The model claims to be based on Bibb Latané’s Theory of Social Impact (Schweitzer,
2004). As described in section 3.1, the metatheory of social impact assumes that social
influence on an individual is based on three factors: (1) the individual strength of the
influencing agents, (2) their immediacy, and (3) the number of influencing agents. But
not only the features originally proposed by Latané are considered in the model
presented here, also other aspects like the impact of past events on the present or the
importance of the velocity of information spreading are taken into account:
Schweitzer introduces the concept of an external social memory that mirrors the
experience and the history of decision making of the collective. There is path depen-
dence in this model and context does matter: If an agent selected alternative “1” in the
last time step, this decision has to influence the decision process in the current time
step.
The second new feature of Schweitzer’s model is the assumption that information
spreads with finite velocity. Schweitzer supposes that information from another agent
far away in the social space reaches a certain agent much later than the information
from a neighboring agent. This assumption goes beyond Latané’s original propositions,

²¹ Herd behavior is often a good heuristic to maximize one's utility under conditions of incomplete
information. Confer to subsection "Reconciling Herd Behavior and Rationality".

whose ‘immediacy’ only states that the influence decreases with distance without a
time lag.

Social Phenomena Treatable by Mathematical Methods from Physics

Schweitzer models the collective decision-making (or opinion formation) processes by
applying well-known tools and concepts from the natural sciences. He mentions that
one of the advantages of the social impact theory is that it can be formalized within a
physical approach (Schweitzer, 2004).
Physics typically relies on fields (see gravity, magnetism, electricity etc.) and antagonistic
concepts (‘plus’ versus ‘minus’ in electricity or the ‘north’ versus ‘south’ dipole in
magnetism). If one wants to describe social phenomena by using concepts from physics,
the formalized description of these phenomena has to be in accordance with those
typical properties of physical systems. As explained in section 3.2, social systems can be
made accessible to the methods of physics by transforming the direct human-to-human
interactions to indirect ones via a medium or a field.

4.1.2 The Model

As mentioned above, Schweitzer's model consists of two modules: (1) a communica-
tion module describing the generation, spread, and decay of information on the two
alternatives, and (2) a decision module determining the decisions of the agents based on
local information. In the next subsection, the communication module will be described;
subsequently, the decision module will be depicted.

4.1.2.1 The Communication Module

Schweitzer (2004) models all of the communication processes by a so-called "multi-
component spatio-temporal [communication] field" (p. 307). As described above, the
communication field consists of two components, one for each of the two contrary
opinions σ = ±1, and obeys the following equation:

$$\frac{\partial}{\partial t}\, h_{\sigma}(\vec{r},t) = \sum_{i=1}^{N} s_i\, \delta_{\sigma,\sigma_i}\, \delta(\vec{r}-\vec{r}_i) - k_{\sigma}\, h_{\sigma}(\vec{r},t) + D_{\sigma}\, \Delta h_{\sigma}(\vec{r},t) \qquad (2)$$

Here, h denotes the communication field, the vector r⃗ a point in space, and t a point in
time. The three terms on the equation's right-hand side are subsequently explained in
detail.

Input of Information:

$$\sum_{i=1}^{N} s_i\, \delta_{\sigma,\sigma_i}\, \delta(\vec{r}-\vec{r}_i) \qquad (2.1)$$

The first summand stands for the contribution of all of the system's agents (N) to the
two components of the communication field. Every agent i contributes to the field with
her individual strength s_i. The second part of the term, δ_{σ,σ_i}, is the Kronecker delta,
which is used for case differentiations in sums or matrix operations. By definition, it has
the value 1 if σ = σ_i and the value 0 if σ ≠ σ_i. The effect of the Kronecker delta in term
2.1 is that a certain agent i only contributes to the component of the communication
field corresponding to her opinion; i.e. an agent with the opinion +1 only contributes to
the component for +1 and vice versa. The last part of the term is Dirac's delta, which
causes agent i to contribute to the communication field only at her location and nowhere
else. Term 2.1 as a whole describes the generation of information packets by the agents.

Decay of Information:

$$k_{\sigma}\, h_{\sigma}(\vec{r},t) \qquad (2.2)$$

The decay of information in Schweitzer's model follows the rules of a first-order
reaction, as it is proportional to the amount of information at a certain location. k_σ
denotes the relative decay rate for h_σ. Thus, the information generated in the first term
has a lifetime of 1/k_σ. As decay means a decrease of information at a certain location,
this term is subtracted in the time derivative.

Dissemination of Information:

$$D_{\sigma}\, \Delta h_{\sigma}(\vec{r},t) \qquad (2.3)$$

The third term describes the spreading of information within the spatio-temporal
communication field. Schweitzer implements the spreading by means of a diffusion
process. D_σ denotes the diffusion constant for information dissemination of the
component σ of the communication field. The symbol Δ stands for the Laplace
operator, a second-order differential operator in an n-dimensional (in our case 2-
dimensional) Euclidean space.

The Communication Module in Discrete Space and Time

For the implementation of such a model in a computer system, the continuous equation
has to be transformed into a discrete expression. In order to do so, the derivatives with
respect to space and time have to pass from infinitesimal periods in time and
infinitesimal extents in space to measurable ones. This is the reverse of the definition of
the differential calculus: The derivative of a function f at x is the slope of the
corresponding tangent line, which is obtained by taking the limit of Newton's difference
quotient as the interval approaches zero. Discrete space and time demand finite extents
in their dimensions; thus the discretization amounts to a virtual retransformation of the
tangent into a secant for all derivatives in the continuous equation (Figure 8).

[Figure 8 plots an exponential function f(x) together with its tangent at a point and the secant over a finite interval Δx.]

Figure 8: From a Continuous to a Discrete Differential Calculus

A Discrete Dirac’s Delta

The Dirac function or delta distribution in a continuous space is the derivative of the
Heaviside step function H(x) and thus defined as follows:

$$\delta(x - x_i) = \begin{cases} 0 & \text{for } x \neq x_i \\ \infty & \text{for } x = x_i \end{cases} \qquad (3)$$

The delta distribution is characterized by its integral, which by definition has the value 1.
If this condition is to hold in a discrete space, the impulse in the area of x_i cannot have
an infinite value, but depends on the width of the lattice. In a 1-dimensional discrete
space the value of the delta distribution at x_i is 1/Δx; the 2-dimensional delta distri-
bution has the value 1/(ΔxΔy) at x_i.

Discretization of the Diffusion Term

The diffusion term (the third term of equation 2) is a partial derivative with respect to
two dimensions in space. A common approximation of any f'' in a discrete space is the
three-point centered difference formula (Faires and Burden, 1994). The application of
this formula to the communication field as defined by Schweitzer results in the following
equation:

$$\frac{\Delta h}{\Delta t} = \ldots + D\,\frac{h_{i+1,j} - 2h_{i,j} + h_{i-1,j}}{(\Delta x)^2} + D\,\frac{h_{i,j+1} - 2h_{i,j} + h_{i,j-1}}{(\Delta y)^2} \qquad (4)$$

Here, Δh/Δt denotes the change of one of the two components of the communication
field in the course of time. Δx and Δy are the corresponding widths between two
discrete points n and n+1 in space. h_{i,j} stands for the value of the communication field
at the position x = i and y = j.
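Putting terms 2.1 to 2.3 together, the discrete update of one field component can be sketched as follows. This is an illustrative implementation under stated assumptions (explicit Euler time stepping, a five-point Laplacian restricted to the interior of the lattice; function and variable names are mine, not Schweitzer's):

```python
import numpy as np

def field_step(h, sources, k, D, dx=1.0, dy=1.0, dt=0.01):
    """One explicit Euler step of eq. (2) for a single component h_sigma.

    h       -- 2-D array with the current field values h[i, j]
    sources -- iterable of (i, j, s_i): lattice positions and strengths of
               the agents whose opinion matches this component (the
               Kronecker delta of term 2.1 is implicit in this selection)
    k, D    -- decay rate and diffusion constant of this component
    """
    # Term 2.3: three-point centered differences in x and y (interior
    # cells only; the treatment of the edges is a separate modeling choice).
    lap = np.zeros_like(h)
    lap[1:-1, 1:-1] = (
        (h[2:, 1:-1] - 2.0 * h[1:-1, 1:-1] + h[:-2, 1:-1]) / dx**2
        + (h[1:-1, 2:] - 2.0 * h[1:-1, 1:-1] + h[1:-1, :-2]) / dy**2
    )

    # Term 2.1: the discrete Dirac delta spreads s_i over one lattice cell,
    # hence the deposit s_i / (dx * dy) at the agent's position only.
    deposit = np.zeros_like(h)
    for i, j, s in sources:
        deposit[i, j] += s / (dx * dy)

    # Term 2.2 enters with a minus sign: decay proportional to h.
    return h + dt * (deposit - k * h + D * lap)
```

A full simulation would hold two such arrays, one per opinion component σ = ±1, and update both in every time step.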

Stability and Edge Conditions

Numerical representations of continuous phenomena tend to show unstable behavior
under certain parameter conditions. Therefore, it is indispensable to identify the range
of parameter values for which a simulation is stable. A common method to calculate
the range of stability is the von Neumann analysis. Li, Steiner, and Tang (1994) analyzed
under which conditions a 2-dimensional diffusion equation implemented with the three-
point centered difference formula is stable. They proved that the von Neumann
stability condition is satisfied for D ≤ 0.25·(ΔxΔy)/Δt. Furthermore, Schweitzer (2003)
mentions k_σ·Δt ≪ 1 as another crucial condition for a stable simulation.
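Both conditions are cheap to verify programmatically before starting a run. A small helper (the bound 0.1 is my pragmatic reading of "k_σ·Δt ≪ 1", not a value from the text):

```python
def parameters_stable(D, k, dx, dy, dt):
    """Check the two stability conditions quoted above for one component."""
    von_neumann = D <= 0.25 * dx * dy / dt   # Li, Steiner, and Tang (1994)
    slow_decay = k * dt < 0.1                # pragmatic reading of k*dt << 1
    return von_neumann and slow_decay
```

With the parameter values of Figure 11 (D = 2.5, k = 0.01, Δx = Δy = 1.0), for example, the von Neumann condition demands Δt ≤ 0.1.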

The social space, which the agents are situated in, is 2-dimensional. Thus, there are four
edges if the area is rectangular. It is necessary to define what lies beyond the borders of
the defined social space in order to calculate the information spreading process
appropriately. A possible assumption is that the area describing the social space is
centered in a quasi-infinite and empty universe – like a star or a planet. Consequently,
information flows across the borders of the defined social space and is lost to the
system. Accordingly, the laws of conservation do not hold within the scope of the
system.

Another possibility is to choose a toroidal topology for the simulation world. Such a
topology is common in the field of computer simulations: For instance, the worlds of Pac-
Man or the Game of Life are tori. As can be seen in Figure 9, an area can easily be
transformed into a torus by pasting the opposite edges together. In contrast to a sphere,
a torus can be mapped onto a plane without singularities (Torus, 2005). One advantage
of a toroidal configuration is the validity of the laws of conservation and the absence of
any "border effects", which likewise do not exist in reality.²² However, it is unsatisfying
that a toroidal configuration also means that agents on two opposite sides of the social
space influence each other heavily – especially if one equates physical and social space
(e.g. Latané, Liu, Nowak, Bonevento, and Zheng, 1995).
Due to its relative advantage over the isolated social-area-in-empty-universe assump-
tion and over other possible, but rather complicated constructs, the following simulations
are based on a toroidal world.

Figure 9: Transformation from a Plane Area to a Torus²³
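On a toroidal lattice, the discrete Laplacian of equation 4 needs no special border handling at all: wrapping the indices around, e.g. with NumPy's roll, suffices (a sketch, not the thesis implementation):

```python
import numpy as np

def laplacian_torus(h, dx=1.0, dy=1.0):
    """Three-point centered Laplacian with periodic (toroidal) boundaries.

    np.roll shifts the lattice cyclically, so the neighbors of an edge cell
    are taken from the opposite edge -- exactly the pasted-together
    topology of Figure 9.
    """
    return (
        (np.roll(h, -1, axis=0) - 2.0 * h + np.roll(h, 1, axis=0)) / dx**2
        + (np.roll(h, -1, axis=1) - 2.0 * h + np.roll(h, 1, axis=1)) / dy**2
    )
```

Because every unit of information that leaves a cell enters a neighboring one, this Laplacian sums to zero over the whole lattice, which is precisely the conservation property mentioned above.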

4.1.2.2 The Decision Module

Schweitzer’s (2003 and 2004) decision module closes the stigmergic circle of his model:
While all the agents generate and maintain a communication field by their decisions or
expressed opinions, the two components of this field in turn determine the future
decisions of these agents. In a binary decision task, every agent has two possibilities to
act: She can reverse her opinion or she can keep it, controlled by the local information on
past decisions taken by her and other agents.

²² Apart from some very isolated dictatorships.
²³ This figure is based partially on the work of Karl Bednarik (May 13, 2003), available at the URL given
under Torus, 2005. The use of Bednarik's figure is allowed under the terms and conditions of the GNU
license.

To model this, Schweitzer introduces a so-called transition rate of opinion change, w_i. As
described below, the sum of all individual transition rates determines both the time until
the next agent will change her opinion (i.e. the lifetime of the whole system) and the
likelihood of which agent will be picked to change her opinion. The transition rate w_i is
defined as follows:

$$w(-\sigma_i \mid \sigma_i) \sim \exp\!\left[-\frac{h_{\sigma_i}(\vec{r}_i,t) - h_{-\sigma_i}(\vec{r}_i,t)}{T}\right] \qquad (5)$$

The expression w(−σ_i|σ_i) is a measure of the relative likelihood that agent i changes
her opinion from σ_i to the opposite state −σ_i. The numerator in the exponent expresses
the difference between both components of the communication field. If the current
opinion of agent i is also the locally dominant component of the communication field,
the whole term has a rather low value; if the current opinion differs from the locally
dominant component, the term has a rather high value. Finally, the parameter T stands
for a 'social temperature'²⁴, which encompasses all the "erratic circumstances"
(Schweitzer, 2004, p. 308) of opinion change processes. A potential real-world interpre-
tation is that this unpredictability results from incomplete or erroneous transmission of
information. In the limit T→∞, the influence of the difference between both compo-
nents of the communication field is insignificant; the transition rates of all agents will
be approximately equal and decisions will be made arbitrarily. In the limit T→0, the
influence of this difference is dominant and stochastic components are unimportant.
The influence of the social temperature is exemplified in Figure 10, which shows an
exemplary set of ten agents with specific h_{σ_i} and h_{−σ_i}. The difference between
the two – in other respects identical – sets is just a different social temperature T.

[Figure 10 shows, for ten agents, bars of the selection probability (0% to 100%) under two social temperatures, T = 1 [units] and T = 2 [units].]

Figure 10: Influence of the Social Temperature on the Selection Probability

²⁴ Following the functional analogy with the role of temperature, e.g., in the field of magnetism.

Modeling the Lifetime of the System and the Change Probability of a Specific Agent

Schweitzer's decision module as described in his book on Brownian agents (2003) is
similar to the "Stochastic Simulation Algorithm" used to describe phenomena in
chemical kinetics (Gillespie, 1976; Kurdikar, Somvárski, Dusek, and Peppas, 1995). First of
all, a lifetime τ for the whole system (i.e. encompassing all of the agents) is calculated
according to the following equation (cf. Schweitzer, 2003):

$$\tau = \frac{1}{\sum_{i=1}^{N} w_i} \cdot f(\mathrm{RND}) \qquad (6)$$

This equation is not entirely deterministic, but includes a stochastic component
expressed by the term f(RND), where f(RND) denotes a function of a uniformly
distributed random number between 0 and 1.

When the lifetime τ has elapsed, exactly one agent is picked to change her opinion. The
probability of being selected depends on the individual w_i relative to the other agents'
transition rates. Therefore, an agent whose opinion is in equilibrium with the opinions
of her local environment has a smaller chance of changing her opinion than another
agent who is surrounded by agents with opposite opinions.
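The two steps, drawing a lifetime τ and picking the flipping agent with probability proportional to her w_i, can be sketched as follows. Choosing f(RND) = −ln(RND) is the standard choice in Gillespie's algorithm; since the text leaves f unspecified, this is an assumption of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def next_event(w):
    """Gillespie-style step for eq. (6): return the system lifetime tau and
    the index of the agent picked to change her opinion."""
    w = np.asarray(w, dtype=float)
    total = w.sum()
    tau = -np.log(rng.random()) / total        # f(RND) = -ln(RND), RND ~ U(0,1)
    agent = rng.choice(len(w), p=w / total)    # selection proportional to w_i
    return tau, agent
```

Note that large total rates shorten the system lifetime, and an agent in strong disequilibrium with her neighborhood dominates the selection probabilities.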

Both modules, the communication and the decision module, are complementary and
together they form Frank Schweitzer’s model to describe the spatial and temporal coor-
dination of decisions.

4.1.3 Simulation Results and Model Behavior

Given appropriate parameter values, the simulation leads to mesoscopic clusters of
agents sharing the same opinion, as can be seen in Figure 11. Consequently, the model
leads to the formation of groups according to the definitions of Scholz and Binder (2003)
and Antoine (2001) described in section 3.3. Depending on k and D, these clusters are
either stable for a long time or one opinion wins in the course of the simulation.
The model is able to simulate the imitational behavior of individuals but does not, as
mentioned above, consider the quality or the content of the two alternatives in any
respect. Hence, even an objectively bad alternative can become dominant in the course
of the simulation if the model configurations are advantageous (even accidentally!) for
this alternative.

[Figure 11 shows three snapshots (a), (b), and (c) of the spatial opinion distribution.]

4518 agents; 113 rows; 200 columns; T=1.2 [temperature units]; D=2.5 [length units²/time units]; k=0.01
[1/time units]; Δx=Δy=1.0 [length units]; (a) 0.011 [time units]; (b) 16.037 [time units]; (c) 22.469 [time units]

Figure 11: Typical Simulation Results of Schweitzer's (2004) Model
4.2 Modeling the Diffusion of Innovations:
An Idealistic Scenario

Schweitzer's (2004) model, as described in the previous section, neither regards the con-
tent or value of the two alternatives nor includes different types of agents. In this
section, the model will be enhanced so that it is able to reproduce the diffusion of an
innovation that is initially inferior with respect to its number of adopters. This first
enhancement does not yet evaluate the relative utility of the innovation, but is still a
model driven solely by social interaction. Accordingly, the resulting model can only
describe the fate of truly better solutions, as the superiority of the innovation is a
necessary assumption for a model in which an innovation can become predominant.

The following subsection explains how the decision module has to be altered in order to
fulfill the purpose of the enhanced model. Thereafter, the behavior of this new model
will be demonstrated and the dependence of the model on its parameter values will be
investigated.

4.2.1 Enhancement of the Decision Module

4.2.1.1 A Completely Microscopically Determined Decision Module

A first change of the decision module has to be made for both pragmatic and logical
reasons: The perspective of Schweitzer's decision module focuses on the whole system
with all its agents. The state of the whole system determines when the next change
occurs and which agent changes her opinion. A first reason to alter the decision module
is that the described procedure is very costly with respect to computation time²⁵ (the
more agents form the MAS, the more problematic the required computation time
becomes). Fast algorithms are crucial for the extensive investigation of the parameter
space described subsequently. Secondly, a claim of this thesis is to develop a model
which is compatible with the NSSI project "Technology breakthrough modeling" (chapter
2). Most MAS rely on fixed, discrete time steps, and so do the models developed in the
context of this project. In contrast to that, Schweitzer's decision module leads to time
steps Δt which change from step to step. A last reason is that the current decision
module conflicts with the paradigm of MAS: While the approach of MAS is to model from
the perspective of individuals, equation 6 is formulated from a whole-system view. Such
a macroscopic approach needlessly forgoes the explanatory gain which can be derived
from a microscopically formulated decision module.²⁶

²⁵ The generation of Figure 11 demands several hours on a Windows XP® system with a 2.00 GHz
Pentium® M processor and 1.00 GB RAM, using the implementation described in the appendix.

Kacperski and Hołyst (2000) suggest a decision module which is formulated from an
entirely microscopic perspective and fulfills the requirements of constant time steps and
thus practicable computation times. A new equation based on this suggestion, describing
the opinion change probability p_i of a certain agent i, reads as follows:

$$p_i = \frac{w(-\sigma_i \mid \sigma_i)}{w(-\sigma_i \mid \sigma_i) + w(\sigma_i \mid -\sigma_i)} \qquad (7)$$

Please note that the transition rate w_i remains defined as described in equation 5.
This approach combines two advantages: It is able to reproduce the results of
Schweitzer's unaltered model qualitatively²⁷ while modeling completely from an
individual-based perspective: Every agent has to consider at every time step whether
she wants to change her opinion. Due to these improvements, all of the following
simulations will be based on this enhanced decision module.
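Under the rates of equation 5, the forward and reverse rates differ only in the sign of the exponent, so equation 7 collapses to a logistic function of the local field difference. A sketch (the symmetric form of the reverse rate is my assumption, derived from eq. 5 with the field components exchanged):

```python
import math

def flip_probability(h_own, h_opp, T):
    """Eq. (7): per-time-step probability that an agent flips her opinion.

    h_own -- local field component matching the agent's current opinion
    h_opp -- local field component of the opposite opinion
    """
    w_flip = math.exp(-(h_own - h_opp) / T)   # w(-sigma_i | sigma_i), eq. (5)
    w_back = math.exp((h_own - h_opp) / T)    # reverse rate, by symmetry
    return w_flip / (w_flip + w_back)
```

An agent in equilibrium with her surroundings (h_own = h_opp) flips with probability 1/2, while an agent whose opinion dominates locally almost never flips.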

4.2.1.2 Differentiated Agent Characteristics

Up to now, all of the agents have been equal. They acted in an equal manner on the
components of the communication field h, and they likewise reacted identically to
changes of this field. These rules result in the formation of spatial opinion or decision
patterns (confer to Figure 11), but the course of the relative shares of both alternatives
is not illuminating for the interpretation of real-world phenomena. It is known from
research on the diffusion of innovations that differences in how agents act and react
help to explain why new ideas and products spread in a society. There are two concepts
– one with respect to the action of agents, one with respect to their reactions – which
are necessary to model diffusion processes: The former concept consists of the idea that
not all of the agents have the same impact strength, but that there is a differentiation
between 'opinion leaders' and 'followers' (see section 3.1). The latter assumes that
humans have intrinsically different threshold values

²⁶ One example of why the Stochastic Simulation approach is not adequate can be taken from the field of
car purchase. Obviously, a person reasons about what type of car she wants to purchase because she wants
to buy a new car or because the old car is broken – but not because the system is ripe for an opinion change
somewhere.
²⁷ That means, it leads to similar group formation patterns.

regarding how many neighboring adopters are necessary until one imitates them by
acquiring the innovation as well (Rogers, 2003).

As mentioned, Lazarsfeld et al. (1968) demonstrated the importance of individuals
acting as opinion leaders for the dynamics of a system. Latané's theory of social impact
and, accordingly, Schweitzer's (2004) model explicitly allow for individual strengths s_i.
However, the feature of agent-specific impact strengths has not been used so far. The
following simulations shall reflect known opinion leadership characteristics through
different values of s_i.
Rogers (2003) merges various research findings into a characterization of opinion lea-
ders. According to his metastudy, opinion leaders have a greater exposure to the media
than their followers do. Furthermore, they show a high accessibility, which means that
they are interlinked extensively in an interpersonal network. This is related to a high
degree of social participation. In general, the socioeconomic status of opinion leaders is
comparatively high. Opinion leadership need not come along with innovativeness;
rather, leadership coincides with an elevated level of conformity with the system norms
(for an explanation refer to the part "Opinion Leaders and Followers" of subsection 3.1.2).
Hence, a case-dependent division between innovative and conservative systems can be
made, as illustrated in Figure 12: In innovative systems, the opinion leaders are rather
innovative people "on the edge" (Rogers, 2003, p. 317), while in conservative systems
innovators and opinion leaders are separate persons (the innovators being beyond the
edge).

[Figure 12 sketches two networks of followers (F) around an opinion leader (O): in the innovative system, the opinion leader is at the same time an innovator (O/I); in the conservative system, the innovator (I) stands apart from the opinion leader. Legend: I: Innovator; O: Opinion Leader; F: Follower.]

Figure 12: Innovative versus Conservative Systems

The differences on the reactive or perceptive side have been studied by the sociologist
Mark Granovetter (1978), who described his concept of ‘threshold’ as “the number or
proportion of others who must make one decision before a given actor does so” (p.

1420). Granovetter perceives his concept as being in accordance with the rational actor
paradigm; this can be explained with the following example: When a riot starts, there
are only few rioting participants in the very beginning, but with every additional person
who joins the riot, further individuals can be 'recruited'. The 'radical' persons in the
beginning act as rationally as the ones who join rather late: While the former gain high
benefits from rioting and at the same time perceive the cost of punishment as low, the
latter have an inverted cost-benefit ratio. A related concept consists of the notions 'risk seeking' versus
‘risk avoiding’ as personal traits (Jungermann, Pfister, and Fischer, 1998): A risk seeker
tries an innovation even if only few of his peers have tested it, while a risk avoider waits
until almost all of his friends, neighbors, and kin confirm that the innovation is a good
one.
The concept of social thresholds is applicable to binary and mutually exclusive
alternatives. Therefore, the presented findings are transferable to our model. As
communication does not take place directly, but via a communication field, the relative
weight of one of the components of h has to be increased or decreased by a threshold
factor f_i accordingly. Consequently, the probability p_i that agent i changes her opinion
henceforth depends on f_i as well (see equation 7).
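One straightforward way to wire f_i into equation 7 is to re-weight the field component of the traditional solution before the rate comparison of equation 5. The exact placement of f_i is not spelled out at this point in the text, so the following sketch is an assumption for illustration (names are mine):

```python
import math

def flip_probability_with_threshold(h_innov, h_trad, f_i, is_adopter, T):
    """Sketch of eq. (7) with a threshold factor: f_i re-weights the field
    component of the traditional solution before the comparison of eq. (5).
    How exactly f_i enters the decision module is an assumption here.

    h_innov    -- local field component of the innovation
    h_trad     -- local field component of the traditional solution
    is_adopter -- True if the agent has already adopted the innovation
    """
    h_trad = f_i * h_trad                 # f_i > 1: tradition weighs heavier
    h_own, h_opp = (h_innov, h_trad) if is_adopter else (h_trad, h_innov)
    w_flip = math.exp(-(h_own - h_opp) / T)
    w_back = math.exp((h_own - h_opp) / T)
    return w_flip / (w_flip + w_back)
```

Under this sketch, a non-adopter with f_i = 2 needs roughly twice the local innovation signal before flipping becomes likely, while f_i = 0 removes the pull of tradition entirely.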

4.2.1.3 Introducing Diverse Groups of Agents

It is widely accepted that five types of persons can be distinguished regarding innova-
tiveness: the innovators, the early adopters, the early majority, the late majority, and the
laggards (Rogers, 2003). In the first place, agents are classified according to the period of
time in which they decide to adopt an innovation. As typical plots showing the number
of new adopters per time step are shaped similarly to a normal frequency distribution,
this type of distribution is usually used to separate the five adopter categories²⁸ (see
Figure 13): The first 2.5% of the adopters (in a chronological sense) up to the agent who
corresponds to twice the standard deviation (2σ) prior to the mean (μ) are named
'innovators'. The following 13.5% up to μ−σ are defined as the 'early adopters'. The 'early
majority' is considered to be the 34% up to the peak of the normal frequency distribution
at μ. The 'late majority' encompasses all the remaining agents up to μ+σ. The group of
the 'laggards' consequently includes the rest.

²⁸ Typical plots of the total number of adopters versus time are considered sigmoid (Tarde, 1890/2001;
Rogers, 2003; Valente, 2005). As the first derivative of the sigmoid function is equal to the probability
density function of the logistic distribution, a logistic distribution would be more appropriate. Nevertheless,
as the normal distribution and the logistic distribution are similarly shaped and as the normal distribution is
common in this context, this study will rely completely on the classification based on the normal
distribution.

[Figure 13 shows the number of new adopters over time as a normal frequency distribution, partitioned at μ−2σ, μ−σ, μ, and μ+σ into Innovators (2.5%), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%), and Laggards (16%).]

Figure 13: Separation of Five Adopter Groups; after Rogers, 2003
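In a simulation, an agent can be assigned to one of the five categories from the cumulative fraction of adopters at the moment she adopts, using the percentage cut-offs described above (a small helper; the function name is mine):

```python
def adopter_category(fraction):
    """Map the cumulative adoption fraction at the moment of adoption to
    one of Rogers' (2003) five categories (cut-offs: 2.5/16/50/84%)."""
    if fraction <= 0.025:
        return "innovator"
    if fraction <= 0.16:       # 2.5% + 13.5%
        return "early adopter"
    if fraction <= 0.50:       # + 34%
        return "early majority"
    if fraction <= 0.84:       # + 34%
        return "late majority"
    return "laggard"           # the remaining 16%
```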

However, as suggested previously, agents do not only differ in the point of time at
which they decide to adopt an innovation, but also with respect to personal traits. There
is evidence that these differing traits coincide with the relative point of time at which
they adopt.²⁹ Rogers (2003) characterized the five adopter categories in his well-known
book "Diffusion of Innovations". These characterizations will be summarized verbally in
the following. Furthermore, suggestions will be made as to how these verbal characteri-
zations could be operationalized in terms of parameter values for the current model.

Innovators
Innovators are said to be "venturesome" and "cosmopolite" (Rogers, 2003, pp. 282). They
are rather well educated, wealthy, and both willing and able to bear significant risks.
Very often, innovators are scarcely integrated into local networks, due both to a lack of
interest (cosmopoliteness!) and to opinions that often deviate from the local norms. The
connections of these innovators with the local network are characterized by social
heterogeneity. Nevertheless, they serve as the cranking motor of the system, because
they launch new ideas.
It makes sense to assume that innovators are already adopters at the start of a simu-
lation run, as the model needs a seed in order to start. Since innovators are outsiders of
the social system, their impact strength s_innovator should be minimal. In an idealized
scenario, the innovators are completely uninterested in the opinions of the other system
members (they "do not care what people say"). Therefore, the component of h standing
for the traditional solution should be set to zero within their decision module by
multiplying it with a threshold factor f_innovator = 0.

²⁹ Refer to the previous sections for explanation.

Early Adopters
According to Rogers (2003), early adopters have the highest degree of opinion leader-
ship of all adopter categories. With respect to local integration, they are completely
opposite characters compared with innovators: Early adopters are highly connected in a
tight-woven network of many surrounding individuals. The early adopter is respected by
her peers and thus asked for advice by other agents. By adopting it, she puts her “stamp
of approval” (Rogers, p. 283) on an innovation. It is mentioned above that the opinion
leaders’ attitude towards a certain kind of innovation determines whether the system is
an innovative or a conservative one. If the average member of this category is willing to
bear the risk of failure, the early adopters will imitate the innovators. If they are not
willing to do so, the innovation is likely to fail and to vanish in the course of time. In the
beginning, the early adopters are non-adopters like all other groups except the innova-
tors. Their impact strength s_early_adopter should be the highest in the entire system. Their
threshold factor depends on whether or not the innovation is in accordance with the
local norms. If the innovation contradicts tradition strongly, f_early_adopter should be
considerably above 1. In an innovative system, the rule 0 < f_early_adopter << 1 applies.

Early Majority
Rogers (2003) characterizes members of the early majority as “deliberate” (p. 283). They
are seldom opinion leaders, but are quite well connected in their local network. They
wait until the first adopters can report on their experiences with the innovation, but are
still willing to bear a manageable risk.
The impact of an early-majority member communicating her opinion can be considered
'normal' or 'average': They have considerably more influence than innovators, as they
are fully respected members of their society, but they do not possess the persuasiveness
of opinion leaders. Thus, s_early_majority has to be defined in the range between s_innovator
and s_early_adopter. In an innovative system, the threshold factor f_early_majority has to be
larger than the corresponding factor of the early adopters.

Late Majority
Members of the late majority adopt new ideas, ideologies, or products clearly after at
least half of their related peers have done so. The late majority can be described as
rather skeptical. The main drivers of adoption are peer pressure and economic reasons
(e.g. if the traditional product is no longer supported adequately by the producer). Often
due to scarce monetary resources, the late majority is rather on the risk-avoider side.
For that reason, the threshold factor f_late_majority is both larger than f_early_majority and
larger than 1. However, there is neither a theoretical nor an empirical hint which would
justify defining s_late_majority smaller or larger than s_early_majority. Keeping the parsimony
principle in mind, the impact strength shall be defined as s_late_majority = s_early_majority.

Laggards
The laggards form the most conservative group of all agents. Rogers (2003) claims that
the "point of reference for the laggard is the past" (p. 284). Due to their character, they
are neither influential nor highly connected. Compared to an innovator, a laggard is an
outsider from the opposite end of the agent spectrum. Therefore, their impact strength
is minimal and defined as s_laggards = s_innovators. Their threshold factor f_laggards is
clearly above 1 and has the highest value of all agent groups.
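The qualitative rules of this subsection can be collected in one parameter table. The orderings (s_innovator = s_laggard minimal; s_early_adopter maximal; s_early_majority = s_late_majority in between; f rising from innovators to laggards in an innovative system) come from the text; the concrete numbers below are illustrative assumptions only:

```python
# (impact strength s_i, threshold factor f_i) per group, innovative system.
# The numerical values are illustrative assumptions, not values from the text.
AGENT_GROUPS = {
    "innovator":      (0.1, 0.0),  # outsider; ignores the traditional field
    "early adopter":  (1.5, 0.2),  # opinion leader; 0 < f << 1
    "early majority": (1.0, 1.2),  # average strength; f above early adopters'
    "late majority":  (1.0, 1.6),  # s equal to the early majority; f > 1
    "laggard":        (0.1, 2.5),  # outsider; highest threshold of all
}
```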

4.2.2 Behavior of the Enhanced Model

Simulations with parameter sets intuitively chosen according to the rules defined above
clearly show that the enhanced model is capable of reproducing macroscopic properties
known from diffusion research: Plots of the cumulative number of adoptions are S-
shaped, as demanded by Tarde (1890/2001), Rogers (2003), and other researchers.
Further, the model still provides spatially resolved opinion formation processes like the
original model of Schweitzer (2004).

In order to gain further insight into the interplay of the various parameter values and to
check whether the positive results based on an intuitive setting were “flukes”, a
reasonable part of the parameter space must be simulated and evaluated. In the following,
two things will be done: First, the mutual accordance between the theoretical
considerations about agent characteristics and the related model behavior will be examined.
Second, the dependence of the system dynamics on the information decay rate k*
and the information diffusion coefficient D* will be investigated.

The following part derives indicator values for the quality of a simulation outcome. Such
indicator values are necessary if a large parameter space has to be evaluated in a
partially automated procedure. Thereafter, the results of the investigation of the agent-
related parameters (strength si and threshold factor fi) are presented. Finally, it is shown
and interpreted how k* and D* influence the behavior of the model.

4.2.2.1 Evaluating and Describing Simulation Outcomes

As mentioned earlier, an idealized adoption curve has a sigmoidal shape. For this reason,
it seems reasonable to take a sigmoid function as a benchmark for the quality of the
model under a distinct parameter setting. If the generated data fits this sigmoid
function well, the model is in good accordance with the theory (which has been
validated empirically many times). However, if there are large residuals, the corresponding
model is a bad representation of the theory.

The sigmoid function is a special case of the logistic function and is defined as follows:

f(t) = f_0 + \frac{1}{1 + e^{-wt}} \qquad (8)

Where f(t) stands for the total number of adopters at a given time step t, f0 is the
intercept of the ordinate or the initial number of adopters (innovators), and w denotes a
rate parameter which scales the curve along the time axis. A sigmoid function with f0=0
and w=1, together with its first derivative, is shown in Figure 14.
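For illustration, Equation 8 and its first derivative can be written down directly. The following small Python sketch is mine, not part of the thesis model; the function names are illustrative:

```python
import math

def sigmoid(t, f0=0.0, w=1.0):
    """Equation 8: total number of adopters at time t."""
    return f0 + 1.0 / (1.0 + math.exp(-w * t))

def adoption_rate(t, w=1.0):
    """First derivative of the sigmoid: the adoption rate,
    which peaks at the inflection point t = 0."""
    s = 1.0 / (1.0 + math.exp(-w * t))
    return w * s * (1.0 - s)

print(sigmoid(0.0))        # 0.5 -- half of the (normalized) population
print(adoption_rate(0.0))  # 0.25 -- maximal adoption rate for w = 1
```

At the inflection point half of the (normalized) population has adopted, and the adoption rate is at its maximum.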

[Figure: the standardized number of adopters f(t) and its first derivative f′(t) plotted against time, for f0 = 0 and w = 1]

Figure 14: A Sigmoid Function and its First Derivative

Deviation between Theory and Data
As stated, a measure of the deviation between the theoretical curve and the simulated
data is the first characteristic value to be developed. Therefore, parameter values are
calculated for a sigmoid function that fits the simulated data optimally. Optimal
corresponds, in this context, to the minimal sum of squares of the residuals. To do so,
Ledvij (2003) suggests applying the Levenberg-Marquardt algorithm [LMA], as it is one
of the most robust optimization algorithms available for nonlinear functions.30 The
functional principle of the LMA is to interpolate between the Gauss-Newton algorithm
and the gradient descent method, which are both iterative numerical procedures. Since
the LMA relies on these two algorithms, an (automatically generated) initial guess for
the parameter vector p is needed (Levenberg-Marquardt algorithm, 2005): In this case, p
consists of the rate parameter w, the inflection point of the sigmoid function t0, as well
as the upper and lower asymptotic values f0 and f∞. The sum of squares of the remaining
differences between the optimal sigmoid curve and the generated data serves as an
indicator of the simulation quality. Different settings need different periods until all
agents have been converted. To make the sums of squares comparable, the remaining
value has to be weighted by the duration of the diffusion process Tw (see the next
characteristic value).
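The computation of this indicator can be sketched as follows. The LM fitting itself is delegated to the NOLF1 routine mentioned in footnote 30; the Python function below only illustrates, with hypothetical names, how W-LS is obtained once a fitted parameter vector is available:

```python
import math

def w_ls(data, f_lower, f_upper, t0, w):
    """Weighted sum of squares between simulated adoption counts and a
    fitted sigmoid with lower/upper asymptotes, inflection point t0, and
    rate parameter w.  The raw sum of squares is divided by the duration
    T_w = 7 / w so that settings of different length stay comparable."""
    def sigmoid(t):
        return f_lower + (f_upper - f_lower) / (1.0 + math.exp(-w * (t - t0)))
    ss = sum((y - sigmoid(t)) ** 2 for t, y in enumerate(data, start=1))
    t_w = 7.0 / w  # the standard sigmoid (w = 1) "lasts" about 7 time units
    return ss / t_w
```

A simulation series that happens to lie exactly on the fitted sigmoid yields a W-LS of zero; larger residuals raise the indicator.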

Duration of the Diffusion Process

The rate parameter w scales the sigmoid curve along the time axis. Therefore, the
optimized parameter value w is a measure of the duration of the diffusion process. The
standard sigmoid function with a rate parameter w=1 “lasts” from approximately
tstart = –3.5 [time units] to tend = +3.5 [time units].31 The duration of the process, Tw(w),
can be calculated for every simulation result from Tw(1) = 7 and the optimal parameter
value w determined by the LMA: Tw(w) = Tw(1)/w = 7/w.

Number of Converted Agents

A last characteristic value is the total number of agents that become adopters during
one simulation run. Our model encompasses stochastic components, which allows for
different outcomes of simulation runs with the same starting conditions. Considering
this, two different values concerning the number of converted agents are of interest:
first, the share of simulation runs in which all agents became adopters, and second, the
average number of converted agents.
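Both values can be computed mechanically from the final adopter counts of repeated runs, e.g. (an illustrative sketch with hypothetical names):

```python
def adoption_statistics(final_counts, n_agents=1000):
    """Return (share of runs reaching full adoption, mean number of
    converted agents) over repeated runs that start from identical
    conditions but differ due to the model's stochastic components."""
    runs = len(final_counts)
    full_share = sum(1 for c in final_counts if c == n_agents) / runs
    mean_adopters = sum(final_counts) / runs
    return full_share, mean_adopters

print(adoption_statistics([1000, 1000, 998, 1000, 995]))  # (0.6, 998.6)
```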

30 In the subsequent calculations, the NOLF1 subroutine of FUJITSU’s “Scientific Subroutine Library II“ is used
to implement the Levenberg-Marquardt algorithm. For details, refer to FUJITSU LIMITED, 1989.
31 As the sigmoid function exhibits asymptotic behavior for t → ±∞, the two endpoints have to be chosen
arbitrarily. However, considering Figure 14, the values ±3.5 seem to be a reasonable choice.

4.2.2.2 Scanning the Parameter Space: Agent Characteristics

There are many optimization techniques which make it possible to identify a good, a
locally optimal, or even the best solution for a given parameter space and evaluation
task. It would be possible, for instance by using an evolutionary algorithm, to identify
initial conditions that lead to amazingly good results in terms of fit between simulation
data and the sigmoid evaluation function. However, as mentioned in section 3.3, it is
not reasonable just to identify the parameter combination which fits the model optimally
to a given data set: It is possible to fit inappropriate models to a given data set if the
model has enough free parameters (degrees of freedom). Therefore, the definition of
initial conditions has to be reconnected to theory and must be defendable.
Nonetheless, a systematic screening of the parameter space – which is similar to the
exhaustive search method32 used for optimizations – can be insightful in order to
describe the model behavior and to evaluate the quality and appropriateness of a
model. In this study, systematic screening means evaluating the parameter space
systematically following a discrete and regular grid, which reduces the infinite number
of possible states for every parameter to a small one.

First, fi/si combinations within sensible ranges for both sets of parameters are
investigated. The investigated parameter space is given in Table 1, where the rule
fearly_adopter ≤ fearly_majority ≤ flate_majority ≤ flaggard has additionally been applied. The step sizes
Δf of the used grid are given in square brackets.

sinnovator = 1 finnovator = 0
searly_adopter = 6 fearly_adopter = 0.2 – 0.5 [0.05]
searly_majority = 2 fearly_majority = 0.2 – 0.8 [0.1]
slate_majority = 2 flate_majority = 0.2 – 1.6 [0.2]
slaggard = 1 flaggard = 0.2 – 1.8 [0.2]
sinnovator = 1 finnovator = 0
searly_adopter = 7 fearly_adopter = 0.2 – 0.5 [0.05]
searly_majority = 2 fearly_majority = 0.2 – 0.9 [0.1]
slate_majority = 2 flate_majority = 0.2 – 1.6 [0.2]
slaggard = 1 flaggard = 0.2 – 1.8 [0.2]
sinnovator = 1 finnovator = 0
searly_adopter = 6 fearly_adopter = 0.2 – 0.5 [0.05]
searly_majority = 3 fearly_majority = 0.2 – 0.8 [0.1]
slate_majority = 3 flate_majority = 0.2 – 1.6 [0.2]
slaggard = 1 flaggard = 0.2 – 1.8 [0.2]
Table 1: Parameter Space of the Agent Characteristics
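The admissible grid points of such a block can be enumerated mechanically. The following sketch (hypothetical helper names, not part of the thesis code) generates the threshold combinations of the first block in Table 1 and applies the ordering rule f2 ≤ f3 ≤ f4 ≤ f5:

```python
import itertools

def frange(start, stop, step):
    """Inclusive range of floats on a regular grid (rounded to avoid drift)."""
    n = round((stop - start) / step)
    return [round(start + i * step, 2) for i in range(n + 1)]

# Threshold grids of the first block in Table 1 (innovators are fixed at f = 0).
f_grid = [
    frange(0.20, 0.50, 0.05),  # early adopters
    frange(0.2, 0.8, 0.1),     # early majority
    frange(0.2, 1.6, 0.2),     # late majority
    frange(0.2, 1.8, 0.2),     # laggards
]

# Keep only combinations obeying the monotonicity rule.
combos = [c for c in itertools.product(*f_grid)
          if c[0] <= c[1] <= c[2] <= c[3]]
print(len(combos))  # number of admissible combinations in this block
```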

32 Exhaustive search means evaluating every possible solution in a given search space. This method surely
identifies the global optimum, but is usually not feasible for real-world problems. For example, a Traveling
Salesman Problem with 50 cities generates 10^62 different possible tours (Michalewicz & Fogel, 2000). In our
case, the search space is defined by continuous variables, which means that the number of possible solu-
tions is infinite.

The given parameter space is a compromise between the wish to cover as many cases
as possible following the rules defined on pp. 51 and the necessity to limit the computation
time. Every simulation run has been calculated with a randomly distributed set of 1,000
agents on a 100 x 100 lattice. The innovators appear after 7 time steps. The diffusion
constant D is set to 2.6, while the decay rate k is defined as 0.15. The social temperature,
T, is set to 0.5 to achieve a low stochasticity. As the model includes stochastic components,
the results of every parameter set have to be calculated repeatedly. To achieve a
minimum of statistical stability, each combination was simulated five times. The termination
criteria for a simulation run are: (1) more than 400 time steps, or (2) all agents are
adopters. This set-up leads to 14,800 single results, which demand approx. three weeks
of computation time on a computer system as described in footnote 26, p. 47.
With respect to the weighted sum of squares, the best 1% of all evaluated combinations
are shown in Table 2. The indices 2, 3, 4, and 5 stand for early adopters, early majority,
late majority, and laggards. W-LS denotes the mean weighted sum of squares of the
remaining residuals, while σ-LS is the related standard deviation.

s2 / s3,4 f2 f3 f4 f5 W-LS σ-LS


2/7 0.30 0.6 1.4 1.4 161.9 36.3
2/7 0.30 0.7 1.2 1.2 163.2 78.2
2/7 0.20 0.8 0.8 0.8 174.2 39.1
2/7 0.40 0.8 1.2 1.6 175.5 96.2
2/7 0.25 0.8 1.0 1.0 177.2 71.9
2/7 0.25 0.7 0.8 0.8 177.5 61.4
2/7 0.25 0.6 1.2 1.2 183.6 42.7
2/7 0.30 0.7 1.2 1.6 183.9 86.2
2/7 0.25 0.8 1.0 1.4 186.9 96.2
2/7 0.25 0.7 1.0 1.2 187.8 80.3
2/7 0.25 0.7 1.0 1.6 191.6 71.1
2/7 0.35 0.7 1.6 1.8 194.6 88.3
2/7 0.30 0.9 1.2 1.2 195.5 43.5
2/7 0.30 0.7 1.0 1.8 199.7 85.6
2/6 0.35 0.7 1.6 1.6 199.8 103.5
2/6 0.25 0.7 1.0 1.0 202.9 66.5
2/7 0.35 0.8 1.4 1.6 204.0 91.5
2/7 0.20 0.7 0.8 0.8 204.5 56.5
2/7 0.25 0.8 0.8 0.8 204.9 81.0
2/7 0.25 0.9 1.0 1.8 210.0 64.9
2/7 0.30 0.6 1.4 1.6 210.3 59.6
2/6 0.35 0.8 1.6 1.8 210.4 103.9
2/7 0.25 0.6 1.0 1.4 210.7 64.6
2/7 0.25 0.7 1.0 1.0 211.3 38.9
2/7 0.25 0.8 0.8 1.6 213.1 90.4
2/7 0.20 0.6 1.0 1.0 213.3 58.0
2/7 0.35 0.8 1.2 1.8 214.0 97.1
2/7 0.35 0.8 1.2 1.8 216.0 64.9
2/7 0.25 0.7 0.8 1.2 218.4 90.9
Table 2: Optimal Parameter Combinations

The small W-LS differences between the diverse combinations, combined with
comparatively high standard deviations, indicate that this ranking is not stable. In
particular, five samples per parameter combination are far too few to generate a
statistically stable ranking of parameter combinations with significant differences.

Figure 15 shows the plots of the ‘best’ and the ‘worst’ simulation run of the first four
parameter combinations given in Table 2. The red line represents the sigmoid function
which fits the data of the simulation run optimally, while the blue line corresponds to
the simulated course of adoption. As can be seen, the residuals between the simulated
and the theoretical curve are rather small. The most important deviations are at the
beginning and the end of the diffusion process: The simulation tends to start jerkily
and to converge too quickly to full adoption.

A statistical analysis of the data created by the systematic screening of the parameter
space makes it possible to analyze the importance of the single parameters for the
simulation results. The relevant results of a univariate analysis of variance are shown for
W-LS (Table 3) and Tw (Table 4) as dependent variables. N is 14,370, as all runs in which
not all agents become adopters are excluded from the analysis.

df F Sig. Partial η²

Corrected Model 6 475.072 0.000 0.166
Intercept 1 68.512 0.000 0.005
searly_majority, late_majority 1 78.007 0.000 0.005
searly_adopter 1 2.543 0.111 0.000
fearly_adopter 1 1382.333 0.000 0.088
fearly_majority 1 2073.887 0.000 0.126
flate_majority 1 66.411 0.000 0.005
flaggard 1 0.357 0.550 0.000
Table 3: Significance and Magnitude of Effect; W-LS as Dependent Variable
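The reported partial η² values can be recovered from the F values and the degrees of freedom via partial η² = (F · df_effect) / (F · df_effect + df_error). Assuming df_error = N − 7 = 14,363 (N = 14,370 runs minus the seven estimated model terms, an inference from the table itself), three entries of Table 3 can be reproduced:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an ANOVA F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

df_error = 14370 - 7  # N minus the estimated model terms (assumed)

for f in (78.007, 1382.333, 2073.887):
    print(round(partial_eta_squared(f, 1, df_error), 3))
# -> 0.005, 0.088, 0.126
```

The same formula also reproduces the partial η² column of Table 4.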

As can be seen in Table 3, all parameters apart from flaggard and searly_adopter contribute
significantly to the resulting value of W-LS. The variance of W-LS explained by this linear
model is rather low, with an adjusted R² value of 0.165. Only two parameters,
fearly_adopter and fearly_majority, contribute to a considerable degree to the quality W-LS,
whereby the magnitude of effect (partial η²) of the latter is approximately 1.43 times
that of the former. Within the investigated range of parameter values, other
influences not included in this statistical analysis seem to be dominant with respect to
W-LS.

[Figure: eight panels of cumulative adoption curves (number of adopters versus time), simulated data overlaid with the fitted sigmoid function]

Left-hand side: best fit simulation runs; right-hand side: worst fit simulation runs; from top down: (a) f2=0.3; f3=0.6;
f4=1.4; f5=1.4; (b) f2=0.3; f3=0.7; f4=1.2; f5=1.2; (c) f2=0.2; f3=0.8; f4=0.8; f5=0.8; (d) f2=0.4; f3=0.8; f4=1.2; f5=1.6. s2=7
and s3,4=2 for all. The indices stand for the following: 2=early adopter; 3=early majority; 4=late majority; 5=laggard.
The marked regions are discussed in the text.

Figure 15: Best Fit and Worst Fit Plots of Different Parameter Combinations

df F Sig. Partial η²
Corrected Model 6 5501.342 0.000 0.697
Intercept 1 1901.429 0.000 0.117
searly_majority, late_majority 1 3935.803 0.000 0.215
searly_adopter 1 0.169 0.681 0.000
fearly_adopter 1 12730.280 0.000 0.470
fearly_majority 1 2578.534 0.000 0.152
flate_majority 1 2676.632 0.000 0.157
flaggard 1 38.492 0.000 0.003
Table 4: Significance and Magnitude of Effect; Tw as Dependent Variable

The statistical model explaining the duration of a diffusion process Tw explains more of
the variance than the previous model for W-LS, as the adjusted R² is 0.697. All
parameters apart from searly_adopter have a significant interrelation with Tw. The strongest
effect on the duration Tw is shown by fearly_adopter, with a partial η² of 0.470. In the middle
range are searly_majority, late_majority, fearly_majority, and flate_majority, with partial η² values of 0.215,
0.152, and 0.157, respectively. Negligible with respect to the magnitude of effect are
searly_adopter and flaggard.

4.2.2.3 Brief Discussion of the Results

Some insights can be gathered from the results presented above. From Table 2, one can
derive the following propositions:

1. As so many different parameter settings can reproduce a sigmoid-like curve, the
developed model seems to be quite robust.
2. The best of the three investigated strength combinations is searly_adopter = 7 with
searly_majority = slate_majority = 2. Thus, the difference between opinion leaders and ‘normal’
agents has to be rather high to produce good results in the framework of this model.
Furthermore, it does not seem advantageous at all if the strength of the early and
the late majority is above the value 2.
3. Good results seem to require fearly_adopter values in the lower middle of the
investigated range. Neither overly sensitive nor overly conservative early adopters
lead to good accordance between theory and simulations.
4. There has to be a considerable distance at least between fearly_adopter and fearly_majority.
Parameter settings with overall low or overall high fi values do not reproduce the
sigmoid curve adequately.

The plots presented in Figure 15 support the second proposition insofar as rather low
fearly_adopter values lead to a jerky start of the adoption curve. Very high fearly_adopter values
(conservative opinion leaders) result in a delayed start, and the process often does not
start at all (please refer to the data in the appendix). Among the good-fit combinations
shown in Figure 15, the smoothest start is exhibited by option (d) with fearly_adopter = 0.4;
fearly_majority = 0.8; flate_majority = 1.2; flaggard = 1.6 and searly_adopter = 7 while searly_majority, late_majority
is 2. As this combination is the only one of the best solutions with such a smooth start,
it is chosen as the standard parameter set for the following simulations. Furthermore, it
is reasonable not to assume a too low fearly_adopter, as even in highly innovative systems an
extreme willingness to take risks is not a realistic assumption. Finally, aesthetics and
symmetry are principles often found in nature. The chosen parameter combination is
axially symmetric to f = 1 and all f-values are equidistant. Therefore, both principles are
fulfilled: aesthetics and the parsimony principle.

Interpreting Table 3 and Table 4 against the background of the other results leads to
further propositions:

5. There are not enough different values for the two strength parameters to come to
robust statements on the importance and the mode of functioning of searly_adopter and
searly_majority = slate_majority. Nevertheless, the observations of proposition no. 2 hold.
6. The opinion leaders or early adopters mainly drive the system. The importance of the
different agent groups decreases chronologically, i.e. early adopters are more
influential on the dynamics of the system than the early majority, and so on. Thus,
the early adopters are the “organ grinders” while the following groups are the
system’s “monkeys”.
7. The fit between the sigmoid function and the simulated data is strongly determined
by factors which have not been investigated. It is reasonable to assume that one
important factor is the arrangement of the agents in the social space.

Proposition no. 6 will be investigated further in subsection 4.2.3.

4.2.2.4 Scanning the Parameter Space:
Influence of k and D on the Model Behavior

Not only the various agent characteristics determine the behavior of the model, but also
the global parameters k* and D*. Together they constitute a 2-dimensional parameter
space. This space has been investigated in the range of 1.8 ≤ D* ≤ 4.0 [length units²/time
units] (in steps of 0.2) and 0.03 ≤ k* ≤ 0.27 [1/time units] (in steps of 0.02), using the
standard parameter set defined for the agent characteristics. To achieve a minimal
statistical stability, every combination of k* and D* has been simulated ten times. The
numerical results are given in the appendix. Figures 16 – 19 show graphical
representations of the simulation results. In the following subsection 4.2.2.5, these
findings are merged into one aggregated Figure 20.

[Figure: 3-D surface plot of the mean W-LS value over the diffusion constant D (1.8 – 4.0) and the decay rate k (0.11 – 0.27)]
Figure 16: Mean W-LS versus D and k

Figure 16 shows the mean W-LS value as a function of the diffusion constant D* and the
decay rate k*. The range of the decay rate in this figure is 0.11 ≤ k* ≤ 0.27 [1/time units],
because 0.11 [1/time units] is the smallest tested value of k* for which almost all
simulation runs achieve full adoption. As can be seen, the squared residuals between
the sigmoid curve and the simulated data are small for high values of k* and rather low
values of D*, while the opposite configuration leads to high mean W-LS values.

Not only the W-LS value indicates the quality of a model configuration, but also its
variation. Very often, a high variance accompanies a high mean value. Hence, a better
measure to express the robustness of a model configuration is its coefficient of
variation, which is the relative standard deviation σ/μ. Figure 17 illustrates this relative
standard deviation of W-LS as a function of the diffusion constant D* and the decay rate
k*. The figure reveals no law as clear and unambiguous as Figure 16 does. However,
there seems to be a trend that higher decay rates (> 0.21 [1/time units]) lead to more
robust results in terms of relative variance.

62
H<FN

H<FM

H<FL

]%.-R #-5% B
H<FE
EFHWEIH
EHHWEFH
H<EO KHWEHH
JHWKH
IHWJH
H<EN
FHWIH

H<EM

H<EL

H<EE
E<K F F<F F<I F<J F<K L L<F L<I L<J L<K I

],@@?6,1& ^1&65-&5 ]
Figure 17: Relative Standard Deviation of W-LS versus D and k

Figure 18 shows the mean duration of the diffusion process Tw [time units] as a
function of k* and D*. The shape of the shown parameter area is quite regular. Its
lowest mean Tw value is at kmax and Dmax; the highest value is at the opposite corner. The
mean Tw seems to increase more by a decrease of k* than by a decrease of D*. As in the
case of W-LS, higher decay rates seem to coincide with lower relative standard
deviations (see Figure 19).

[Figure: 3-D surface plot of the mean duration of diffusion Tw over the diffusion constant D and the decay rate k]
Figure 18: Mean Duration Tw versus D and k

[Figure: contour plot of the relative standard deviation of Tw over the diffusion constant D (1.8 – 4.0) and the decay rate k (0.11 – 0.27)]
Figure 19: Relative Standard Deviation of Tw versus D and k

As indicated previously, decay rates k* below 0.11 [1/time units] do not always achieve
full adoption. For k* = 0.03, 0.05, and 0.07 [1/time units], the adoption process did not
start in any of the ten simulation runs; the total number of adopters after 400 time
steps always remained below 60 of 1,000 agents in total. In most of the cases, the
number of adopters remained at its initial level. A transition takes place at k* = 0.09
[1/time units] for the investigated configuration: For D* below 2.8 [length units²/time
units], quite high mean shares of adoption and high numbers of full-adoption
simulation runs occur, while D* > 2.8 [length units²/time units] shows a behavior similar
to that of k* < 0.09 [1/time units], i.e. almost no full-adoption simulation runs and low
mean shares of adoption. Values of k* > 0.09 [1/time units] lead to full adoption in
almost every simulation run (details can be found in the appendix).
Another phase transition takes place close to k* = 0.27 [1/time units]. For this parameter
value, only few simulation runs achieve the full adoption criterion. However, the mean
shares of adoption are quite high – on average, only one agent is missing to fulfill the
full adoption criterion (details can be found in the appendix).

4.2.2.5 Brief Discussion of the Results

The results presented above indicate that D* predominantly acts as an adjusting
screw for the duration of the diffusion process Tw. Within the investigated range, the
variation of W-LS caused by a change ΔD* is small compared to the variation caused by
a change Δk*. Within the investigated range of D*, there are no discrete transitions of
the behavior of the system with respect to W-LS and Tw. The relative standard deviations
of both indicator variables are also more or less constant along the D*-axis.
In contrast, there are two phase transitions along the investigated range of the k*-axis:
(1) below 0.11 [1/time units], the adoption process does not reliably start, and (2) above
0.25 [1/time units], the system does not reach every agent in every simulation run. k* is
determinant for all presented indicator variables: the mean W-LS value and its relative
standard deviation as well as the mean Tw and its relative standard deviation decrease
with an increasing k* value.

[Figure: schematic map of the (k*, D*) plane, decay rate k* [1/time units] versus diffusion constant D* from 1.8 to 4.0 [length units²/time units]. Regions from low to high k*: “Adoption Process Does Not Start”, “Intermediate State”, “100 % Adoption”, and, near k* = 0.27, “Almost 100% Adoption: Information Does Not Reach Every Agent”. Arrows indicate that W-LS, the relative standard deviation, and Tw increase toward lower k*.]

Figure 20: Schematic Overview of the Model’s Dependence on k and D

The aggregated scheme depicted in Figure 20 includes all the relevant relationships and
processes presented above. Some conclusions can be drawn from the findings about
the dependence of W-LS and Tw on k* and D*:

8. If the past is too dominant in a social system, innovations have no chance to rise.
Systems with a rather low k*-value (≤ 0.09) are strongly driven by past experiences.
Thus, conservative systems, which tend to preserve traditions, are modeled by a
rather low k*. This finding is in accordance with Schweitzer’s concept of a social
memory (see p. 38) and Weidlich’s idea of a social field encompassing both day-by-
day interaction and culture (see p. 17).
9. If k* has a rather high value, the social net becomes wide-meshed. Some more or less
isolated agents do not receive enough information to change their opinion. The
importance of distant agents decreases.
10. An optimal range of k* in terms of robustness and controllability is around k* = 0.20:
There, k* is no longer in the range with high sums of squares (see Figure 16), while
D* still has a high slope, allowing good controllability of the speed of the diffusion
process (Figure 18).
11. In conclusion, k* is a proxy for the tradition-orientedness of the simulated society,
while D* is a measure of the velocity with which information is communicated
through the social space.

4.2.3 Simplifying the Model

One question arising from the findings above is whether five different agent groups are
needed to reproduce the macroscopic characteristic of a sigmoid adoption curve. We
found that opinion leaders (who are identical with the early adopters in our model)
are the pacemakers of the diffusion process. Therefore, a reasonable simplification is to
merge the five agent groups into three: innovators introducing an innovation, opinion
leaders as first adopters and opinion makers, and the rest as followers. To evaluate this
simplified model, the corresponding parameter space within the following range has
been investigated: fearly_adopter = 0.15 to 0.5 in steps of 0.05; ffollower = 0.2 to 1.2 in steps of
0.1; searly_adopter = 7 and sfollower = 2; the share of the followers equals the cumulative share
of the former early majority, late majority, and laggards.

[Figure: scatter plot of the standard deviation σ of W-LS versus the ranking (100 = best fit, 0 = worst fit) for the standard model and the simplified model]
Figure 21: Comparison Between the Standard and the Simplified Model

Figure 21 shows a scatter plot in which a ranking order of the various parameter
combinations is plotted versus the standard deviation σ of the corresponding W-LS
value. The ranking order is determined by the weighted squared sums of the residuals
W-LS, where the combination with the lowest mean W-LS has a ranking value of 100.
The red triangles stand for parameter combinations tested in the simplified model,
while the blue rhombi represent parameter combinations of the standard model.

Within the best 10% of the total set of all parameter combinations (consisting of the
standard model and the simplified model), there are only four combinations from the
simplified model. This corresponds to approximately 2.8% of all solutions from the
simplified model. In the best quarter of all combinations, there are 26 solutions
(approximately 18.3%) from the simplified model. Consequently, the simplified model
combinations are underrepresented in the upper part of the whole set of combinations.
With respect to the standard deviation, the simplified model seems to behave like the
standard model.

fearly_adopter ffollower W-LS σ-LS
0.25 1.1 296.0 82.1
0.25 0.9 297.5 127.3
0.15 0.8 310.5 113.9
0.40 1.1 311.2 139.7
0.15 1.0 322.1 139.8
0.45 0.9 329.5 186.2
0.30 1.0 336.6 86.8
0.15 1.1 338.7 157.3
0.45 1.2 339.5 126.9
0.15 0.9 240.0 112.6
Table 5: Optimal Parameter Values for the Simplified Model

The ten best parameter combinations for the simplified model are given in Table 5. As
in Table 2, W-LS stands for the mean value of the sum of squares of a specific parameter
combination and σ-LS for the corresponding standard deviation. Every combination has
been simulated nine times. 293 combinations of the standard model have a lower mean
W-LS value than the best simplified setting with fearly_adopter = 0.25 and ffollower = 1.1.

4.2.3.1 Brief Discussion of the Results

Both the statistical analysis in subsection 4.2.2.2 and the systematic screening of
the parameter values show that the simplified model is capable of reproducing essential
characteristics of diffusion processes. The systematic comparison between the standard
and the simplified model demonstrates that the standard model provides many
solutions which fit the theoretical sigmoid curve of adoption significantly better than
the best solutions of the simplified model. However, there are four parameter
combinations of the simplified model in the upper 10% of a merged data set of the
standard and the simplified model. As in the case of the standard model, a quite large
distance between the fi-values seems to be essential for a good-fit combination.

Even though the presented simplification offers satisfactory results in terms of low
W-LS, the following simulations rest upon the standard model. Reasons for this are the
literature-based definition of five agent groups and the better simulation results of the
standard model. Furthermore, it is reasonable to claim that a certain share of a system’s
agents has to be risk-averse. Risk aversion corresponds to an fi-value substantially higher
than 1.0. As this phenomenon cannot be represented within the frame of the simplified
model, this model might lead to the right results for the wrong reasons.

Nevertheless, these findings indicate that further research on adopter characteristics
and their systemic importance is necessary. Especially the laggards are not necessary
from a modeler’s perspective to reflect all the important processes and known
characteristics in a diffusion-of-innovations model.

4.3 Modeling the Diffusion of Innovations:
Reconsidering Preferences and Utility

The current model is able to reproduce known characteristics of innovation-diffusion
processes. Nevertheless, it is still not capable of discriminating between dominant
innovations and those which are not advantageous compared to the status quo.
Whether an innovation penetrates society or not is determined only by agent
characteristics, the communication velocity, and the degree of society’s history-
orientedness. In this section, a second enhancement of Schweitzer’s model is presented.
This enhanced model amalgamates the previous characteristics of a rather sociological
model with an evaluative part considering the innovation’s relative utility. First, the
enhancement itself is described. Thereafter, the model behavior is explained.

4.3.1 Enhancement of the Model

We have learned that human decision-making takes place somewhere in between total
rationality and mindless mimicry. Developing an individual-based model to describe the
diffusion of innovations in real-world systems requires knowing when humans behave
more like rational actors and when they just imitate the behavior of others. In
subsection 3.1.3, a concept for the merger of both aspects has been given with the
elaboration likelihood model [ELM]. In this study, the ELM is interpreted as follows: If an
agent is in equilibrium with her local environment, there is no need to revise her current
opinion. An agent whose opinion is supported by her fellow agents shows low
involvement and follows the ‘peripheral route’, i.e. applies imitation as a decision
heuristic. If the agent’s current opinion is in opposition to the predominant one in her
social neighborhood, she follows the ‘central route’ and evaluates both possible
alternatives thoroughly. This interpretation of the ELM provides a theory which
combines the views of the homo sociologicus and the homo economicus in an explicit,
rule-based approach.

4.3.1.1 Integrating Cost-Benefit-Considerations by a Logit-Model

One of the objectives of this work is to provide an interface to the NSSI project
‘‘Technology breakthrough modeling’’ (see chapter 2). Up to now, this research group
has applied a multinomial logit model to calculate the decision-making processes of their

MAS (Müller & de Haan, 2006). Within this approach, an aggregated utility is assigned
to every alternative. Müller and de Haan calculate the utilities for the different car types
in their model by using assumptions underlying the Multi-Attribute Utility Theory
(Keeney & Raiffa, 1976; Scholz & Tietje, 2002; Jungermann et al., 1998): (1) different
attributes are comparable, (2) different attributes can compensate each other, and (3) a
total utility of an alternative can be calculated by adding the weighted partial utilities of
the alternative’s attributes. Accordingly, the total utility Ui obeys the following equation:

U_i = \sum_{j=1}^{n} K_{ij} C_{ij}                                          (9)

Where K_{ij} stands for the weighting factor for attribute j of alternative i and C_{ij} denotes
the partial utility of attribute j of alternative i. In a logit-model, the probability p_i that
alternative i is picked out of a choice set I is given as:

p_i = \frac{\exp(U_i)}{\sum_{j \in I} \exp(U_j)}                            (10)

So, in the case of a binary choice set, an alternative with a clearly higher utility has a
disproportionately high probability of being selected. In a logit-model, there is – just as in
real life – always a probability that an inferior alternative is chosen. Nevertheless, the
corresponding actor is likely to behave rationally.
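Equations 9 and 10 can be sketched in a few lines of code. The function names and the optional `scale` parameter are illustrative assumptions, not part of the original model description; a scale factor of 0.15 (the value of k in the later case scenarios) would reproduce the roughly 38% selection probability reported for case scenario A with Uold = 10 and U0 = 6.7.

```python
import math

def total_utility(weights, partial_utilities):
    # Eq. (9): weighted additive total utility (Multi-Attribute Utility Theory)
    return sum(k_ij * c_ij for k_ij, c_ij in zip(weights, partial_utilities))

def logit_probabilities(utilities, scale=1.0):
    # Eq. (10): p_i = exp(U_i) / sum_j exp(U_j). The `scale` factor is an
    # assumption: a sensitivity parameter by which utilities may be
    # multiplied before entering the logit formula.
    exps = [math.exp(scale * u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# binary choice: innovation (U = 6.7) vs. status quo (U = 10), scale 0.15
p_innovation, p_old = logit_probabilities([6.7, 10.0], scale=0.15)
```

With scale = 0.15 the innovation is chosen with a probability of about 38%, consistent with the figure reported for case scenario A below; with scale = 1.0 the same utility gap would let the innovation win only rarely.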

4.3.1.2 Process Scheme of the Second Enhancement

Figure 22 is a sketch of a process-structure model for an integrated socioeconomic
model of decision-making. The final model is implemented according to this scheme.
The core of the improved model is still the local comparison of both components of the
communication field h (symbolized by the red-blue-striped circle). If an agent is in
disequilibrium with the opinions of her social neighborhood, she takes the central route.
Disequilibrium means that the agent would have reversed her opinion in the previous
model. If she and her peer-group share the same opinion, the peripheral route is chosen.
In the peripheral route, the choice set consists only of the alternative which is currently
applied by the agent and, thus, her current opinion is kept. The central route includes an
evaluation of the alternatives using the logit-model: In a random process, the agent
selects one of the two alternatives, whereas the selection probability is determined by
equations 9 and 10. In the next time step, this decision is reevaluated by the same
procedure.
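The routing logic just described can be condensed into a sketch of a single agent update. All names here are illustrative assumptions: the comparison of the two components of h is reduced to a comparison of two local field strengths, ties are treated as equilibrium, and the logit step reuses equation 10 with a hypothetical scale factor.

```python
import math
import random

def decision_step(opinion, h_old, h_new, u_old, u_new, scale=0.15, rng=random):
    """One ELM-style update (sketch). opinion: 0 = status quo, 1 = innovation.
    h_old/h_new: local strengths of both components of the communication field.
    Returns the agent's opinion after this time step."""
    own_support = h_new if opinion == 1 else h_old
    other_support = h_old if opinion == 1 else h_new
    if own_support >= other_support:
        # equilibrium with the neighborhood: peripheral route, keep opinion
        return opinion
    # disequilibrium: central route, logit choice between both alternatives
    p_new = math.exp(scale * u_new) / (math.exp(scale * u_old) + math.exp(scale * u_new))
    return 1 if rng.random() < p_new else 0
```

A full simulation would call this routine for every agent in every time step; the reevaluation of a fresh decision follows from the same procedure being reapplied in the next time step.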

[Diagram: the agent compares both components of the communication field h; low
involvement leads to the peripheral route (keep opinion), high involvement to the central
route (decision maker, evaluation, and post-decision evaluation)]
Figure 22: Process Scheme of a Socioeconomic Decision Model

This pre- and post-decision evaluation scheme is reasonable, because

1. such a scheme is in accordance with an intuitive understanding of rational behavior:
One expects from a rational decision-making process that ‘bad’ alternatives are
eliminated early in the process and that an alternative is chosen which is probably
the ‘best’. As decisions in a real world situation are mostly decisions under
uncertainty, it is not guaranteed that a selected alternative keeps what has been
promised. Therefore, one would expect a rational decision-maker to reevaluate her
decision critically after a while;
2. the corresponding model discriminates better between truly superior innovations and
others, due to the laws of joint probability (see Table 6): Clearly superior innovations
have a good chance to be selected twice, whereas slightly better or inferior
innovations have a diminished chance compared to a model without reevaluation.
This feature is also in accordance with intuition.

After the execution of either the central or the peripheral route, the entire process starts
again with the comparison of both components of h.

Probability p_i to     Probability p_i × p_i to be picked
be picked once         and to endure the reevaluation
0.10                   0.0100
0.15                   0.0225
0.20                   0.0400
0.25                   0.0625
0.30                   0.0900
0.35                   0.1225
0.40                   0.1600
0.45                   0.2025
0.50                   0.2500
0.55                   0.3025
0.60                   0.3600
0.65                   0.4225
0.70                   0.4900
0.75                   0.5625
0.80                   0.6400
0.85                   0.7225
0.90                   0.8100
0.95                   0.9025
1.00                   1.0000
Table 6: Joint Probability
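Table 6 is simply the square of the single-stage probability. A one-line sketch makes the sharpened discrimination explicit, assuming (as in the table) that both draws are independent and use the same p:

```python
def two_stage_probability(p):
    # probability to be picked once AND to endure the reevaluation (Table 6)
    return p * p

# a clearly superior option (p = 0.65) vs. a slightly inferior one (p = 0.35):
single_stage_ratio = 0.65 / 0.35                                              # ~1.9
two_stage_ratio = two_stage_probability(0.65) / two_stage_probability(0.35)   # ~3.4
```

The two-stage ratio between the superior and the inferior option is the square of the single-stage ratio, which is exactly the discrimination effect argued for above.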

4.3.1.3 A Secondary Feedback:
Dependence between the Share of Adopters and the Utility

To cover more cases by this model, not only the relative utility of an innovation should
be considered, but also its development in the course of time. A well-investigated
relation between the cumulative number of adopters and the innovation’s utility is given
by experience or learning curves. This phenomenon was first observed in the 1930s as a
decline of the working hours necessary to produce one airplane (Day & Montgomery, 1983).
Since then, the systematic decrease of production costs with the cumulative output of a
company has been proven manifold. An increasing utility coinciding with an increasing
amount of adopters cannot only be justified by sinking production costs, but also by
other effects:

1. It is reasonable to assume that also the quality of a product increases with
an increasing cumulative output because of learning effects. “Learning by
doing” and “technological advances” (Day & Montgomery, 1983, p. 46) do not
only minimize the production costs, but also improve the quality of the product
and eliminate malfunctions.

2. There are path-dependence effects according to which the utility increases with
the cumulative output: For instance, a new type of car is useless if there are no
fueling stations providing the required fuel. Another well-known example is the
dominance of QWERTY-keyboards over DVORAK-keyboards (see e.g. Liebowitz &
Margolis, 1990). The latter is often perceived as the better one, having no chance
to defeat the former because of its pre-existing market dominance. However, if
an innovation achieves a certain level of adoption, path dependence starts to
work for it.

The literature on learning curve effects assumes that the costs per piece decrease by a
fixed relative factor each time the cumulative output doubles (Day & Montgomery, 1983).
Here, the learning curve effects are not only related to the single aspect of costs. It is
assumed that the aggregated utility increases by a fixed relative factor each time the
number of adopters doubles. The corresponding function assumes a constant elasticity μ
and equals the mathematical formulation used to calculate interest and compound
interest:

U(N_{Adopter}) = U_0 \left(1 + \frac{\mu}{100}\right)^{\log_2(N_{Adopter})}   (11)

Where U_0 is the utility at that point in time when the innovation is introduced, N_{Adopter}
denotes the current absolute number of adopters, and μ is the relative value (in percent) by
which U increases when N_{Adopter} doubles.
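Equation 11 can be sketched directly; function and argument names are illustrative assumptions:

```python
import math

def utility(n_adopters, u0, mu):
    """Eq. (11): the aggregated utility grows by mu percent each time the
    number of adopters doubles (learning/experience curve analogue)."""
    if n_adopters < 1:
        return u0
    return u0 * (1.0 + mu / 100.0) ** math.log2(n_adopters)
```

The doubling property can be checked directly: utility(2 * n, u0, mu) always equals utility(n, u0, mu) * (1 + mu / 100).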

4.3.2 Model Behavior under Certain Case Scenarios

The enhancement presented above leads to a simulation model which reconciles
aspects of rational action with peer influence. Furthermore, it is now possible to include
a simple development of the relative utility over time into the model. To show the various
possibilities of the model, a couple of hypothetical case scenarios shall be introduced,
simulated, and discussed shortly.
In the following case scenarios, the utility U0 of the innovation and its μ-value are
variable. The utility of the ‘old’ alternative is defined as Uold = 10. Please note that the U0
value holds for the state before the innovators became adopters. Accordingly, the
innovation’s utility at the time when the simulation starts might be higher (or lower)
depending on NInnovator and μ. 2256 agents are randomly arranged on a 200 x 113 lattice; T
is set to 0.6, D to 2.6, and k to 0.15. All other initial conditions are the same as in previous
simulations. Moreover, while the units of the axes of the figures are still arbitrary, the
scale is the same for all figures in order to make the simulations comparable.

4.3.2.1 Case Scenario A

Case scenario A has U0 = 6.7 and μ = 6.0. The probability to select the innovation
within the logit-module of the high-involvement branch amounts to approximately 38%.
In this scenario, the innovation is inferior with respect to quality at the beginning of the
simulation. An example for that might be the wood pellet heating a few years ago: the
technology was not mature and the fuel not reliably available. However, the innovators
– either technology freaks or highly environment-oriented people – bring the
innovation to a state of near equal utility with respect to the classic solutions. Another
example could be the mobile phone, which was comparatively expensive in its early days
and which makes more sense if a considerable amount of people uses this technology.

An exemplary simulation result is shown in Figure 23. As can be seen, at the beginning
of the simulation the innovation ‘takes off’ very slowly due to the slightly lower utility of
the innovation compared to the existing alternatives. Once a certain ‘point of no return’ is
reached, the share of adopters increases quickly. The adoption rate accelerates due to
the relatively high μ-value.
"*-(% 1@ D'125%(6

),>%
Figure 23: Simulation of Case Scenario A

4.3.2.2 Case Scenario B

The initial conditions of case scenario B are U0 = 9.4 and μ = 0.0. Thus, the probability to
select the innovation in one of the two steps of the high-involvement branch amounts to
approximately 35.4%. A potential example for this scenario could be either a new
ideology (which does not necessarily become better if more people believe it) or a faked
chocolate bar33, which tastes slightly worse than the original one. Within the time range
plotted in Figure 24, the share of adopters grows only slowly.


"*-(% 1@ D'125%(6

),>%
Figure 24: Simulation of Case Scenario B

Of interest are the spatial patterns emerging under this parameter combination. By and
by, mesoscopic isles of adopters appear, as demonstrated in Figure 2534. The counterpart
of this pattern might be the regional patterns of more or less equivalent products or
ideas. For example, in Switzerland there is a quite popular malt-based milk beverage,
which is used equivalently to cocoa drinks in Germany. Path dependence, fluctuations,
and small, regional cultural differences might explain such patterns.
If the current model is an adequate description of some aspects of reality, it should be
possible to find such regional clusters for ideas or products, which differ only slightly in
terms of utility.

33 A faked chocolate bar is not a real innovation according to the definition that an innovation has to be
something of a real new quality.
34 Confer also to p. 25.

Figure 25: Spatial Patterns of Case Scenario B (after 201 time units)

4.3.2.3 Case Scenario C

This scenario is a rather artificial one, because U0 = 5.25 and μ = 9.25. There are no
intuitively evident examples for this parameter combination. The worth of this scenario is
to exemplify that even small fluctuations can bring a system into another state. At the
beginning, there is almost no net gain of adopters, but shortly after the first third of the
plotted time, the typical sigmoid course of adoption starts. The interplay of U0 and μ leads
to a threshold value above which the innovation becomes irreversibly dominant (see
Figure 26).

[Plot: share of adopters over time]
Figure 26: Simulation of Case Scenario C

4.3.2.4 Case Scenario D

Differences between urban and rural regions are frequently investigated spatial phenomena.
It is a common hypothesis that urban regions are more innovative towards many
innovations than rural regions, and there is some evidence for that assumption: First,
most political innovations (new ideologies) started in cities and spread from this home
base into the rural surroundings. The idea of a free and liberal-thinking citizen was
born in the medieval city. As soon as machine-based production processes became
independent from waterpower, the city was the main “germ cell” of technological
development. The shift from an aristocratically dominated system towards a bourgeois
system is linked to the developments in the cities. This also holds for the emergence of
socialism or the ecologically oriented movements. Second, many products first spread in
cities before they are adopted in the rural parts of a country: Electricity, the telephone,
television, or DSL have been used earlier in urban regions than in the countryside.
However, one has to be careful, as the diffusion of innovations is a coupled demand-
supply system and the reason for the earlier adoption in cities might be the earlier or
cheaper availability of the innovation.

Nevertheless, the model should be able to reproduce such a gradient between a rural
and an urban area. To do so, the agent category ‘early adopter’ is split up into rural
early adopters with frural_ea = 0.6 and urban early adopters with furban_ea = 0.4. All the other
agent parameter values remain unaltered. In the following case example, the upper half
of the social space is considered urban, while the lower half is rural. The innovation is
defined as clearly superior (U0 = 15.0), while its utility remains constant during the entire
simulation.
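The split of the early adopters can be sketched as follows. The lattice orientation (which axis counts as the 'upper half') and all names are assumptions made for illustration; the values frural_ea = 0.6 and furban_ea = 0.4 are taken from the scenario description.

```python
def early_adopter_f(row, lattice_height=113, f_urban=0.4, f_rural=0.6):
    """Assign the innovativeness parameter f of an early adopter depending on
    whether her lattice row lies in the urban (upper) or rural (lower) half."""
    return f_urban if row < lattice_height // 2 else f_rural
```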

As can be seen in Figure 27, this model alteration leads to a perceivable difference
between the city and the countryside. Nevertheless, there are rural areas, in which the
agents adopt prior to some urban counterparts. Due to the interplay between the agent
arrangement and their characteristics, such diversified spatial patterns emerge.

[Spatial snapshots (a), (b), (c): standard parameter setting with rural and urban agents at 29, 51, and 182 time units]
Figure 27: Simulation Results of an Urban-Rural System

5 General Discussion

The general discussion is divided into two parts. As mentioned in chapter 2, a validation
of the model based on a case study has not been feasible, because the data needed was
not available and because it is not possible to survey such data under the time constraints
of a diploma thesis. However, it is a claim of this thesis that the model presented should be
applicable to real world cases and, thus, should be testable. Therefore, a strategy for a
model validation based on real world case studies is presented in the first section of this
chapter.
The power of the model as well as its disadvantages and shortcomings are discussed in
section 5.2 of this chapter: First, the features of the model are characterized and
discussed shortly. Thereafter, the adequacy of the model’s level of detail is discussed.
Third, problems related to the initial conditions of the model are examined. Then, the
adequacy of the Euclidean representation of the social relations and of Schweitzer’s
stigmergic approach is discussed. Finally, implications for the research on the diffusion
of innovations are highlighted.

5.1 A Strategy for a Model Validation by Case Studies

Every model for prediction has to withstand a continuous testing of its predictive power.
This testing is done by simulating past events on which sufficient measured data is
available. If the model fits well to the measured data, it can be considered valid as long
as no case example exists, for which the model fails. If a model fails for a certain case
example, it has to be investigated whether this is a systematic failure or not. If the
failure is systematic, a restriction of the range of validity of the model is identified.

A usual way of calibrating and testing models is to measure the parameter values for
the real world example first and thereafter to check the fit between prediction and
measured behavior. This way of model validation is not completely appropriate for the
model described, as some of the parameter values (in particular k* and D*) are not
directly measurable, because they are only proxies for real world phenomena. In order to
validate such a model, three steps are necessary. These steps are described subsequently
and exemplified by a case example from the car sector.

First, the network structure has to be investigated. Typical methods to measure structural
patterns of a social network are, for instance, snowball surveys, in which the researcher
tries to identify a cascade of acquaintances (e.g. Marsden, 2005). It is important
to identify which agent groups are clustered and how many social neighbors a
typical member of a certain agent group has. With this knowledge it is possible to create
generic social space patterns, which are more realistic (or isomorphic) than the random
arrangements used in this thesis.
For the case example, this could be done by asking the participants of a survey whom
they would ask for advice if they had to buy a car or who influenced them regarding
their last car purchase. These persons could be regarded as opinion leaders. Further, the
participants could be asked whom they perceive as good friends, close relatives, and
important coworkers. The resulting number gives an indication of the number of social
neighbors. Finally, the participants could be asked about their risk-averseness and their
leadership with respect to the purchase of cars. This should be done by relying partly on
standardized questions coming from psychological literature and partly on case-specific
questions. The generated data helps to classify the participants according to the five
agent groups. If the snowball principle is applied to the acquaintances of the surveyed
participants, a more complete image of the investigated social network can be achieved.

Furthermore, it has to be investigated whether the system is innovative or conservative
towards the innovation to be adopted. This can be done by surveying the attitudes
towards this innovation and by analyzing whether the innovation is in opposition to
some cultural norms or not. While the innovation’s compatibility with the system’s
norms and values and the system’s risk-averseness are a cultural determinant of the
innovation’s fate, there is also an innovation-intrinsic determinant: the relative utility of
the innovation. This utility and its development in the course of time can be
investigated by various methods, some of which are introduced in Jungermann et al.’s
(1998) book about the psychology of decisions or in a working paper by Smieszek and
Mieg (2003).
With respect to the case example, conservativeness could come from the image of
different car brands or engine technologies. Certain brands have a better image com-
pared to others, even if the technical features of their cars are equal. If a company with a
bad image introduces a new technology, this technology might spread slower compared
to a case, in which a favored company does the same. The attitudes towards brands and
technologies can be surveyed. For case examples concerning the car market, utility
values can be estimated by adopting the weighting factors and partial utilities used in
the model developed for the European Directorate General for the Environment (DG-
ENV, 2002).

Finally, the decay rate and the diffusion constant have to be estimated. As both
parameters are proxies, they cannot be measured directly. In order to estimate k* and D*, a
feasible strategy is to calibrate them by adjusting the model to several (more than two,
as two parameters are unknown) comparable case studies. If it is possible to fit the
model to more than two data sets by adjusting k* and D*, it is reasonable to assume that
the decay rate and the diffusion constant are universal for a certain type of innovation
process (e.g. the car market in Central Europe). Another possibility is to fit the model to the
measured data of an ongoing process and to assume that the parts of the model results,
which go beyond the measured data, are an adequate prediction of the future of the
respective diffusion process.
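The calibration strategy sketched above amounts to a search over the (k*, D*) plane. A minimal grid-search illustration follows; everything here is hypothetical scaffolding, and `simulate` stands in for a full model run returning an adoption time series for a given case study:

```python
def calibrate(simulate, cases, k_grid, d_grid):
    """Pick the (k*, D*) pair minimizing the summed squared error between
    simulated and observed adoption curves across several case studies."""
    best_params, best_err = None, float("inf")
    for k in k_grid:
        for d in d_grid:
            err = 0.0
            for case_id, observed in cases:
                predicted = simulate(k, d, case_id)
                err += sum((p - o) ** 2 for p, o in zip(predicted, observed))
            if err < best_err:
                best_params, best_err = (k, d), err
    return best_params
```

Fitting against several case studies at once, as argued above, is what makes the two unknowns identifiable; a single short time series could be matched by many (k*, D*) combinations.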
In the field of car purchase, an example of an ongoing process could be the introduction
of hybrid cars. The hybrid technology is not yet widely established, but it has existed long
enough that a short time series exists to which a model could be calibrated. The people
who already possess such a car would have to be assigned to the five agent groups by a
survey method as described above.

After the model has been calibrated by this procedure, it can continuously be tested
with other case examples.

5.2 Strengths and Weaknesses of the Model

The Power of the Model

In the previous chapter, a theory-based simulation model has been developed, which is
capable of explaining and reproducing the social diffusion of an innovation. Rogers (2003)
delineates a five-step innovation-decision process consisting of “(1) knowledge, (2)
persuasion, (3) decision, (4) implementation, and (5) confirmation” (p. 20). Most of these
steps are mirrored in this final model: The predefined innovators introduce knowledge
about the innovation’s existence into the model; as soon as this information reaches a
certain agent i, she is informed that a new idea, ideology, way of behaving, or product is
available. Rogers defines persuasion as the process in which a positive or negative
attitude is formed towards the innovation. Accordingly, persuasion takes place in our
model when both alternatives are considered and the central route is chosen. The
decision follows straight afterwards when the logit-module leads to one of the two
alternatives. Confirmation follows the decision by the reevaluation. Solely the
implementation step is not included in the model.

The model is able to reproduce the sigmoid adoption curve known from various studies
done on the diffusion of innovations. Spatial effects are resolved and it is possible to
model e.g. spatial heterogeneity. In particular, processes like the mesoscopic formation
of groups can be observed. As claimed in chapter 2, the model combines elements of
rational decision making with social influences: The homo sociologicus and the homo

economicus are reconciled under the framework of the elaboration likelihood model. At
the same time, the model is more illustrative than most of the existing network models
due to its 2-dimensional representation (compare Figure 1 and Figure 11)35.

Other findings from the research on the diffusion of innovation also hold within this
model approach. For instance, opinion leaders mainly drive the entire system. Further, a
clear distance between the strength of opinion leaders and the other agent types is
advantageous to reproduce a clearly sigmoid shape regarding the adoption curve. The
innovativeness of the whole system can be triggered by the innovativeness (fearly_adopter) of
the opinion leaders (early adopters), which is in accordance with findings from case
studies presented by Rogers (2003). If the impressions of past experiences are too
dominant, the system also behaves in a conservative or tradition-oriented way. This type
of model behavior is also in accordance with what is expected intuitively.
Finally, the hypothetic case studies of subsection 4.3.2 show that the model is applicable
for real world cases and, thus, testable. Nonetheless, this model has also some
disadvantages and shortcomings. Some aspects are discussed critically and compared to
other approaches on the subsequent pages.

The Model’s Level of Detail

Compared to natural phenomena such as chemical kinetics, barometric pressure, or the
composition of atomic nuclei, social phenomena do not emerge only from complex, but
also from complicated elements (see e.g. Weidlich, 1991). While atoms of the same sort
are all equal, human beings show a vast diversity. Hence, it is always an urging question
whether a social model encompasses all the elements (in an adequate interrelation)
which are necessary to describe the corresponding phenomenon adequately.

In a very general sense, interpersonal facets of communication such as trustworthiness,
empathy, or attraction (Rogers & Bhowmik, 1970) are considered by means of the agent
characteristics since the first enhancement. Nevertheless, the model does not reflect all
of the possibly relevant processes. Especially regarding environmentally relevant issues,
there is very often a gap between communicated attitude and realized behavior.
Sometimes attitude changes represent true conversion, sometimes there is only public
compliance (Nowak et al., 1990). For instance, the fairytale “The Emperor’s New Clothes”
shows that even pretended compliance can influence other agents’ behavior. The model
presented in this study is not able to differentiate between communicated opinion and
realized behavior, but treats both as equal.

35 The importance of network visualization for gaining scientific insight is discussed in Brandes, Raab, and
Wagner (2001).

Furthermore, a person who is generally perceived as credible and who has, accordingly, a
strong influence on others, need not be influential with respect to all of her social
neighbors. If, for instance, a position of an opinion leader clearly lies in another agent’s
latitude of rejection, the rejection by this agent might be even stronger than normal
(refer to the social judgment theory, p. 14).
Finally, the agent characteristics also might follow a certain temporal dynamic. It makes
sense to assume that the influence of an agent shortly after the adoption of an
innovation is different from the influence a considerable time later. Further, an agent
who often changes her opinion surely has a lower credibility than a rather constant one
(Nowak et al., 1990).

The described mechanisms need not be important for every process of innovation-
diffusion. If one applies the presented or another simulation model to social processes,
the relevance of the mechanisms described should be discussed and, if necessary,
implemented.

Parameterization and Starting Conditions

Social sciences do not know the “luxury as starting de novo” (Ottino, 2004, p. 399). Social
systems are pre-formed, have a history, and the opportunities and the behavior of
individuals are context and path dependent.
proxy values or intrinsic variables and therefore not easily or not at all measurable. As a
consequence, an exact and adequate parameterization is both necessary and hardly
possible. Subsequently, some difficulties regarding the definition of the initial
conditions are discussed.

One problem has to do with the definition of ‘innovation’. According to Rogers (2003),
an “innovation is an idea, practice, or object that is perceived as new by an individual or
other unit of adoption” (p.12). As innovation is tied to perception, this is a possible source
of error. A common bias in the diffusion-of-innovation research is the so-called “pro-
innovation bias” (Rogers): A researcher perceives a certain idea, practice, or product as
innovative and therefore defines it as an innovation. Such assumptions taken for
granted can lead to erroneous simulation results of a model for prediction and to
misleading interpretations of descriptive research on the adoption of innovations. Also
with respect to relative utility values, a modeler has to be careful that not ‘the wish is
the father of the thought’. Utilities are difficult to measure and usually not universal.

Another problem is to determine whether the system is innovative or conservative
towards a certain innovation. As described in subsection 3.1.2.4, existing norms can be

decisive for the success of an innovation. In particular, the degree of innovativeness of
the opinion leaders is of eminent importance for the further dynamics of the system (see
subsection 4.2.2.3). However, it is not easy to measure this feature of the system in a
value-free way and in advance – which is crucial for predictive models.

A third problem lies with the arrangement of the agents in the social space. Whenever a
certain decision of a modeler cannot be justified by theory or empiricism, the simplest
solution should be taken to fulfill the parsimony principle (see subsection 3.3.3). The
simplest solution in this case is the arrangement with the lowest information content: a
random arrangement of all agents. However, real social networks are surely not as
randomly distributed as in our model. As Odell (2000) indicates, the behavior of social
network models can be sensitive even to minor changes (see also subsection 4.2.2.3,
proposition no. 7). Accordingly, further knowledge on an adequate distribution of the
different agent groups in a Euclidean social space would be insightful: It might be
decisive whether innovators are social neighbors of early adopters or not. It might also
be important whether early adopters are clustered or repel each other.

Finally, the problem concerning the starting conditions worsens, if an ongoing diffusion
process shall be described. It is theoretically possible, but practically unfeasible to
measure all the necessary variables for adequate initial conditions in the case of an
ongoing process.

Adequacy of the Euclidean Representation of the Social Space

To represent the social relations of the agents in a Euclidean system is a major
prerequisite to make the system describable by equations taken from physics. Diffusion
processes or the decay of information are only reasonable in Euclidean systems.
However, the concept of a 2-dimensional social space is not a feasible representation for
some phenomena.

For instance, sociometric data on social distances does not have to fulfill basic axioms of
a Euclidean space, such as the triangle inequality. While agent i has a close relationship
to agents j and k, it is possible that j and k detest or even do not know each other (see e.g.
Kacperski & Hołyst, 2000). If, for example, the social distance between i and j as well as between i
and k amounts to 1 length unit in each case, the distance between j and k cannot be larger
than 2 length units, due to the triangle inequality. This constraint is counterintuitive and
problematic in some cases, even though the assumption makes sense that the triangle
inequality holds roughly for many real world three-person combinations.

As a result thereof, the aspect of heterophilous bridges between homophilous peer
groups (see subsection 3.1.2.5) cannot be described in a 2-dimensional Euclidean system
adequately. If one member of peer group A shall be able to influence only a specific
member of peer group B, their social distance has to be shorter than that of other group
members. This necessity is unsatisfiable in a Euclidean space because of the triangle
inequality. Furthermore, a realistic arrangement of the innovators and the laggards is
not possible. Innovators and laggards are said to be rather isolated members of a
society. Hence, an optimization algorithm, which increases the distance between a
member of these groups and any other agent while keeping the distances among the
members of all other groups small, would probably arrange the laggards and the
innovators as a wide meshed circle around a dense area composed of the other groups.
Such an arrangement is obviously not realistic, as only the agents at the boundary line
of the inner area would communicate with these outsiders, while the central ones
would only receive negligible signals from them.

One assumption underlying the Euclidean representation of the social space is that the
social space coincides approximately with physical space. Stewart (1941) found that the
number of students at a given college coming from a certain hometown is inversely
proportional to the distance of this area to the college. Other studies have shown as well
that the intensity of social contacts tends to decrease with physical distance (Latané et
al., 1995). Latané et al. suggest that the agents’ mutual impact obeys an inverse square
function of distance, as is also the case with many physical phenomena such as
gravity, sound, or illumination. Even though the diffusion processes included in the model
of this thesis do not lead to an inverse square distribution of information, their steady-
state solution has – at least – a kindred shape (confer to Imboden & Koch, 2005).
Nonetheless, it is doubtable whether this assumption is still valid in the age of no-frills
airlines, cheap mobile phones, electronic prompt services, and emails. Latané et al. aver
that the strong relationship between space and social relations still holds and complain
that “the development of convenient and inexpensive technology for the movement of
people and ideas […] has led social psychologists to believe that physical distance no
longer matters in technologically advanced societies”(p. 796). However, this topic is
controversial in the scientific community. For instance, Axhausen (2005) stated three
hypotheses which contradict the assumptions of Latané et al.:

1. “The size of the social network is inversely proportional to the generalised costs
of travel and communication“ (p.11).
2. “Social contacts should become more selective, as persons can choose among a
larger number of possible contacts available within the same cost isoline. There
is no need anymore to socialize with spatial neighbors“ (p.12).

3. “The mobile phone increases social selectivity by making it more difficult to
meet persons without prior coordination, as persons are less likely to be reliably
in certain places at certain times“ (p.12).

If such globalization or de-regionalization processes do take place in advanced societies,
a major shift in the composition of social networks has to be expected in the next few
years: An extrapolation of today’s trends in the transportation sector suggests that the
area reachable with a constant time and money budget will continue to grow vastly
(Schafer & Victor, 1999).

We also know that people are part of different networks. As already mentioned, people
interact closely with relatives, friends, and neighbors (belonging to the private realm),
but also with coworkers and business partners (belonging to the business realm). Today
it is not unusual to live in London and work in Luton, to live in Berne and commute to
Zurich daily, or to work half the week in Stuttgart and the other half in Karlsruhe. Such
commuting links formerly unconnected and completely isolated networks, bridged
solely by the one agent who commutes. In social models, networks are often assumed to
be either completely regular or completely random. Schweitzer’s (2003, 2004)
representation is a rather regular one. However, we know (and the example confirms
this) that many real-world networks lie between these two extremes.
Such systems, which have both a high degree of clustering (like regular lattices) and
small characteristic path lengths (like random networks), are called ‘small-world’
networks. Small-world networks are tightly woven, but also cross-linked to distant
regions of the social space (Watts & Strogatz, 1998; Travers & Milgram, 1969). One
possibility to mimic social networks within Schweitzer’s 2-dimensional Euclidean
approach could be to define two interlinked social spaces – one representing the private
realm, the other representing the business realm. Every agent would have a distinctive
position in both of these spaces, while the distances between the two positions would
follow a certain distribution (e.g. a normal distribution).
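
A minimal sketch of this two-space construction could look as follows; the space size, agent count, and the normally distributed offset parameter are hypothetical assumptions, not values taken from the model presented in this thesis:

```python
import random

random.seed(42)

L = 100.0        # hypothetical side length of each square social space
N_AGENTS = 500   # hypothetical number of agents
SIGMA = 10.0     # hypothetical std. dev. of the private-business offset

def wrap(x):
    """Keep a coordinate on the periodic [0, L) interval (torus boundary)."""
    return x % L

agents = []
for _ in range(N_AGENTS):
    # position in the private realm: uniformly distributed over the space
    px, py = random.uniform(0, L), random.uniform(0, L)
    # position in the business realm: normally distributed offset from it
    bx, by = wrap(px + random.gauss(0, SIGMA)), wrap(py + random.gauss(0, SIGMA))
    agents.append(((px, py), (bx, by)))

# Each agent now influences two (generally different) neighborhoods,
# which mimics cross-links between otherwise distant regions.
```

With a small SIGMA the two realms nearly coincide; a large SIGMA produces many long-range bridges between otherwise distant regions of the social space.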

Adequacy of the Stigmergic Representation of Social Interaction

By using the stigmergic representation of social interaction, these interaction
processes become describable by mathematical approaches known from physics
(see subsection 3.2.3). By modeling interaction via a social force field instead of direct
peer-to-peer influence, concepts like Fick’s two laws of diffusion become applicable.
Furthermore, it is possible to integrate external (social influence) and internal (past
experiences) processes in one equation. Finally, the joint representation of the

communication processes and the individuals’ experiences by a colored contour plot of
the communication field h is quite illustrative and, accordingly, leads to an intuitive
understanding of the ongoing processes.
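
A minimal sketch of such a field update, assuming an explicit finite-difference discretization of dh/dt = S·source − K·h + D·∇²h on a torus; the grid size and all parameter values below are illustrative only, not the settings used in the simulations:

```python
# Hypothetical grid and parameter values; an explicit finite-difference
# sketch of the field equation dh/dt = S*source - K*h + D*laplacian(h),
# i.e., a diffusion-decay process in the spirit of Fick's laws.
NX, NY = 20, 20
D, K, S = 0.2, 0.1, 1.0   # diffusion constant, decay rate, source strength
DT, DX = 0.1, 1.0         # DT < DX**2 / (4*D) keeps the scheme stable

def step(h, source):
    """One explicit Euler step on a torus (periodic boundaries)."""
    new = [[0.0] * NY for _ in range(NX)]
    for x in range(NX):
        for y in range(NY):
            lap = (h[(x + 1) % NX][y] + h[(x - 1) % NX][y] +
                   h[x][(y + 1) % NY] + h[x][(y - 1) % NY] -
                   4.0 * h[x][y]) / DX ** 2
            new[x][y] = h[x][y] + DT * (S * source[x][y] - K * h[x][y] + D * lap)
    return new

h = [[0.0] * NY for _ in range(NX)]
source = [[0.0] * NY for _ in range(NX)]
source[10][10] = 1.0      # one agent keeps emitting information

for _ in range(500):
    h = step(h, source)

# Diffusion conserves the field on the torus, so the total field mass
# approaches the steady-state value S/K = 10.
total = sum(sum(row) for row in h)
```

The interplay of D and K also sets the spatial range of the information: the steady-state field around the source falls off on a characteristic length of roughly sqrt(D/K) grid cells.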

Notwithstanding, the adequacy of this stigmergic representation has to be discussed.


The communication field is a transformed image of reality, a proxy for two unrelated
processes. Accordingly, a major advantage of MAS is deliberately given up by this
approach: the opportunity to model macroscopic phenomena by microscopic entities
close to reality. This becomes clear if one considers the twofold character of k*: On the
one hand, it controls the tradition-orientedness of a system, because it determines the
lifetime of past experiences. On the other hand, it is a measure for the extent of an
agent’s social network, as the interplay of decay rate and diffusion constant determines
how far relevant information spreads in the Euclidean social space (see propositions 8
and 9, p. 65). Another problem of indirect, stigmergic communication is that all agents
who are within the specific spatial range of a certain agent receive a fraction of the
information created by her. However, realistic pathways of communication are binary:
either one receives information or one does not, either one has contact to another
person or one does not. The problems arising from the triangle inequality are also
closely tied to the stigmergic approach.

Finally, even if one accepts the blending of different social aspects into one equation and
a few parameters, it is hard to measure the necessary initial conditions and parameter
values if one wants to create a model for prediction (see the previous section). Due to
their proxy character, neither the diffusion constant D* nor the decay rate k* nor the
agent density is a quantity that could be measured in reality. The only way to define
them is to adjust them by re-simulating different well-known cases on which sufficient
data is available. This is often not feasible.

Implications for the Research on the Diffusion of Innovations

The model presented in this thesis relies on more or less established concepts from
various disciplines and fields of research, namely social psychology and the research on
the diffusion of innovations. As described in subsection 3.3.1.2, computer simulations in
general and MAS in particular demand explicit mathematical formulations of all
relations and processes. With respect to some aspects, this was not easily and
unambiguously achievable, which points to weaknesses and gaps in theory. On the other
hand, it could be shown that some aspects and fine distinctions are not necessary from a
modeler’s perspective. For that reason, it should be discussed whether these details are
dispensable from other viewpoints as well.

A first aspect to be discussed is the division of the agents into five
groups. The findings of subsections 4.2.2 and 4.2.3 indicate that at least the laggards are
not needed to reproduce the S-shaped course of adoption. Although a division
into only two groups – opinion leaders and followers – seems to be an oversimplification,
it is questionable whether five groups are not already too many. This holds especially as
the groups have mainly been defined according to their time of adoption, not according
to distinctive characteristics.

Another point is how the concepts of homo sociologicus and homo economicus should
be linked appropriately. To the best of the author’s knowledge, there is no literature
available addressing this problem explicitly. In this study, the elaboration likelihood model
has been chosen as a framework to tackle this problem. However, the ELM has neither
been developed for this purpose, nor is the applied interpretation of the notion
‘involvement’ a common one. One could even argue that in the case of an expensive good,
the involvement should remain continuously high. To achieve a higher degree of validity
for a diffusion model, further insights into the exact interplay between rationality and
herd behavior are needed.

A last aspect concerns the structure of the social space. Small societal entities can be
captured in a model by simply measuring the network structure. If one wants to
simulate cities, regions, or entire nations, measuring the structure is not feasible.
Therefore, some general insights regarding the question who influences whom and
which agent groups are interlinked with which other groups would be helpful to achieve
more valid MAS.

6 Concluding Remarks and Outlook

The model presented in this study fulfills the requirements defined in the research
objective (chapter 2). Well-studied macroscopic phenomena like the S-shaped curve of
adoption can be reproduced and the widely accepted agent groups sketched by Rogers
(2003) are included in this model. Furthermore, assumed spatial effects like the time lag
of adoption between urban and rural areas or social clusters adopting prior to others
emerge during a simulation run. The model offers a framework that merges the
concept of rational reasoning with the importance of word of mouth. It is testable
and thus, in principle, suitable for prediction tasks.

However, the proof that the model is capable of fulfilling all predefined requirements does
not imply that this model is also the best means of doing so. As explained in section 3.3,
a model is a good model if
- it is able to perform the tasks it is designed for,
- its elements are close to the system they represent (isomorphism),
- it is as simple as possible and reasonable, and
- it is valid.
At least with respect to the criteria of simplicity and isomorphism, better models are likely
to exist: models that are simpler and more isomorphic while being valid and fulfilling
their tasks just as well.

In the introduction, two major lines of diffusion models have been presented:
macroscopic differential equation approaches like that of Bass (1969) and network
approaches like that of Valente (2005). The model presented in this study is – without
any doubt – superior to macroscopic models based on the uniform mixing condition.
These macroscopic models fail to meet most of the requirements defined in the research
objective. One advantage of the stigmergic model developed in this thesis compared to
Valente’s model is its illustrative quality. The social space is comprehensibly depicted by
the 2-dimensional representation; the ongoing processes can be intuitively understood
from a graphical representation of the two components of the communication field (see
Figure 7). The question is whether these advantages outweigh the advantages of
Valente’s network approach.
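
For comparison, the macroscopic Bass (1969) model mentioned above can be integrated in a few lines; the parameter values below are illustrative assumptions, not estimates fitted to any data set:

```python
# Euler integration of the Bass (1969) model, dN/dt = (p + q*N/m) * (m - N);
# the parameter values are illustrative, not fitted to any data set.
p, q, m = 0.03, 0.38, 1.0   # innovation coeff., imitation coeff., market size
dt, steps = 0.1, 300

N = 0.0
trajectory = [N]
for _ in range(steps):
    N += dt * (p + q * N / m) * (m - N)
    trajectory.append(N)

# The cumulative adoption N(t) follows the well-known S-shaped curve and
# saturates at the market size m, but the model knows nothing about space,
# network structure, or distinct agent groups.
```

The uniform mixing condition is visible in the equation itself: every non-adopter feels the same imitation pressure q·N/m, regardless of position or social ties.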

Considering that Valente’s network model does not use any proxy parameters, but rests
only on direct interaction, an enhanced network model – similar to the model developed
within this thesis – might be more adequate and easier to apply than a stigmergic
model based on Schweitzer’s (2004) approach. All parameters in such a network model

would have a direct real-world counterpart and could be measured directly. This would,
for instance, reduce the number of case studies needed to validate the model.
The disadvantage of network models lies in the fact that their graphical representations
become quite complicated if the number of agents is high. This could be compensated by a
transformation of the network representation into the 2-dimensional representation
used in this study: The interaction is still calculated in a network representation, but
the results are plotted in a 2-dimensional Euclidean space.

Summarizing, it seems preferable to model the diffusion of innovations with an explicit
network model rather than with Schweitzer’s (2004) socio-physical approach. An enhanced
network model would probably be able to reproduce the results of the communication
field approach used for this thesis while being much closer to real-world parameters
that are measurable. Nonetheless, some of the problems which occurred in this thesis
would also hold if one used a network model like that of Valente (2005). These problems
demonstrate a demand for further research in this field.

One of these problems is the arrangement of the agents, i.e., the network structure. As soon
as a research project has a scope beyond a measurable number of agents – for instance
a whole nation – network structures have to be generated artificially. For a model for
prediction, the generation of a realistic (i.e. highly isomorphic) network structure is
necessary. Further research has to be done to identify typical and relevant structural
network patterns for specific cultural contexts.
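
One established way to generate such structures artificially is the small-world construction of Watts and Strogatz (1998); the following sketch, with hypothetical network size and rewiring probability, illustrates the principle:

```python
import random

random.seed(1)

def small_world(n, k, p):
    """Ring lattice with n nodes, each tied to its k nearest neighbors on
    either side; every edge is then rewired with probability p
    (Watts & Strogatz, 1998). Returns a set of 2-element frozensets."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            edges.add(frozenset((i, (i + j) % n)))
    rewired = set()
    for edge in edges:
        i, j = tuple(edge)
        if random.random() < p:
            # redirect one end to a random node, avoiding self-loops
            # and duplicate edges
            candidates = [v for v in range(n)
                          if v != i
                          and frozenset((i, v)) not in edges
                          and frozenset((i, v)) not in rewired]
            if candidates:
                j = random.choice(candidates)
        rewired.add(frozenset((i, j)))
    return rewired

net = small_world(n=100, k=3, p=0.1)   # hypothetical size and rewiring rate
```

For small p such a network keeps the high clustering of the regular lattice while the few rewired edges drastically shorten the characteristic path length; whether this generic construction matches a specific cultural context remains exactly the open research question stated above.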
Another question to be answered is related to the problem of an adequate integration
of the purely rational and the imitational part of decision-making. In this study, the
elaboration likelihood model of Cacioppo and Petty (1979) has been used to link both
concepts. This seems to be a reasonable assumption – however, it is not proven knowledge.
It is necessary to investigate what role the communicative components and what role
the rational components play in the diffusion of innovations.
Related to the previous question is the issue whether the logit model is an adequate
representation of rather rational decision-making, and how to estimate the utility
of an innovation relative to existing solutions. It is widely accepted that the concept of
aggregated utilities based on partial utilities is problematic and inadequate in many
cases. Furthermore, it is known that even stable preference orders (a precondition for
utilities) do not exist for many decision tasks.
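
The logit model referred to here has the standard multinomial form; in the following sketch, the utility values and the scale parameter beta are illustrative assumptions, not values used in the simulations:

```python
import math

def logit_probabilities(utilities, beta=1.0):
    """Multinomial logit: P(i) = exp(beta * U_i) / sum_j exp(beta * U_j).
    A large beta makes the choice nearly deterministic; a small beta makes
    it nearly random (comparable to a high social temperature)."""
    weights = [math.exp(beta * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative aggregated utilities for an innovation vs. an existing solution.
p_innovation, p_existing = logit_probabilities([1.2, 1.0], beta=2.0)

assert abs(p_innovation + p_existing - 1.0) < 1e-12
assert p_innovation > p_existing   # higher utility, higher choice probability
```

Note that the formula presupposes exactly what is questioned above: a single aggregated utility per alternative and a stable preference order over the alternatives.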
These problems show that in the field of decision research, there is still a lot of
fragmentary knowledge that has to be integrated. Economics offers
some answers, which hold only in special cases; psychology offers heuristics with
a wider range of application; sociologists again use a different approach. However,
simulation models need clear rules describing which module has to be applied under
what circumstances. This thesis has exemplified once again that the research on
decision-making and the diffusion of innovations cannot give an unambiguous answer
to this question.

A last point to discuss is the temporal validity of the parameter values of such diffusion-
of-innovation models. In contrast to the natural laws of physics, which are thought to
hold for inconceivably long periods, findings in the social sciences often “expire” after a
certain time due to continuous societal changes. The ongoing process of social
networks growing spatially larger and, at the same time, less clustered has to be
considered in both model types – the stigmergic one presented in this thesis and
Valente’s (2005) approach based on direct interaction. Consequently, every
model predicting the (future) fate of an innovation has to build upon rather weak
assumptions.

Therefore, the presented simulation model or equivalent alternatives can only open up a
space of possible future states – a prediction in a rather deterministic sense is
impossible.

7 References

Antoine, J.-Ph. (2001). Statistique et métaphore. Note sur la méthode sociologique de
Tarde. In G. Tarde, Les Lois de l’imitation (pp. 7-42). Paris: Les Empêcheurs de penser en
rond.

Archer, M. (1996). Culture and Agency. The Place of Culture in Social Theory. Cambridge:
Cambridge University Press.

Axhausen, K. W. (2005). Activity Spaces, Biographies, Social Networks and Their Welfare
Gains and Externalities: Some Hypotheses and Empirical Results. Zurich: ETH Zurich, Insti-
tute for Transport Planning and Systems.

Baccini, P., and Bader, H.-P. (1996). Regionaler Stoffhaushalt: Erfassung, Bewertung und
Steuerung. Heidelberg: Spektrum.

Barley, S. R., and Tolbert, P. S. (1997). Institutionalization and Structuration: Studying the
Links between Action and Institution. Organization Studies 18, 93-117.

Barton, A. H. (1968). Bringing Society Back In. Survey Research and Macro-Methodology.
American Behavioral Scientist 12, 1-9.

Bass, F. M. (1969). A New Product Growth for Model Consumer Durables. Management
Science 15, 215-227.

Bonabeau, E. (1999). Editor’s Introduction: Stigmergy. Artificial Life 5, 95-96.

Brandes, U., Raab, J., and Wagner, D. (2001). Exploratory Network Visualization:
Simultaneous Display of Actor Status and Connections. Journal of Social Structure 2.
Retrieved January 17, 2006, from
http://www.cmu.edu/joss/content/articles/volume2/BradesRaabWagner.html

Cacioppo, J. T., and Petty, R. E. (1979). Effects of Message Repetition and Position on
Cognitive Response, Recall and Persuasion. Journal of Personality and Social Psychology
27, 97-109.

CIRAD (2001). Multi-Agent Systems. Retrieved October 25, 2005, from
http://cormas.cirad.fr/fr/infoleg/infoleg.htm.

Collins, B. E., and Guetzkow, H. (1964). A Social Psychology of Group Processes for Decision
Making. New York, NY: Wiley.

Day, G. S., and Montgomery, D. B. (1983). Diagnosing the Experience Curve. Journal of
Marketing 47, 44-58.

Deroïan, F. (2002). Formation of Social Networks and Diffusion of Innovations. Research
Policy 31, 835-846.

DG-ENV (2002). Fiscal Measures to Reduce CO2 Emissions from New Passenger Cars. Final
Report by COWI A/S under a Contract to Directorate General for the Environment,
January 2002. Retrieved December 7, 2005, from
http://europa.eu.int/comm./environment/co2/cowi_finalreport.pdf

Environmental Modeling and Decision Making (2005, July 15). ETH Zurich – Natural and
Social Science Interface. Retrieved January 2, 2006, from
http://www.uns.ethz.ch/res/emdm

Erneuerbare Energie (2006, January 2). Wikipedia: The Free Encyclopedia. Retrieved
January 2, 2006, from
http://de.wikipedia.org/wiki/Erneuerbare_Energie.

Faires, J. D., and Burden, R. L. (1994). Numerische Methoden. Näherungsverfahren und ihre
Praktische Anwendung. Heidelberg: Spektrum.

FUJITSU LIMITED (1989). SSL II USER’S GUIDE (Scientific Subroutine Library). Retrieved
January 3, 2006, from
http://www.lahey.com/docs/ssl2_win.pdf.

Gallagher, R., and Appenzeller, T. (1999). Beyond Reductionism. Science 284, 79.

Giddens, A. (1995). Die Konstitution der Gesellschaft [Original title: The Constitution of
Society]. 3rd ed. Frankfurt: Campus. (Original work published 1984).

Gillespie, D. T. (1976). A General Method for Numerically Simulating the Stochastic Time
Evolution of Coupled Chemical Reactions. Journal of Computational Physics 22, 403-434.

Gillespie, D. T. (1977). Exact Stochastic Simulation of Coupled Chemical Reactions. Journal
of Physical Chemistry 81, 2340-2361.

Granovetter, M. (1978). Threshold Models of Collective Behavior. American Journal of
Sociology 83, 1420-1443.

Grassé, P.-P. (1959). La Réconstruction du Nid et les Coordinations Interindividuelles chez
Bellicositermes Natalensis et Cubitermes Sp. La Théorie de la Stigmergie: Essai
d’Interprétation du Comportement des Termites Constructeurs. Insectes Sociaux 6, 41-83.

Griffin, E. (1991). A First Look at Communication Theory. New York: McGraw-Hill.

Haag, D. (2000). Models for the Representation of Ecological Systems? The Validity of
Experimental Model Systems and of Dynamical Simulation Models as to the Interaction
with Ecological Systems. Doctoral dissertation, University of Hohenheim, Germany.

de Haan, P. (2005). Modeling Environmental Decision Making in Individual Transport
Using Multi-Agent Systems. Unpublished Presentation.

Haken, H. (2003, October 22). H. Haken: Synergetik (Center of Synergetics). Retrieved
November 16, 2005, from
http://www.itp1.uni-stuttgart.de/arbeitsgruppen/?W=5&T=1.

Hartree-Fock-Methode (2005, August 15). Wikipedia: The Free Encyclopedia. Retrieved
August 15, 2005, from
http://de.wikipedia.org/wiki/Hartree-Fock-Methode.

Hansmann, R., Bernasconi, P., Smieszek, T., Loukopoulos, P., and Scholz, R. W. (in press).
Justifications and Self-Organization as Determinants of Recycling Behavior: The Case of
Used Batteries. Resources, Conservation & Recycling.

Hansmann, R., and Scholz, R. W. (2002). Nutzenargumente und die Akzeptanz von
Videoüberwachung. Eine quasi Experimentelle Studie [Arguments of Usefulness and
Acceptance of Video Surveillance: An Experimental Study]. Swiss Journal of Sociology 28,
425-434.

Hogeweg, P., and Hesper, B. (1990). Individual-Oriented Modelling in Ecology.
Mathematical and Computer Modelling 13, 83-90.

Holland, O., and Melhuish, C. (1999). Stigmergy, Self-Organization, and Sorting in
Collective Robotics. Artificial Life 5, 173-202.

Imboden, D., and Koch, S. (2005). Systemanalyse: Einführung in die Mathematische
Modellierung Natürlicher Systeme. Berlin: Springer.

Institution (2005, December 6). Wikipedia: The Free Encyclopedia. Retrieved December 18,
2005, from
http://en.wikipedia.org/wiki/Institution.

Ising, E. (1925). Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik 31, 253-
258.

Janis, I. L. (1982). Groupthink. Boston: Houghton Mifflin.

Johnson, S. (2001). Emergence: The Connected Lives of Ants, Brains, Cities, and Software.
New York: Scribner.

Jungermann, H., Pfister, H.-R., and Fischer, K. (1998). Die Psychologie der Entscheidung.
Heidelberg: Spektrum.

Kacperski, K., and Hołyst, J. A. (2000). Phase Transitions as a Persistent Feature of Groups
with Leaders in Models of Opinion Formation. Physica A 287, 631-643.

Kadanoff, L. P. (2000). Statistical Physics: Statics, Dynamics, and Renormalization. River
Edge, NJ: World Scientific.

Kadtke, J. B., and Kravtsov, Y. A. (1996). Introduction. In: Y. A. Kravtsov and J. B. Kadtke
(Eds.), Predictability of Complex Dynamical Systems (pp. 3-22). Berlin: Springer.

Keeney, R. L., and Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and
Value-Tradeoffs. New York: John Wiley & Sons.

Kobe, S. (n.d.). Das Ising-Modell – Gestern und Heute. Retrieved December 15, 2005, from
http://www.physik.tu-dresden.de/itp/members/kobe/isingphbl/

Kolata, G. (1999). Flu: The Story of the Great Influenza Pandemic of 1918 and the Search for
the Virus that caused it. New York: Farrar, Straus and Giroux.

Kramers, H. A. (1940). Brownian Motion in a Field of Force and the Diffusion Model of
Chemical Reactions. Physica 7, 284-304.

Kunczik, M., and Zipfel, A. (2001). Publizistik. Köln: Böhlau.

Kurdikar, D. L., Somvársky, J., Dusek, K., and Peppas, N. A. (1995). Development and
Evaluation of a Monte Carlo Technique for the Simulation of Multifunctional
Polymerizations. Macromolecules 28, 5910-5920.

Latané, B., Liu, J. H., Nowak, A., Bonevento, M., and Zheng, L. (1995). Distance Matters:
Physical Space and Social Impact. Personality and Social Psychology Bulletin 21, 795-805.

Lazarsfeld, P. F., Berelson, B., and Gaudet, H. (1969). Wahlen und Wähler: Soziologie des
Wahlverhaltens [Original title: The People’s Choice – How the Voter Makes up his Mind
in a Presidential Campaign]. Neuwied: Luchterhand. (Original work published 1944).

Ledvij, M. (2003). Curve Fitting Made Easy. The Industrial Physicist 9, 24-27.

Levenberg-Marquardt algorithm (2005, October 2). Wikipedia: The Free Encyclopedia.
Retrieved January 2, 2006, from
http://en.wikipedia.org/wiki/Levenberg-Marquardt_algorithm.

Lewenstein, M., Nowak, A., and Latané, B. (1992). Statistical Mechanics of Social Impact.
Physical Review A 45(2), 763-776.

Li, N., Steiner J., and Tang, S. (1994). Convergence and Stability Analysis of an Explicit
Finite Difference Method for 2-Dimensional Reaction-Diffusion Equations. Journal of the
Australian Mathematical Society, Series B, 36, 234-241.

Liebowitz, S. J., and Margolis, S. E. (1990). The Fable of the Keys. Journal of Law and
Economics 33, 1-25.

Liu, W. T., and Duff, R. W. (1972). The Strength in Weak Ties. The Public Opinion Quarterly
36, 361-366.

Lurie, N. H. (2004). Decision Making in Information-Rich Environments: the Role of
Information Structure. Journal of Consumer Research 30, 473-486.

Lyons, M. (2005). Knowledge and the Modelling of Complex Systems. Futures 37, 711-719.

Marsden, P. V. (2005). Recent Developments in Network Measurement. In: P. J.
Carrington, J. Scott, and S. Wasserman (eds.), Models and Methods in Social Network
Analysis. Cambridge: Cambridge University Press.

McCombs, M. E., and Shaw, D. L. (1972). The Agenda-Setting Function of Mass Media. The
Public Opinion Quarterly 36, 176-187.

McQuarrie, D. A. (1967). Stochastic Approach to Chemical Kinetics. Journal of Applied
Probability 4, 413-478.

Michalewicz, Z., and Fogel, D. B. (2000). How to Solve It: Modern Heuristics. Berlin:
Springer.

Miljkovic, D. (2005). Rational Choice and Irrational Individuals or Simply an Irrational
Theory: A Critical Review of the Hypothesis of Perfect Rationality. The Journal of Socio-
Economics 34, 621-634.

Modell (1999). Brockhaus – Die Enzyklopädie: in 24 Bänden. 20th ed. Leipzig: Brockhaus.

Mosler, H.-J., and Brucks, W. M. (2003). Integrating Commons Dilemma Findings in a
General Dynamic Model of Cooperative Behavior in Resource Crisis. European Journal of
Social Psychology 33, 119-133.

Mosler, H.-J., and Tobias, R. (submitted). Simulation und Modellierung. In: E. D.
Lantermann and V. Linneweber (Eds.), Enzyklopädie der Umweltpsychologie Band 1:
Grundlagen, Paradigmen und Methoden der Umweltpsychologie. Bern: Hogrefe.

Müller, M., and de Haan, P. (2006, January). Simulating and Forecasting Individual Car
Choice Decision Processes under Future Governmental Incentive Schemes Promoting Fuel-
Efficient New Cars. Paper presented at the Annual Meeting of the Transportation
Research Board (TRB), Washington D. C.

Müller, R. (1983). Zur Geschichte des Modelldenkens und des Modellbegriffs. In: H.
Stachowiak (Ed.), Modelle — Konstruktion der Wirklichkeit. München: Wilhelm Fink.

Niedrigenergiefahrzeuge (2005, December 28). Wikipedia: The Free Encyclopedia.
Retrieved January 2, 2006, from
http://de.wikipedia.org/wiki/Dreiliterauto.

Nowak, A., Szamrej, J., and Latané, B. (1990). From Private Attitude to Public Opinion: A
Dynamic Theory of Social Impact. Psychological Review 97, 362-376.

Odell, J. (2000). Multiagent Systems using Small World Networks. Paper presented at the
2000 Workshop Agents in Industry conducted at the Fourth International Conference
on Autonomous Agents, Barcelona, Spain. Paper retrieved October 26, 2005, from
http://www.ca.sandia.gov/FIPAPDM/odell_pos.pdf

Opel, M. (2005). Magnetismus. Chapter 7. Lecture Notes, Technical University of Munich.
Retrieved August 15, 2005, from
http://www.wmi.badw.de/E23/Lehre/Skript/Magnetismus/Kapitel-7.pdf.

Ormerod, P. (2005). Complexity and the Limits to Knowledge. Futures 37, 721-728.

Ottino, J. M. (2004). Engineering Complex Systems. Nature 427, 399.

Popper, K. R. (1995). Auf der Suche nach einer besseren Welt. 8th ed. München: Piper.

Reichert, P. (1994). Concepts Underlying a Computer Program for the Identification and
Simulation of Aquatic Systems. Dübendorf: Swiss Federal Institute for Environmental
Science and Technology.

Reno, R. R., Cialdini, R. B., and Kallgren, C. A. (1993). The Transsituational Influence of
Social Norms. Journal of Personality and Social Psychology 64, 104-112.

Rogers, E. M., and Bhowmik, D. K. (1970). Homophily-Heterophily: Relational Concepts for
Communication Research. The Public Opinion Quarterly 34, 523-538.

Rogers, E. M. (2003). Diffusion of Innovations. 5th ed. New York: Free Press.

Sarup, G. (1992). Sherif’s Metatheory and Contemporary Social Psychology. In: D.
Granberg and G. Sarup (Eds.), Social Judgement and Intergroup Relations: Essays in Honor
of Muzafer Sherif (pp. 55-73). New York: Springer.

Schafer, A., and Victor, D. G. (1999). Global Passenger Travel: Implications for Carbon
Dioxide Emissions. Energy 24, 657-679.

Schenk, M. (2002). Medienwirkungsforschung. 2nd ed. Tübingen: Mohr Siebeck.

Scholz, R. W., and Binder, C. (2003). The Paradigm of Human-Environment Systems.
Zurich: ETH Zurich, Natural and Social Science Interface.

Scholz, R. W., and Tietje, O. (2002). Embedded Case Study Methods. Integrating
Quantitative and Qualitative Knowledge. Thousand Oaks: Sage.

Schweitzer, F. (2004). Coordination of Decisions in a Spatial Model of Brownian Agents.
In: M. Gallegati, A. P. Kirman, and M. Marsili (Eds.), The Complex Dynamics of Economic
Interaction (pp. 303-318). Berlin: Springer.

Schweitzer, F. (2003). Brownian Agents and Active Particles. Collective Dynamics in the
Natural and Social Sciences. Berlin: Springer.

Small, P. (n.d.). Is it Stigmergy? Retrieved October 27, 2005, from
http://www.stigmergicsystems.com/stig_v1/stigrefs/article8.html.

Smieszek, T., and Mieg, H. A. (2003). Bewertung von Umweltgütern: Contingent valuation,
conjoint analysis und andere Bewertungsmethoden im kritischen Vergleich. Zurich: ETH
Zurich, Human-Environmental Interaction.

Stachowiak, H. (1983). Erkenntnisstufen zum Systematischen Neopragmatismus und zur
Allgemeinen Modelltheorie. In: H. Stachowiak (Ed.), Modelle — Konstruktion der
Wirklichkeit (pp. 87-146). München: Wilhelm Fink.

Stern, P. C. (2000). Psychology and the Science of Human-Environment Interactions.
American Psychologist 55, 523-530.

Stewart, J. Q. (1941). An Inverse Distance Variation for Certain Social Influences. Science
93, 89-90.

Tarde, G. (2001). Les Lois de l’imitation. Paris: Les Empêcheurs de penser en rond. (Original
work published 1890).

Teraji, S. (2003). Herd Behavior and the Quality of Opinions. The Journal of Socio-
Economics 32, 661-673.

Torus (2005, July 29). Wikipedia: The Free Encyclopedia. Retrieved August 18, 2005, from
http://de.wikipedia.org/wiki/Torus.

Travers, J., and Milgram, S. (1969). An Experimental Study of the Small World Problem.
Sociometry 32, 425-443.

Turner, R. H. (1990). Some Contributions of Muzafer Sherif to Sociology. Social Psychology
Quarterly 53, 283-291.

Valente, T. W. (2005). Network Models and Methods for Studying the Diffusion of
Innovations. In: P. J. Carrington, J. Scott, and S. Wasserman (Eds.), Models and Methods in
Social Network Analysis (pp. 98-116). Cambridge: Cambridge University Press.

Wächter, M. (1999). Rational Action and Social Networks in Ecological Economics. Doctoral
dissertation, Swiss Federal Institute of Technology Zurich, Switzerland.

Wainwright, J., and Mulligan, M. (2004). Modelling Human Decision-Making. In: J.
Wainwright & M. Mulligan (Eds.), Environmental Modelling: Finding Simplicity in
Complexity (pp. 225-244). Chichester: Wiley.
Complexity (pp. 225-244). Chichester: Wiley.

Watts, D. J., and Strogatz, S. H. (1998). Collective Dynamics of ‘Small-World’ Networks.
Nature 393, 440-442.

Weidlich, W. (1991). Physics and Social Sciences – the Approach of Synergetics. Physics
Reports 204, 1-163.

Weiss, G. (2000). Prologue. In: G. Weiss (Ed.), Multiagent Systems: A Modern Approach to
Distributed Artificial Intelligence (pp. 1-26). Cambridge, MA: The MIT Press.

Wooldridge, M. (2000). Intelligent Agents. In: G. Weiss (Ed.), Multiagent Systems: A
Modern Approach to Distributed Artificial Intelligence (pp. 27-77). Cambridge, MA: The
MIT Press.

Zimbardo, P. G. (1992). Psychologie. 5th ed. Berlin: Springer.

Table of Figures

Figure 1: Example for a Real Social Network: Women Friendships in Cameroon. ...................5
Figure 3: The Strength of Weak Ties or the Importance of Heterophilous Links ...................18
Figure 4: Scheme of the Two-Step Flow Theory.................................................................................. 19
Figure 5: Process Scheme of Synergetics; after Schweitzer, 2003 ...............................................24
Figure 6: Scheme of a Stigmergic Process; after Grassé, 1959 ...................................................... 27
Figure 7: The Functional Principle of Schweitzer’s (2004) Model................................................ 37
Figure 8: From a Continuous to a Discrete Differential Calculus .................................................41
Figure 9: Transformation from a Plane Area to a Torus .................................................................. 43
Figure 10: Influence of the Social Temperature on the Selection Probability ........................44
Figure 11: Typical Simulation Results of Schweitzer’s (2004) Model ........................ 46
Figure 12: Innovative versus Conservative Systems.......................................................................... 49
Figure 13: Separation of Five Adopter Groups; after Rogers, 2003 ............................................... 51
Figure 14: A Sigmoid Function and its First Derivative..................................... 54
Figure 15: Best Fit and Worst Fit Plots of Different Parameter Combinations .......................59
Figure 16: Mean W-LS versus D and k ......................................................................................................62
Figure 17: Relative Standard Deviation of W-LS versus D and k....................................................63
Figure 18: Mean Duration Tw versus D and k.........................................................................................63
Figure 19: Relative Standard Deviation of Tw versus D and k ........................................................ 64
Figure 20: Schematic Overview of the Model’s Dependence on k and D..................65
Figure 21: Comparison Between the Standard and the Simplified Model .............................. 66
Figure 22: Process Scheme of a Socioeconomic Decision Model .................................................. 71
Figure 23: Simulation of Case Scenario A............................................................................................... 74
Figure 24: Simulation of Case Scenario B............................................................................................... 75
Figure 25: Spatial Patterns of Case Scenario B (after 201 time units).........................................76
Figure 26: Simulation of Case Scenario C ..............................................................................................76
Figure 27: Simulation Results of an Urban-Rural System .............................................................. 78

Table of Tables

Table 1: Parameter Space of the Agent Characteristics....................................................................56


Table 2: Optimal Parameter Combinations .......................................................................................... 57
Table 3: Significance and Magnitude of Effect; W-LS as Dependent Variable.......................58
Table 4: Significance and Magnitude of Effect; Tw as Dependent Variable............................ 60
Table 5: Optimal Parameter Values for the Simplified Model ......................................................67
Table 6: Joint Probability ............................................................................................................................... 72

Appendix

The appendix consists of the following information:

On the subsequent pages:

• A table showing (a) the duration of the adoption process, (b) the weighted squared
sums of the residuals, and (c) the ultimate number of adopters as a function of the
decay rate k and the diffusion constant D (see subsection 4.2.2).
• A FORTRAN subroutine containing the code for the development of the
communication field.
• A FORTRAN subroutine containing the code for the decision-making according
to section 4.3.
• Two FORTRAN subroutines containing the code for the decision-making
according to section 4.1.
• Comments on some of the variables used in the presented subroutines.

On the CD-ROM:

• All raw data from the simulation runs of subsection 4.2.2
• All raw data from the simulation runs of subsection 4.2.3

k \ D:  1.8 | 2.0 | 2.2 | 2.4 | 2.6 | 2.8 | 3.0 | 3.2 | 3.4 | 3.6 | 3.8 | 4.0

k = 0.03
Tw:   3837/2094 | NA/NA | NA/NA | 3074/2801 | NA/NA | NA/NA | NA/NA | NA/NA | 3381/3284 | NA/NA | 2905/2813 | 2314/2267
W-LS: NA/NA | NA/NA | NA/NA | NA/NA | NA/NA | NA/NA | NA/NA | NA/NA | 0.01/0.06 | NA/NA | 0.09/0.18 | 0.07/0.11
Runs: 0/28.5 | 0/29.1 | 0/29.5 | 0/28.6 | 0/29.1 | 0/29.3 | 0/29.5 | 0/29.5 | 0/29.4 | 0/29.7 | 0/29.9 | 0/30.2

k = 0.05
Tw:   6408/2587 | NA/NA | NA/NA | 5271/2561 | 3945/1074 | 5314/1871 | 3584/2059 | 4064/2478 | 2982/1934 | 2754/2037 | 3679/2493 | NA/NA
W-LS: 0.05/0.04 | NA/NA | NA/NA | 0.08/0.08 | 0.06/0.08 | 0.02/0.01 | -0.08/0.35 | 0.01/0.38 | 0.1/0.2 | 0.04/0.05 | 0.09/0.18 | NA/NA
Runs: 0/30.6 | 0/29.9 | 0/30.5 | 0/31.8 | 0/31.3 | 0/30.4 | 0/31 | 0/31.5 | 0/30.7 | 0/31.0 | 0/30.2 | 0/31.6

k = 0.07
Tw:   655/217 | 933/604 | 1387/1533 | 2759/3084 | 5196/3256 | 5452/2867 | 7263/1684 | 5021/1347 | 6073/1843 | 4824/1552 | 4959/1633 | 4366/1760
W-LS: 12.8/21.0 | 1.6/1.7 | 13.4/38.6 | 6.2/14.3 | 1.9/5.4 | 0.34/0.75 | 0.07/0.05 | 0.09/0.07 | 0.07/0.03 | 0.08/0.10 | 0.08/0.08 | 0.09/0.05
Runs: 0/84.9 | 0/47.6 | 0/56.7 | 0/52.5 | 0/40.7 | 0/33.1 | 0/33.1 | 0/32.3 | 0/33.0 | 0/32.7 | 0/33.0 | 0/33.8

k = 0.09
Tw:   324/56 | 304/75 | 295/47 | 276/51 | 314/182 | 348/220 | 828/1681 | 414/147 | 2806/3108 | 4348/3578 | 5985/3163 | 6564/3073
W-LS: 848/629 | 1289/754 | 880/384 | 1493/824 | 958/940 | 1326/920 | 503/402 | 297/488 | 226/471 | 4.3/7.7 | 315/994 | 40/126
Runs: 3/894.4 | 5/949.9 | 7/946 | 9/968.8 | 6/774.2 | 5/774.4 | 2/520.1 | 1/324.1 | 1/236.7 | 0/50.6 | 1/133 | 0/74.8

k = 0.11
Tw:   250/53 | 209/30.5 | 205/35 | 161/20.5 | 170/19.2 | 148/22.7 | 153/29.4 | 150/23.6 | 167/39.2 | 174/32.9 | 171/35 | 183/45
W-LS: 672/453 | 735/371 | 1188/1127 | 884/469 | 1285/967 | 937/643 | 847/439 | 1276/749 | 1742/767 | 1672/533 | 2292/1031 | 1905/887
Runs: 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 9/978.5 | 8/925.6

k = 0.13
Tw:   205/22 | 165/25.5 | 134/16.8 | 130/13.6 | 121/21.5 | 108/16.3 | 116/10.9 | 109/13.7 | 108/16.8 | 107/18.1 | 103/13.2 | 105/19.2
W-LS: 390/112 | 433/222 | 630/312 | 663/561 | 858/591 | 1029/447 | 548/320 | 656/449 | 932/566 | 1332/630 | 1084/624 | 1439/478
Runs: 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000

k = 0.15
Tw:   161/30 | 140/18 | 124/13.5 | 102/14 | 95/7.9 | 95/14.9 | 83/9.9 | 87/8.8 | 79/10.9 | 74/9.7 | 80/13.5 | 77/14.5
W-LS: 419/357 | 415/255 | 491/291 | 531/202 | 338/174 | 446/177 | 643/798 | 890/754 | 460/252 | 883/501 | 628/364 | 865/457
Runs: 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000

k = 0.17
Tw:   137/21 | 117/16 | 100/7.3 | 88/9.5 | 82/9.4 | 80/7.0 | 79/8.1 | 70/5.1 | 66/7.0 | 66/7.7 | 65/9.8 | 61/6.4
W-LS: 337/147 | 467/157 | 456/349 | 441/292 | 490/367 | 366/261 | 403/151 | 357/174 | 671/467 | 628/438 | 720/619 | 766/424
Runs: 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000

k = 0.19
Tw:   127/8.5 | 104/14.5 | 92/13 | 82/10 | 74/6.8 | 68/6.3 | 65/5.4 | 63/7.9 | 61/5.7 | 59/6.4 | 58/6.4 | 56/7.6
W-LS: 269/105 | 422/326 | 303/144 | 314/180 | 305/189 | 536/501 | 317/157 | 492/390 | 369/192 | 468/276 | 436/295 | 309/145
Runs: 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000

k = 0.21
Tw:   114/12.5 | 93/12 | 84/5.9 | 75/5 | 68/7.3 | 65/8.3 | 59/6.3 | 56/6.1 | 54/6.9 | 54/4.1 | 51/6.6 | 49/4.5
W-LS: 279/157 | 201/110 | 211/85.5 | 284/163 | 295/161 | 309/215 | 283/138 | 262/146 | 424/215 | 496/346 | 426/245 | 458/157
Runs: 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000

k = 0.23
Tw:   103/10.6 | 87/7.6 | 76/6.6 | 71/7.0 | 62/7.3 | 57/4.3 | 56/5.7 | 53/4.1 | 52/5.0 | 48/4.8 | 47/5.7 | 46/4.4
W-LS: 340/196 | 342/234 | 200/77 | 297/248 | 243/79 | 271/101 | 212/90 | 292/105 | 278/144 | 387/156 | 391/216 | 395/251
Runs: 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000 | 10/1000

k = 0.25
Tw:   97/11 | 81/9.6 | 72/6.7 | 64/5.1 | 57/5.1 | 56/7.0 | 50/5.2 | 50/5.4 | 48/5.6 | 45/4.9 | 44/5.3 | 44/4.5
W-LS: 366/191 | 438/186 | 235/59 | 321/183 | 291/129 | 306/124 | 382/141 | 303/107 | 397/172 | 354/180 | 432/219 | 384/150
Runs: 10/1000 | 10/1000 | 9/999.9 | 9/999.9 | 10/1000 | 8/999.8 | 5/999.5 | 6/999.6 | 8/999.8 | 9/999.9 | 9/999.9 | 7/999.7

k = 0.27
Tw:   88/9 | 78/9.2 | 69/6.5 | 60/6.1 | 57/6.0 | 54/5.5 | 50/4.1 | 47/4.4 | 45/4.2 | 43.6/3.9 | 41/3.1 | 41/4.1
W-LS: 437/236 | 386/108 | 442/220 | 349/169 | 423/168 | 429/127 | 405/94 | 546/199 | 603/251 | 610/231 | 660/148 | 633/164
Runs: 5/999.5 | 4/999.4 | 5/999.5 | 4/999.4 | 2/999.0 | 2/999.1 | 2/999.1 | 0/998.9 | 4/999.1 | 1/998.7 | 1/998.8 | 1/998.9

Tw line: av. duration of the process Tw / % of Tw; W-LS line: av. weighted sum of squares W-LS / % of W-LS; Runs line: no. of runs in which all agents converted / av. no. of agents converted; NA: no result available.

FORTRAN CODE – Development of the Communication Field

! t_step = 1 on the first call (t_step is provided by the calling program)

SUBROUTINE iterate (t_step)

USE global_variables; USE HGR; USE Library; IMPLICIT NONE


LOGICAL :: too_large
INTEGER :: x, y, cnt, cnt2, location, lx, ly, rx, ry, ux, uy, dx, dy, lloc, rloc, uloc, dloc
REAL :: delta_t, t2_step, temp, temp_lat
REAL, INTENT(IN) :: t_step

temp_lat = lattice_space**2
delta_t = 0.20*(lattice_space**2)/dif
temp = t_step
too_large = .true.

DO WHILE (too_large)

IF (delta_t<temp) THEN
t2_step = delta_t
ELSE
t2_step = temp
END IF

h_deriv = 0.

DO cnt = 1,nr_agents
x = agents(cnt*5-4)
y = agents(cnt*5-3)
location = (y-1)*nr_cols + x
IF (agents(cnt*5-2)==1) THEN
h_deriv(location) = dirac*REAL(agents(cnt*5-1))
END IF
END DO

h_deriv = h_deriv - (h1*decay)

DO y = 1,nr_rows
DO x = 1,nr_cols

lx = x - 1
ly = y
rx = x + 1
ry = y
ux = x
uy = y - 1
dx = x
dy = y + 1
IF (lx<1) lx = nr_cols
IF (rx>nr_cols) rx = 1
IF (uy<1) uy = nr_rows
IF (dy>nr_rows) dy = 1
location = (y-1)*nr_cols+x
lloc = (ly-1)*nr_cols+lx
rloc = (ry-1)*nr_cols+rx
uloc = (uy-1)*nr_cols+ux
dloc = (dy-1)*nr_cols+dx
h_deriv(location) = h_deriv(location) + &
dif*(h1(rloc)-2*h1(location)+h1(lloc))/temp_lat +&
dif*(h1(dloc)-2*h1(location)+h1(uloc))/temp_lat
END DO
END DO

h1 = h1 + (h_deriv*t2_step)

h_deriv = 0.

DO cnt = 1,nr_agents
x = agents(cnt*5-4)
y = agents(cnt*5-3)
location = (y-1)*nr_cols + x
IF (agents(cnt*5-2)==-1) THEN
h_deriv(location) = dirac*1.2*REAL(agents(cnt*5-1))
END IF
END DO

h_deriv = h_deriv - (h2*decay)

DO y = 1,nr_rows
DO x = 1,nr_cols

lx = x - 1
ly = y
rx = x + 1
ry = y
ux = x
uy = y - 1
dx = x
dy = y + 1
IF (lx<1) lx = nr_cols
IF (rx>nr_cols) rx = 1
IF (uy<1) uy = nr_rows
IF (dy>nr_rows) dy = 1
location = (y-1)*nr_cols+x
lloc = (ly-1)*nr_cols+lx
rloc = (ry-1)*nr_cols+rx
uloc = (uy-1)*nr_cols+ux
dloc = (dy-1)*nr_cols+dx
h_deriv(location) = h_deriv(location) + &
dif*(h2(rloc)-2*h2(location)+h2(lloc))/temp_lat +&
dif*(h2(dloc)-2*h2(location)+h2(uloc))/temp_lat
END DO
END DO

h2 = h2 + (h_deriv*t2_step)

temp = temp - delta_t


IF (temp<=0) too_large = .false.

END DO

[…]

END SUBROUTINE iterate
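The subroutine above integrates the field equation ∂h/∂t = s(x,t) − k·h + D·∇²h with an explicit Euler scheme on a torus, using the time step Δt = 0.2·Δx²/D, which satisfies the stability bound Δt ≤ Δx²/(4D) for the two-dimensional five-point stencil. The following is a minimal Python sketch of the same update rule, not the thesis code; lattice size and parameter values are arbitrary illustrations.

```python
def diffuse_decay(h, sources, D, k, dx, dt):
    """One explicit Euler step of  dh/dt = sources - k*h + D*Laplacian(h)
    on a periodic (torus) lattice, mirroring SUBROUTINE iterate."""
    rows, cols = len(h), len(h[0])
    new = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # five-point Laplacian with periodic (torus) boundary conditions
            lap = (h[y][(x + 1) % cols] + h[y][(x - 1) % cols] +
                   h[(y + 1) % rows][x] + h[(y - 1) % rows][x] -
                   4.0 * h[y][x]) / dx ** 2
            new[y][x] = h[y][x] + dt * (sources[y][x] - k * h[y][x] + D * lap)
    return new

# stability condition as in the thesis code: dt = 0.2 * dx**2 / D
dx, D, k = 1.0, 2.0, 0.1
dt = 0.2 * dx ** 2 / D
h = [[0.0] * 10 for _ in range(10)]
src = [[0.0] * 10 for _ in range(10)]
src[5][5] = 1.0          # a single agent depositing information (dirac * s_i)
h = diffuse_decay(h, src, D, k, dx, dt)
```

With k = 0 and no sources, the periodic Laplacian conserves the total field mass, which is a quick sanity check for the boundary handling.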

FORTRAN CODE – Subroutine Describing the Agents’ Decision-Making According to Section 4.3

SUBROUTINE opinion_module

USE global_variables; USE HGR; USE Library; IMPLICIT NONE


REAL :: transition1 ! Opinion transition rate
REAL :: transition2 ! Opinion transition rate, other direction
REAL :: rnd_value ! Random value
REAL :: rnd_value2 ! 2nd random value
REAL :: factor2 ! Weight for h2
REAL :: factor1 ! Weight for h1
REAL :: UtilInnov ! Total utility of the innovation
REAL :: ChangeProb ! Selection probability for the innovation
INTEGER :: cnt, x, y, location

reeval=reeval-1

DO cnt = 1, nr_agents
IF (reeval(cnt)<0) reeval(cnt)=0
x = agents(cnt*5-4)
y = agents(cnt*5-3)
location = (y-1)*nr_cols + x
CALL random_number(rnd_value)
CALL random_number(rnd_value2)

IF (agents(cnt*5)>=2) THEN
! Agent groups 2-5 differ only in the weight factor1 applied to the opposing field h1
SELECT CASE (agents(cnt*5))
CASE (2)
factor1 = 0.4
CASE (3)
factor1 = 0.8
CASE (4)
factor1 = 1.2
CASE (5)
factor1 = 1.6
END SELECT
transition1 = EXP(REAL(agents(cnt*5-2))*(h2(location)-factor1*h1(location))/temperature)
transition2 = EXP(REAL((-1)*agents(cnt*5-2))*(h2(location)-factor1*h1(location))/temperature)
IF ((rnd_value<=(transition1/(transition1+transition2))).OR.(reeval(cnt)==1)) THEN
UtilInnov=StartUtil*((1+(IncreasUtil/100.))**(LOG(REAL(nr_blue))/LOG(2.)))
ChangeProb=EXP(UtilInnov)/(EXP(UtilInnov)+EXP(10.0))
IF ((rnd_value2<=ChangeProb).AND.(agents(cnt*5-2)==1)) THEN
agents(cnt*5-2)=-1
nr_blue = nr_blue+1
reeval(cnt)=2
END IF
IF ((rnd_value2>=ChangeProb).AND.(agents(cnt*5-2)==-1)) THEN
agents(cnt*5-2)=1
nr_blue = nr_blue-1
h1(location)=h1(location)+(2*dirac*REAL(agents(cnt*5-1)))
END IF
END IF
END IF

END DO

END SUBROUTINE opinion_module
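The decision step above combines a Boltzmann-type selection probability built from the two communication fields with a logistic utility comparison against the fixed reference utility EXP(10.0). The following Python sketch summarizes that structure; the function names are my own, and the guard against a zero adopter count is my addition (the FORTRAN code takes the logarithm of nr_blue directly).

```python
import math

def selection_probability(theta, h1, h2, factor1, temperature):
    """Probability that an agent of opinion theta (+1 or -1) reconsiders its
    choice, following w ~ exp(theta * (h2 - factor1*h1) / T) as in the
    subroutine; factor1 weights the opposing field per agent group."""
    w1 = math.exp(theta * (h2 - factor1 * h1) / temperature)
    w2 = math.exp(-theta * (h2 - factor1 * h1) / temperature)
    return w1 / (w1 + w2)

def change_probability(start_util, increase_pct, nr_adopters, ref_util=10.0):
    """Logistic comparison of the innovation's utility against a fixed
    reference utility; the utility grows by increase_pct percent per
    doubling of the adopter count (max(..., 1) guards log(0))."""
    util = start_util * (1 + increase_pct / 100.0) ** math.log(max(nr_adopters, 1), 2)
    return math.exp(util) / (math.exp(util) + math.exp(ref_util))
```

When both fields vanish the selection probability is 1/2, and when the innovation's utility equals the reference utility the change probability is likewise 1/2, which matches the symmetry of the FORTRAN expressions.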

FORTRAN CODE – Subroutines Describing the Agents’ Decision-Making According to Section 4.1

SUBROUTINE time_step (t_step)

USE global_variables; USE HGR; USE Library; IMPLICIT NONE


INTEGER :: cnt, x, y, location
REAL, PARAMETER :: ny = 1
REAL :: trans_rate, rnd_number
REAL, INTENT(OUT) :: t_step

trans_rate = 0.

DO cnt = 1, nr_agents
x = agents(cnt*5-4)
y = agents(cnt*5-3)
location = (y-1)*nr_cols + x
trans_rate = trans_rate + ny * EXP(REAL(agents(cnt*5-2))*(h2(location)-h1(location))/temperature)
END DO

CALL random_number(rnd_number)
t_step = -(1/trans_rate)*LOG(rnd_number)

END SUBROUTINE time_step
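SUBROUTINE time_step draws the waiting time until the next decision event as t = −ln(r)/Σᵢwᵢ, i.e. an exponentially distributed interval over the summed transition rates, as in Gillespie-type stochastic simulations. A minimal Python sketch (function name my own, not the thesis code):

```python
import math
import random

def waiting_time(rates, rng=random.random):
    """Exponentially distributed waiting time t = -ln(r) / sum(rates),
    where rates are the agents' individual transition rates w_i,
    mirroring SUBROUTINE time_step."""
    return -math.log(rng()) / sum(rates)
```

Passing a deterministic `rng` makes the draw reproducible, which is convenient for testing.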

SUBROUTINE decision_module

USE global_variables; USE HGR; USE Library; IMPLICIT NONE


REAL :: sum_transition, rnd_value, ny
INTEGER :: cnt, x, y, location

ny = 1.
sum_transition = 0.

DO cnt = 1, nr_agents
x = agents(cnt*5-4)
y = agents(cnt*5-3)
location = (y-1)*nr_cols + x
sum_transition = sum_transition + ny * EXP(REAL(agents(cnt*5-2))*(h2(location)-h1(location))/temperature)
END DO

CALL random_number(rnd_value)
rnd_value = rnd_value*sum_transition

sum_transition = 0.
cnt = 0

DO WHILE ((sum_transition<=rnd_value).AND.(cnt<nr_agents))
cnt = cnt + 1
x = agents(cnt*5-4)
y = agents(cnt*5-3)
location = (y-1)*nr_cols + x
sum_transition = sum_transition + ny * EXP(REAL(agents(cnt*5-2))*(h2(location)-h1(location))/temperature)
END DO

agents(cnt*5-2) = agents(cnt*5-2)*(-1)

END SUBROUTINE decision_module
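SUBROUTINE decision_module then selects which agent flips by roulette-wheel sampling: a uniform random number is scaled by the total transition rate, and the cumulative sum over the agents is walked until it exceeds that target. A minimal Python sketch of the same selection scheme (not the thesis code):

```python
import random

def roulette_select(rates, rng=random.random):
    """Pick index i with probability rates[i] / sum(rates) by walking the
    cumulative sum past a scaled random number, as in decision_module."""
    target = rng() * sum(rates)
    cum = 0.0
    for i, r in enumerate(rates):
        cum += r
        if cum > target:
            return i
    return len(rates) - 1   # guard against floating-point round-off
```

An agent with three times the transition rate of another is thus three times as likely to be the one that changes its opinion in the next event.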

FORTRAN CODE – Comments on Some of the Used Variables.

agents(xxx) array containing the following information of all agents according to following order:
(1) x-coordinate
(2) y-coordinate
(3) initial opinion [+1 or -1]
(4) strength si
(5) agent group [1=innovator; 5=laggard]

lattice_space Δx = Δy = 1 – spacing of the spatial lattice


dif D – diffusion constant
decay k – decay rate
t_step Δt – time step
h1 h – communication field of opinion +1
h2 h⁻ – communication field of opinion −1
h_deriv ∂h/∂t – temporal derivative of the communication field
temperature T – social temperature

reeval determines after how many time steps the reevaluation takes place

delta_t stability condition according to section 4.1

nr_agents, nr_cols, nr_rows total number of agents; number of columns; number of rows

nr_blue number of agents who use the innovation

