
Harry Varvoglis

History and Evolution of Concepts in Physics

Harry Varvoglis
Unit of Mechanics and Dynamics
Department of Physics
University of Thessaloniki
Thessaloniki
Greece

ISBN 978-3-319-04291-6
ISBN 978-3-319-04292-3 (eBook)
DOI 10.1007/978-3-319-04292-3
Springer Cham Heidelberg New York Dordrecht London

Library of Congress Control Number: 2013957713


Based in part on the Greek language book History and Evolution of Ideas in Physics, Planetarium of Thessaloniki (2011).
© Springer International Publishing Switzerland 2014
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Exempted from this legal reservation are brief
excerpts in connection with reviews or scholarly analysis or material supplied specifically for the
purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the
work. Duplication of this publication or parts thereof is permitted only under the provisions of
the Copyright Law of the Publisher's location, in its current version, and permission for use must
always be obtained from Springer. Permissions for use may be obtained through RightsLink at the
Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)

To Maria, Natasha and Christina, who
listened to me patiently for so many years
talking about physics and astronomy

Preface to the Greek Edition

As the title History and Evolution of Concepts in Physics indicates, this book
essentially encompasses two different approaches to the same topic, which is the
course of evolution of physics throughout history, from historical times to the
present. The first approach, History of Physics, deals with people (great scientists,
important inventors, etc.) and their activities, along with their personalities, their
family and scientific environment, and the social framework of their era. Inevitably, the life and work of every great scientist occupies a separate chapter; if the
scientist has contributed to several areas of physics, all contributions are discussed
in the same chapter.
In doing so, however, one tends to lose the coherence of the Evolution of
Concepts in Physics, which is the second approach. The reason is that, in every
branch of physics, the concepts evolved at different paces over the years, depending on the experimental data available at the time (which, in turn, depend on the existing laboratory instruments and their precision) and the available mathematical tools (which, in turn, depend on the stage of development of the mathematical arsenal).
For instance, the topics of mechanics and gravitation were feasible targets for
Newton to attempt the development of the corresponding theories, while optics
was not, since the latter required more advanced experimental concepts (e.g.,
diffraction and polarization), more advanced mathematics (e.g., complex functions), and a longer time span for those concepts to attain a certain level of maturity.
As a result, concepts in every branch of physics evolved at different paces, following different routes, a fact suggesting that it may be better to present each topic
separately. This approach alone, however, could compromise the unity of the
presentation, since the work of every great scientist would have appeared fragmented in the various chapters of the book.
It is obvious that these two different approaches for presenting an account of the evolution of physics over the centuries each have advantages as well as weaknesses. In this book, I tried to reconcile the two approaches: I place emphasis on the evolution of concepts, including, at the same time, several historical notes for every scientist, in an attempt to present his work and personality
within the framework of his era. Hopefully, the final result will help the reader to
understand the way physics evolved to the present day.

I would like to thank all those who helped in improving the book, pointing out
mistakes, oversights, and ambiguities in the draft. These are, in alphabetical order,
Profs. B. Charmandaris, K. Melidis, S. Persidis, N. Spyrou, and A. Varvoglis.
I would also like to thank my colleagues Profs. E. Meleziadou-Dompoula and
J. Touloumakos, for their help in the paragraph regarding the Museum of
Alexandria, as well as the text editor of the Greek edition and my former student,
S. Oikonomidis, for his useful remarks that helped improve the book.
Thessaloniki, April 2011

Preface to the English Edition

The translation of the Greek original into English was performed while I was a visiting professor in the Theoretical Astrophysics Section of the Institute for Astronomy and Astrophysics of the Eberhard Karls University of Tübingen during spring 2012. I would like to thank Prof. K. Kokkotas for the hospitality. The
English version of the book benefited from the suggestions by Prof. J. Teichmann
and an anonymous referee. I would like to thank Ch. Varvogli for her help in drawing Figs. 2.2, 2.3, 4.2, and 4.10, S. Kartsaklis of Klidarithmos Publishing for his help in drawing Figs. 4.8, 4.11, and 5.1, and Prof. V. Tsamakda for providing Fig. 7.1. Above all, I would like to thank M. Mikedis of Klidarithmos Publishing
for editing the final version of the book and Dr. Angela Lahee for her help since
the first submission of the manuscript to Springer.
I would be glad to provide, free of charge, to anyone interested a full set of electronic slides covering the content of the book, suitable for approximately thirty 45-minute lectures.
Thessaloniki, November 2013


Contents

Part I  From Ancient Greece to the Renaissance

1 Physical Sciences and Physics
  1.1 Philosophy of Physics
  1.2 From Natural Philosophy to Physics

2 The Ideas of Greeks About Nature
  2.1 The Basic Assumptions of Aristotle on Motion and Gravity
  2.2 Success of the Basic Assumptions of Aristotle
  2.3 Failure of the Underlying Assumptions and Need to Adopt New Ones
  2.4 Critical Review of Aristotle's Theory
    2.4.1 Internal Contradictions
    2.4.2 Experimental Verification

3 From Classical Era to the Renaissance
  3.1 Hellenistic-Roman Times
  3.2 Middle Ages and the Renaissance
  3.3 Layout of the Book

Part II  From the Renaissance to the Present Era

4 The Major Branches of Physics
  4.1 Mechanics
    4.1.1 Kinematics: Galileo
    4.1.2 Dynamics–Gravity: Newton
    4.1.3 Solid Body: Huygens
    4.1.4 Analytical Mechanics
    4.1.5 Nonlinear Mechanics
    4.1.6 Mechanics Today
  4.2 Optics
    4.2.1 The Period up to the Renaissance
    4.2.2 Corpuscular Nature of Light: Newton
    4.2.3 Wave Nature of Light: Huygens
    4.2.4 Establishment of the Wave Theory: Thomas Young
    4.2.5 Completion of the Wave Theory: Fresnel
    4.2.6 Spectroscopy as a Branch of Optics
    4.2.7 Relationship Between Mechanics and Optics
    4.2.8 Optics Today
  4.3 Static Magnetism and Electricity
    4.3.1 From Antiquity to the Renaissance
    4.3.2 Development of Experimentation
    4.3.3 The Law of Electrostatic Force: Coulomb
    4.3.4 Relating Electricity, Magnetism and Gravity
  4.4 Electric Currents and Electromagnetism
    4.4.1 Invention of the Electric Cell
    4.4.2 Beyond Electric Current: Electrolysis and Electromagnetism
    4.4.3 Experimental Foundations of Electromagnetism: Faraday
    4.4.4 Theoretical Foundations of Electromagnetism: Maxwell
    4.4.5 Incompatibility Between Electromagnetism and Mechanics
    4.4.6 Electromagnetism Today
  4.5 Heat-Thermodynamics
    4.5.1 Introduction
    4.5.2 Phlogiston and Caloric Fluids
    4.5.3 First Axiom of Thermodynamics: Mayer, Joule, Helmholtz
    4.5.4 Second Axiom of Thermodynamics: Carnot, Thomson
    4.5.5 Entropy: Clausius
    4.5.6 Thermodynamics Today
  4.6 Kinetic Theory of Perfect Gases
    4.6.1 Relationship Between Thermodynamics and the Theory of Gases
    4.6.2 Atomic Theory
    4.6.3 Distribution Function: Maxwell
    4.6.4 Entropy and the Arrow of Time: Boltzmann

5 Physics of the 20th Century
  5.1 Quantum Mechanics
  5.2 Theory of Relativity
    5.2.1 Special Theory of Relativity
    5.2.2 General Theory of Relativity
  5.3 Theory of Chaos

6 Lessons From Three Centuries of Physics
  6.1 Geographical Area
  6.2 Methods of Organization
  6.3 Personality of Researchers
  6.4 Conclusions

7 Organization of Teaching and Research
  7.1 Universities Throughout the Centuries
    7.1.1 Ancient Greece-Byzantium
    7.1.2 Western Europe
  7.2 Research in Europe and the United States
  7.3 Dissemination of Research Results
    7.3.1 Publications
    7.3.2 Conferences

Further Reading

Abbreviations

FHW            Foundation of the Hellenic World (www.ime.gr)
INFN           Istituto Nazionale di Fisica Nucleare, Italy
Museo Galileo  Museo Galileo – Institute and Museum of the History of Science, Florence, Italy
NASA           National Aeronautics and Space Administration, USA
NOESIS         Science Center and Technology Museum NOESIS, Thessaloniki, Greece
Sparkmuseum    John D. Jenkins, www.sparkmuseum.com
STScI          Space Telescope Science Institute (Hubble telescope)

Part I

From Ancient Greece to the Renaissance

Chapter 1

Physical Sciences and Physics

1.1 Philosophy of Physics


The history of any discipline is always based on written texts.1 In this way, to
restrict ourselves to texts of Antiquity, the history of the Jewish people is based on
the books of the Old Testament, the history of the Persian Wars on the books by
Herodotus and the history of the Peloponnesian War on the books by Thucydides.
Even the history of the Trojan War is based on Homer's written work, although this was based, in turn, on earlier oral traditions of the Greeks of Homer's time. This rule, of course, admits no exception in the history of physics. This is the
main reason why the history of physics, and hence the evolution of concepts in this
science, necessarily starts from the ancient Greeks. It is certain that other people of
historical times were also involved in scientific activities, such as the Babylonians,
who developed astronomy, and the Egyptians, who developed geometry. But their
aim was to solve practical problems of their everyday life and not to understand
nature and its laws. The geometry of the ancient Egyptians was developed for the
purpose of redistributing land after the annual flooding of the Nile, while Babylonian
astronomy was limited to the simple recording of astronomical observations, with
a few surviving examples of predictions of future events. In contrast, the interpretation
of nature and its laws, in both these nations, was the responsibility of priests and
kings. In other words, the interpretation of nature for them was not a result of
rational thinking; it was based on truth by revelation. The truth was revealed to
rulers, nobles and priests, and accepted, without questioning, by the rest of the
people. This truth was closely related to the religion of each nation.
One can find indications of this mode of thinking in Greek mythology, where it
is stated that lightning is cast by Jupiter, earthquakes are caused by Hephaestus and
storms are raised by Poseidon with his trident. Generalizing this way of thinking,
one may conclude that any phenomenon, even the simplest one, such as the motion
of bodies, is caused by a god. But in the late 7th century BC there were Greeks
who believed that interpretations of this sort were not very reasonable. So, they tried to explain natural phenomena through a system of rational reasoning. This system was called natural philosophy and those who practiced it were called natural philosophers.2 The first natural philosophers in Ionia, as Asia Minor was named at that time, appeared in the late 7th and the beginning of the 6th century BC. The first important Ionian natural philosopher was Thales of Miletus (ca. 630 BC–ca. 543 BC), who was considered one of the Seven Sages of ancient Greece (Fig. 1.1). Other important natural philosophers of that era, also from Miletus, were Anaximander (ca. 610 BC–ca. 547 BC) and Anaximenes (ca. 585 BC–ca. 525 BC). Besides the Milesians, other major Ionian natural philosophers were Heraclitus (ca. 544 BC–ca. 484 BC) of Ephesus and Xenophanes (ca. 570 BC–ca. 480 BC) of Colophon. Another core of natural philosophers appeared in Great Greece (Magna Graecia), as the Greek colonies in southern Italy and Sicily were named, mainly represented by Pythagoras of Samos (ca. 575 BC–ca. 495 BC), Empedocles of Agrigento (ca. 495–ca. 435 BC) and Parmenides of Elea (ca. 514 BC–ca. 440 BC) (Fig. 1.2). Of the above philosophers, Thales and Pythagoras did not leave behind any written text, while the others wrote books, which unfortunately were lost, except for several extensive excerpts of the books of the latter two (Empedocles and Parmenides). Thus, almost all available information about the ideas of natural philosophers of the 6th century BC is based on brief references to their work, found in books of later authors.

1 This chapter, as well as the next one, is inspired by the ideas presented in the Introduction of Isaac Asimov's book, History of Physics.
The method followed by the first natural philosophers to explain natural phenomena proved to be extremely successful and was used by all scientistsresearchers until the Renaissance. This method seemed more or less similar to the
logical method, which was introduced many years later (in the Hellenistic era) and
in a more sophisticated form by Euclid of Alexandria (ca. 325 BCca. 265 BC) in
developing geometry. The Greek natural philosophers started from a postulate,
which seemed more or less self-evident, and continued with the logical conclusions that could be drawn from it. All conclusions that could be drawn from a set
of initial postulates formed a theory. The initial postulate, which seems obvious
but cannot be proven, in philosophy is called a hypothesis (in mathematics, it is
called an assumption or axiom). In the literature, one can often come across other terms
with the same meaning, like conjecture and (in German) Ansatz.

2 It is worth mentioning that today the words philosopher and philosophy have a completely different meaning. In the modern era, "Philosophy is the study of general and fundamental problems, such as those connected with existence, knowledge, values, reason, mind, and language" (as defined in Wikipedia). The change is largely due to the philosophical system of Socrates, who focused his thinking on the exploration of the internal world of man and not on the understanding of nature. More specifically, Socrates dealt with the consciousness of people, seeking to understand their behavior, ethics, motivation and response to intellectual problems. The most eminent representative of this tendency in ancient Greece was Plato, student of the great Socrates (Fig. 1.3).

Fig. 1.1 Thales, from E. Wallis, Illustrated History of the World, vol. 1 (1875)

The truth of an axiom cannot be proven within the framework of the theory which is based and developed on it. However, the axiom may be verified by showing that the theory developed on it is consistent with relevant experiments or observations. For example, one of the fundamental axioms of Euclid's geometry is that two parallel lines do not intersect. This axiom seems obvious but, as understood by later mathematicians, it holds true only in flat spaces. The surface of the Earth is not flat but curved, and that is why two adjacent meridians, which are parallel on the equator (i.e., at latitude φ = 0°), intersect at the poles. Therefore, Euclidean geometry holds only approximately on the surface of the Earth and only for relatively small distances. When the dimensions of the geometric drawings we study on its surface are of the order of hundreds of kilometers or more, deviations from Euclidean geometry start to appear, such as the fact that the sum of the angles of a triangle is greater than 180°. Therefore, we arrive at the conclusion that an axiom can provide a correct theory under certain conditions and a wrong one under others. Today, the criterion for accepting a theory, and consequently the axioms on which it is based, is the experimental verification of the predictions arising from it. Using this method, we can prove that a theory is wrong, but we can never prove that it is correct! In simple terms, if experiments are consistent with the theory, then we continue to use it. But if they do not agree, then we reject the theory and the axioms on which it is based, and introduce new axioms. As discussed in the book, this has happened several times in the recent history of physics, but not in ancient times.

Fig. 1.2 Pythagoras, from History of Philosophy (ca. 1660) by Thomas Stanley
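The spherical-triangle example can be made quantitative. For a triangle of area A drawn on a sphere of radius R, the sum of the angles exceeds two right angles by the spherical excess:

  α + β + γ = π + A/R²  (angles in radians)

For instance, the triangle formed by the equator and two meridians 90° of longitude apart has three right angles; its angle sum is 270°, and the excess of 90° (π/2) corresponds to an area of (π/2)R², exactly one eighth of the sphere's total surface of 4πR².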
So far, the relation between axioms and theories has not changed significantly.
All theories, from the theories of motion of Galileo and Newton to the general
theory of relativity of Einstein, are based necessarily on hypotheses-axioms. On
the other hand, from what has been said in the previous paragraph, it is evident that
hypotheses are the weak point of any theory. Therefore, it seems reasonable to try
to limit, as much as possible, the number of unproven assumptions, the axioms upon which a theory is based. This concept was first introduced explicitly by William of Ockham (1285–1349), an English philosopher of the Middle Ages. For this reason, the attempt to limit the number of hypotheses on which a theory is based is commonly referred to as Ockham's razor. According to Ockham's razor:
- If two different assumptions lead to conclusions that agree with an observation or experiment, we prefer the one that explains more phenomena.
- Between two theories that explain the same phenomena, we prefer the one that starts with fewer assumptions.

Fig. 1.3 Plato, by L. Drossis and Attilio Picarelli; in front of the entrance of the Academy of Athens (photo by author)


1.2 From Natural Philosophy to Physics


The term natural philosophy, describing the study of natural phenomena, remained in use from the 6th century BC until many years after the Renaissance. The modern word that replaced it, science (from the Latin scientia, meaning knowledge), was introduced only recently, in the 19th century. Even today, the highest university degree awarded in Western Europe and the United States for studies in science bears the name Doctorate in Philosophy (Philosophiae Doctor, PhD). Furthermore, the word physics is quite recent and its
original meaning encompassed all branches of science. But as the scope of science
became deeper and broader, and knowledge kept accumulating, natural philosophers had to specialize, by selecting a particular field of science. These fields
received distinct names and started separating from the previously unique space of
natural philosophy. The study of abstract relations of shapes and numbers was
named mathematics. The study of the position and motion of celestial bodies was named astronomy (from the Greek words ἀστήρ, which means star, and νέμω, which means to distribute, allot). The study of the physical characteristics of the
Earth, the planet on which we live, was called geology. The study of the structure,
function and interactions of living organisms was named biology. The study of the
composition and interactions of substances was called chemistry, and so forth.
Today, the term physics has ended up describing the study of all natural phenomena that do not come under any of the other independent fields mentioned above, that is, the fields that were separated from the main branch of natural philosophy. For this reason, physics today comprises a rather heterogeneous set of knowledge that is difficult to fit in a general and unique definition. It certainly
includes phenomena such as motion, heat, light, sound, electricity and magnetism.
All these are forms of energy, so the study of (classical) physics can be understood
as mainly the study of interactions of matter and energy. This definition can be
interpreted either in a narrow or in a broad sense. If interpreted in a narrow sense,
then we come to the study program, the syllabus, of a typical Physics Department. But if interpreted in a broad sense, physics includes a large portion of the
remaining fields of science. For example, the chemical bonds between atoms are
due either to electrostatic forces between ions (ionic bond) or to forces of quantum
mechanical origin (covalent bond). Therefore, a large part of chemistry should be
considered as a subset of physics. Following similar reasoning, biology can, as
well, be considered to comprise a large amount of physics, mainly regarding the
synthesis of molecules supporting life and the energy balance of living organisms.
We might even claim that the branch of medicine called physiology, which deals
with the function of human organs, lies within the realm of physics. This is
because our ears and eyes transform sound and light energy into nerve signals
through processes that are applications of acoustics and optics, respectively.
Besides, as we shall see later in the book, many physicists of the 18th and 19th century held degrees in medicine.


According to what we have said so far, the differentiation of science into disciplines is ultimately an artificial classification, established mainly in the 19th century. The evolution of science, however, is an ongoing process and so the
century. The evolution of science, however, is an ongoing process and so the
above technical classification soon started to lose its strict meaning. As
knowledge continued to accumulate, the boundaries of disciplines became fuzzy
and ultimately many of them started to overlap; as a result, the techniques and
methods of one discipline could be used in other disciplines. For example, in the
second half of the 19th century, techniques used in physics enabled the determination of the chemical composition and physical structure of stars. In this way, the science of astrophysics was born. The study of oscillations excited in the Earth's crust by earthquakes created geophysics. The study of chemical substances by using methods of physics created physical chemistry. The application of the laws of physics to the motion and functions of living organisms was called biophysics. The applications of physics in medicine, e.g., modern imaging methods (CT and MRI) or the use of radiation for the treatment of cancerous tumors, were named medical physics. As far as mathematics is concerned, it was from the beginning the
basic tool of physics. However, research on the basic principles of physics today is
so highly specialized and requires such an extensive mathematical background that this tool has evolved to such a degree that it is very difficult to differentiate between an applied mathematician and a theoretical physicist. At this point, it should be noted that the mathematicians who contributed to the development of physics fall into two categories:
- In the first category belong all those mathematicians who described or solved, using mathematics, known problems in physics (in the narrow or broad sense). These were, for example, Joseph-Louis, Comte de Lagrange (1736–1813), who worked on gravity and classical mechanics, Johann Carl Friedrich Gauss (1777–1855), who worked on gravity and electromagnetism, Jules Henri Poincaré (1854–1912), who worked on mechanics and relativity, etc.
- In the second category belong those who developed theories using completely abstract mathematical structures or models that did not seem, at the time, to bear any relation to observable nature and its properties, but whose results found application in physics a posteriori. These include, for example, the non-commutative algebra of Sir William Rowan Hamilton (1805–1865) and the Lie groups of Sophus Lie (1842–1899), which find applications in theoretical mechanics, the Riemann tensor of Georg Friedrich Bernhard Riemann (1826–1866) and the Ricci tensor of Gregorio Ricci-Curbastro (1853–1925), which find applications in general relativity, etc.
As a result, many great scientists of the 18th and 19th century can be considered as belonging to different disciplines, depending on how one views and approaches their work. For example, Joseph-Louis Gay-Lussac (1778–1850) and Michael Faraday (1791–1867) can be regarded as chemists, while in this book they are considered physicists. On the other hand, Christiaan Huygens (1629–1695), Sir Isaac Newton (1642–1727), Charles Augustin de Coulomb (1736–1806), Galileo Galilei (1564–1642) and Gustav Robert Kirchhoff (1824–1887) may be considered as mathematicians, but again in this book they are classified as physicists.

Chapter 2

The Ideas of Greeks About Nature

2.1 The Basic Assumptions of Aristotle on Motion and Gravity
Motion was one of the earliest phenomena studied by ancient Greek natural philosophers. One might initially assume that motion is a characteristic of life: people and animals move freely, while dead men and stones do not. It is possible, of course, to make a rock move, but this usually happens through the impulse given to it by a living being. This initial impression, however, does not seem to withstand a critical approach, since it cannot explain the immobility of plants, which are definitely living organisms, while there are also many examples of motion that have nothing to do with life. For example, celestial bodies move in the sky without any apparent cause. The same happens with dust or sea waves that are
raised by the wind. Of course, one could assume that heavenly bodies are pushed
by angels, that the wind is the breath of Aeolus, god of wind, and that storms are
raised by the trident of Poseidon, god of the sea. Such hypotheses were indeed
common in most early civilizations and prevailed until the Renaissance. The Greek
natural philosophers, however, tried to propose interpretations arising from the
implementation of rational thinking and based on phenomena that are perceptible
with our senses. This consideration of nature, therefore, excluded from possible
explanations of natural phenomena the angels and the gods of wind and sea.
Another fact that opposes this theocratic interpretation was the existence of
cases of motion that could not be interpreted easily as a result of divine influence.
For example, the smoke of a fire does not rise vertically, but follows a complex turbulent motion. A stone that is released from some height above the Earth's surface moves directly downward, although no one pushes it in that direction. Surely, even the most fanatical mystic finds it difficult to accept that every breath of air and every piece of matter contains a small god (or demon!) who pushes them here and there.
The Greek natural philosophers created many philosophical systems, that is,
many theories about nature and its phenomena, each based on different hypotheses.
These theories were brought together and codified into a single theory by the
Greek philosopher Aristotle (384 BC–322 BC), who was born in Stagira of Chalkidiki (Northern Greece), but studied and taught in Athens (Fig. 2.1).

Fig. 2.1 Aristoteles by G. Tsaras; in the campus of the Aristotelian University of Thessaloniki (photo by J. Tsouflides)

Aristotle's theory was based on the following assumptions:
First hypothesis: Earth is the center of the universe.
Second hypothesis: All material objects are made of the four elements originally proposed by Empedocles and later adopted by Plato, namely earth, water, air, and fire.
In order to explain the motion of bodies not being pushed by living things, Aristotle put forward an extra third hypothesis:
Third hypothesis: Each of these elements has its natural place, or physical location, in the universe.
The natural place of the element earth, the main constituent of all solid bodies around us, is the center of the universe. So, all solid matter is accumulated in the center of the universe and creates the world in which we live. The ancient Greeks knew that, of all solid geometric shapes with the same volume, the sphere is the one that has the smallest surface area. So, if it is correct that every piece of solid matter is accumulated as close to the center of the universe as possible, then Earth must be spherical in shape. In addition, its center must coincide with the center of the universe.


The physical location of the element water is just above the surface of the earth's sphere, forming a water shell with a spherical surface.
The physical location of the element air is just above water.
Finally, the physical location of the element fire is above air.

2.2 Success of the Basic Assumptions of Aristotle


Aristotle's theory was very successful at the beginning, because observations seemed to agree with predictions. As far as we can, at least, understand with our senses, Earth is spherical and is located in the center of the universe, since we are surrounded by a hemispherical dome (the sky), where the celestial bodies (stars and planets) are moving. Oceans cover large areas of Earth's surface (we now know that they cover about 2/3 of it), so water is indeed over earth. Air surrounds earth and sea. Finally, during storms, high in the atmosphere, indications of fireballs occasionally appear in the form of lightning. The same theory can even explain the behavior of objects that do not consist of pure elements. For example, wood floats in the water because it is a mixture of earth and fire. When wood is burned, the fire is released and moves upwards, while the remaining earth, the ash, cannot float on the water anymore and heads towards its natural place, below water.
Furthermore, the hypothesis of natural place could explain the phenomenon
of motion. Assuming that there is a natural place for everything, it was very
reasonable to deduce that whenever an object is removed from its normal position,
it tends to return to it at the earliest opportunity. For example, a stone, held by
someone in the air, manifests its tendency to return to its natural place by
pushing the hand downwards. We could conclude that this is why the stone has
weight. This explains why, if we release it, the stone will fall immediately to the
ground, that is, towards its natural place, without having to assume the intervention
of any higher power. By similar reasoning, we can explain why tongues of fire
move upwards, why pebbles sink when they are thrown into the water and why air
bubbles rise in a glass of beer.
A similar reasoning can also explain the phenomenon of rain. When the sun's heat evaporates water (converts it to air, according to Aristotle), water vapors rise spontaneously, seeking their natural place, which is over earth and water. But once
vapor is condensed, the resulting water falls in the form of drops towards its
natural place, which is the region below air but over earth.
Using the hypothesis of natural place, one may arrive at more advanced conclusions. Suppose we know that an object is heavier than another. The heavier object shows a greater tendency to return to its natural place. Indeed, observations seem to confirm this conclusion, since light objects such as feathers, leaves and snowflakes fall slowly, while stones and bricks fall faster. By symbolizing the weight of a falling body with B and its velocity with v, we can express Aristotle's

hypothesis of natural place, using modern mathematical notation, with the following equation:

  v = ds/dt = k · B

where k is a constant. Of course, today we know that the mathematical relation which describes the phenomenon correctly is Newton's second law (axiom):

  g = d²s/dt² = (1/m) · B

where g is the gravitational acceleration and m the mass of the body. It should be noted that Aristotle never made explicit reference to a relation of the form v = k · B, because, unlike Plato, he believed that natural laws are not described quantitatively by mathematical relations, but only qualitatively. Aristotle's later disciples, however, believed indeed that an object weighing 2B falls twice as fast as another object weighing 1B.
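The contrast between the two laws is easy to see numerically. Below is a minimal sketch in Python, assuming an arbitrary constant k and an arbitrary drop height of 10 m:

  # Aristotelian fall, v = k * B: speed proportional to weight.
  def aristotle_speed(weight_newtons, k=1.0):
      return k * weight_newtons

  # Newtonian free fall from rest, s = g * t**2 / 2: fall time independent of weight.
  def newton_fall_time(height_m, g=9.81):
      return (2.0 * height_m / g) ** 0.5

  for B in (1.0, 2.0):  # the two stones of the example
      print(f"Aristotle: weight {B} N -> speed {aristotle_speed(B)} (arbitrary units)")
  print(f"Newton: fall time from 10 m = {newton_fall_time(10.0):.2f} s, for any weight")

Aristotle's rule makes the 2 N stone twice as fast as the 1 N stone, while Newton's makes both land together, after about 1.43 s from a height of 10 m.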

2.3 Failure of the Underlying Assumptions and Need to Adopt New Ones
Apart from the spontaneous or natural motion, which involves objects moving to
their natural place, Aristotle identified also forced motion, in which objects are
moving away from their natural place, sometimes seemingly in the absence of an
external force. Consider, for example, a stone which is thrown vertically upwards.
Initially, the motion of the stone is due to the force applied by the muscles of our
hand; however, after our hand ceases to be in contact with the stone, it cannot have any effect on it. So, why does the stone not start falling as soon as it leaves our hand, but continues to move upwards for some time? To explain forced motion, Aristotle formulated an additional fourth hypothesis, that of antiperistasis, or of the
existence of an intermediate medium:
Fourth hypothesis: According to the hypothesis of antiperistasis,1 introduced by Plato, air displaced from the front of the stone moves to its back and pushes it forward. But as the push is transmitted from one point of the air to another, it slowly weakens and allows the natural motion of the stone to prevail. As a result, the upward motion slows down and eventually reverts to a downward motion, causing the stone to hit the ground. Aristotle modified slightly Plato's hypothesis of antiperistasis, stating that our moving hand sets in motion successive layers of air,
which, in turn, push the stone. As the force is transmitted from one layer of air to
another, it decays and finally the natural downward motion of the stone prevails. In
what follows, we will refer to both variants of this hypothesis as antiperistasis.

1 Antiperistasis (which in Greek means mutual substitution) was not a new hypothesis, since it was conceived initially by Empedocles.


With the observations available at that time, which did not include initial velocities large enough for a body to escape Earth's gravitational attraction, one may conclude that no force of any nature exists (neither that of our hands nor even that of a catapult) that can eventually overcome the natural motion of the stone. Therefore we may conclude that natural motion always prevails over forced motion, and bodies always end up at rest in their natural position. The final conclusion of the Aristotelian theory, then, is that the natural condition of bodies, when no force is acting on them, is the state of rest, that is, the absence of any motion.2
The above interpretation of motion cannot, however, include the motion of
celestial bodies. For example, while the natural motion of various bodies on Earth
is straight (rectilinear), either upward (smoke, fire) or downward (stones, rain),
heavenly bodies seem to follow a circular motion around Earth. Aristotle concluded that there was a need for a fifth hypothesis:
Fifth hypothesis: The sky and the heavenly bodies are made of a substance that is neither earth nor water, air or fire. It is a fifth element which, following the ideas of earlier natural philosophers (Philolaos, Xenophanes and Parmenides), he named aether.3 The physical place of this fifth element was beyond the realm of fire, outside the Moon's orbit.
The explanation of the motion of celestial bodies stems from the fifth
hypothesis, in conjunction with a sixth hypothesis:
Sixth hypothesis: The laws governing the motion of celestial bodies are different from those governing motion on Earth. So Aristotle arrived at the conclusion that, while in the region of the universe inside the Moon's orbit the natural state of objects is rest, in heaven the natural state of objects is eternal circular motion.
The practical application of Aristotle's hypothesis for the motion of celestial bodies, namely the geocentric theory of the Solar System, was formulated mathematically by the Greek astronomer Hipparchus (ca. 190 BC–ca. 120 BC). Later, it was perfected by the Greek astronomer Claudius Ptolemy (ca. 85 AD–ca. 165 AD) and published in his book Almagest (Greater Astronomical Treatise), which was used as the basic astronomy textbook for fifteen centuries (Fig. 2.2). The geocentric theory of Ptolemy, as a consequence of Aristotle's theory of motion of celestial bodies, was, of course, wrong. Both theories were debunked by Galileo, with the performance of the first historically recorded experiments (free fall of bodies and observational confirmation of Aristarchus' heliocentric model). As we shall see, later Newton showed that both the natural downward rectilinear motion of bodies and the eternal circular motion of the planets are caused by the same force, the force of gravity.

2 We note that this is essentially a special case of Newton's first postulate (axiom), whereby, if no force acts on a body, then it either moves with constant velocity or stays at rest.
3 A word used by Homer and Hesiod to describe the fresh air above the atmosphere or the clear light of heaven (the Greek verb αἴθω means to burn).


Fig. 2.2 Ptolemaic model of the solar system. In this model the bodies of the solar system visible with the naked eye (from Earth) are moving on circles (epicycles), whose centers move in circular orbits (deferents) around the Earth (not to scale, drawing by author)

Finally, we should mention an interesting analogy. Aristotle's contribution to physics was mainly in mechanics and gravity, but in a special way, since according to him the cause of motion was just gravity. Galileo and Newton, the first two great physicists of the modern era, also made significant contributions to the above mentioned branches of physics, with one difference: they showed that between these two phenomena, motion and gravity, there is not necessarily a cause-effect relationship.

2.4 Critical Review of Aristotle's Theory


Today we know well that Aristotle's theory of motion, as well as Ptolemy's geocentric theory, were completely wrong; both theories have been replaced by Galileo's and Newton's theory of motion and Aristarchus' heliocentric model. But we also know that the theories of Aristotle and Ptolemy were taught for at least fifteen centuries without being seriously challenged. On the contrary, Aristotle had come to be considered an authority in scientific matters. Was it really impossible for


scientists during this long period to test the correctness of these two theories? Let's see in more detail how this could have been done.

2.4.1 Internal Contradictions


A logical method to challenge a theory is to show that it can lead to two completely opposite conclusions, which means that it contains internal contradictions and therefore is not self-consistent. For example, the assumption that the Sun is made of aether leads to a contradiction. According to Aristotle, hot or cold objects are made from one of the four elements of the sublunar world and, for this reason, they are imperfect; as a result, hot or cold objects exchange heat with their environment over time and eventually end up in thermal equilibrium with it. But heavenly bodies beyond the Moon are made of the fifth element, aether, and therefore are perfect, in the sense that they do not change with time and follow an eternal circular motion. According to this reasoning, the Sun, which, as the ancient Greeks knew, lies farther away than the Moon, cannot be hot; this raises the question of how it is possible for it to radiate light and heat (in modern terminology: infrared radiation).
If we restrict ourselves to Aristotle's theory of motion, an argument that leads to a contradiction is the following: A stone falls more slowly in water than in air; according to his theory, the speed of the falling object is inversely proportional to the density, ρ, of the ambient medium, i.e., in modern mathematical notation:

  v ∝ 1/ρ
So, the less dense the medium through which the stone falls, the faster it moves. In a medium with half the density of air the stone would fall twice as fast as in air, while in a medium with one tenth the density of air the stone would fall ten times faster. In the absence of a medium (if the stone falls through vacuum), the stone would fall with infinite speed! On the other hand, the Aristotelian theory also states that a stone, after it is thrown, maintains its initial direction of motion because of antiperistasis, namely the force that air exerts on the stone. If air is removed, there would be nothing to move the stone through vacuum! Which of the two conclusions is therefore correct? Will the stone move with infinite speed through vacuum, or will it not move at all? Each conclusion seems equally reasonable! This contradiction has been known since Antiquity, but it was bypassed by the introduction of an additional seventh hypothesis:
Seventh hypothesis: There can be no vacuum in nature (hence the famous saying of the great philosopher Spinoza, "Nature abhors a vacuum").
We should note that it is possible to find other ways to solve the above logical
dilemma, such as to assume that, in vacuum, bodies moving towards their natural
place have indeed infinite speed, but in that case forced motion is not feasible.
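The divergence hidden in v ∝ 1/ρ is easy to exhibit numerically. Below is a minimal sketch, assuming as a reference point a stone that sinks in water at 1 m/s and using rough round values for the densities:

  # Aristotelian claim: fall speed inversely proportional to the medium's density.
  RHO_WATER = 1000.0  # kg/m^3, rough value
  V_IN_WATER = 1.0    # m/s, assumed reference speed of the stone in water

  def aristotle_speed_in(rho):
      # v * rho = constant, anchored at the water reference point
      return V_IN_WATER * RHO_WATER / rho

  for rho in (1000.0, 1.2, 0.012, 1e-9):  # water, air, a thin gas, near-vacuum
      print(f"density {rho:g} kg/m^3 -> predicted speed {aristotle_speed_in(rho):g} m/s")

As the density approaches zero, the predicted speed grows without bound: this is the infinite-speed-in-vacuum paradox described above.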
Another reasoning, in the context of Aristotle's theory of motion, leading to a contradiction is the following: Suppose we have two stones, stone A weighing one newton4 and stone B weighing two newtons. According to Aristotle's theory, stone B is heavier and has a greater tendency to move towards its natural place (Fig. 2.3). Therefore, if we let them fall simultaneously, stone B will fall faster than stone A (Fig. 2.3a). Assume now that we bind the two stones tightly with a piece of string and let them fall again. What will happen then, according to Aristotle's theory? Stone B will tend to fall faster than A, but it will be hindered by stone A, which will tend to fall slower. In contrast, stone A will tend to fall faster, as it will be carried away by stone B. Therefore, the falling speed of the system consisting of the two stones will be higher than the falling speed of stone A alone, but lower than the falling speed of stone B alone (Fig. 2.3c). This gedanken experiment, however, can also be examined from another perspective. Since stones A and B are in contact, they form a stone C weighing three newtons, which, according to the theory, should fall with higher speed than stone B alone (Fig. 2.3b)! Which of the two eventually happens? Will the system consisting of stones A and B tied together fall faster or slower than stone B alone? According to Aristotle's theory, both answers seem correct. Again, one could find a logical way to solve the dilemma; for example, one might assume that the falling speed of the two bodies in contact depends on how tightly they are tied together.

Fig. 2.3 The motion of falling bodies according to Aristotle's ideas. The positions of a heavy and a light stone are depicted at three consecutive time instants (white, light gray and dark gray). Aristotle's theory predicts that the heavier body will fall faster (a). But if we tie the stones together we arrive at a contradiction, because the theory predicts that the bound stones might fall either faster (b) or slower (c) (see text, drawing by the author)
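The two incompatible readings of the tied stones can be put side by side in a few lines. Below is a minimal sketch, assuming an arbitrary constant k and encoding "between the two speeds" as a weight-weighted average (one plausible reading among others):

  # Two readings of the tied-stones experiment under v = k * B.
  K = 1.0              # arbitrary proportionality constant
  B_A, B_B = 1.0, 2.0  # weights of stones A and B, in newtons

  v_A, v_B = K * B_A, K * B_B
  # Reading 1: the pair falls at an intermediate speed (weighted average).
  v_intermediate = (B_A * v_A + B_B * v_B) / (B_A + B_B)
  # Reading 2: the pair is a single stone of weight B_A + B_B = 3 N.
  v_combined = K * (B_A + B_B)

  print(v_A, v_B)        # 1.0 2.0
  print(v_intermediate)  # about 1.67, slower than stone B alone
  print(v_combined)      # 3.0, faster than stone B alone

The same physical situation yields two different numbers, about 1.67 and 3.0, which exposes the internal inconsistency.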
Reasonings like those previously described, which result in logical contradictions, can identify the weaknesses of a theory, but can rarely offer convincing arguments against it. The reason is that, as the great epistemologist Thomas Kuhn (1922–1996) said (and as becomes apparent from what we have said on the successive corrections of the initial hypotheses of Aristotle's theory), "When anomalies occur, they (the scientists) usually devise numerous articulations and ad hoc modifications of their theory in order to eliminate any apparent conflict."

4 The newton is the unit of force in the SI system; it is approximately equal to the weight of a 100 g mass.

2.4.2 Experimental Verification


Another method to test a theory, which in fact has become even more useful in practice than the one mentioned in the previous paragraph, is to arrive at a logically necessary consequence of the theory and then verify the result experimentally. Let's see how we can apply this method to Aristotle's theory of motion. Suppose we have again two stones A and B, which weigh one and two newtons respectively. According to the mathematical relationship

  v = ds/dt = k · B

mentioned in Sect. 2.2, stone B will fall twice as fast as stone A. One way to test this prediction of Aristotle's theory of motion would be to conduct an experiment. That is, to measure the speed at which the two stones fall and to find out if stone B actually falls twice as fast as stone A. If it does, then we could continue to use Aristotle's theory to interpret the motion of bodies. If not, then surely Aristotle's theory should be modified.
But although it was difficult in Antiquity to conduct experiments that require measurements, it is remarkable that the ancient natural philosophers did not consider making simple, comparative experiments. For example:
- According to Aristotle, an arrow continues to move after leaving the string of a bow due to the push it receives from the air through the phenomenon of antiperistasis. Is it possible to put an arrow in motion only by blowing air at it, yes or no?
- A tree leaf falls slowly. Does the same leaf, crumpled, fall with the same speed, yes or no?
Unfortunately, such an experimental control was not performed either by Aristotle5 or by any other natural philosopher during the 2,000 years that followed, with the exception perhaps of John Philoponus, to whom we will refer in the next section.

5 It is worth noting that the first to point out the importance of experiments in natural philosophy was Aristotle himself, in his book On the Generation of Animals (Book 3, Chap. 10), where he writes: "Such appears to be the truth about the generation of bees, judging from theory and from what are believed to be the facts about them; the facts, however, have not yet been sufficiently grasped; if ever they are, then credit must be given rather to observation than to theories, and to theories only if what they affirm agrees with the observed facts." (Translated by Arthur Platt, The University of Adelaide.) Unfortunately, later scholars commenting on Aristotle's works did not pay the proper attention to this point. Thus, they came to believe that the works of Aristotle include all knowledge about the world and therefore experiments are unnecessary!

There are three possible explanations for this failure:
The first is theoretical. Ancient Greeks developed, in a highly successful
manner, geometry, which deals with abstract concepts such as dimensionless
points and lines without thickness. In this way, their results achieved great simplicity and generality, which could not have been otherwise reached by measuring
real objects. Thus, they developed the notion that the real world is not a suitable model on which to base abstract theories of the universe. Of course, there were
Greek scientists of the Hellenistic era who designed and conducted experiments, as
we shall see in the next chapter. The prevailing view, however, both in ancient
Greece and the Middle Ages, clearly supported the deduction of conclusions from
hypotheses, rather than the testing of theories through experimentation.
The second explanation has to do with the prevailing notion in ancient Greece
that manual work was not appropriate for free citizens and that it should be carried
out only by slaves. Since experiments required manual labor (beyond scientific
knowledge), they were not considered as an acceptable activity for natural
philosophers.
The third explanation was practical. In ancient times it was not easy to conduct
experiments based on measurements. Today, it seems easy to measure the speed of
a falling body, because we have accurate clocks and precise electronic methods of
measuring small time intervals. Suffice it to say that accurate instruments capable of
measuring short time intervals became available only three centuries ago, let alone
the fact that, before that, instruments of any kind were extremely rare and
expensive.

Chapter 3

From Classical Era to the Renaissance

3.1 Hellenistic-Roman Times


During the Hellenistic period,1 natural sciences continued to develop through the
work of natural philosophers of Greek education and culture (not necessarily Greeks), mainly in the Greek colonies and in Alexander's empire, as well as in mainland Greece. More specifically, the great intellectual centers of the time were Great
Greece (Southern Italy and Sicily), Egypt, and Syria. It is worth mentioning that
during this period some scholars attempted the first tentative experiments in natural philosophy with a double objective: first, to collect results that would allow
the formulation of physical laws and second, to verify theories. The most prominent scholars of that era who conducted experiments were (in chronological order)
Aristarchus, Archimedes, Eratosthenes and Heron. Aristarchus, who was born in Samos and lived in Alexandria (310–230 BC), measured the distances between the Earth and the Moon and between the Earth and the Sun, and determined the ratio of the radii of the two celestial bodies. Archimedes (287–212 BC) lived in Syracuse and his most important achievement was the explanation of the phenomenon of buoyancy through experimentation (Fig. 3.1). Eratosthenes, who lived in Alexandria (276–196 BC), measured the radius of the Earth. Finally, Heron, who lived in Alexandria during the 1st century AD, discovered the power of steam and invented applied hydraulics (Fig. 3.2).
Romans, who during that period dominated militarily and politically in the
Mediterranean Basin, were mostly interested in the practical side of life rather than
in understanding nature and its laws. In modern terms, one could say that they
preferred applied over basic research, an approach that tends to prevail as well
nowadays. The result of this attitude was that Romans preserved accurately and in
detail the views of earlier Greek natural philosophers but, apart from the simple
application of known theories in everyday life and in technical works, they did not
attempt to interpret nature. For example, Pliny (2379 AD) initiated the systematic
classification of plants into families, a project that was completed during the
1

The period that starts after the death of Alexander the Great.

H. Varvoglis, History and Evolution of Concepts in Physics,


DOI: 10.1007/978-3-319-04292-3_3,  Springer International Publishing Switzerland 2014

21

22

3 From Classical Era to the Renaissance

Fig. 3.1 Eighteenth century


model of Archimedes screw,
an invention for pumping
water or fine-grained
materials (Museo Galileo,
photo by author)

Fig. 3.2 Herons


aeolosphere, the first steam
engine (NOESIS, photo by
author)

Renaissance by Linnaeus and forms the basis of modern botany. Vitruvius (1st
century BC) consolidated the views of the Greeks for meteorology, while
Lucretius (9555 BC), among other things, adopted and popularized the atomic


theory of Democritus, acting later as a bridge between Democritus and Dalton, that is, between the first scientist who proposed the atomic theory and the one who reinvented it in modern times.

3.2 Middle Ages and the Renaissance


After the Greek natural philosophers of the Hellenistic period, scientific progress in Europe declined during the dark Middle Ages. Although there is no generally accepted date for the beginning of the Middle Ages, it is usually placed in the early 6th century AD (around 500 AD). Of course, nothing happens abruptly in nature and, thus, the spirit of late Antiquity persisted for some time into the Middle Ages. Good examples of this are two Greek natural philosophers, Philoponus and Simplicius, and their debate, which will be presented later in this section. However, science in general, as well as arts and letters, came to a standstill, because during the first period of the Middle Ages (from the 6th to the 13th century AD) the prevailing culture was completely different from that of the Greek spirit. Life in that period can be summed up in the Latin motto semper idem, which means "always the same". One could say that the Middle Ages, as far as society is concerned, resembled the era of dinosaurs: very stable, because not evolving at all! Throughout the first period of the Middle Ages, an era usually characterized as the Dark Ages, people wore the same clothes, worked the land in the same way, listened to the same music, painted in the same manner and wrote the same form of literature. Science, of course, was not an exception. The rationalism of the Greeks was succeeded by the mysticism of religion, and the effort to understand nature was replaced by commentary on the Bible. Any scientific endeavor was based on the principle that everything understandable is contained in the Bible and the works of Aristotle, and that all one needs in order to understand the world is the systematic study of these works, within the boundaries defined by the spiritual leaders of that era.
At the same time that science was declining in Europe, it was flourishing in the Islamic world. Most of the treatises written by Indian, Assyrian, Iranian and, above all, ancient Greek natural philosophers were translated into Arabic. This enabled Arabic-speaking scholars, during the Islamic Golden Age (ca. 750 AD–ca. 1250 AD), to further develop the ideas and concepts that had been put forward thus far. Of the numerous important Arabic-speaking scholars of this period, two should be singled out: Avicenna and Alhazen. Avicenna (Latinized name of Abu Ali al-Husayn ibn Abd Allah ibn Sina, ca. 980 AD–1037 AD) was a Persian polymath. He wrote more than 400 treatises on a wide range of subjects, including philosophy, physics, astronomy, alchemy, geology, and mathematics, of which 240 have survived. His most important works are The Book of Healing, an extensive philosophical and scientific encyclopaedia, and The Canon of Medicine, which was a standard medical textbook in many medieval universities. Alhazen (Latinized name of Abu Ali al-Hasan ibn al-Hasan ibn al-Haytham, ca. 965 AD–ca. 1040 AD) was a


polymath of either Arab or Persian origin. He wrote more than 200 treatises, of which 55 survive. He made significant contributions to the fields of mathematics, optics and astronomy. In Western Europe, during the late Middle Ages, he was known for his work in astronomy as Ptolemaeus Secundus (Ptolemy the Second).
In the early 13th century, scientific activity started again, triggered by the translation into Latin, which was the official language of science and the Church in the West, of many scientific treatises of Greek natural philosophers and mathematicians. The manuscripts of these books had been preserved in the libraries of Eastern Europe and the Middle East in three languages, Greek, Syriac, and Arabic, and were transferred to Western Europe through three channels:
through the Arabs, who had conquered Spain,
through the crusaders, on their way back to Europe from the Middle East, and
through the looting of Constantinople in 1204, during its occupation by the Franks of the Fourth Crusade.
During the next approximately 300 years, these books were disseminated to all the newly founded universities in Western Europe and constituted the teaching material for a new generation of scientists, from whom sprang the great minds that gave rise to the Scientific Revolution of the Renaissance: Copernicus, Kepler, Galileo, Huygens, Leibniz, Descartes, and Newton.
However, it is worth mentioning that, during the first years of the Byzantine Empire, Greek philosophical thought continued to exist, and indeed with remarkable results. The most important Greek natural philosopher of this era was John Philoponus, who refuted by logical arguments most of the weak points of Aristotle's physical theory, and in particular of his theory of motion. Philoponus was a Christian, born in Alexandria (according to other accounts, in Caesarea), and spent most of his life (ca. 490–ca. 570) in Alexandria of Egypt. He studied philosophy in the neo-platonic school of Alexandria and worked on natural philosophy as well as on theology, trying to reconcile his religion with the Greek philosophical tradition. Of the books he wrote on natural philosophy only fifteen survive, and not all of them in their complete form. Fortunately, many missing parts of the surviving books, as well as information about some of his books that were lost, are found in the writings of another Greek natural philosopher, Simplicius. The latter was one of the last masters of the neo-platonic school of Athens, which was abolished in 529 AD by a decree issued by Emperor Justinian. Simplicius, who was an Aristotelian philosopher, attempted to refute Philoponus' arguments in an effort to support the views of Aristotle on nature and motion. Many of Philoponus' ideas have survived thanks to the methodical mind of Simplicius, who, in his writings, quotes them as incorrect before presenting his own arguments, which supported Aristotle's ideas. From Philoponus' surviving writings and Simplicius' criticism, today we know a lot about Philoponus' ideas on natural philosophy, the most important of which are the following.
Philoponus was the first to propose the performance of experiments for timing the free fall of objects of different weights. Of course, at that time, clocks were


not accurate enough to enable drawing any quantitative conclusions, but Philoponus predicted that the difference in the falling times would be very small and certainly would not correspond to the ratio of the weights of the objects. Unfortunately, we do not know whether he performed such an experiment or whether he simply restricted himself to its theoretical description.
Philoponus also considered unreasonable the interpretation of the forced motion of a body through the concept of antiperistasis. Indeed, he was the one who noted that, if this hypothesis were correct, then we would not need a bow to shoot arrows; it would suffice to place the arrow on a horizontal support and blow air at its tail using a bellows! Instead, he argued that the motion of a body, when a force is no longer applied to it, is due to some property that is embedded or imprinted in the body when it is set in motion by the force. The hypothesis of this imprinted property was reinvented, many centuries later, by the French philosopher Jean Buridan (ca. 1300–1385) and named impetus. A similar concept is the one which Newton named quantity of motion and today we call momentum. Furthermore, Philoponus showed that, by using the hypothesis of momentum, many of the inconsistencies of Aristotle's theory already mentioned may be eliminated. However, the main conclusion that can be drawn from the hypothesis of momentum is that the natural condition of a body is not stillness, as Aristotle argued, but a state in which momentum is conserved. In other words, Philoponus formulated the first law of Newton 1,000 years before the birth of the great physicist! It is worth noting that Galileo had studied Philoponus' books (as we will see later in Chap. 4) and therefore many of the ideas attributed to the Italian physicist have their roots in the great Greek philosophical school. Unfortunately, this important result was forgotten in later years, when the Greek tradition was ignored and the Byzantine Empire followed the rest of Europe and plunged into scientific inactivity.
Apart from the hypothesis of antiperistasis, Philoponus challenged other Aristotelian views as well. He believed that the universe was created sometime in the past and therefore is not eternal, that heavenly bodies obey the same laws as bodies on Earth, and that stars are not related to gods. He even taught that the Sun and the stars are fiery bodies, because from daily experience he knew that the color of an object depends on its temperature! Finally, he had concluded that bodies in empty space move with finite speed and that, therefore, the existence of absolute vacuum in nature is not impossible.
Philoponus' idea that motion is "imprinted" on the moving body by the body that causes the motion in the first place (unlike the Aristotelian view that a body is moving because it is heading to its natural place) is characterized by the great scholar of contemporary philosophy of science, Thomas Kuhn, as a paradigm shift2 and, consequently, a scientific revolution. For this reason, Philoponus has
2

Paradigm shift (or revolutionary science) is, according to Thomas Kuhn in his influential book The Structure of Scientific Revolutions (1962), a change in the basic assumptions, or paradigms, within the ruling theory of science. According to Kuhn, "A paradigm is what members of a scientific community, and they alone, share" (The Essential Tension, 1977).


been regarded by several authors as a precursor of Galileo. Unfortunately, the ideas of John Philoponus fell into obscurity because, apart from natural philosophy, he was also involved in theology. Within the framework of theology, Philoponus supported the view that the divine nature of Jesus prevailed over the human one. For this reason, he was accused of being a follower of the sect of the Monophysites and was anathematized by the sixth Ecumenical Council in 680 AD.
Consequently, all of his philosophical views were considered to be dangerous for
the Christian religion and were forgotten. It took 1,000 years for these ideas to
reemerge at the forefront of science.
The stagnant condition that prevailed during the Middle Ages changed quickly at the beginning of the Renaissance. People started to look for innovation in all aspects of life, and science could not have been an exception to this general trend. The scientific search for truth was revived, but this time it was based on Ockham's razor and the subsequent principle of verifiable truth. Mainly for this reason, we consider the Renaissance as the beginning of modern science.

3.3 Layout of the Book


As already mentioned, at the beginning of the modern era scientific knowledge was limited and there was no clear distinction between the various disciplines of science. Consequently, up to the late 19th century, many scientists are mentioned in the literature sometimes as physicists, sometimes as chemists and sometimes as mathematicians, depending on the perspective from which one views their work. For example, Newton, who is considered by many as one of the greatest physicists of all time, was a professor of mathematics, and Young, who established experimentally the wave theory of light, was a medical doctor! This fact alone causes some difficulty when trying to provide a consistent picture of the evolution of concepts in physics.
Another, equally important, problem is the diverse activities of the great scientists who contributed to the development and evolution of physics into its present form. Because of this, we cannot describe the evolution of concepts in all major branches of physics by simply mentioning the major scholars and describing their work in chronological order. For example, Newton is the founder of mechanics, but contributed significantly to optics as well. Maxwell founded electromagnetism as well as thermodynamics, and Faraday conducted experiments in practically all branches of physics known in his time! Therefore, in order to show the evolution of concepts in physics, it is more appropriate to organize the book in chapters, according to the way the teaching of physics is organized at the various levels of education. To reconcile the multidisciplinary work of scientists with the one-dimensional presentation of physics, we will pursue the following strategy: the life and work of every great scientist will be presented at the first opportunity, while an extensive biography (analyzing his contribution to all branches of physics) will be included in a separate section. Subsequent references to his work will be linked to elements of this biography.

Part II

From the Renaissance to the Present Era

Chapter 4

The Major Branches of Physics

The evolution of physics from the Renaissance until today could be the subject of a book thousands of pages long. But if a book is intended to help readers organize the physics they already know into a logical structure, it should be limited to the major branches of this discipline. Furthermore, if this knowledge is of high school level, then necessarily the selection of these important branches should start with classical physics, namely that which was known until the late 19th century. As such, we have included in this book the mechanics of particles and solids, optics, electromagnetism, heat, thermodynamics, and the theory of perfect gases, because these branches constitute the backbone of classical physics. If some readers are further interested, they can use this knowledge as a frame into which to integrate easily the remaining branches of classical physics, such as acoustics, elasticity, and fluid mechanics. Finally, for completeness, we briefly present the three branches of physics, developed in the 20th century, that constitute so-called modern physics, i.e., the theory of relativity, quantum mechanics and the theory of chaos.
As one might logically expect, scientists select to work on a specific branch of physics according to the available knowledge and the technical means to conduct experiments. So, mechanics was, inevitably, the first branch that was studied in detail, since the motion of bodies is one of the basic phenomena of everyday life and its experimental study does not require advanced devices and complex techniques. The study of optics started at the same time, since light is also an aspect of everyday life. However, the completion of optics was delayed, due to the fact that light is fundamentally a quantum phenomenon and the description of optical macroscopic phenomena reveals its dual character: some phenomena can be explained through the particle nature of light and others through the wave nature. There was also a delay in understanding electrical phenomena, because their experimental study requires anything but simple techniques. In the beginning there was the problem of the production and storage of electrical charges and, when this was solved, the difficulty of producing electrical currents emerged. Finally, heat is the macroscopic manifestation of the random motion of atoms and molecules and, as long as the existence of such small forms of matter was not generally accepted,


it was not easy to reject the (incorrect, as shown later) assumption that heat is a
kind of fluid. Thermodynamics was developed, independently of the concept of
heat, mostly by chemists during the Industrial Revolution in their attempt to
understand the mechanism of chemical reactions for the production of inexpensive
chemicals (mainly paints, pharmaceuticals and fertilizers). Finally, the understanding of the laws of gases through the kinetic theory unified heat, thermodynamics and the kinetic theory of gases in a single theory. This was achieved in the
late 19th century and led many physicists to believe that physics had come to an end and had become a "dead" science, since all phenomena were understood and described through corresponding laws. A fact that played a key role in the development of this idea was that the gravitational, magnetic and electrical forces, the three forces known in the late 19th century, were all described by the same law of the inverse square of the distance. Of course, we now know that the emergence of quantum mechanics and the general theory of relativity changed the above picture so radically that today we are not at all certain whether the physics we know and teach is the real picture of nature or just a good approximation of a more accurate, but still unknown, theory.

4.1 Mechanics
4.1.1 Kinematics: Galileo
Galileo Galilei (1564–1642) is indisputably the founder of modern science, since he was the first to demonstrate clearly the significance of experiments in science (Fig. 4.1). Galileo was born in the Italian city of Pisa in 1564, approximately 100 years into the Renaissance, the beginning of which is chronologically placed at the conquest of Constantinople by the Turks in 1453. It is worth mentioning, for those not familiar with history, that Italy is a relatively young state, as it was founded in the 19th century. In Galileo's time there existed two important organized states in the Italian peninsula: the Republic of Venice in the north, occupying the northeastern part of the peninsula, and the Kingdom of Sicily in the south. The rest of the Italian peninsula, which was governed loosely by local feudal lords, consisted of a number of city-states, similar to those of ancient Greece, under the secular authority of the Pope of Rome. This structure played a decisive role in Galileo's life, in his scientific achievements, as well as in the dissolution of the scientific school he attempted to organize in Italy.
It should be emphasized from the very beginning that Galileo is recognized as a great scientist because he offered two important services to science: he founded the branch of mechanics called kinematics and he confirmed the heliocentric theory.1 It is
1
It is worth noting that the assumption that the Sun is the center of the Solar System was originally proposed by Aristarchus of Samos in the 3rd century BC and, much later, pulled from obscurity by Nicolaus Copernicus (1473–1543) and supported by Johannes Kepler (1571–1630), who used the planetary observations of Tycho Brahe (1546–1601). A somewhat incomplete heliocentric theory had been proposed, before Aristarchus, by the Greek natural philosopher Heraclides Ponticus (ca. 390 BC–ca. 310 BC) in the 4th century BC. According to this theory, the Sun orbits the Earth, as the other planets do, except for Mercury and Venus, which orbit the Sun.

Fig. 4.1 Galileo Galilei by Sarah K. Bolton, from Famous Men of Science. NY: Thomas Y. Crowell & Co., 1889

difficult to assess which of the two achievements was more important in the evolution of concepts and ideas in physics. However, it should be noted that Galileo is known mainly for the opposition of the Papal Church to his astronomical discovery that the Earth is not the center of the world, rather than for establishing one of the most important branches of physics, mechanics.
Galileo's life can be divided into three periods, according to his research interests and the places he lived. The first period is the one during which he studied and, immediately after, in 1589, started his scientific career in Pisa as professor of mathematics. During this period he read the works of John Philoponus, something which is evident from the references to them in his early writings. He served as a professor for only 3 years, but during this period he laid the foundations for his later scientific progress. He began to understand, through experiments, the laws of

Fig. 4.2 The motion of falling bodies according to Galileo: in contrast to Aristotle's theory (see Fig. 2.3), all bodies fall with the same acceleration (drawing by author)

motion and wrote the first draft of his tutorial notes on mechanics, which he completed and published in books (Mechanics in 1600 and Dialogues and Mathematical Proofs Concerning Two New Sciences in 1638) during the second and third periods of his life. It is said that Galileo disproved the Aristotelian theory of the motion of bodies with an experiment, during which he dropped from the top of the Tower of Pisa bodies of the same volume but different densities. The bodies arrived simultaneously on the ground, contradicting in this way Aristotle's theory, which stated that the body with the higher density (the heavier one) would arrive first (Fig. 4.2). It is highly probable that Galileo never conducted this experiment; instead, he conducted similar experiments, especially during the next period of his life, studying the motion of bodies on ramps, where speeds are lower and one can measure positions and time intervals with greater accuracy.
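To see why ramps helped (a standard restatement in modern notation, not Galileo's own), note that on a frictionless ramp of inclination α the acceleration along the ramp is reduced from g to

\[ a = g \sin\alpha, \qquad s = \tfrac{1}{2}\, g \sin\alpha\, t^2 , \]

so distances still grow with the square of time, but slowly enough to be timed with the crude instruments of the era.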
During the second period of his life, which was the most fruitful in terms of research, Galileo worked as a professor at the University of Padua (1592–1610). The city of Padua is located near Venice and at that time it belonged to the State of Venice, which had an unusually democratic structure. As a result, scientists like Galileo were able to engage undistracted in research, without fearing the papal censure that was exercised in other regions of Italy. At about that time, he heard about the invention of the telescope and, after perfecting its design, he used it for both practical and scientific purposes. The key practical purpose, which resulted in a significant financial compensation from the Doge of Venice, was the introduction of the use of the telescope in the navy of the Venetian Republic, which at that time


was one of the largest naval powers in the Mediterranean Sea. The key scientific objective was the observation of celestial bodies. In this way, he discovered that the Moon has mountains, that Venus has phases like the Moon, and that four satellites orbit the planet Jupiter. Each one of the above three observations could perhaps be interpreted within the framework of the Aristotelian theory that the Earth is the center of the Solar System; however, these observations, in conjunction with Copernicus' observations and Kepler's calculations, convinced him that the center of the Solar System is the Sun. This was perhaps the first major application of Ockham's razor! Furthermore, he discovered sunspots, observed that the image of Saturn is not circular (due to the existence of the rings, which he could not resolve with his primitive telescope) and found that the Milky Way consists of a very large number of dim stars.
Apart from the above discoveries, in Padua he conducted most of his experiments in mechanics and arrived at a method for the study of motion. This method is based on two ideas (a simple worked example follows the list):
the description of the position of a body in a reference system by using coordinates, which was formulated in a strictly mathematical sense and established a little later, in 1637, by René Descartes (1596–1650), and
the transformation rules for the position (from x to x′) and velocity (from v to v′) of a body when changing frame of reference, from the original system to another one moving at a velocity v0 with respect to the first, i.e., x′ = x − v0t and v′ = v − v0.
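As a simple numerical illustration (ours, not from the original text): for a frame attached to a ship sailing at v0 = 5 m/s, a sailor walking toward the bow with velocity v = 6 m/s relative to the shore has, in the ship's frame,

\[ v' = v - v_0 = 6 - 5 = 1\ \mathrm{m/s}, \]

which is exactly what an observer on deck would measure.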
This method allowed him to describe the motion of a test particle moving with constant velocity or constant acceleration, the very topic with which a physics course begins even today! It is worth noting that Galileo realized the importance of a proper handling of infinitesimal quantities for solving the above problems. However, since in those days mathematics was not advanced enough for such a treatment, he merely solved kinematical problems using geometrical methods, just as is done today when this topic is taught in the first years of high school. More specifically, he realized that, in a v − t diagram, the distance, s, covered by a particle is equal to the area under the curve v = f(t). For constant velocity, v, the area has a rectangular shape, so s = v·t, while for constant acceleration, a, the area has the shape of a right triangle with height equal to a·t, so s = ½·a·t².
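In the language of the calculus that came later (a compact restatement of Galileo's geometric argument, not his own notation), the distance is the integral of the velocity:

\[ s = \int_0^t v(t')\,dt' = v\,t \quad (v = \text{const}), \qquad s = \int_0^t a\,t'\,dt' = \tfrac{1}{2}\,a\,t^2 \quad (a = \text{const}). \]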
Galileo also introduced the principle of independence of motions, which allowed him to study the parabolic motion of projectiles in the Earth's gravitational field, analyzing it into a horizontal motion with constant velocity and a vertical motion with constant acceleration. Furthermore, in Padua, he discovered that the oscillations of a pendulum seem to be isochronous, an issue that had preoccupied him since he was a student2 (Fig. 4.3). In general, we can say that during the second period

2
This idea came to him for the first time when observing the oscillations of a chandelier in Pisa's Cathedral. An interesting point is that Galileo believed that oscillations of any amplitude are isochronous (accurate clocks had not been invented yet). The fact that this is true only for oscillations of small amplitude was realized later, through the use of the differential calculus introduced by Newton.


Fig. 4.3 The dome of the Cathedral of Pisa with the lamp of Galileo (JoJan/Wikimedia Commons)

of his scientific life, Galileo completed most of his contributions, in astronomy as


well as in physics.
The third period of his life starts in 1610, when, after 18 years of teaching at the University of Padua, he accepted a professorship at the University of Florence. This should not come as a surprise, since the University of Florence at that time was considered the best university in Italy and, besides that, in this way he was essentially returning to his homeland, since Florence is just 80 km away from Pisa. But he did not properly assess the reaction of the papal state to the idea of the heliocentric system, thinking that he could persuade the religious establishment by presenting scientific arguments. Initially, under Pope Paul the 5th, who was open-minded and had friendly relations with Galileo, this idea proved feasible. The second following Pope,3 however, Urbanus the 8th, proved to be tough on Galileo, although he had been his friend before ascending to the throne. Urbanus was influenced by the fanatical opponents of the heliocentric system, especially the Jesuit monks, who believed that Galileo's views in his book Dialogue on the Two Major Systems4 were heretical. As a consequence, Galileo was persecuted for his
3
The next Pope, Gregory the 15th, remained on the throne for only 2 years.
4
It is worth mentioning that in this book a discussion is described between a philosopher who supports the heliocentric theory and an Aristotelian philosopher whose name is Simplicius, a further indication that Galileo was aware of the Philoponus–Simplicius debate.


scientific opinions and, at the advanced age of 69, he was forced not only to renounce them, but also to spend the rest of his life under house arrest. However, even at this difficult time, he did not give up research. He attempted to build a gas thermometer using, as a measure of temperature, the volume of gas trapped in a tube over a column of mercury. His student, Evangelista Torricelli (1608–1647), transformed this instrument into the well-known barometer, by completely removing the air above the column of mercury. He also proposed the use of the isochronous oscillations of a pendulum for the construction of a clock, an idea that was implemented a little later by Christiaan Huygens (1629–1695). During the third period of his life Galileo had several students, but he did not have much time for research, since he was engaged in the controversy with the papacy. For this reason, his research results were limited. He published, however, the last and most important (in terms of physics) of his books, known by the short title Two New Sciences, which is the first modern textbook on mechanics. This book includes, among others, chapters on:

uniformly accelerated motion,


Galilean transformations,
independence of motions,
geometric proof of the relations giving position and velocity in uniformly
accelerated motion, and
an early version of the principle of inertia (which subsequently was put forward
as an axiom by Newton).
The conflict of Galileo with the Church, apart from his personal drama, had
other serious consequences. It discouraged scientists from engaging in research
and set back the establishment of an Italian scientific school by Galileo and his
students. Thus, it is not surprising that the next important research results in
physics came from England and the Netherlands, countries where the influence of
the Catholic Church was minor, while in Italy, the birthplace of modern science, it
took 150 years for the next great Italian physicist to appear, namely Alessandro
Volta (1745–1827).
Galileo was not the first scientist who conducted experiments; he succeeded, however, in demonstrating their value in testing theories and in making them generally accepted by the scientific community. For this reason, we believe today that the modern or experimental method of science starts with Galileo. However, one should be cautious; it must be made clear that the adoption of the experimental method, which involves a synthetic way of thinking (from the individual results to the overlying theory), does not imply the simultaneous dismissal of the deductive way of thinking, in which the ancient Greeks excelled. We recall that in the deductive method one starts from a working hypothesis and tries to figure out the conclusions arising from it. Modern physics uses both methods. As we shall see later, the development of the electromagnetic theory by Ampère is a classic example of the first method, during which the great French mathematician interpreted the already known experimental results of Oersted. Contrary to that, the general theory of relativity is a classic example of the second method, where

36

4 The Major Branches of Physics

Einstein first presented the mathematical form of the theory, without any experimental hint, and then experimental physicists attempted to confirm the phenomena predicted by the theory's equations.
Evaluating the work of Galileo today, 350 years after his death, we realize that his main contribution to physics was the foundation of mechanics, which in turn forms the basis of the remaining branches of this science. However, he is better known for the experimental confirmation of the heliocentric theory of the Solar System. These two achievements undoubtedly have something in common: the experimental proof that Aristotle's two main theories, the first regarding the motion of bodies and the second regarding the structure of the Solar System, were incorrect. In a few decades, one man achieved what scientists had failed to do for almost two millennia!

4.1.2 DynamicsGravity: Newton


The scientist who continued Galileo's work in mechanics was Newton (Fig. 4.4). It is worth noting a symbolic coincidence: Newton was born the year Galileo died. If Galileo was the father of kinematics, Newton was the father of dynamics because:
on the one hand, as a great mathematician, he discovered the concepts of the derivative and the integral, which form the basis of calculus, and
on the other hand, as a great physicist, he used these concepts to formulate the laws of motion of a point mass under the influence of a force.
Calculus was discovered, almost simultaneously but independently of Newton, by Gottfried Wilhelm Leibniz (1646–1716); however, the latter used a different mathematical notation. Today, the formulation of calculus is credited to both equally. The notation dz/dx, introduced by Leibniz, is used to represent a derivative in general, while the notation ż, which was introduced by Newton, is used to represent a derivative with respect to time. Using calculus, Newton was able to write (and solve in several specific cases) the differential equations describing the motion of a point mass to which a force is applied. In this way, by the late 17th century mechanics had already reached a point of maturity, something that is corroborated by the fact that today it is taught in schools and in first-year university courses using the concepts established in the most important book of Newton, Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) (Fig. 4.5). To those who may wonder why an Englishman wrote a book in Latin, we note that this language was considered, from the Middle Ages until as late as the 18th century, as the most suitable for expressing scientific concepts.
Newton's theory of mechanics is based on three axioms (or principles, or laws) of motion, which he put forward:
Axiom I: Every object in a state of uniform motion tends to remain in that state of motion, unless an external force is applied to it.


Fig. 4.4 Sir Isaac Newton by Sarah K. Bolton, from Famous Men of Science. NY: Thomas Y. Crowell & Co., 1889

Axiom II: The rate of change of the momentum of a body is proportional to the force acting on it and is in the same direction.
Axiom III: For every action (force) there is an equal and opposite reaction
(opposite force).
The first two laws can be combined in the vector equation of motion of a point mass

\[ \mathbf{F} = m\,\mathbf{a} \]

in which the quantity m·a is nothing more than the derivative of the momentum with respect to time in the special case where the mass of the body remains constant, i.e.,

\[ \frac{d\mathbf{p}}{dt} = \frac{d(m\mathbf{v})}{dt} = m\,\frac{d\mathbf{v}}{dt} = m\,\mathbf{a} \]
Using this law, Newton solved the problem of motion without the presence of
resistance or with resistance either proportional to velocity or proportional to the
square of it. But the most important problem he sought to solve was that of the
motion of the planets around the Sun. To achieve this, he had to formulate his law
of universal gravitation. Having solved the differential equation of motion for


Fig. 4.5 Title page of the first edition of Philosophiae Naturalis Principia Mathematica by I. Newton

various forms of force, he found that the solutions of the equation for planetary motion were ellipses with the Sun at one focus, just as predicted by Kepler's laws, when the force is proportional to the inverse square of the distance between the two bodies (the Sun and one planet at a time),

\[ F = \frac{G\,M_{\mathrm{Sun}}\,m_{\mathrm{planet}}}{r^2}, \]
where G is the gravitational constant (Fig. 4.6). In this way, he took two major steps towards the completion of mechanics:
using his laws of dynamics, he interpreted Kepler's laws, which until then were of a purely kinematic-geometric nature, and
he eliminated the distinction between terrestrial and celestial bodies (which had been introduced by Aristotle) by showing that the very same laws of physics apply to both.
In other words, the force that makes the Moon move in its orbit around the Earth, and the Earth around the Sun, is the same force that attracts bodies toward the center of the Earth, imparting to them the property we call weight. At this point we must point out that a distinction between the physical laws of very small and very large
point out that a distinction between the physical laws of very small and very large


Fig. 4.6 The basic setup of the torsion balance used by Cavendish in his experiments, by which
he essentially measured the gravitational constant, G. Two small spheres are suspended by a
string with a mirror attached to it. The small spheres are attracted gravitationally by two large
spheres, fixed on a pivot. Due to gravitational attraction the small spheres rotate by a small angle,
which is measured by the deflection of the light beam on a dial

systems reappeared later, this time within the framework of 20th century physics, due to the fact that the laws of quantum mechanics, which describe the microcosm, are different from those of general relativity, which describe the macrocosm. This case is one of several where concepts in physics may appear to be cyclic.
Newton solved the equations of motion under the influence of gravitational forces for two cases, in both assuming that the bodies have spherical symmetry, so that they could be considered as point masses. In the first case, he assumed that one of the two bodies, the Sun, has practically infinite mass, so that it remains fixed and is orbited by the second body, a planet. In the second case, he assumed that both bodies have finite mass, so that both of them move around their common center of mass. The second case is called the "problem of two bodies", while the first one could be called the "problem of one body". Newton found that the solutions to both problems are, for negative values of the (mechanical) energy, geometrically similar ellipses, with a coefficient of proportionality (in the latter case) equal to the ratio of the mass of one body to the sum of the masses of the two bodies. Studying
his writings and noticing the simplicity of these solutions, one gets the impression
that he may have assumed that it would be relatively easy to find the general
solution for the problem of three bodies and use it as an intermediate step towards


the construction of solutions for more bodies, for example, the Solar System; but he did not attempt to find such a solution. The complete solution of the problem of two bodies, for any value of the total energy (negative, positive or zero), was provided by Johann Bernoulli (1667–1748) in 1710. Since then, finding a solution to the problem of three bodies tantalized many prominent astronomers, until Jules Henri Poincaré (1854–1912) proved in the late 19th century that such a solution does not exist within the frame of standard analytic functions.5 This result marked, as we shall see later, the beginning of the theory of chaos.
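Although no general closed-form solution exists for three bodies, the "one-body" problem described above is easy to explore numerically. The following minimal sketch (our illustration in Python; Newton, of course, worked analytically) integrates the inverse-square equation of motion and confirms that the orbit is bound, as signalled by the conserved, negative mechanical energy:

```python
# Sketch: one-body problem, d2r/dt2 = -GM r/|r|^3, integrated with a
# leapfrog (kick-drift-kick) scheme. Units are chosen so that GM = 1.
import numpy as np

GM, dt = 1.0, 1e-3
r = np.array([1.0, 0.0])          # initial position
v = np.array([0.0, 0.8])          # below circular speed -> an ellipse

def acc(r):
    return -GM * r / np.linalg.norm(r)**3

for _ in range(20000):
    v += 0.5 * dt * acc(r)        # half kick
    r += dt * v                   # drift
    v += 0.5 * dt * acc(r)        # half kick

# Negative specific energy E = v^2/2 - GM/r indicates a bound (elliptic)
# orbit; E stays (very nearly) constant throughout the integration.
print("specific energy:", 0.5 * v @ v - GM / np.linalg.norm(r))
```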
It is worthwhile to point out the importance of the exact value of the exponent of the distance in the law of universal gravitation, not only in classical physics but also within the context of current efforts to integrate gravity with the other three known forces (electromagnetic, weak nuclear and strong nuclear). The theory of gravity can be based, apart from the direct axiomatic adoption of the law of the force introduced by Newton, on another, independent axiom, which was introduced by Gauss. According to this axiom, the surface integral of the gravitational acceleration, g, over a closed surface S is equal to 4πG times the mass enclosed by this surface (apart from a minus sign, reflecting the fact that g points inward), where G is the universal gravitational constant, i.e.:

\[ \oint_S \mathbf{g} \cdot d\mathbf{S} = -\,4\pi G M \]

The two axioms lead to the same theory, and hence are equivalent, if and only if space has three dimensions and Newton's law depends exactly on the inverse square of the distance. If the exponent in Newton's law is not exactly equal to 2, or if space has more than three dimensions, the two axioms lead to different theories, a fact that would upset the currently established structure of classical physics. Contemporary efforts to formulate a unified theory, which would include all four known forces, are based on the assumption that space has more than three dimensions (usually nine or ten), in which case the value of the exponent of the distance in Newton's law of gravity must be different from 2. To date, there have been many experiments to measure this exponent accurately and all of them, within the accuracy of the experiment, have given a value of 2.
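A quick check of the equivalence in three dimensions (standard material, spelled out here for illustration): for a point mass M and a concentric sphere of radius r, the inverse-square law gives g = GM/r² directed inward, so the flux through the sphere is

\[ \oint_S \mathbf{g} \cdot d\mathbf{S} = -\frac{GM}{r^2} \cdot 4\pi r^2 = -\,4\pi G M , \]

independent of r, precisely because the r² in the area of the sphere cancels the 1/r² of the force. With an exponent other than 2, or in a space of dimension other than three, this cancellation fails and the two axioms part ways.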
Newton's theory of gravitation has two weak points, which are not adequately emphasized today. The first is that, in formulating the law of universal gravitation, Newton introduced a new concept in physics, action at a distance. Until then, the application of a force on a body presupposed the contact of this body with another. For example, we push an object with our hand or pull another one with the help of a rope. Even the sails of a ship are pushed by the wind, which may be transparent, but whose existence is perceived by humans. The invocation of a force that does not require physical contact between two bodies raises philosophical questions such as: how does a body know of the existence of another and feel its attraction?

5
Functions that are locally given by a convergent power series.


Newton was aware of this problem and in the second edition of Principia he wrote the famous phrase (on the nature of this force) "hypotheses non fingo", which may be translated into English as "I feign no hypotheses".6 Newton's reservation has been forgotten over time, mainly because the same functional form of the force was found between charges as well as between magnetic poles. The first to find a way out of this problem was Michael Faraday (1791–1867), who introduced the concept of the field. This concept was then used by Maxwell in his electromagnetic theory, by Albert Einstein (1879–1955) in the general theory of relativity and by Erwin Schrödinger (1887–1961) in quantum mechanics; as a result, today the concept of the field constitutes one of the cornerstones of physics.
The second weak point is that Newton's theory of gravitation cannot describe the universe as a whole, whether it is finite or infinite.
If, on the one hand, the universe is finite, all bodies should collapse, in a finite time, to its center, due to the gravitational forces by which the bodies attract each other. An apparently successful way out of this problem is to assume that the bodies that constitute the universe orbit its center in order to remain in place, because (in the rotating, non-inertial frame of reference) the centrifugal force balances the gravitational attraction. Unfortunately, this hypothesis does not solve the problem, because the balance between centrifugal and gravitational force is unstable and the equilibrium can be broken by any small disturbance; the final result is, again, the collapse of the universe to one point.
If, on the other hand, the universe is infinite, every material body feels an infinite force from every direction, as is evident from the following simple reasoning. Assume a spherical coordinate system like the one we use in our everyday life, centered at the center of the Earth. Then the force, dF, that is applied to the Earth from the other celestial bodies (such as stars) in a spherical shell of thickness dr, which lie at a distance r from the Earth and are located in the direction of a particular latitude, θ, and longitude, φ, will be
\[ dF = \frac{G M_E\, dm}{r^2} = \frac{G M_E\, \rho\, r^2 \sin\theta\, d\theta\, d\phi\, dr}{r^2} = G M_E\, \rho \sin\theta\, d\theta\, d\phi\, dr \]

where M_E is the Earth's mass, dm is the mass of these stars and ρ is the mass density of the universe. Assuming, for simplicity, that the density of the universe is constant and integrating this relation with respect to the distance, r, from zero to infinity, we find that the rest of the universe attracts the Earth in every direction with an infinite force!
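In modern notation the divergence is immediate (our restatement of the argument): summing the shells over all distances gives, for any fixed direction (θ, φ),

\[ F(\theta,\phi) = G M_E\, \rho \sin\theta\, \Delta\theta\, \Delta\phi \int_0^\infty dr \;\longrightarrow\; \infty . \]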
This equilibrium is obviously unstable, since the slightest disturbance of the density, ρ, will result in the manifestation of an infinite net force that will attract the Earth in a certain direction! In simple words, we can say that it is not possible to calculate the net force (since it is meaningless to add infinite numbers or vectors) and thus it is

6
The complete reference is "I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses." From Isaac Newton (1726): Philosophiae Naturalis Principia Mathematica, General Scholium, third edition, page 943 of I. Bernard Cohen and Anne Whitman's 1999 translation, University of California Press.


not possible to apply Newton's second law. The problem of describing the universe as a whole was solved by Einstein through the formulation of the general theory of relativity, which thus became the basic tool of the discipline of cosmology.
Newton is best known in the history of science not for his major role in establishing mechanics and solving the problem of the motion of bodies acted on by forces of various kinds, but for the formulation of the law of gravitation. We recall that something similar happened with Galileo, who is best known not for his key role in establishing mechanics and the use of experiments in physics, but merely for the experimental verification of the Aristarchus–Copernicus heliocentric hypothesis. It is
interesting to note that, while Einstein proved that the theory of gravity is only approximately correct, being a limiting case of his general theory of relativity for weak gravitational fields and small velocities, the equation (Newton's second law)

\[ \mathbf{F} = \frac{d\mathbf{p}}{dt} \]
describing the motion of a body under the influence of a force (in the way it was
written by Newton and not as it is written today) was proved to be correct even
within the framework of the special theory of relativity! Therefore, one could say
that Newton's theory of motion withstood the test of time better than his theory of
gravity, so that it would be more appropriate to remember him as the founder of
dynamics.

4.1.3 Solid Body: Huygens


Although in Principia Newton included, in addition to the description of the motion of a point mass, some results on the motion of rigid bodies, it should be noted that this problem was placed on its correct basis by another scientist, Christiaan Huygens (1629–1695). Huygens introduced the concept of the moment of inertia, which plays a role similar to that of mass in the equation of the rotational motion of a rigid body. Furthermore, Huygens made two major contributions to mechanics:
He put forward the principle of conservation of momentum of a system of bodies and, through it, he formulated the laws that describe the collision of two bodies.
He calculated the centrifugal force in the case of uniform circular motion and, using this result, he understood the effect of latitude on the value of the gravitational acceleration, g(φ), on the Earth's surface.
It is noteworthy that Huygens is best known for his contribution to the formulation of the wave theory of light and not for the completion of mechanics, something that reminds us of the cases of Galileo and Newton regarding the recognition of their contributions to physics and astronomy. One reason for this is perhaps the fact


that issues of mechanics of rigid bodies, which were addressed primarily by Huygens, are not usually included in the syllabus of high school programs.

4.1.4 Analytical Mechanics


With the work of the three scientists mentioned above, namely Galileo, Newton and Huygens, classical mechanics attained, at the end of the 17th century, a level of maturity that remains practically the same today, and it constitutes the foundation of the corresponding courses that appear in high school and introductory university syllabuses. But theoretical physicists had not said their final word yet. Classical mechanics, which is based on Newton's three axioms, has some weaknesses that are not obvious at first sight.
The most important is that for each body we have to write as many second-order differential equations (i.e., equations involving second derivatives with respect to time) as there are dimensions of the space in which it moves, since the basic equation of motion of the Newtonian theory is a vector one. So, if we consider the motion of a point mass in space, we must write three equations of motion. If the point mass, due to the nature of the problem, is confined to move on a surface or along a curve, then it has fewer than three degrees of freedom; however, the reduction of the number of equations is not a simple task.
The second is that we need to take into account all forces acting on each body, either external (such as a force that we impose) or internal (such as the reaction of a body to the action of another force, when both forces belong to the same system; for example, the tension of a string that connects two bodies). In complex systems, this requirement leads to a significant increase in the number of forces acting on a body.
The third and, perhaps, most important weakness, from a physical point of view, is that, in the three axioms of Newtonian mechanics, force appears as a basic concept. Besides the fact that it is difficult to define the concept of force, Newtonian mechanics cannot be applied in physical theories in which the concept of force does not appear at all. As we shall see in Sect. 5.1.2, within the framework of general relativity, the trajectory of a body is curved not because a force is acting on it, but because the space in which it moves is curved. In quantum mechanics, the concept of motion (and therefore the concepts of position and velocity as well) has been replaced by the concept of the probability that a particle will be observed in a specific place at a particular time.
Finally, according to Ockhams razor, it is preferable to have a theory based on
fewer than three axioms.
In view of the above, it should not come as a surprise that, one hundred years after Newton's death, two more theories of dynamics for the formulation of the equations of motion of a point mass were introduced, each based on a single principle and using scalar rather than vector quantities. In both theories, the basic axiom (principle) is the same: the integral of a scalar function has an extremum
principle and using scalar and not vector quantities. In both theories, the basic
axiom (principle) is the same: the integral of a scalar function has an extremum

44

4 The Major Branches of Physics

(that is, it takes an extreme value, usually a minimum) when calculated along the
trajectory which is a solution of the equations of motion. The idea is not new in
physics; it had already been used, as we shall see in Sect. 4.2.7, by Pierre de
Fermat (1601–1665) to calculate the path of a light beam. According to Fermat,
all the so-called laws of geometrical optics, in which light is considered to
propagate along rays, stem from a single principle. According to this, light
follows the path that provides the minimum propagation time between two points.
The first of the two new theories of dynamics was formulated by the French-Italian mathematician Joseph Louis, Comte de Lagrange (1736–1813) and the other one by the Irish mathematician Sir William Rowan Hamilton (1805–1865). The scalar function introduced by Lagrange is represented by L and usually, in the simplest cases, is equal to the difference between the kinetic energy of a system, usually denoted by T, and its potential energy, usually denoted by W, i.e., L = T − W. The equations of motion, in the case of one-dimensional motion, arise
from the requirement that the value of the integral
\[ \int L\big(x(t), \dot{x}(t), t\big)\, dt \]
has an extremum if the function x(t) is the solution of the equations of motion; that is, when x(t) and ẋ(t) are the functions that give the position and velocity as functions
of time.7 The resulting differential equations are of second order and, in the general
case, they are equal in number to the degrees of freedom of the system. Therefore,
for a free point mass we obtain three equations, while for a point mass forced to
move on a surface we obtain two. The scalar function introduced by Hamilton is usually symbolized by H and, in the simplest cases, equals the sum of the kinetic and potential energy of the system (i.e., its total mechanical energy); so, it is represented symbolically by the relation:

\[ H = T + W \]
The axiom (principle) giving the equations of motion is the same as that of Lagrangian mechanics; however, the resulting equations are first-order differential equations, equal in number to twice the number of degrees of freedom of the system. So, for a freely moving point mass there are six equations and for a point mass forced to move on a surface there are four.
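As a concrete illustration (standard textbook material, added here for clarity): for a single particle of mass m moving in one dimension in a potential W(x), the Lagrangian L = ½mẋ² − W(x) yields, through the extremum condition, the second-order Euler–Lagrange equation, while the Hamiltonian H = p²/2m + W(x) yields a pair of first-order equations; both reproduce Newton's second law:

\[ \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 \;\Rightarrow\; m\ddot{x} = -\frac{dW}{dx}; \qquad \dot{x} = \frac{\partial H}{\partial p} = \frac{p}{m}, \quad \dot{p} = -\frac{\partial H}{\partial x} = -\frac{dW}{dx}. \]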
It has been shown that each of the above two theories, both known by the general name of analytical mechanics, is equivalent to Newtonian mechanics (and thus they are also equivalent to each other). What is important, in choosing which theory to use in a specific problem, is to decide which one leads in the easiest way to the solution. In a nutshell, one can say that, for problems more complicated than the ones we learn to solve at school, analytical mechanics

7
It is worth noting that what is varied in the above integral is not the values of the variables x and ẋ but the functions x(t) and ẋ(t). As a result, the integral is not a function but a different kind of mathematical object, called a functional.


provides answers more easily than Newtonian mechanics. Moreover, we should note that, traditionally, Lagrangian mechanics finds applications in the general theory of relativity, while Hamiltonian mechanics does so in quantum mechanics. For these reasons, analytical mechanics is taught at length in the last semesters of the curriculum of any Physics Department.
It should be noted that the foundations of analytical mechanics were established by the work of two mathematicians, while Newton himself held a chair of mathematics at Cambridge University. This coincidence is anything but accidental; in recent years, as already stated, it has become increasingly difficult to distinguish between applied mathematicians and theoretical physicists.

4.1.5 Nonlinear Mechanics


Mechanics continued to evolve after the era of Lagrange and Hamilton. In the 20th century, two great mathematicians changed completely the way mechanics treats nature, namely Henri Poincaré and Andrey Nikolaevich Kolmogorov (1903–1987). Poincaré worked mainly in celestial mechanics, the branch of mechanics which deals with the motions of the bodies of the Solar System. But he was interested in other fields of physics as well, and he was able to formulate the equations of the special theory of relativity a few months before Einstein! Poincaré showed that even the simplest problems in mechanics, excluding those solved exactly by Newton, can have very complex solutions, which exhibit a seemingly random behavior. This phenomenon forms the basis of the theory of chaos, which is presented in detail in Sect. 5.3.
Kolmogorov is well known in the mathematical community as the founder of the modern theory of probability. In physics, however, he became famous because, based on the measure theory which he used in establishing his probability theory, he managed to reconcile the complexity discovered by Poincaré with the fact that in everyday life we can predict the motions of various objects with great accuracy. He did this by proving that, along with the chaotic solutions of a dynamical system which were discovered by Poincaré, there are always regular ones. So in some dynamical systems, such as our Solar System, the almost regular solutions dominate, a fact that enables us to predict the motion of the planets with great accuracy for thousands of years, while in other systems, such as the Earth's atmosphere, chaotic solutions prevail, a fact that prevents us from forecasting the weather for more than 2 or 3 days at a time.
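The hallmark of the chaotic solutions mentioned above is sensitive dependence on initial conditions. The following toy sketch (a standard illustration in Python, not an example from the text) shows how two orbits of the logistic map, a simple chaotic system, diverge from an initial separation of one part in ten billion:

```python
# Two orbits of the chaotic logistic map x -> r*x*(1-x) with r = 4,
# started 1e-10 apart; their separation grows until they decorrelate.
r = 4.0
x, y = 0.3, 0.3 + 1e-10
for n in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: |x - y| = {abs(x - y):.3e}")
```

After roughly 40 iterations the two orbits are as different as two random numbers, which is essentially why weather forecasts lose their value after a few days.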

4.1.6 Mechanics Today


Today, the teaching of mechanics follows three directions. In secondary education
Newtonian mechanics is taught just as it was formulated 300 years ago. Newtonian

46

4 The Major Branches of Physics

mechanics appears in the curricula of the first or second year of college and university courses, while analytical mechanics, as formulated in the 19th century, is
reserved for the third or fourth year. Finally, non-linear mechanics, as formulated in
the 20th century through the work of Poincaré and Kolmogorov, as well as fluid
mechanics, appear in the curricula of the last year of college and university courses.

4.2 Optics
4.2.1 The Period up to the Renaissance
The first branches of physics to develop were those for which accumulated experimental data were available. Apart from mechanics, this was also the case for another branch, optics. Light dominates our daily life and is involved in very common phenomena, such as the phenomenon of shadow. Furthermore, since ancient times mankind has used various forms of optical instruments, mainly mirrors and lenses. For example, plane mirrors were known at least since the time of the Pharaohs and were used for the everyday beauty care of women. It is said that Archimedes used concave mirrors to set ablaze the Roman fleet, which besieged Syracuse in 213 BC. Moreover, legend has it that Nero used an emerald, polished to form a diverging (biconcave) lens, to cope with his myopia. The Roman philosopher and tutor of Nero, Seneca, had described the magnified images produced by a glass bottle full of water. The famous Greek astronomer Claudius Ptolemy (ca. 75 AD–ca. 150 AD) had discovered the phenomenon of the refraction of light rays propagating from air to water, and managed to measure the angles of the incident and the refracted rays. Finally, the use of corrective lenses for vision (spectacles) was established as early as the Middle Ages by the English monk Roger Bacon (1214–1294).

4.2.2 Corpuscular Nature of Light: Newton


Aristotle believed in the corpuscular nature of light. This theory is based on the assumption that light consists of small particles that are emitted by luminous bodies and detected by the eye. Newton, who was involved with optics as much as with mechanics and had written a book entitled Opticks, supported the idea of the corpuscular nature of light as well. The reason was that this hypothesis could explain in a very simple way not only the phenomena that constitute what we today collectively call geometrical optics, but even the phenomenon of dispersion of light, which he had studied in detail. When white light passes through a prism, it is analyzed into the seven colors of the rainbow. If we isolate the rays of one of these seven colors and direct them through another prism, we will notice that they
cannot be further analyzed.

Fig. 4.7 Geometrical optics, corpuscular model: reflection (top), refraction (bottom). The speed of light is larger in the medium with the larger refractive index (drawing by author)

According to Newton, this happens because the particles of each color constitute the basic components of light and, therefore, they
cannot be analyzed into something simpler. The phenomenon of reflection could
be easily interpreted through the laws of collision, elaborated by Huygens, in
conjunction with the principle of the independence of motions established by
Galileo. When a particle of light reaches a reflecting surface, its velocity is reversed in the direction perpendicular to the surface, while in the direction tangential to the surface its velocity remains constant. As a consequence, the equality of the angles of incidence and reflection can be easily explained (Fig. 4.7). The phenomenon of refraction is explained if one assumes that the velocity of the particles of light always remains constant within a given optical medium, except when the particles approach the interface between the two media, where they feel a brief attracting force from the denser medium. This force increases the velocity component normal to the interface, v_perp, while the component parallel to the interface, v_par, remains constant. If by v_air and v_water we denote the speed of light in air and water, then from the geometric construction, in the case of refraction at an air–water interface, it is evident that

$$\sin\vartheta_{\text{incident}} = \frac{v_{\text{par}}}{v_{\text{air}}} \qquad \text{and} \qquad \sin\vartheta_{\text{refracted}} = \frac{v_{\text{par}}}{v_{\text{water}}}$$

so that, finally,

$$\frac{\sin\vartheta_{\text{incident}}}{\sin\vartheta_{\text{refracted}}} = \frac{v_{\text{water}}}{v_{\text{air}}}$$

Fig. 4.8 Snell's law: refraction of light at the interface between two media with refractive indices n2 > n1
According to Snell's law (Willebrord van Roijen Snell, 1580–1626), the ratio of the sines equals the ratio of the refractive indices of the two media; therefore, in the corpuscular model it is evident that the refractive index must be higher in the medium where the speed of light is higher (Fig. 4.8). When, in the middle of the 19th century, advances in technology allowed the measurement of the speed of light in various media, this conclusion was disproved, because it was found that the speed of light in water is lower than the speed of light in air. But, as we shall see in what follows, Newton's theory of light had already been disproved since the early 19th century by another, more decisive, experiment.
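The two models make opposite and easily compared predictions once the refractive index of water, n ≈ 1.33, is known. The following short calculation (an illustration added here, not part of the original argument) makes the point numerically:

    # The corpuscular and wave models predict opposite changes of the
    # speed of light in water, given its measured refractive index.
    c_air = 3.0e8     # speed of light in air, in m/s (essentially c)
    n_water = 1.33    # refractive index of water

    v_corpuscular = c_air * n_water   # Newton's model: light speeds up
    v_wave = c_air / n_water          # wave model: light slows down

    print(f"corpuscular prediction: {v_corpuscular:.2e} m/s")
    print(f"wave prediction:        {v_wave:.2e} m/s")
    # Measurements in water give about 2.25e8 m/s, the wave value.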
In addition to the dispersion of light by a prism, Newton also discovered the phenomenon of interference (Fig. 4.9). He made this discovery by noticing that, if the flat side of a plano-convex lens, resting with its curved surface on a rectangular block of glass, is illuminated, circular interference fringes are formed. This phenomenon puzzled Newton, because he could not explain it with his corpuscular
model; however, he believed that this gap in his theory existed because he was not able to find the correct interpretation, and not because his model was wrong.

Fig. 4.9 Newton's rings seen in two plano-convex lenses with their flat surfaces in contact. One surface is slightly convex, creating the rings. In white light, the rings are rainbow-colored, because the different wavelengths (corresponding to different colors) interfere at different locations

4.2.3 Wave Nature of Light: Huygens


The basic opponent of Newton regarding optics was Huygens (1629–1695), thirteen years his senior, who supported the wave nature of light. From a young age, Huygens managed to shape lenses and build telescopes, with which he performed important astronomical observations. It is worth noting that he was the first to observe the rings surrounding Saturn, since Galileo's telescope did not have the necessary resolution to resolve them. The experience gained from this activity helped Huygens realize that all geometrical optical phenomena could be explained by adopting the wave model of light (which had already been proposed by Hooke) and an additional hypothesis, the principle of secondary emission, also known as Huygens' principle (Fig. 4.10). According to his theory, light is an oscillation which propagates in space from point to point in the form of waves. When light waves pass through a point in space, this point becomes an emission center of secondary waves. In this way, the wave-front of the light wave is formed by the envelope of all the secondary waves.

With the help of this model, Huygens could explain the phenomena of reflection and refraction (Fig. 4.11). But this was not sufficient for Huygens' wave theory to replace Newton's corpuscular theory. Nevertheless, the wave theory could explain other optical phenomena, such as double refraction, in which light passing through certain crystals creates two images. Huygens realized that this phenomenon can be explained by the wave theory if one assumes that in these crystals the speed of light is different in different propagation directions, so that the envelopes of the secondary waves are not spherical but ellipsoidal surfaces. In a manuscript dated 6 August 1677, Huygens draws for the first time ellipsoidal wave-fronts and
writes the Greek word eureka (I found it), as a reference to the famous story about how Archimedes discovered the concept of buoyancy.

Fig. 4.10 Huygens' principle: every point of a wave front becomes a center of secondary emission. In this way the wave front at any time is the envelope of the secondary waves at this specific time (drawing by author)

Figs. 4.11 (a and b) Geometrical optics, wave model: refraction. The propagation of the wave front is similar to the change in direction of a military column that moves from one surface to another, reducing at the same time its walking speed (drawings by author)
Unfortunately, in his wave theory model Huygens uses longitudinal waves, like sound waves in air, which oscillate along the direction of propagation. So, apart from the fact that it was not clear what medium oscillates during the propagation of the waves (this became clear only some 250 years later, through Einstein's special theory of relativity), Huygens' theory could not explain why the two beams emerging from a birefringent crystal do not interfere.8 For these reasons, at that time Huygens' wave theory did not replace Newton's corpuscular theory.

4.2.4 Establishment of the Wave Theory: Thomas Young


The situation in optics remained unchanged for approximately 100 years after the deaths of Newton and Huygens. The scientific community had adopted Newton's corpuscular theory, and the wave theory had fallen into obscurity. But at the beginning of the 19th century (1802), the British physicist Thomas Young (1773–1829) demonstrated conclusively the wave nature of light; he achieved this with his famous double-slit experiment, in which light from two slits interferes on a screen and forms fringes (Fig. 4.12). Using Huygens' theory, he managed to calculate the distance between two consecutive fringes as a function of the wavelength, the separation of the slits and the distance of the screen from the slits. In this way, he was able to measure the wavelength of each color; he found a wavelength of 0.7 μm (micrometers) for red and 0.4 μm for violet.
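For a screen far from the slits, the geometry of the experiment reduces to a simple relation: the fringe spacing is Δy = λD/d, where λ is the wavelength, d the separation of the slits and D the distance from the slits to the screen. The following sketch (with plausible illustrative numbers, not Young's actual data) shows how the wavelength follows from three measurable lengths:

    # Recovering the wavelength from the double-slit fringe spacing,
    # using Delta_y = lambda * D / d (valid when D >> d).
    # The numbers are illustrative, not Young's actual measurements.
    d = 0.5e-3        # slit separation: 0.5 mm
    D = 1.0           # distance from the slits to the screen: 1 m
    delta_y = 1.4e-3  # measured fringe spacing: 1.4 mm

    wavelength = delta_y * d / D
    print(f"wavelength = {wavelength * 1e6:.2f} micrometers")  # 0.70: red light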
A key factor in interpreting the results of this experiment was that Young used as a model of light waves not sound waves, which are longitudinal, but waves that propagate on the surface of a liquid, which are transverse. In fact, he had demonstrated similar experiments at the Royal Institution of London, where he served for 2 years as a professor. The Royal Institution was a research institute of the time that had been founded by Benjamin Thompson, Count Rumford (1753–1814). Thompson was one of the first scientists who realized that heat is a form of energy; his life and scientific achievements are presented in detail in Sect. 4.5.2.
Young also interpreted the phenomenon of color vision by proposing the three-color theory, which maintains that the infinite variety of colors perceived by the human eye can be explained by the involvement of only three primary colors. This theory, completed by Helmholtz, forms the basis of color rendering in all contemporary related technologies, such as color television and color photography.
Besides that, he managed to interpret qualitatively two other optical phenomena that were known in his days: thin-film interference and the non-interference between the two rays emerging from a birefringent crystal (Fig. 4.13).

8 Birefringence is the optical property of a material having a refractive index that depends on the propagation direction of light. Birefringence is responsible for the phenomenon of double refraction, whereby a ray of light, when incident upon a birefringent material, is split into two rays taking slightly different paths.

Fig. 4.12 Double slit experiment. Plane light waves illuminate an opaque screen with two slits. Light passes through the slits and interferes on the screen (P), creating fringes (Lacatosias, Stannered/Wikimedia Commons)

Fig. 4.13 Thin film interference image from a diesel spill on a road (John/Wikimedia Commons)
In the first phenomenon, thin sheets of transparent material, or even thin films of liquids (such as oil) on the surface of water, exhibit iridescent colors. This phenomenon is due to the interference of the light rays reflected on the upper and lower surfaces of the thin layer (in the case of an oil film floating on the surface of water, the two surfaces are the air–oil interface and the water–oil interface). Since different colors have different wavelengths, the interference fringes of different colors appear in different areas. This phenomenon was explained fully thanks to the lifework of the French physicist Augustin Jean Fresnel (1788–1827); the results of his work will be considered in Sect. 4.2.5, which follows.
In the second phenomenon, experiments had shown that the two rays emanating from a birefringent crystal do not interfere. Young explained this by assuming that light consists of transverse waves and that the two rays emanating from a birefringent crystal oscillate in directions perpendicular to each other, so that they cannot exhibit destructive interference; that is, the wave amplitude cannot be reduced to zero and no dark interference fringes are formed. This interpretation was essentially correct, but it was not supported by a mathematical analysis of the phenomenon. The mathematical foundations of wave optics were established, again, by Fresnel.
It is worthwhile to comment on Young's personality, mainly because it is surprising that he decided to deal with an issue that seemed to have been settled 100 years before his time. Young, however, was not an ordinary scientist. Thanks to a large inheritance, he was financially independent, and his involvement with science was more of a hobby than a profession. He was one of the last great amateur scientists. Moreover, he was engaged in physics research only peripherally. He was trained as a medical doctor, and this was his main professional activity. He became interested in physics because he wanted to study the physiology of vision. However, apart from medicine and physics, he had many other interests. For example, his contribution to the decipherment of the hieroglyphics of ancient Egyptian inscriptions was decisive. Furthermore, he had contributed articles on more than twenty topics to an encyclopedia, in disciplines as diverse as Egyptology, literature, physics, astronomy, medicine and shipbuilding!

4.2.5 Completion of the Wave Theory: Fresnel


The French physicist Fresnel, who brought the mathematical wave theory of light to its final form, had a completely different personality from Young. He came from a bourgeois family (his father was an architect) and was able to study thanks to the political change that took place in his time in France, after the revolution of 1789. However, he did not have the necessary means to obtain access to the scientific publications of his time on the branch of physics he was interested in, that is, optics. As a result, he twice delved into problems that were known and already solved. The first was the diffraction of light by a half-plane and the second was thin-film interference. In the second case, in particular, his disappointment was great, because he learned that the problem had already been solved from the very scientist who had solved it. This happened when Fresnel visited Young in England, accompanied by the President of the French Academy of Sciences, François Arago (1786–1853), in order to present to Young his results on this problem. Anyone else would have been disappointed and abandoned the
effort, but not Fresnel. So, he decided to select once again, for the third time, a problem in optics: the diffraction of light by a circular opaque disk. His results, surprisingly, had a big impact on physics, since they provided definitive proof that light is a wave phenomenon. Although it may seem paradoxical, Newton's authority had influenced the scientific community to such an extent that, even after Young's double-slit experiment, there were still proponents of the corpuscular nature of light! Fresnel's results tipped the balance in favor of the wave theory; it is interesting to see how this happened.
Fresnel, using the assumption that light consists of transverse oscillations, calculated the intensity of the light rays as a function of position behind a circular disc and presented his results in a contest announced by the French Academy of Sciences. Members of the jury were some of the most prominent scientists of the era: Pierre-Simon de Laplace (1749–1827), Jean-Baptiste Biot (1774–1862), Siméon-Denis Poisson (1781–1840), Joseph Louis Gay-Lussac (1778–1850) and the Academy's president, François Arago. Of these, the first three were proponents of the corpuscular theory, while the other two supported the wave theory. Poisson, who was the greatest mathematician among the members of the jury, found that, according to Fresnel's theory, the shadow that appears on a screen behind the circular disk is not always completely dark. In some cases, depending on the wavelength of light, the distance of the screen from the light source, and the diameter of the disc, a bright spot should appear in the center of the shadow. This phenomenon could not be interpreted in any way by the corpuscular theory. Eventually, an experiment was conducted that gave exactly the result predicted by Fresnel's theory, thus persuading even Poisson that light, in fact, consists of transverse waves. Fresnel won the prize, and the wave theory of light finally prevailed over the corpuscular theory; that is, until quantum mechanics showed that light has a dual character, that of a wave and a particle!

4.2.6 Spectroscopy as a Branch of Optics


The general acceptance of the wave nature of light, following the work of Young and Fresnel, was not meant to last for long. Soon paradoxes began to emerge in experiments, which could not be easily interpreted by the wave theory of the 19th century. A series of such experiments was related to the speed of light, which is now considered by the (then unknown) special theory of relativity to be constant and independent of the reference frame. Because at that time it had been proved that light waves are transverse, it was predicted (by classical physics) that they propagate in an elastic medium that occupies uniformly all space; this medium was named aether, after the name of the fifth element of Aristotle's physics. Aether should have very high rigidity, because the propagation velocity of transverse waves is proportional to the square root of the rigidity of the propagating medium, and the speed of light was known to be very high. However, despite its enormous rigidity, this medium (aether) did not seem to hinder the
motion of celestial bodies! Nonetheless, besides the fact that aether was never detected experimentally, other experiments (mainly the Michelson-Morley experiment, which will be discussed in Sects. 4.4.5 and 5.1.2) showed that the law of addition of velocities, predicted by the Galilean transformations, does not apply in the aether.
Another series of experiments was related to the quantum nature of light, which was also unknown before the early 20th century. Light is the most common phenomenon of everyday life that is directly related to quantum mechanics, of which, however, physicists were ignorant until the late 19th century. Thus, in many cases, (a) the technicians involved in the manufacturing of optical instruments and (b) the scientists who used those instruments for research purposes observed some peculiar phenomena which, moreover, were difficult to interpret within the existing theories. A classic example of the former category was Joseph von Fraunhofer (1787–1826), who lived at the time of Fresnel and was a glassmaker. Classic examples of the latter were the experimental chemist Robert Wilhelm Bunsen (1811–1899) and the theoretical physicist Gustav Robert Kirchhoff (1824–1887), who lived about half a century after Fraunhofer.
Fraunhofer attempted to shape high-quality achromatic composite lenses consisting of more than two elements (i.e., simple lenses), each made from glass of a different refractive index. The quality of a composite achromatic lens depends mainly on the refractive index of each lens element. For this reason, Fraunhofer was trying to determine accurately the refractive index of the glass samples he manufactured, by shaping them into prisms and measuring the angle of refraction of the emerging light rays (Fig. 4.14). To his surprise, when using the Sun as a light source, he obtained not only the continuum emission spectrum, known since the time of Newton, but dark lines as well; these lines had not been observed until then, because of the low resolution of previously available prisms.9 He decided to use these lines as reference points in order to measure accurately the wavelength of each color; to this end, he started to record these lines in a list, which eventually turned out to contain several hundred spectral lines. This list, although never particularly useful in the construction of optical instruments, proved valuable to later researchers who dealt with the (quantum) interpretation of the phenomenon.
Bunsen was an experimental chemist, engaged in research and teaching at the University of Heidelberg. One of his research objectives was to find methods for the identification of the various chemical elements from the color they emit when heated. He even invented a simple instrument, the Bunsen burner, which uses coal gas as fuel and produces a high-temperature flame, suitable for the incandescence of the chemicals he was using. Bunsen's method was simple (many would say simplistic): he determined the wavelength of the flame's light by directing it through fluids of various colors, which acted as filters, allowing only the passage of light within a narrow wavelength range.

9 Fraunhofer had invented a new instrument, the spectroscope, which made the recording of spectral lines far more effective.

Fig. 4.14 The principal Fraunhofer lines, marked on a continuum spectrum. Line D3, at a wavelength of 587.5618 nm, is one of the helium lines that led to the discovery of this noble gas in the Sun's atmosphere
Kirchhoff10 was a theoretical physicist, 13 years Bunsen's junior. They met at the University of Breslau, where Bunsen had the opportunity to assess the natural talent of the young physicist. So, when Bunsen was elected professor at the University of Heidelberg, he tried, and ultimately succeeded, to have Kirchhoff appointed there as well. During the 20 years that Kirchhoff stayed in Heidelberg, he collaborated closely with Bunsen, and the two scientists produced a series of very important research results.
Kirchhoff suggested to Bunsen the use of a spectroscope, the instrument invented by Fraunhofer that utilizes a prism in order to analyze a light beam into its spectrum. With this instrument Bunsen and Kirchhoff discovered, from their characteristic spectra, two new elements, caesium and rubidium. These two elements were named after the color of their main spectral lines: blue for caesium (caesius in Latin means light blue) and deep ruby-red for rubidium (from the color of the precious stone ruby). After this success, they tried to identify the dark (absorption) Fraunhofer lines of the solar atmosphere by comparing them to the bright (emission) lines of the various chemical elements, known from laboratory measurements. They found that most Fraunhofer lines correspond to lines of elements known on Earth, such as sodium. In 1868, however, during a solar eclipse, astronomers discovered two lines in the solar spectrum that did not correspond to any known chemical element. These lines were attributed to the existence of a new chemical element, which was named helium, from the Greek name Helios (Sun). This success led astronomers to exaggerations, since the discovery of any unknown spectral line in a celestial body was attributed to a new chemical element. In this way, the discovery of coronium, in the corona of the Sun, and of nebulium, in gaseous nebulae, was announced. However, it was soon found that these unknown spectral lines belong to oxygen and nitrogen, whose atoms in the solar corona and in gaseous nebulae are multiply ionized, either due to the high temperature (the corona of the Sun has a temperature of 1,000,000 K) or because of the low number density (gaseous nebulae are billions upon billions of times more tenuous than the Earth's atmosphere).11

10 Kirchhoff is best known to high school students from Kirchhoff's circuit laws (rules), which are used in the calculation of the currents in a node of an electrical circuit, as well as of the voltage drops in the elementary loops of a circuit.
Based on the experimental results of Bunsen, Kirchhoff formulated his now famous three laws of spectroscopy. The first two are known even to primary school students.

The first law states that solids and liquids, when heated, emit a continuous spectrum, in which the intensity of light varies smoothly with frequency and is given roughly by the law of black-body radiation. Gases, instead, emit a spectrum where the light intensity is zero for most frequencies and nonzero only in some narrow frequency ranges. Because in the spectrometer those narrow frequency ranges of non-zero intensity appear as bright lines, the spectrum of gases is usually called a line spectrum.

The second law states that, if a cold gas is placed in front of a body that emits a continuum spectrum, then the continuum spectrum exhibits dark absorption lines at the same wavelengths where the heated gas gives emission lines.

The third law is of mathematical form and links the emission and absorption coefficients of a body to its temperature.

All three laws can be easily explained through quantum theory; however, the third law is of particular importance, because it led Planck (Max Karl Ernst Ludwig Planck, 1858–1947) to the mathematical formulation of the spectral distribution of light emitted by a black body, which was the beginning of quantum theory.
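In modern notation (a formulation added here for concreteness, not a quotation of Kirchhoff's original statement), the third law says that the ratio of the emission coefficient ε to the absorption coefficient α of any body is a universal function of wavelength and temperature alone, namely the very function that Planck later derived:

$$\frac{\varepsilon(\lambda, T)}{\alpha(\lambda, T)} = B(\lambda, T) = \frac{2hc^{2}}{\lambda^{5}}\,\frac{1}{e^{hc/\lambda k_{B}T} - 1}$$

where B(λ, T) is the Planck function, h is Planck's constant, c the speed of light and k_B Boltzmann's constant.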

4.2.7 Relationship Between Mechanics and Optics


Finally, we should point out a relationship that seems to exist between mechanics and optics. Shortly before the beginning of the 20th century, physicists were led to the conclusion that science had reached an end, not only because they had the impression that all physical phenomena had been studied, but also because a single way of describing them all had started to emerge. So, although at first glance optics and mechanics appeared to be completely different entities, they seemed to exhibit some significant analogies. One such analogy is the following: Fermat's principle (formulated by the French mathematician Pierre de Fermat, 1601–1665), according to which light follows the route that minimizes propagation time, is similar to a basic postulate of analytical mechanics, proposed by Maupertuis, according to which a body follows the trajectory that minimizes action (the well-known least

11 The dependence of the degree of ionization on temperature and density is given by the famous Saha law, which can be found in any standard astrophysics book.

action principle). Fermat's principle can be formulated mathematically by using the concept of the optical path, s,

$$s = s_1 n_1 + s_2 n_2 + s_3 n_3 + \cdots$$

which is the sum of the products of the length traveled by the light ray in each medium times the refractive index of that medium, and which is the function to be minimized.
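To see the principle at work, the following Python sketch (an illustration added here, using only the standard library) finds the crossing point on an air–water interface that minimizes the optical path of a ray, and checks that the resulting angles satisfy Snell's law:

    import math

    # A ray travels from A = (0, 1) in air (n1 = 1.0) to B = (1, -1) in
    # water (n2 = 1.33), crossing the interface y = 0 at the point (x, 0).
    n1, n2 = 1.0, 1.33
    ax, ay, bx, by = 0.0, 1.0, 1.0, -1.0

    def optical_path(x):
        # n1 * (path length in air) + n2 * (path length in water)
        return n1 * math.hypot(x - ax, ay) + n2 * math.hypot(bx - x, by)

    # Crude one-dimensional minimization over the crossing point x.
    x_best = min((i / 100000 for i in range(100001)), key=optical_path)

    # Snell's law check: n1*sin(theta1) should equal n2*sin(theta2).
    sin1 = (x_best - ax) / math.hypot(x_best - ax, ay)
    sin2 = (bx - x_best) / math.hypot(bx - x_best, by)
    print(n1 * sin1, n2 * sin2)  # the two values agree closely

The minimum of the optical path (equivalently, of the propagation time) automatically reproduces the refraction law, which is the content of Fermat's principle.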
The second analogy is more indirect and is associated with electromagnetism. As we shall see in Sect. 4.4, in the late 19th century, after Maxwell's work, it was understood that light is a type of electromagnetic radiation. However, the law of attraction between electric charges of opposite sign has the same mathematical form (inverse square) as the law of gravity, which led to the belief that between mechanics and optics there exists a deeper underlying relationship that is responsible for whatever analogies appeared. The existence of these analogies led many physicists, such as Hertz (Heinrich Rudolf Hertz, 1857–1894), to attempt to explain all physical phenomena through mechanics. The effort failed because, as we shall see in Chap. 5 and (in particular) Sect. 5.2, classical mechanics is not compatible with electromagnetism. Einstein's special theory of relativity resolved this incompatibility in favor of Maxwell's electromagnetic theory. But the problem of finding a single unified theory to describe all natural phenomena still remains because, as discussed in Chap. 6, the theory of relativity, which explains the phenomena of the macrocosm, is incompatible with quantum mechanics, which explains the phenomena of the microcosm, including light.

4.2.8 Optics Today


The situation today in optics is as follows: geometrical optics is taught in secondary education in the form in which it was taught up to the time of Young and Fresnel; sometimes some elements of Young's wave theory are also presented. University curricula include the wave theory of Fresnel and his successors of the late 19th century. Finally, in the last year of undergraduate studies or at the postgraduate level, elements of the quantum and non-linear optics of the 20th century are presented, which find application in many modern technologies, such as lasers.

4.3 Static Magnetism and Electricity


4.3.1 From Antiquity to the Renaissance
Unlike mechanics and optics, it took a long time for scientists to integrate electric and magnetic phenomena into a single theoretical model. The main reason was that there were insufficient experimental data, since experiments with static electricity depend in a very sensitive way on weather conditions (air humidity quickly discharges any charged body), and experiments with electric currents were not possible, simply because sources of current had not yet been invented. The only available relevant observations were those inherited from the ancient Greeks, about the properties of magnetic minerals and the phenomenon of electrification by friction, both pioneered by Thales of Miletus. So it is not surprising that, at the beginning of the modern era of physics, the first phenomenon systematically studied was static magnetism.
Pierre Peregrinus de Maricourt (fl. ca. 1269), one of the few experimenters of the Middle Ages, discovered in the 13th century that there are two kinds of magnetic charges, which he named south and north, depending on whether they attract or repel, respectively, the tip of a magnetic needle that points north. It was he who made the historic mistake of calling the magnetic pole of the Earth that corresponds to the north geographical pole a south magnetic pole, since he had named the pole of the needle that points north its north magnetic pole!
However, the first to engage in systematic research on magnetism was William Gilbert (1544–1603), a contemporary of Galileo and the first scientist in England to accept the heliocentric theory. Gilbert had studied medicine and was appointed physician to Queen Elizabeth I, but his research was focused on physics. His main work was published in the book De Magnete, where he presented the results of experiments conducted with terrellae, balls made of magnetite (an iron mineral with magnetic properties) (Fig. 4.15). With these experiments he confirmed the conclusion of Peregrinus, namely that the magnetic needle points north because the Earth contains a huge magnet. He was also the first to realize that magnetic poles always come in pairs, because he noticed that if a bar magnet is cut in two, we do not obtain two isolated poles, one north and one south, but two new, smaller magnets. His key contribution to physics, however, was that with his experiments he managed to differentiate electric from magnetic properties. In particular, he pointed out the differences between electric and magnetic phenomena, noting that magnetic phenomena manifest themselves only in certain metals, especially iron, and are permanent, while electric phenomena can be observed mainly in light objects. Based on these results, he defined as electric those materials which acquire, through friction, properties similar to those acquired by amber (the word electricity is derived from the Greek word ἤλεκτρον, pronounced 'electron', which means amber). In this way, he showed that magnetism and electricity refer to different properties of matter. Unfortunately, his ideas about the nature of these phenomena were completely wrong, since he believed that electric and magnetic forces are due to the outflow of an unknown fluid and that the force that attracts the planets to the Sun is of a magnetic nature.


Fig. 4.15 Contemporary electromagnetic terrella (Museo Galileo, photo by author)

4.3.2 Development of Experimentation


Significant progress in the study of electrical phenomena was made possible only after the invention, in the 17th century, of the first mechanical device producing static electricity, by Otto von Guericke (1602–1686) (Fig. 4.16). Guericke, who was mayor of Magdeburg, is known in the history of physics mainly for his experiments with air pumps. In one of them, he joined two metallic hemispheres and removed the air from the resulting sphere by using a pump. He then demonstrated that the hemispheres could not be easily separated, not even by pulling with sixteen horses (eight from each side), because they were kept together by the external atmospheric pressure. The static electricity machine he invented was primitive, consisting of a sulfur sphere mounted on a potter's wheel. By turning the wheel and, hence, the sphere, and touching the sphere, as a potter does when shaping a clay pot, one could charge the sphere electrically by friction. Then came the remarkable work of the Englishman Stephen Gray (1666–1736), who discovered the phenomenon of electrification by conduction; more specifically, he noticed that the electrical properties of a glass rod, electrically charged through friction, can be transmitted by contact to cork and wood. He found that the
phenomenon was also observed when wood was replaced with a silk thread, but not when the silk thread was held in place by metal braces (Fig. 4.17). Finally, he found that a piece of metal cannot be electrified, unless it is mounted on a material that can be electrified (in other words, on an insulator). Thus, he discovered the difference between conductors and insulators of electricity.

Fig. 4.16 Contemporary model of an electrostatic machine designed by father Filippo Cecchi, director of the Ximenian Observatory, Florence (Museo Galileo, photo by author)

Fig. 4.17 Graphical representation of Gray's crucial experiment. Electric charge is transmitted through a silk thread 80.5 feet long (from Saggio intorno all'elettricità de' corpi by J.A. Nollet, 1747)
Shortly afterwards, the Frenchman Charles du Fay (1698–1739) discovered the difference between positive and negative electric charges. He named them vitreous and resinous fluids, because they manifest themselves, through friction, in glass and amber respectively. Repeating du Fay's experiments, John Canton (1718–1778) discovered that one can produce both kinds of electric charge using the same glass rod, provided that the rod is rubbed with a different material each time.
The next major step was made when scientists found a way to store electric charge in capacitors, which at that time were called Leyden jars, after the city where they had been invented and from their characteristic shape. As often happens, the discovery was purely accidental and was made independently by two researchers. What's more, in the beginning neither of them could understand the physical mechanism


that gave the new instrument this important property. The German Ewald Georg
von Kleist (17001748) was experimenting with a bottle filled with water and
sealed with a cork, through which was passing a piece of wire. Kleist noticed that,
every time he touched the wire with an electrified object, the object was losing its
electric charge. At the same time he found that, when he touched the wire after this
process, he felt a jolt, indicating that the electric charge of the electrified object
had migrated inside the bottle. He assumed that electricity was some sort of
fluid, which could be stored in a bottle the same way as water does. Soon
afterwards, however, he stopped experimenting with these apparatuses, frightened
by the fact that sometimes the jolt he felt was particularly powerful, and he
never published his results. At about the same time and independently of Kleist,
similar experiments were conducted by the Dutch Pieter van Musschenbroek
(16921761), who was teaching at the University of Leyden. Eventually, he
published his results, and the new device was named by a translator of his works
after the Dutch town where it was invented (Fig. 4.18). Following Musschenbroeks publication, many physicists started experimenting with Leyden jars and
soon found out that the presence of a liquid was not necessary, since the same
result could be obtained by placing a metal foil inside the jar. Adding a second
metal foil on the outside surface of the jar, a primitive form of modern capacitor is
formed, with the glass acting as the dielectric between the two conductors.
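In modern terms (a remark added here for context, not part of the original account), the charge such a device can store per volt, its capacitance, grows with the area A of the foils and the permittivity ε of the glass, and shrinks with the thickness d of the glass wall; for the idealized parallel-plate geometry,

$$C = \frac{\varepsilon A}{d}$$

so larger jars with thinner glass walls store more charge at a given voltage.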
The last scientist who made a major contribution to establishing, through experimentation, the properties of electricity was the American Benjamin Franklin (1706–1790). Franklin was a man of many talents, as was the case with many scientists before the 20th century. He began his career as a journalist and continued as a politician. As such, he served as ambassador of Pennsylvania in Paris, at the time when the American states were still under the sovereignty of Great Britain. Later, he was one of the Founding Fathers who signed the Declaration of Independence of the U.S. Franklin established experimentally that the two types of electricity found by du Fay were in fact opposite manifestations of the same entity, because when bodies charged with different types of electricity came into contact, a spark appeared and the bodies ceased to be electrified. So he arrived at the right conclusion that, under normal conditions, a body contains equal amounts of these two kinds of electricity. Electrification of a body occurs when a quantity of one kind is removed or a quantity of the other is added.

Finally, he noticed that the sparks observed during his experiments were very similar to lightning; as a result, it was not difficult for him to realize that lightning is nothing more than an electrical discharge between two clouds or between a cloud and the Earth. In fact, he proved this experimentally by flying a kite during a storm and showing that sparks could be seen at the other end of the string holding the kite, when this end was approaching the ground. Franklin had a practical mind, so apart from his contribution to the understanding of electricity, he thought that the above phenomenon could be used to discharge thunderclouds gradually, avoiding lightning. Thus he invented the lightning rod, for which he is better known to most of us.


Fig. 4.18 Leyden jar of the end of the 18th century (Museo Galileo, photo by author)

4.3.3 The Law of Electrostatic Force: Coulomb


After the gradual understanding of the qualitative nature of electrical phenomena through the work of many scientists, it was time for the formulation of mathematical laws. An important role in this effort was played by two British chemists, Joseph Priestley (1733–1804) and Henry Cavendish (1731–1810), but the key character was, unquestionably, the French engineer Charles Augustin de Coulomb (1736–1806). Priestley, better known for the discovery of oxygen, initially repeated an experiment first conducted by Franklin, which showed that there is no net electric force inside a metal container. Acquainted with the mathematical analysis of phenomena related to the gravitational force, Priestley concluded that the force that manifests itself between similar electric charges is of the same functional form as that of gravity; more specifically, apart from the fact that it is repulsive, this force is proportional to the inverse square of the distance. Cavendish arrived at the same conclusion using the mathematical methodology of the theory of gravity, while trying to describe the capacitance of a spherical conductor. He could have demonstrated this law experimentally, since he had already used in the past the appropriate apparatus, the torsion balance devised by
Coulomb, with which he essentially calculated the numerical value of the universal gravitational constant, G. It is not known why Cavendish did not conduct a similar experiment with electric charges, because very few of his results were published during his lifetime; what we know today about his research on electricity comes from the study of his notes after his death.

Fig. 4.19 Coulomb's experimental device (torsion balance); from the original publication (1785)
As already mentioned, the experimental verification of the law of force between two magnetic or electric charges is attributed to Coulomb. Coulomb was not a physicist by profession. He had, however, studied engineering in a French military academy and had a good knowledge of engineering mechanics and mathematics. He was the inventor of the torsion balance used by Cavendish to measure the value of G. With the help of this apparatus, he measured the force between the like and unlike poles of two sufficiently long bar magnets, chosen so that the influence of the poles at the other end of each magnet could be ignored. He found that this force depends on the inverse square of the distance between the poles. He repeated the experiment using electric charges and arrived at a law of the same functional form (Fig. 4.19).
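In modern notation (added here for reference; Coulomb himself worked with ratios of forces rather than with a numerical constant), the electrostatic law he verified reads

$$F = \frac{1}{4\pi\varepsilon_{0}}\,\frac{q_{1}q_{2}}{r^{2}}$$

where q_1 and q_2 are the two charges, r is the distance between them and ε_0 is the permittivity of free space; the force is repulsive for like charges and attractive for unlike ones.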


4.3.4 Relating Electricity, Magnetism and Gravity


Coulomb's results raised a great deal of interest in the scientific community of his time, for two reasons: one philosophical and one practical. The first reason, the philosophical one, is related to the desire of scientists to understand nature as thoroughly as possible. In this case, the discovery that all three forces known at the time depend on the inverse square of the distance led physicists to the erroneous belief that these forces are somehow related to each other. This false assumption led to the conclusion that the integration of all remaining knowledge of physics, leading eventually to its completion, was imminent. Moreover, because this functional form is, as already mentioned, characteristic, in the sense that it makes the axiomatic foundation of Newton's theory of gravity equivalent to that of Gauss, the international scientific community finally accepted the concept of action at a distance.

The second reason, the practical one, is that no further effort was required to describe magnetism and electricity mathematically, because all one had to do to achieve this was to use the already known results from the theory of gravity and simply change units! This was a very important outcome because, by the time of Coulomb's work, the classical (i.e., the Newtonian) theory of gravity had already reached the level of perfection we know today, especially after the work of three famous French theorists. More specifically, Lagrange had introduced, through analytical mechanics, the concept of the potential. Laplace had written the equation relating the potential in a region of space to the mass contained in that region, and had then solved the equation for the special case where the region contains no mass. Finally, Poisson had extended Laplace's result to the case where the mass in this region is not zero. Furthermore, two other great mathematicians, the Englishman George Green (1793–1841) and the German Johann Karl Friedrich Gauss (1777–1855), had linked the intensity of the gravitational field (i.e., the gravitational acceleration) to the distribution of matter through relationships containing integrals, leading to a natural alternative (but equivalent, as already mentioned in the beginning) foundation of the theory of gravity. So, all the forces known at that time were described by a single mathematical model.
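In modern vector notation (added here; the original memoirs used component form), the two equations mentioned above read

$$\nabla^{2}\Phi = 0 \ \ \text{(Laplace, empty space)} \qquad \nabla^{2}\Phi = 4\pi G\rho \ \ \text{(Poisson, mass density } \rho \text{)}$$

where Φ is the gravitational potential and G the universal gravitational constant.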

4.4 Electric Currents and Electromagnetism


4.4.1 Invention of the Electric Cell
The apparent unification of theories in physics after Coulomb's work was undermined by the discovery of the electric current, something that changed entirely the course of development of electricity. This change was mainly due to the work of two Italian scientists, Luigi Aloisio Galvani (1737–1798) and Alessandro Giuseppe Antonio Anastasio Conte Volta (1745–1827). Galvani was a medical doctor who was conducting research on the physiology of various human
organs, such as the kidneys and ears. He was, however, particularly known for his work on the physiology of muscles, because of an accidental discovery he had made. One of his research objectives at the University of Bologna was the effect of electricity on animal tissues and, for this reason, his laboratory was equipped with electrostatic machines and Leyden jars. There are various stories about how he made his important discovery during a lecture on the anatomy of a frog's leg. One thing is certain, however: he discovered that the leg muscles contract when connected to a source of electricity, such as an electrostatic machine or a Leyden jar, or when touched by two different metals (Fig. 4.20). So he arrived at the (correct) conclusion that the contractions were caused by electricity; however, he believed (incorrectly) that the source of electricity was the frog's body, and for this reason he named it animal electricity. He published his results in 1791, and his ideas were adopted by most scientists of his time.

Fig. 4.20 Graphical representation of Galvani's experiment. The legs of a frog contract when touched by a pair of electrodes made of different metals (source: David Ames Wells, The science of common things, Ivison, Phinney, Blakeman, 1859)
Galvani's observation was further investigated by another Italian physicist, Alessandro Volta. At the dawn of the 19th century, in 1800, Volta made the (correct) assumption that the leg muscles of the frog were just a sensitive medium that detected electricity which, however, came from another source. Therefore, he conducted an experiment to confirm this hypothesis, an experiment destined to revolutionize our technological civilization. Instead of touching the leg muscles of the frog with two plates made of different metals, he immersed the plates in a container filled with a saline solution. He noticed that between the two plates there developed what we call today a potential difference, because when he brought into contact two wires, each connected to one of the plates, he observed a spark. Apart from the spark, which was a known phenomenon of static electricity, he noticed a completely new phenomenon: while the two wires were in contact, Franklin's electric fluid was flowing continuously from one plate to the other. This discovery forms the basis of our modern technological civilization, which is based on electricity, i.e., on electric currents.


Fig. 4.21 A battery, ca. 1880, consisting of twenty zinc-carbon elements (Museo Galileo, photo by author)

Volta then thought, correctly, that if he connected in series the plates of two or more containers, he could obtain a higher potential difference. Eventually, after realizing that the transportation of such an array of containers was cumbersome, he replaced the liquid solution with pieces of cardboard soaked in salty water. Using disc-shaped metal and cardboard plates and placing them one on top of the other, in alternating layers, he created a column, which was called, from its shape, an electric pile. This device was the forerunner of the batteries we use today in flashlights and other portable devices. In today's terminology, the plates are called electrodes (a name coined by Faraday), while each pair of electrodes with the intermediate electrolyte is called a cell element. The voltage between the electrodes depends on the material of the electrodes as well as on the electrolyte, and varies in the most common elements from 1.3 to 2 V. A typical battery contains up to six elements (Fig. 4.21).
The invention of the battery, the device on which the study of electric currents was based for decades, before the invention of the electric generator by Faraday (and its commercial availability!), was by itself an important event. But Volta wanted to understand in depth the phenomenon of the generation of electric current from a chemical reaction. So, in trying to generalize the operating principle of the battery in order to interpret Galvani's experiments, Volta made a mistake: he assumed that, in Galvani's experiments, electricity was always of external origin. Galvani tried to defend his view, showing that leg contractions occur not only when one touches the muscles with two metals, but even with just one metal or with a nerve from another muscle. These results could not be interpreted by Volta's theory, so the situation remained unclear for years. Galvani died in 1798, still believing that animal electricity is different from the vitreous and resinous electricity produced by electrostatic machines. Several years later, it became clear that both Galvani and Volta were right on one point and wrong on another. In fact, muscles act as probes of electricity, as Volta correctly assumed, but this electricity can be of either external or internal origin. It is of external origin when a potential difference appears between two different metals immersed in an electrolytic solution (whether the solution is composed of inorganic or of bodily fluids), and of internal origin when the muscle is affected by the electric current flowing through the nerves of a living organism.
Volta's crucial experiment and the subsequent invention of the battery and of the electric current did not happen all at once, but were the result of painstaking research efforts with very important intermediate results. For example, Volta improved the first practical static electricity device, which he called the electrophorus. The principle of operation of this primitive device later formed the basis for the very successful electrostatic machines of the Briton James Wimshurst (1832–1903) in the 19th century and of the American Robert Jemison van de Graaff (1901–1967) in the 20th century. Volta was also involved in chemistry research, and he invented a kind of closed container, the eudiometer, with which one can measure with high accuracy the quantities of the reactants and products of a chemical reaction. Finally, he discovered and isolated methane (the main component of what we today call natural gas). But it is noteworthy that, after the discovery of the battery, he did not produce any significant research results.

4.4.2 Beyond Electric Current: Electrolysis and Electromagnetism

Electrolysis
The changes in research and technology brought about by the invention of the battery were due to two factors:

First, much more energy was now available than one could get from electrostatic machines.

Second, the energy was produced at a steady rate.

As a result, it became possible to observe previously unknown phenomena, so for some time there were still doubts as to whether the electricity generated by batteries is similar to that produced by electrostatic machines.
The first application of the electric current took place just a fortnight after the publication of Volta's results. Two British physicists, William Nicholson and Anthony Carlisle, passed electric current through water and found that water breaks down into hydrogen and oxygen. Some years later, this phenomenon was named electrolysis by the great English chemist and physicist Faraday. Seven years after the discovery of electrolysis, the English chemist Sir Humphry Davy (1778–1829), Faraday's mentor, managed to obtain alkali metals (sodium and potassium) by electrolysing molten salts of these metals, using a battery with 3,000 elements (which, at roughly 1.5 V per element, had a voltage of about 4,500 V!). It was the first time that these highly reactive elements were produced in metallic form. Later, Faraday successfully continued Davy's research work in electrolysis and, based on his experiments, formulated a series of laws. These laws not only played a decisive role in the acceptance of the atomic theory, which in modern times was proposed by Faraday's contemporary, the English chemist John Dalton (1766–1844) (see Sect. 4.6.2), but also motivated the ideas of subsequent physicists in formulating the atomic model that led to the discovery of the electron.
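Although the text does not spell them out, Faraday's laws of electrolysis can be summarized in modern notation (added here for reference) by a single relation: the mass m of a substance liberated at an electrode is

$$m = \frac{Q}{F}\,\frac{M}{z}$$

where Q is the total electric charge passed, F ≈ 96,485 C/mol is the Faraday constant, M the molar mass of the substance and z the number of elementary charges per ion. The strict proportionality between deposited mass and transferred charge is exactly the kind of result that suggested electricity itself comes in discrete units.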
Electromagnetism
A very important phenomenon connected with the electric current, besides electrolysis, was the magnetic field created by moving charges. This phenomenon was discovered accidentally (as often happens with great discoveries) by the Danish physics professor Hans Christian Oersted (1777–1851). During an afternoon class at the University of Copenhagen in April 1820, Oersted placed a current-carrying wire over a magnetic needle. To his surprise, he observed that, when the circuit was closed, the needle turned abruptly, tending to orient itself perpendicular to the wire. Being a good experimentalist, Oersted decided to investigate this unexpected phenomenon further, and he reversed the current's direction. He observed that the needle moved again, tending once more to orient itself perpendicular to the wire, but this time rotating in the opposite direction (Fig. 4.22). Oersted announced his discovery in a self-published leaflet in July 1820. The French physicist Arago, then president of the National Institute (the former French Academy of Sciences at the time of the First Republic), was informed of Oersted's discovery and repeated the experiment, slightly modified, replacing the magnetic needle with a needle of soft iron. He observed that the current-carrying wire attracted the iron needle in the same way as a common magnet does. Hence, he concluded that the magnetic forces of electric currents are not different from those of common magnets. He announced these new results at a meeting of the Institute in September 1820. The meeting was attended by the French mathematician and member of the Institute André-Marie Ampère (1775–1836). Ampère was so impressed by the new phenomenon that, although he was a mathematician and not an experimental physicist, he designed and performed four experiments that finally elucidated the nature of the phenomenon and the law governing it. Within a week of the presentation of Arago's paper, he had already formulated the law describing the functional form of the electromagnetic force. First, guided by Arago's results, he made the important discovery that a magnetic force appears not only between a current and a metal, but also between two currents. Then, he designed and conducted the following four experiments (Fig. 4.23):
In the first experiment, he placed two wires, either straight or twisted one around the other, carrying equal and opposite currents, and he found that the magnetic needle was not affected.

In the second experiment, he used two wires again, one straight and the other bent in a wavy form. He found that the result was again nil, as in the case of two straight wires.

In the third experiment, he bent a wire in the form of a circular arc and secured it so as to rotate freely around an axis passing through the center of the circle and perpendicular to it. He connected the ends of the wire to a battery and found that the wire did not move at all when approached by a natural magnet or another current-carrying wire.


Fig. 4.22 Modern variant of Oersted's experiment. When electric current flows along the vertical wire, the small magnetic needles, which initially point north (top), align with the magnetic field lines, which are circles around the wire (bottom) (NOESIS, photo by author)

Finally, he placed three circular wire loops, each with a different radius, perpendicular to an axis passing through their centers, in such a way that he could modify the position of the middle one, and connected them in series (so that all three carried the same current). He found that if the radii Ri of the circular wires satisfy the conditions R1/R2 = R2/R3 = r, and the distances Lij between the wires satisfy the condition L12/L23 = r, then the middle wire is in equilibrium; that is, the net force from the two outer circular wire loops is zero.
Analyzing the results, he arrived at the following conclusions regarding the form of the electromagnetic force:

First experiment: the function that gives the force between two wires is proportional to the product of the two currents passing through the wires; the force changes direction when the direction of one current changes.


Fig. 4.23 Nineteenth century experimental device for performing Ampère's experiments (Sparkmuseum)

Second experiment: a current can be resolved into components, in the same manner as a straight line segment in a rectangular Cartesian system of reference; therefore, it is a vector quantity.

Third experiment: the electromagnetic force applied to a current-carrying wire is perpendicular to the wire.

Fourth experiment: the electromagnetic force is inversely proportional to the square of the distance between the currents.

The final, qualitative conclusion is that a circular current behaves like an elementary magnetic dipole, with the magnetic poles located on either side of the circle. Ampère even suggested, correctly, that the magnetic properties of all natural permanent magnets are due to elementary currents at the microscopic level, all oriented in the same direction.
Ampère also assumed (something that does not follow from his experiments) that the force between two elementary lengths of each wire acts along the line connecting them (an assumption that seems reasonable, since it is in agreement with Newton's third axiom, of action and reaction). Based on the results of his experiments and the above additional assumption, Ampère then concluded that the force between two elementary currents (dI1 and dI2) is of the same functional form as the other three known forces (gravitational, electrostatic and magnetostatic): it acts along the line connecting dI1 and dI2, and it is inversely proportional to the square of the distance between them. This fact was then considered as evidence that the concept of action at a distance, introduced by Newton, is directly related to the deeper roots of physics. Ampère's experiments and their theoretical interpretation by the great French scientist caused a sensation among the physicists of the 19th century. Today, as we shall see, the mathematical description of electromagnetism proposed by Ampère is considered obsolete, because of two important limitations:

It describes only static phenomena, i.e., phenomena due to the presence of constant currents. Therefore, it does not apply during the intervals when the value of a current is changing, such as when one opens or closes a switch, nor to alternating currents.
It assumes that the force between two currents acts instantaneously, i.e., it is
transmitted with infinite speed.
These two constraints are due to the fact that the concept of field is not included
in Ampère's theory; instead, the theory is based directly on the concept of force
acting between currents, just as Newton's gravitational force acts between
massive bodies. This is the reason why Maxwell named Ampère the "Newton of
electricity". Today, as we shall see in the following paragraphs, the theory of
Ampère may be considered as a special case of Maxwell's theory, valid when currents
are steady and when distances are small, so that we can assume to a good
approximation that forces propagate at infinite speed.
Finally, it is worth noting something that is of particular importance in the
history of physics. Ampère was not the only member of the National Institute of
France who was impressed by Oersted's discovery. So was Jean-Baptiste Biot,
who, in collaboration with his colleague Félix Savart, conducted his own experiments. The two scientists reported their results in late October, a month after
Ampère's first mémoire, including a law for the force acting between elementary currents that was different from that of Ampère. In the Biot–Savart law,
the action and reaction between two elementary currents are not collinear and,
hence, the law is not consistent with Newton's third axiom.12 At first glance, it
seems that the two laws (Ampère's and Biot–Savart's) are different. This is not
correct, though, because the current in the experiments flows in closed circuits, so
that the quantity measured is not the force between two elementary currents but the
force between two electrical circuits. Mathematically, this force corresponds to the
(double) integral along the loops of the two currents, and that integral turns out to
be the same for both laws! So there is no way to determine experimentally which
one of the two laws is the correct one. The Biot–Savart law is more easily integrated
along a straight conductor, so it facilitates the calculation of the force exerted by a
straight conductor of infinite length on an elementary length of a second conductor. In Maxwell's language, this is essentially equal to the intensity of the
magnetic field at that place. For this reason, the name Biot–Savart has prevailed
for the law giving the intensity of the magnetic field (or the magnetic induction) in
the vicinity of an infinite straight conductor.
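In modern notation this result reads B = μ0I/(2πr) for the field at distance r from an infinite straight wire; a minimal sketch evaluating it in SI units (the sample current and distance are arbitrary):

```python
from math import pi

MU_0 = 4e-7 * pi  # vacuum permeability, T*m/A

def field_of_infinite_wire(I, r):
    """Magnetic field magnitude, B = mu0*I/(2*pi*r), obtained by integrating
    the Biot-Savart law along an infinite straight wire (SI units)."""
    return MU_0 * I / (2 * pi * r)

# Example: 1 A at 1 cm gives 2e-5 T, comparable to the Earth's own field.
print(field_of_infinite_wire(I=1.0, r=0.01))
```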

12 It is not consistent with the third law in its strong form, which requires the two forces to be collinear. However, it is consistent with the third law in its weak form, which requires the two forces to be just opposite.

4.4.3 Experimental Foundations of Electromagnetism: Faraday
After the announcement of Ampère's experimental results, many physicists tried to
explore the new phenomena in depth. Nobody, however, could match the scope
and depth of the research work of Faraday, who was undoubtedly the greatest experimental physicist of the 19th century and one of the best ever. That was a great
achievement, especially in view of the fact that Faraday never attended college or
even high school and that the only training he ever received in science was in
chemistry, by his mentor Davy, who was also self-educated!
Michael Faraday was born in 1791 on the outskirts of London. At the age of
thirteen he left school and began to work in a bookbinder's shop, where he had the
opportunity to read many books. One of the customers noticed the young man's
love for learning and offered him some tickets for attending a series of lectures
given by the great chemist and director of the Royal Institution, Humphry Davy.
At that time, when the opportunities for entertainment were few, one had to buy a
ticket to attend lectures given by a famous scientist, so the invitations were greatly
appreciated. Young Faraday attended all the lectures, keeping notes. When the series
of lectures was completed, he bound the notes in two volumes and offered them to
Davy. Davy was so impressed by the studiousness and systematic mind of the
young man that he hired him as his assistant. At first, Faraday's duties seemed more
like those of a servant. But the new assistant was a good student and he quickly
became indispensable in Davy's research. Eventually, he succeeded his teacher at
the Royal Institution, where he stayed as professor of chemistry for 40 years (from
1825 until 1865, when he resigned). During most of his research career Faraday
worked mainly in chemistry. So, it should not come as a surprise that chemists
consider Faraday a chemist. Among Faraday's discoveries in chemistry are
benzene, tetrachloroethylene (a solvent used in dry cleaning) and nitrogen triiodide
(an extremely sensitive explosive; a fly walking on it can trigger its explosion).
However, Faraday's work in physics was more important. He worked on the
liquefaction of gases and realized the importance of the critical temperature, above
which a gas cannot be liquefied, no matter how much one compresses it. His work
on electrolysis was also very important; besides introducing the concept
of the chemical equivalent and the laws of electrolysis, he coined all the new
scientific terms relevant to the subject, such as electrolyte, electrode, anode, cathode,
anion, cation, etc.
However, Faraday's main contribution to physics was the experimental foundation of electromagnetism, on which Maxwell later based the
theoretical–mathematical description of this branch of physics. This task took
Faraday about ten years, from 1830 until 1839. He first attempted to confirm the
reasonable assumption that, since an electric current produces a magnetic field, a
magnetic field should produce an electric current. This association resulted from
the fact, as mentioned in what follows, that Faraday used to think by analogies.
After one year of continuous efforts, he realized that the phenomenon is a dynamic

Fig. 4.24 Faraday rotor. The first electric motor invented by Faraday (from Experimental Researches in Electricity by M. Faraday, vol. 2, Richard and John Edward Taylor, 1844)

Fig. 4.25 Faraday's disk generator (from Émile Alglave and J. Boulard, The Electric Light: Its History, Production, and Applications, English translation by T. O'Conor Sloane, D. Appleton and Co., 1884)

one, meaning that the conductor through which one wishes to drive a current
must sweep the magnetic field lines, so the important parameter is not the flux,
Φ, of the magnetic field through a closed circuit but its time derivative, dΦ/dt. At
this point the advantage of Faraday's description of electromagnetism over Ampère's, which is discussed in the following paragraph, becomes evident.
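The dynamic character of the phenomenon is easy to illustrate numerically with the law of induction in its modern form, EMF = −dΦ/dt; in the minimal sketch below, the field amplitude, frequency and loop area are arbitrary illustrative values.

```python
import numpy as np

# Illustration of Faraday's law of induction, EMF = -dPhi/dt.
t = np.linspace(0.0, 1.0, 10000)        # time, s
B = 0.5 * np.sin(2 * np.pi * 5 * t)     # oscillating uniform field, T (assumed)
area = 0.01                             # fixed loop area, m^2 (assumed)
phi = B * area                          # magnetic flux through the loop

emf = -np.gradient(phi, t)              # numerical time derivative of the flux

# A large but constant flux would induce nothing; only its rate of change counts.
print(emf.max())                        # peak EMF ~ 0.005*2*pi*5 ~ 0.16 V
```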
Faraday built the first electric motor in 1821 and, having understood the phenomenon of induction, managed to build the first electromagnetic generator in
1831 (Figs. 4.24 and 4.25). The invention of the generator was a landmark in the
history of electromagnetism, because it enabled carrying out electrical experiments

4.4 Electric Currents and Electromagnetism

75

without the need for batteries, which were quite expensive. It was also a landmark
in the history of applied research, because it was the first time an innovation
emerged as a direct product of basic research. So it became clear that industry
may benefit from the results of scientists who work purely in search of scientific
truth. Since then, industrially developed, or developing, countries have dedicated a
significant percentage of their budget (about 1 %) to funding basic research,
because they realize that, beyond the purely scientific results, innovations will certainly
emerge, the nature of which is not easy to predict in advance.
After this important result, Faraday worked on four main research directions.
(i) He studied the properties of capacitors and found that capacitance is a
function of the dielectric material interposed between the plates. He interpreted this phenomenon correctly, introducing the concept of polarization of
the dielectric, i.e., the fact that the two outer surfaces of the dielectric, located
in front of the plates of the capacitor, develop induced charges of opposite
polarity to those of the plates. In this way, an electric field, opposite to the
externally applied one (by the plates of the capacitor) is developed inside the
dielectric, which results in the reduction of the overall potential difference
between the plates. This phenomenon, which manifests macroscopically as an
increase in capacitance upon the insertion of a dielectric between the plates, was
later expressed quantitatively through Maxwell's equations.
(ii) He proved experimentally that all kinds of currents (in particular, those
produced by batteries and by induction) are the same.
(iii) He discovered the phenomenon of diamagnetism, which in a sense is the
opposite of ferromagnetism. Needles made of a diamagnetic material are
oriented perpendicular to the magnetic field lines, as opposed to needles
made of ferromagnetic materials, which are oriented along the field lines.
(iv) He tried to determine if there is some kind of interaction between the forms
of energy known in his time, such as gravitational, electric and magnetic
fields, on the one hand, and light, on the other.13 To this end, he investigated
the interaction of light with electric and magnetic fields, and tried to detect
any gravitational effects on the electric field. In the first case, he discovered a
new phenomenon: when linearly polarized light propagates along the lines of
a magnetic field, its polarization plane rotates. This result was complemented
later by the Dutch physicist Pieter Zeeman (1865–1943), who discovered the
shifting of the energy levels of an atom by noticing the splitting of the spectral
lines of an element in a magnetic field. Faraday could not observe the
interaction of light (i.e., electromagnetic radiation) with electric and gravitational fields, not because he failed to design the appropriate experiments,
but because the magnitude of the effects he was seeking to observe was
beyond the accuracy of the measuring instruments he was using.

13 Note that heat was not recognized as a form of energy until 1850.

Perhaps the most important contribution of Faraday to modern physics was the
introduction of the concept of field. The inspiration for this idea was, as one would
expect from an investigator of his caliber, the observation of the lines formed by iron
filings on a paper placed over a horseshoe magnet. It should be noted, however, that
the same pattern had undoubtedly been noticed by dozens of scientists before then, who
failed to perceive the concept of the magnetic field. On the other hand, it should
also be noted that, although he failed to develop his original idea into a complete
theory, he elaborated on it sufficiently to allow Maxwell later to work out its
mathematical description. At this point it is worth mentioning that two of
Maxwell's greatest achievements, the understanding of the electromagnetic nature
of light and the introduction of the vector potential of the magnetic field, were
based on related ideas first put forward by Faraday.
Faraday possessed one of the most important virtues an experimental physicist
can have, namely what we call "physical thought". He could anticipate the
existence of phenomena not yet known and design experiments to test his
predictions. He liked to study each new phenomenon by thinking in analogies,
having in mind a "model" from another branch of physics. The best example of
such a model is the concept of the field and the corresponding field lines that
describe it. When Faraday was thinking about magnetic field lines, he had in mind
thin elastic bands. It is striking that today, aware of the properties of the solutions
of Maxwell's equations, we continue to use this analogy, because the
magnetic field lines do in fact always tend to assume a minimum length, as is the case with
elastic bands!
The concept of the field allowed Faraday to avoid the hypothesis of action at a
distance, which had dominated physics since the time of Newton. His way of thinking,
which was completely different from that of the mathematician Ampère, ultimately
prevailed, not only because it was adopted by Maxwell in the mathematical formulation of the laws of electromagnetism, but also because it is conceptually more
attractive. On the other hand, Faraday undoubtedly was not good at mathematics
and therefore it was not easy for him to describe quantitatively his ideas and the results
and conclusions of his experiments. Perhaps this fact made his thought more
creative and more original, compared to the established ideas of his time. In any case,
the articles he published were difficult for his contemporaries to read; Maxwell was
one of the few who were able to realize their importance and to use them in the
formulation of the mathematical theory of electromagnetism that he proposed.
From what we have said so far, it becomes obvious that Faraday was probably
the last of a generation of physicist–researchers who resembled passionate
amateurs rather than the well-organized professionals of today. Perhaps this
was the reason he never had students and did not set up his own research team,
preferring to have as assistant a retired sergeant. Today, when research work in
science is based more on teamwork than on individual genius, this behavior
would seem unthinkable.

4.4.4 Theoretical Foundations of Electromagnetism: Maxwell
Faraday's experimental results on electromagnetism found a worthy interpreter in
the person of a contemporary scientist, James Clerk Maxwell (1831–1879). The two
scientists were somewhat complementary personalities, since Faraday relied mainly
on physical intuition while Maxwell relied on mathematical training. But at the
same time they had similarities, since both had the tendency to think in analogies.
Maxwell, as is evident from the introduction to the paper in which he
presented his electromagnetic theory, was inspired by two quite different branches
of physics, namely heat and hydrodynamics. Regarding heat, he noticed that there
is some kind of analogy between the differential equation giving the electric
potential at each point in space and the differential equation giving the temperature
distribution in a body at a steady thermal state (i.e., when the temperature at any point
does not change with time). Regarding hydrodynamics, he observed an analogy
between the flux of a fluid through a surface bounded by a closed curve and the
flux of an electric or magnetic field through a similar surface, where the flux of a
field, Φ, is defined as the integral of the field's intensity over this surface.
Maxwell was greatly influenced by the work of Faraday, and this is evident from
the following examples.
Maxwell adopted the concept of the field, introduced into physics by Faraday, and
incorporated it in the mathematical formulation of electromagnetism.
Faraday, in a lecture for which he had not had enough time to prepare, referred to
the possibility of propagation of transverse perturbations of the magnetic field in
an elastic medium. This idea led Maxwell, as he notes in the corresponding
publication, to the idea of the electromagnetic nature of light.
In a paragraph of his notes, Faraday refers to a new physical quantity that he
would like to introduce, the "electrotonic state". The wording used by Faraday was
confusing, so nobody paid attention to this idea. However, Maxwell mentions in
later writings that, based on this very idea of Faraday, he introduced a vector
quantity into the basic mathematical foundations of electromagnetism; today, this
quantity is known as the vector potential of the magnetic field. The magnetic field can
be calculated from the vector potential in a similar way as the electric field from
the electric potential.
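In modern notation, the "electrotonic state" corresponds to a vector field A from which the magnetic field follows by differentiation, B = ∇ × A, just as the electrostatic field follows from the scalar potential through E = −∇φ; this standard textbook relation is stated here only to make the analogy of the previous sentence concrete.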
Based on the analogies already mentioned and using a molecular vortex model
of Michael Faraday's lines of force, Maxwell wrote a system of twenty (!)
scalar partial differential equations relating the spatial and temporal variations of
various electrical and magnetic quantities, such as the three components of the
intensity of the electric and magnetic fields (E, B), the current density and the total
current (J, I), etc. Later, these twenty equations were split by Heaviside (Oliver
Heaviside, 1850–1925) into two sets, and only the first one, which contains four
vector partial differential equations, is today officially known as Maxwell's

equations.14 Obviously, it is extremely difficult to find a general solution of
Maxwell's equations. But it was relatively easy for him to find a special solution,
which results if we set the charge and current densities equal to zero. In this case,
the equations have a solution that corresponds to waves of the electric and magnetic fields, which propagate bound together, with a velocity equal to the speed
of light!15 Maxwell, having already read Faraday's idea that light is a disturbance
of the magnetic field, could not accept that the equality of the speed of electromagnetic waves to the speed of light was a mere coincidence. Therefore, he
considered it a strong indication of the electromagnetic nature of light.
Unfortunately, Maxwell died relatively young and did not live to see the experimental discovery of electromagnetic waves by Hertz (Heinrich Rudolf Hertz,
1857–1894), the existence of which was a confirmation of his theory.
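The wave speed that falls out of the vacuum solution can be recomputed in a single line from the two electromagnetic constants; a minimal check with modern SI values:

```python
from math import pi, sqrt

EPS_0 = 8.8541878128e-12  # vacuum permittivity, F/m
MU_0 = 4e-7 * pi          # vacuum permeability, T*m/A

# Propagation speed of the wave solution of Maxwell's equations in vacuum:
c = 1.0 / sqrt(EPS_0 * MU_0)
print(c)  # ~2.998e8 m/s, the measured speed of light
```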
Maxwell's equations predict that the power of electromagnetic waves emitted
by an antenna is an increasing function of frequency. Based on this fact, Hertz
decided to build a transmitter and a receiver to generate and detect, respectively,
electromagnetic waves of a frequency sufficiently high, for the time, to
give them adequate power to be detected experimentally by his primitive instruments. So he used as a transmitter a simple dipole antenna, which was fed by a
spark produced from the discharge of a capacitor, and as a receiver a circular piece
of wire with a narrow gap. When the capacitor was discharged with a spark, an
alternating current flowed through the antenna, which in turn emitted electromagnetic waves. Those waves passing through the circular wire induced an
electromotive force at the ends of the wire and, as a consequence, produced a spark
in the gap that separated them. With this simple pair of instruments and with the
help of paraffin lenses, metallic mirrors and metal barriers consisting of parallel
wires, Hertz showed that electromagnetic waves are reflected, refracted and
polarized, just like light. Thus, Hertz's experiments confirmed Maxwell's theory
that light actually is a form of electromagnetic waves (Fig. 4.26).

4.4.5 Incompatibility Between Electromagnetism and Mechanics
The era of development of electromagnetism, following the formulation of Maxwell's equations, was characterized, as we shall see in the section about heat, by the
effort of individuals to interpret all branches of physics on the basis of mechanics. An
annoying problem in the relationship of electromagnetism with mechanics is
14 It should be noted that one of the equations of this set, known as "Ampère's law", does not refer to the force law between currents proposed by the great French mathematician, but rather to a theorem he proved in vector calculus.
15 It is important to note that the existence of wave solutions to Maxwell's equations is due to a term that Maxwell added to "Ampère's law", partly for reasons of symmetry, i.e., on philosophical grounds.

Fig. 4.26 Early 20th century instructional apparatuses (transmitter and antenna) for creating and detecting Hertzian waves (Sparkmuseum)

that, as we show in the next paragraph, Newton's third axiom is not always consistent with electromagnetism. In particular, the axiom does not apply when
studying the forces between non-steady currents or when studying the motion of
individual charged particles, a fact that affects the principles of conservation of
momentum and energy. In both cases, the variation of the force, which is caused by the
variation of the currents or by the variation of the particles' positions, propagates
(according to Maxwell's theory) at a finite speed. So a remote circuit or a remote
charge perceives the variation, and reacts appropriately, after some time and not
instantly. But during this time, neither the momentum nor the energy of the system
is conserved! The solution to this inconsistency is given by Maxwell's theory itself,
according to which the momentum and the energy lost by the first particle or
current are transferred to the field and the field, in turn, transmits them to the
second particle or current. Until then, however, nobody had measured such
properties in fields, nor had anyone detected any electromagnetic waves other than light.
So it is not surprising that physicists tried, immediately after the publication of
Maxwell's equations, first to detect the electromagnetic waves, then to
measure their speed and, above all, to detect the medium in which these perturbations propagate, which had already been named "aether". The results of the experiments, however (especially that of Michelson and Morley, which will be discussed
in Sect. 5.1.2), showed that the speed of light is the same in all inertial frames! So,
in the case of electromagnetic waves, the Galilean transformations of space
and time (in particular the relation v′ = v − v0, Sect. 4.1.1) do not apply and,
therefore, apart from the assignment of mechanical properties (momentum, energy) to fields,
electromagnetism is incompatible with the basic principles of
Galileo's kinematics! Many physicists tried to solve this mystery, and one of
them was Hertz. Eventually, in the late 19th century, a Dutch physicist named

Hendrik Antoon Lorentz (1853–1928) proposed a revolutionary idea to reconcile mechanics with electromagnetism.
Lorentz's idea for the interpretation of the experimental results was that the Galilean transformations, the oldest part of modern-era physics and the basis of
kinematics, were not accurate and should be modified! More specifically, Lorentz
found that the velocity of electromagnetic waves turns out to be constant and
independent of the reference frame, if one assumes that in a moving inertial
reference frame the length of each material body contracts in the direction of
motion, while time intervals expand. These changes are significant only at very
high speeds, comparable to the speed of light and other electromagnetic waves,
and this is the reason why these changes had not been detected until then. In fact,
the physicists of the time had suggested a possible explanation of the phenomenon,
hypothesizing that the cause of the contraction of lengths should be the resistance
that bodies experience in their motion due to the presence of the aether. Lorentz's
idea was elaborated by Poincaré, who developed it into a full and complete new
theory of motion. Indeed, Poincaré was the one who, in accordance with scientific etiquette,
gave the new rules of space and time the name "Lorentz transformations". Lorentz
himself, however, acknowledged publicly that the new physics was the brainchild
of Poincaré, to whom credit should be given. As discussed below, today we know
that the Lorentz transformations describe physical phenomena correctly, but they are
introduced neither accidentally nor as a consequence of the resistance of the
aether. They are simply a consequence of Einstein's special theory of relativity
and, in particular, of the new idea introduced by this great scientist about the
unification of space and time in a single geometrical structure, spacetime.
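Why the effect escaped detection is easy to see numerically: lengths contract by the factor √(1 − v²/c²), which differs appreciably from 1 only at speeds comparable to c. A minimal sketch, with arbitrary sample speeds:

```python
from math import sqrt

C = 2.998e8  # speed of light, m/s

def gamma(v):
    """Lorentz factor: lengths contract by 1/gamma, time intervals dilate by gamma."""
    return 1.0 / sqrt(1.0 - (v / C) ** 2)

# A fast train, an orbiting satellite, and 90 % of the speed of light:
for v in (100.0, 7.8e3, 0.9 * C):
    print(f"v = {v:10.3e} m/s   gamma = {gamma(v):.12f}")
# For everyday speeds gamma deviates from 1 only around the tenth decimal
# place, which is why the contraction had never been noticed.
```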
Lorentz had set up an excellent physics laboratory, where several major
physicists studied and worked and several great discoveries were made. For
example, one of Lorentz's students was Pieter Zeeman, who shared with his
supervisor the Nobel Prize in physics for succeeding in splitting a single spectral
line into a multiplet in the presence of a magnetic field, as well as for the theoretical
interpretation of this phenomenon. Lorentz himself, apart from his research work
in electromagnetism, earned a place in the history of physics as the first to systematically
organize international conferences for the discussion of specific scientific
problems and the exchange of ideas among scientists.
The organization of conferences began within the context of the activities of the
Solvay Institutes for physics and chemistry, the first privately funded scientific
institutions. Solvay, after whom the Institutes were named, was a very wealthy
industrialist in the field of chemicals, especially soda (sodium carbonate, Na2CO3).
Since the industrial process for synthesizing this chemical was his invention, Solvay appreciated
scientific research as a means of progress in science and technology. For this purpose
he donated a significant sum of money for the establishment and operation of the
scientific institutes that bear his name. One of the aims of these institutes was the
organization of scientific conferences devoted to the discussion of highly important
issues of current research among the most eminent scientists of the time. The fifth
of these conferences, held in Brussels in 1927, one year before Lorentz's death, is
particularly remembered. The subject of this conference was quantum mechanics

and all the great physicists of the first half of the 20th century participated in it
(Einstein, Planck, Bohr, etc.). This conference marked the official beginning of
the famous debate between Einstein and Bohr on the physical interpretation of the
equations of quantum mechanics, which ended with the prevalence of the interpretation put forward by Bohr (the so-called Copenhagen interpretation).

4.4.6 Electromagnetism Today


Maxwell wrote his equations assuming that the electric current is some form of
fluid, described by the current density, J, and the total current, I. Today we know
that the magnetic force on a current-carrying wire is due to the force exerted by the
magnetic field on the carriers of the current, in this case the electrons. The electron
was discovered in 1897 by J. J. Thomson, but a few years earlier (in 1892) Lorentz
had proposed a formula giving the total electromagnetic force on a charged particle
moving in an electric and magnetic field, which is known today as the Lorentz force
F = qE + qv × B (in SI units)
The first term on the right-hand side of the above equation gives the electric force and the
second the magnetic force. The Lorentz force is an axiom independent of
Maxwell's equations, and whether, how, and under which conditions Maxwell's
equations can be derived from the Lorentz force, or vice versa, is the subject of a long
discussion. For example, the Ampère and Biot–Savart laws can easily be derived
from the Lorentz force, as the sum of the magnetic forces acting on the electrons
moving in a wire. On the other hand, Maxwell's equations can be derived from the
Coulomb force law between charges, if we require that this force remain invariant
under Lorentz transformations. What we should keep in mind is that the Lorentz
force and Maxwell's equations are fully compatible not only with each other, but
with the special theory of relativity as well.
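A minimal sketch evaluating the Lorentz force for an electron; the field and velocity values are arbitrary illustrative numbers:

```python
import numpy as np

Q_ELECTRON = -1.602e-19  # electron charge, C

def lorentz_force(q, E, v, B):
    """F = q*E + q*v x B, in SI units."""
    return q * (E + np.cross(v, B))

E = np.array([0.0, 0.0, 1.0e3])  # electric field, V/m (assumed)
B = np.array([0.0, 0.1, 0.0])    # magnetic field, T (assumed)
v = np.array([1.0e5, 0.0, 0.0])  # electron velocity, m/s (assumed)

print(lorentz_force(Q_ELECTRON, E, v, B))
# The magnetic term q*v x B is perpendicular to both v and B, so it bends
# the trajectory without doing any work on the particle.
```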
Maxwell's equations constitute one of the main landmarks in the evolution of
ideas and concepts in physics. The reason is that they linked three of the four
forms of energy known at that time, i.e., electricity, magnetism and light. Only
the fourth form of energy, gravity, was excluded, but in the late 19th century it was
expected that it could be united with electromagnetism into a single theoretical
structure. The reason is that gravity seemed to have many things in common with
the other three forms of energy, one being, for example, the functional form of the
force law, i.e., the inverse square of distance. It is worth noting that since Maxwell's time there has not been any further development of concepts in classical
electromagnetism, apart from its theoretical unification with the weak nuclear
force, proposed in the mid 20th century. The ensuing unified theory is, however,
applicable only at extremely high energies. Thus, classical electromagnetism is
still taught in universities today in exactly the same form in which it was expressed by
Maxwell and Heaviside. We see, then, that after classical gravity, which ceased to

evolve in the 19th century, the same happened with classical electromagnetism.
We could say that the classical physics we learn in schools and universities is
based almost entirely on ideas developed in the 19th century. As we shall see later,
this is also true for the other two branches of physics we are discussing in this
book, thermodynamics and kinetic theory of gases.

4.5 Heat-Thermodynamics
4.5.1 Introduction
According to the definition given at the beginning of this book, physics is the
science that studies the interactions between matter and energy. Up to this point,
we have presented four types of energy: gravity, light, electricity and magnetism.
However, there is another form of energy, which has been known to man since prehistoric
times: heat. Since heat manifests itself mainly during the combustion of various
elements and chemical compounds (that is, during their violent reaction with
oxygen), this form of energy was traditionally a subject of research in chemistry.
Its study was based more on phenomenology16 than on a structured axiomatic theory and, as a result, its development and integration with all other
branches of physics was delayed. The integration of heat into physics was achieved
only in the mid 19th century, through the collective effort of many chemists and
physicists. It was mainly due, however, to the understanding by physicists of
thermodynamics, which was originally developed by chemists in order to interpret
the direction of chemical reactions. For example, the synthesis of ammonia, which
is the basis for the production of fertilizers, became possible only after the pioneering research work of the German chemist Fritz Haber (1868–1934).
Haber discovered that the reaction of hydrogen with nitrogen to yield
ammonia does not proceed spontaneously (as did all the reactions that had been studied until
then) but requires the application of high pressure and temperature in the presence
of a catalyst.
Traditionally, the branch of heat includes a variety of phenomena, which are
linked only phenomenologically, such as the expansion of bodies, melting and
freezing, condensation and evaporation, and the transport of heat. All these phenomena were studied before it was realized that heat is, in fact, a form of energy and not a
16 In science, phenomenology is the organization of scientific research on the basis of the classification of observations according to the phenomena and not according to the mechanisms that cause them. A classic example is the phenomenon of supernovae in astronomy, which were initially considered a single type of celestial object. Later, it was discovered that there are two quite different mechanisms through which a massive star can end up as a supernova, both mechanisms producing the same observed phenomenon, namely an extremely brilliant "new star" (a nova) in the sky.

fluid. The realization that heat is a form of energy was pivotal in the study of
thermodynamics, which became the study of the relationship between heat and
other energy sources. In particular, the fact that thermodynamics is based, as we will
see, on three axioms raised the question of what its relationship is with the axioms that
provide the foundation for the rest of physics and, more specifically, of whether the
axioms of thermodynamics can be proven from the axioms of mechanics. As part of
this effort, following the general acceptance of the atomic theory of matter, it was
understood that thermodynamics is closely linked with the physics of perfect
gases.17 For many years, great physicists, such as Maxwell and Boltzmann, tried to
prove that the laws of perfect gases and, therefore, the axioms of thermodynamics,
are a consequence of other basic laws of physics. Ultimately, however, it was found
that the axioms of thermodynamics do not depend on the rest of physics and, hence,
thermodynamics is a "singular point" in the otherwise more or less uniform edifice
of physics. This situation remains unchanged to this day; however, there are indications that the integration of thermodynamics into physics might be achieved
through the new science of chaos, which is discussed in Sect. 5.1.3.

4.5.2 Phlogiston and Caloric Fluids


In the past, the concepts of heat and temperature were confused; this may be the
case even today in primary school and in the lower grades of high school. A first
decisive step in the study of heat was therefore the clear distinction between these
two concepts. This distinction was made in the 18th century through the work of
many scholars, who, however, relied on entirely wrong ideas, outlined below.
Central to the study of a physical phenomenon is the measurement of the quantities
involved in it. In this respect, the study of heat suffered significantly compared
with mechanics, because initially there were no instruments for measuring temperature. Philo of Byzantium (3rd century BC) and Hero of Alexandria (1st century
AD) described experiments demonstrating the expansion of air when heated.
Although their work was translated into Latin, it was only in the early 17th century
that scientists, including Galileo, attempted to construct thermometers based on this
property, the thermal expansion of gases. These instruments were very unreliable,
mainly because their readings depended on atmospheric pressure. In 1632, the
French physician and chemist Jean Rey (ca. 1583–ca. 1645) invented a thermometer based on the expansion of water; however, the first reliable mercury
thermometers were constructed only 100 years later, in the early 18th century.
Only then did it become possible to measure small amounts of heat, ΔQ, through the
basic equation of calorimetry,
17 A perfect gas is a gas considered to consist of hard point masses that interact only through elastic collisions. Sometimes the term ideal gas is used interchangeably, but usually a perfect gas is a simplified model of an ideal gas. The main difference is that the specific heat at constant volume, CV, may be temperature or pressure dependent in an ideal gas, but not in a perfect gas.

ΔQ = C · ΔT
where C is the heat capacity of the body, whose temperature varies by ΔT.
Nevertheless, until the late 19th century, there was still considerable confusion
about the nature of heat.
Initially, all interpretations of phenomena associated with heat were based on
two hypothetical fluids: phlogiston and caloric (or calorique). According to the
ideas of that era (18th–19th century), caloric and phlogiston belonged, along with
light, electricity and magnetism, to a group of fluids named non-ponderable (or
imponderable), because they were considered weightless. Phlogiston was a
hypothetical substance, a successor of the fourth element of Aristotelian physics,
fire. This substance was introduced by the German physician and chemist Johann
Joachim Becher (1635–1682), and comprised a key element in the comprehensive
theory of heat established by the Scottish chemist Joseph Black (1728–1799).
According to this hypothesis, phlogiston is contained in all combustible bodies,
such as wood and charcoal, and, generally, in all objects that can be oxidized, such as
metals. If such a body is heated, phlogiston escapes, leaving behind the rest of the
matter that formed the body: for example, ash in the case of wood, or oxide in the
case of metals. According to this hypothesis, if one heats an oxide with coal, part
of the phlogiston contained in the coal is absorbed by the oxide, transforming the latter into
pure metal again. The easiest way to test this hypothesis is to measure the mass of
a body with and without phlogiston. In the case of metals, for example, the
oxidation of a quantity of metal would yield a smaller or, at most, equal amount of
oxide (assuming in the latter case that phlogiston is weightless). The French
chemist Antoine Laurent Lavoisier (1743–1794) proved exactly the opposite: a
certain quantity of metal is heavier after oxidation than before; as a
result, the phlogiston hypothesis could only be true if phlogiston had negative
mass. This seemed quite unlikely, so in the late 18th century the phlogiston
hypothesis became questionable and was soon abandoned.
Another incorrect assumption regarding heat, which prevailed longer than that
of phlogiston, was the hypothesis that heat consists of an indestructible fluid,
which Lavoisier named caloric. This hypothesis could also be tested experimentally relatively easily, by measuring the heat produced by friction. If one can
produce unlimited amounts of heat by friction without reducing the mass of a
body, then the caloric hypothesis becomes untenable, even though caloric is assumed
to be non-ponderable (weightless), because it is unreasonable to assume that a
body contains an infinite quantity of caloric. Such experiments were conducted by
Benjamin Thompson (later Count Rumford, 1753–1814), who measured the heat
generated while shaping a gun barrel with a drill (Fig. 4.27). He found that in
each machining operation for shaping the gun barrel, similar amounts of heat were
given off, a fact that leads to the unreasonable conclusion that the metal of the
gun contained an infinite amount of caloric. Assuming that heat comes from the
conversion of the mechanical energy of the drill, he was able to roughly estimate
the mechanical equivalent of heat; he calculated a value of 5.5 J/cal, not

Fig. 4.27 Measurement of the mechanical equivalent of heat by Benjamin Thompson. Thompson observed that 12 L of water in the container cooling the apparatus were brought from zero to 100 °C after 2.5 h of boring by a mechanism driven by a horse (so that the mechanical power was equal to 1 hp). He calculated the mechanical equivalent of heat from the simple equation 2.5 h × 1 hp = m · ΔT = 12 L × 100 °C (from Thompson's paper "Inquiry concerning the Source of the Heat which is excited by Friction", Philosophical Transactions of the Royal Society, 1798)

significantly different from the currently accepted 4.18 J/cal. On the other hand, a
great deal of theoretical work was based on the hypothesis of caloric, such as the
derivation, in 1822, of the equation describing the propagation of heat in a solid
body by Fourier (Jean Baptiste Joseph Fourier, 1768–1830) and the calculation, in
1824, of the performance of an ideal heat engine by Carnot (Sadi Carnot,
1796–1832), both considered successful, as they were consistent with experimental
results. Related to the caloric hypothesis was also the research work of Laplace
and Poisson, who successfully calculated, within the frame of this hypothesis, the
speed of sound in a gas, assuming that there is no exchange of caloric (i.e., heat)
between the gas and its environment. The hypothesis of caloric survived well into
the 19th century, something that is evident from the fact that the great physicist
Thomson (William Thomson, later Lord Kelvin) was using it in his scientific
publications up to 1848, even though he had been aware of the calculation of the
mechanical equivalent of heat by Joule (James Prescott Joule, 1818–1889) since
1847, which showed beyond any doubt that heat is a form of energy. Finally, in
1851, Thomson was convinced that heat is energy and not a fluid, and
the hypothesis of caloric was abandoned by the international scientific community.
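Rumford's figure of 5.5 J/cal can be re-derived from the numbers quoted in the caption of Fig. 4.27. A minimal sketch, assuming the modern conversion 1 hp ≈ 745.7 W and a specific heat of water of 1 cal/(g·°C):

```python
# Re-running Rumford's estimate: 2.5 h of boring at 1 hp heated 12 L of
# water from zero to 100 degrees Celsius.

HP_IN_WATTS = 745.7
work_J = HP_IN_WATTS * 2.5 * 3600      # mechanical work delivered, J
heat_cal = 12_000 * 100                # 12 kg of water x 100 deg, in calories

print(work_J / heat_cal)               # ~5.6 J/cal, close to Rumford's 5.5
```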

4.5.3 First Axiom of Thermodynamics: Mayer, Joule, Helmholtz
The first logical statement of the fact that heat is definitely a form of energy is
the so-called first axiom of thermodynamics (that is, the generalized principle of
conservation of energy), according to which energy cannot be created or
destroyed. The history of this axiom is long. It could be argued that the first vague
reference to it was made by the Greek natural philosopher Empedocles, who lived
in Sicily in the 5th century BC. Empedocles said that "nothing comes from
nothing" and "what is cannot cease to be". In modern times, the principle of conservation of mechanical energy has been known since the times of Lagrange and Hamilton, while the first axiom of thermodynamics, which refers to the conservation of
energy in the general case, is usually attributed to Helmholtz. The first scientist
who included heat in the principle of conservation of energy and calculated the
mechanical equivalent of heat was the German physician Mayer (Julius Robert
Mayer, 1814–1878). Mayer's involvement in physics research was accidental. He
was serving as a medical doctor aboard a ship that was sailing to Indonesia, a
country near the equator. One day, he took a blood sample from a vein of a sailor
and was impressed by its bright red color, which contrasted sharply with the darker
venous blood of people who live in temperate regions. Mayer assumed (correctly)
that, because of the warm climate, a lower metabolic rate was required to maintain
a body temperature of 37 °C and that the bright red color was due to the presence
of increased amounts of oxygen in the blood. From this thought he arrived at the ideas
of conservation of energy and of the equivalence of mechanical work and heat,
which he published in 1842. Mayer then attempted to calculate the mechanical
equivalent of heat, in which he succeeded without performing a single experiment!
He based his method on a simple idea and on the values of the molar specific heats
at constant volume, CV, and constant pressure, CP, which had been measured with
acceptable precision in 1811 by the French scientists François Delaroche
(1780–1813) and Jacques Étienne Bérard (1789–1869).
As is known, the difference of the two molar specific heats (which by definition is the universal gas constant, R) is equal to the work,18 W, done when one
mole of gas is heated by one degree Celsius and expands, under constant pressure
p0, from an initial volume V0 to a final volume V = V0 + ΔV, where p0 and V0
correspond to normal conditions (1 atmosphere and 0 °C). It should be noted
that the value of R is usually given in calories per mole per degree Celsius.
Therefore, the work W is given, in thermal units, by the relation
WH = R = CP − CV

18 It should be noted that work and energy are relatively new concepts. Clausius was using the concept of work systematically only from 1850 onwards (see next paragraph); as far as the concept of energy is concerned, although it had been introduced by Young, it was routinely used by Rankine (William John Macquorn Rankine, 1820–1872) only at about the same time.

But it is known that this work is equal to p0 · ΔV. On the other hand, we know
from the Gay-Lussac law that V = V0 · (1 + αθ) (where θ is the final temperature in
degrees Celsius), so that
ΔV = V0 · (1 + αθ) − V0 = V0 · α · θ
Finally, by increasing the temperature by one degree above zero (θ = 1), we have
ΔV = V0 · α
Thus, we find that, in mechanical units (erg or J), the work W is given by
WM = p0 · ΔV = p0 · V0 · α
The coefficient of thermal expansion of gases, α, was measured by Volta in 1791
and was found, in the Celsius scale, to be equal to 1/273. Equating the two values of
energy, WH = WM, we can derive the relation between the calorie and the unit of
mechanical energy. Mayer, using the (incorrect) values of Delaroche and Bérard,
found in 1842 a value for the mechanical equivalent of heat (in current units) equal
to 3.58 J/cal.
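Mayer's scheme is easy to redo numerically; a minimal sketch with modern values for the molar volume and for CP − CV (it was the inaccurate historical inputs, not the method, that kept Mayer at 3.58 J/cal):

```python
# Mayer's calculation of the mechanical equivalent of heat, modern inputs.

p0 = 101325.0          # normal pressure, Pa
V0 = 0.022414          # molar volume of a gas at 0 C and 1 atm, m^3
alpha = 1.0 / 273.0    # thermal expansion coefficient of gases, per deg C

W_M = p0 * V0 * alpha  # expansion work per mole per degree, ~8.3 J
R_cal = 1.987          # CP - CV in cal/(mol*deg C), modern value

print(W_M / R_cal)     # ~4.19 J/cal, the mechanical equivalent of heat
```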
Mayer's work was continued by the British brewer and amateur physicist Joule.
At the beginning of his research career, Joule attempted to build a perpetuum
mobile (a perpetual motion machine19) using a battery, which supplied power to
an electric circuit. Of course he did not succeed, since the battery's energy is
provided by chemical reactions, which cease to take place when the available
quantities of reactant chemicals are finally exhausted. However, he found (in 1840)
that the heat released in a wire by an electric current is proportional to the square
of the electric current. From this result he deduced that heat can be converted to
other forms of energy, and vice versa, at a constant ratio (which, in the
special case of conversion of mechanical into thermal energy, is nothing more than
the mechanical equivalent of heat). He announced this result in 1843 during a
lecture and later published it in print. In 1875, the Royal Society assigned him
the task of measuring, with the highest possible accuracy, the value of the mechanical
equivalent of heat. The result was, in current units, 4.15 J/cal, very close to today's
accepted value of 4.19 J/cal (Fig. 4.28). For his contribution to thermodynamics,
namely stating the first axiom of thermodynamics and calculating the mechanical
equivalent of heat by experimental methods, the international scientific
community honored him by naming the unit of work and energy in the SI
system after him.

19 Definition of a perpetual motion machine: a device that produces work in violation of the thermodynamic principles. More specifically, a perpetual motion machine of the first kind is a machine that produces work without consuming energy (thus violating the first principle of thermodynamics), while a perpetual motion machine of the second kind is a machine that converts thermal energy to mechanical work with 100 % efficiency (thus violating the second principle of thermodynamics; see also Sect. 4.5.4).

Fig. 4.28 Joule's apparatus for measuring the mechanical equivalent of heat (engraving from the August 1869 issue of Harper's New Monthly Magazine)

The scientist who put forward the first axiom (postulate) of thermodynamics in
its final mathematical form was the German Hermann Ludwig Ferdinand von
Helmholtz (1821–1894). Helmholtz had a scientific background similar to that of Young
(he had also studied medicine) and enjoyed, as Lord Kelvin did, the recognition of
the international scientific community. He was considered the greatest German
physicist of his era. Helmholtz worked on a variety of topics, including the
physiology of the eye and ear, and the conduction of electrical impulses in the
nervous system. In 1847 he published a scientific article in which he presented the
principle of conservation of energy in great mathematical detail and clarity. For
this reason, and because at that time he was already a renowned professional
scientist (unlike Mayer and Joule, who were amateurs), Helmholtz was initially
considered the father of the first axiom of thermodynamics. Today, this honor
is jointly assigned to him, Mayer and Joule.

4.5.4 Second Axiom of Thermodynamics: Carnot, Thomson


Although scientists arrived at the first axiom of thermodynamics through collective
work, this was not the case with the second axiom, which emerged suddenly in
physics through the ideas of a single scientist, Sadi Carnot (1796–1832). Carnot
lived in the turbulent era of the First French Republic and the restoration of the
monarchy. During that period, education in France was no longer a privilege of
the rich and the aristocrats, so Carnot was able to study at a prestigious educational
institution, the Polytechnic School. After his graduation in 1814, he joined the

army, from which he retired in 1828, at the age of 32, with the rank of captain. He
died four years later, in 1832, of cholera. During the first years of his service in
the army (1814–1815), the controversy between monarchical England and
republican France, under Napoleon, reached a climax. Carnot realized that economic matters play an important role in war. At that time, the British had the most
advanced industry in the world, based on steam engines which had been invented by
British scientists and engineers, such as Thomas Newcomen, James Watt, Christopher Blackett and others. Carnot considered possible ways to increase the efficiency of steam engines, which at that time was very low (only 5 % of the
combustion heat of coal was converted into mechanical energy). Carnot belonged
to that breed of scientists who liked to think in analogies, so he considered the
watermill as a good mechanical analogue of the operation of the steam engine. The
thermal equivalent of the mass of water was the caloric fluid, while the thermal
equivalent of the height difference of the water was the temperature difference
between the two heat reservoirs: the boiler, which produces steam, and the atmosphere, into which the steam escapes from the cylinder, after it has expanded
and produced work.
Of course, the first part of the analogy was not correct, since in a watermill the
amount of water is conserved during fall, while in a heat engine the amount of heat
contained in the steam is not conserved during its expansion, as it is reduced by an
amount equal to the work produced. However, in Carnots time the caloric
hypothesis was widely accepted and, furthermore, the amount of heat converted
into work by a thermal engine was too small (5 %, as mentioned) to be detected
with the crude measuring methods available. It is remarkable that, although the
model was wrong, the result at which Carnot arrived was correct! This was so
because, in Carnot's final result, the ratio of temperatures is equal to the ratio of
thermal energies (as we shall see below), but also because the second part of the
analogy was accurate, since the rate of heat transfer between two bodies is
indeed proportional to the temperature difference between them (a law initially
formulated by Newton). However, at the time of Carnot it was well established
that, during the transfer of water from a high-standing reservoir to a low-standing
water wheel, turbulent flow should be avoided, as it generates friction that reduces
the amount of usable energy. This knowledge led Carnot to the conclusion that the
transfer of heat from the high temperature reservoir A to the low temperature
reservoir B should be carried out slowly, something that is ensured only if the
process is reversible. In this case, the transfer is so slow that heat can be transferred either from body A to body B or vice versa without any finite temperature
difference, heat loss, or work done against friction. This of course means that a
Carnot heat engine can be used either to produce mechanical work from heat or to
generate heat from mechanical work. The second possibility is exploited in
modern air conditioning units which, when set in reverse mode of operation,
generate heat.
Carnot's reasoning led him to the established result that the efficiency of an
ideal heat engine, which works reversibly, is equal to

η = (Thigh − Tlow) / Thigh
which is similar to the efficiency of a water turbine
η = (mghup − mghdown) / mghup
where mghup is the gravitational potential energy of the water in its initial position and
mghdown is the gravitational potential energy of the water in its final position. From
this result, one arrives at the obvious conclusion that the efficiency is independent of
the material used or of the specific characteristics of the heat engine. Furthermore, it
is evident that the efficiency of the ideal Carnot engine is the highest possible,
because if this were not the case, one could connect in series two engines with
different efficiencies, working in reverse modes of operation relative to each other,
so that the engine of higher efficiency produces mechanical work. In this way, one
would get a perpetuum mobile of the first kind, which would produce more
mechanical energy than it would consume in the form of heat. By reckoning that
this is impossible, Carnot put forward, indirectly, the first axiom of thermodynamics. But, as already mentioned, the explicit mathematical formulation of
this axiom was written 23 years later, in 1847, by Helmholtz. It is therefore
surprising that, in the history of physics, the so-called second axiom of thermodynamics appeared before the first axiom. This is a good example of the fact that
ideas and concepts in science do not always evolve in a way that a posteriori seems
reasonable or natural.
Carnot's work seemed destined to remain unknown, since his results were published
only once, in a booklet printed in 1824 in 600 copies at the author's expense and
distributed primarily to friends. In 1833, however, the French engineer Clapeyron
(Benoît Paul Émile Clapeyron, 1799–1864) found by chance a copy of Carnot's
booklet, read it, appreciated its significance and published an adapted version in a
French science journal. Clapeyron's article was read in 1841 or 1842 by William
Thomson (later Lord Kelvin, 1824–1907), who had just graduated from university
and worked in the laboratory of the French physicist Henri Victor Regnault
(1810–1878), well known for his research on the liquefaction of gases. Thomson began
searching for a copy of Carnot's booklet and eventually, in 1848, managed to find
one. Based on Carnot's ideas and the first axiom of thermodynamics, which by that
time was widely accepted, Thomson showed that the efficiency, η, calculated by
Carnot was equal to
Carnot was equal to
g

Thigh  Tlow Qhot  Qcold


W

Qhot
Thigh
Qhot

where W is the mechanical work done. The proof of this equality is derived from
the exact calculations of the variations of the, so called, state variables P, T and
V during the thermal cycle considered by Carnot and can be found in any standard
physics textbook.
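A minimal numerical illustration of the formula, with assumed reservoir temperatures typical of an early steam engine:

```python
def carnot_efficiency(T_high, T_low):
    """Maximum efficiency of a reversible heat engine; temperatures in kelvin."""
    return (T_high - T_low) / T_high

# Boiler near 400 K, exhaust to the atmosphere near 300 K (assumed values):
print(carnot_efficiency(400.0, 300.0))  # 0.25: the ideal limit is only 25 %
```

Real engines of Carnot's time, at about 5 %, fell far short even of this ideal bound.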

Because the first axiom of thermodynamics had already been formulated by
Helmholtz a year earlier, Thomson concluded, correctly, that the efficiency of the
Carnot engine cannot be greater than one. Therefore, the temperature cannot
assume arbitrarily low values, because otherwise there could be a system for
measuring temperature in which this quantity took negative values, which is
not allowed. The last conclusion follows from the fact that, if in a temperature
scale the low temperature (Tlow) in Carnot's cycle is negative, then
Thigh − Tlow > Thigh
whence
(Thigh − Tlow) / Thigh > 1
This corresponds to an efficiency greater than one, which, according to the first
axiom of thermodynamics, is an impossible value. A temperature scale comprising
temperature values which are never negative is called an absolute temperature scale and
is defined as follows:
The unit of absolute temperature is equal to 1 °C.
Zero, in this scale, is set so that the temperature of 0 °C corresponds to 273.15
units of absolute temperature.
The international scientific community honored Thomson for his discovery by
naming the unit of absolute temperature after him, i.e., the kelvin.

4.5.5 Entropy: Clausius


The evolution of thermodynamics did not stop after the formulation of the first two
axioms and their acceptance by the international scientific community; in fact it
had just begun. What was missing from the results and ideas of Carnot, Helmholtz
and Kelvin was the integrated mathematical foundation of this new branch of
physics. This work was carried out successfully by Rudolf Clausius (1822–1888) and, to this day, provides the basis for all university courses in thermodynamics. A key element in Clausius' work is the mathematical formulation of the two thermodynamic axioms. In the case of the first axiom, this task was not difficult, since the conservation of a quantity is expressed with an elementary equation, similar to that expressing the conservation of mechanical energy taught in schools. In thermodynamics, the equation is written in differential form as follows
dQ = dU + dW = dU + P·dV
where dQ represents the amount of heat provided to the system, dW the work done by the system during the process and dU the change in the energy stored in the system, which is called internal energy.
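
To illustrate this bookkeeping, here is a minimal Python sketch for a gas heated at constant pressure (the numbers are arbitrary examples):

    # First axiom as bookkeeping: heat supplied = stored energy + work done.
    P = 1.0e5      # constant pressure during the step, Pa
    dV = 1.0e-3    # volume increase, m^3
    dQ = 350.0     # heat supplied, J

    dW = P * dV    # work done by the gas: 100 J
    dU = dQ - dW   # energy stored internally: 250 J
    print(dW, dU)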


In the case of the second axiom of thermodynamics, the situation was more
complex. At this point, the contribution of the German theoretical physicist
Clausius was decisive. In his attempt to generalize Kelvin's relationship
Q_hot/Q_cold = T_high/T_low
so that it holds for any process, not necessarily reversible (as proposed by Kelvin), Clausius introduced in 1851 a new physical quantity, the quantity of heat contained in a system divided by the absolute temperature of the system, which he called entropy. He coined the name from the Greek words ἐνέργεια (energy) and τροπή (turn), because Clausius thought that, semantically, the Greek root would be more acceptable than any other equivalent German term. His choice was of particular importance because it was made in an era of rampant nationalism in Europe and especially in Germany.
Entropy, which is represented by the symbol S, is defined for a reversible
process by the differential relation
dS = dQ/T

and has the property that its integral is 0 for any closed reversible cycle on the
plane P - V (or, equivalently, the planes T - V or T - P), a property that is
written symbolically as
∮ dQ/T = 0
The introduction of the concept of entropy in thermodynamics provided a
thermodynamic quantity that has the properties of potential energy in a conservative force field, namely that the change of this quantity depends only on the
initial and final states of the system and therefore is independent of the actual path
followed. This property did not characterize any of the thermodynamic quantities known up to Clausius' time (for example, pressure, temperature, heat, work or volume) and, therefore, it had been difficult to apply the known methods of mechanics to the mathematical formulation of thermodynamics. Thus we see that the search for quantities bearing an analogy between different branches of physics sometimes provides new ways of thinking.
With the introduction of the concept of the entropy of a system, the second
axiom of thermodynamics can be written in mathematical form as
S_B - S_A ≥ ∫_A^B dQ/T
where A and B are two states of a system, the initial and the final state respectively. The equality holds for reversible processes and the inequality for irreversible ones. Note that entropy reminds us of the property of friction:


• If there is reversibility, entropy does not increase (respectively, if there is no friction, there is no mechanical energy loss).
• If there is no reversibility, entropy increases (respectively, if there is friction, there is loss of mechanical energy).
From the previous relation an additional important conclusion can be drawn,
which is not obvious at first sight. If the system under consideration is isolated
from the environment, then there is no heat exchange between the system and the
environment, and dQ = 0. Therefore, we conclude that, for any closed system,
entropy is never decreasing; this can be expressed mathematically as
S_B - S_A ≥ 0
The fact that entropy, as defined by Clausius' equation dS = dQ/T, applies only to reversible processes is a result of the presence of temperature, T, in the above
formula. In this respect, it should be emphasized that from the three thermodynamic state variables P, T, and V, only volume has an unambiguous meaning in all
processes. In contrast, pressure, P, and temperature, T, are defined only when a gas
is in thermodynamic equilibrium. Therefore, these variables describe a gas during
a process only when the gas is continuously in thermodynamic equilibrium,
something that happens only during reversible processes. In an irreversible process, a gas is not in thermodynamic equilibrium and, as a result, pressure and
temperature are ill defined. In the following section we will look at another definition of entropy, this time by Boltzmann, which does not involve macroscopic
state variables but only microscopic ones and consequently is, in a sense, more
general than the one given by Clausius. Of course, one cannot change the laws of
nature by simply changing the definition of a physical quantity. Clausius' definition can still be used to calculate the change in entropy during an irreversible process by simply replacing the process by a succession of reversible ones that
share the same initial and final states, since, as we already mentioned, the change
in entropy depends only on the initial and final states of a gas and not on the path
followed.
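
To make this device concrete, here is a minimal Python sketch, assuming an ideal gas: the entropy change of an irreversible free expansion is computed along a reversible isothermal path with the same end states, for which dQ = P·dV = nRT·dV/V:

    import math

    R = 8.314  # gas constant, J/(mol K)

    def delta_S_isothermal(n_moles: float, V_A: float, V_B: float) -> float:
        # Integral of dQ/T along a reversible isothermal path: nR ln(V_B/V_A).
        return n_moles * R * math.log(V_B / V_A)

    # Free expansion of 1 mol into double the volume: dQ = 0 in the actual
    # irreversible process, yet the entropy of the gas increases.
    print(delta_S_isothermal(1.0, 1.0, 2.0))  # about +5.76 J/K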
Based on the above result, it is often said that the entropy of the universe (which
is by definition a closed system, since it contains everything) tends to a maximum
value. In that case, at a particular point in the future entropy would attain this
maximum value and the universe would suffer a thermal death, since all changes
would be isentropic and thus reversible (in other words: if entropy attains a
maximum value it will not change anymore, that is, dS = 0 and hence dQ = 0).
This means two things:
• temperature will be the same everywhere (since heat flows from hot to cold) and,
• since there will be no heat transfer, it will be impossible to produce mechanical work with any thermal engine.
However, this result does not take into account the fact that in the universe there
are other forces at work, beyond those involved in the elastic collisions that take


place in a perfect gas, which, as we shall see, is the best physical model for
visualizing thermodynamic phenomena. To give an extreme mathematical example, note that from a system of point masses interacting through gravitational
forces we can draw infinite energy. This is possible because the potential energy of
any pair of point masses that are brought into contact from a finite distance
becomes infinite and negative, so the available kinetic energy is positive and
infinite. Therefore, although entropy may increase monotonically, we will still be
able to produce mechanical work ad infinitum. The apparent contradiction comes
from the fact that, besides introducing gravitational forces between the point
masses (and thus violating one of the assumptions in the definition of perfect
gases), entropy is a statistical concept, while our argument on the gravitational
potential energy is based on classical mechanics. As we will discuss in the next
chapter, this is exactly the point raised by several scholars against Boltzmann's interpretation of entropy through the theory of perfect gases. Today, the possibility
of the thermal death of the universe is being examined in the context of modern
cosmology, which includes the current belief that the universe is not only infinite
(and hence, strictly speaking, not a closed system) but is also expanding. One
consequence of the expansion of the universe may be that the maximum possible
entropy of the universe, as a whole, increases (as the universe expands) faster than
its actual entropy does, in which case the actual entropy can never reach a maximum possible value at any future time.

4.5.6 Thermodynamics Today


Classical thermodynamics reached its definitive form with Clausius' work and
has not changed significantly since then. Today, thermodynamics is taught in
universities in the form developed by the American physicist Gibbs (Josiah Willard Gibbs, 1839–1903). Gibbs was the scientist who, continuing the work of Maxwell and Boltzmann, set the foundations of statistical mechanics (a term coined by him), the branch of physics which explains the laws of thermodynamics through the statistical properties of large ensembles of particles.
As already mentioned, using Clausius' definition of entropy we can calculate changes only for systems that are in thermodynamic equilibrium. In order to overcome this problem, Tsallis (Constantino Tsallis, born 1943) introduced in 1988 a
new quantity, known since then as Tsallis entropy, which generalizes classical
entropy. Tsallis entropy can describe systems that are not in thermodynamic
equilibrium but, unfortunately, it is non-extensive; this means that it lacks the
property of additivity of classical entropy, namely, that the entropy of two systems
is the sum of the entropies of each individual system.


4.6 Kinetic Theory of Perfect Gases


4.6.1 Relationship Between Thermodynamics
and the Theory of Gases
Thermodynamics examines the energy changes in abstract systems, which may
represent completely different physical systems. For example, an abstract system
could describe a chemical solution. However, from the point of view of physics,
the most interesting systems to which thermodynamics is applied are gases.
When the atomic nature of matter became generally accepted and physicists realized the huge number of molecules contained in one mole (~10^23, Avogadro's number), they attempted to describe the behavior of gases by using statistical
physics. So, physicists quickly realized that there was a close relation between
statistical physics and thermodynamics, and perfect gases became the tool by
which they tried to understand the relation of thermodynamics with the rest of
physics.

4.6.2 Atomic Theory


Greek natural philosophers were preoccupied, among other things about nature, by the question of whether matter is a continuous physical quantity or a collection of
indivisible particles of a certain size. Because, as already mentioned, at those early
times experiments were not a common scientific practice, both views were supported by philosophical arguments. The majority of philosophers, including
Aristotle, expressed the view that matter was continuous. In particular, Aristotles
view about gases was that they consist of a continuous medium, in which there is a
repulsive elastic substance that acts like a spring and causes pressure. However,
two Greek natural philosophers from northern Greece, Leucippus from Miletus
(who founded a philosophical school in Abdera, a city located in the region that today we call Thrace) and his student Democritus from Abdera, maintained that
matter consists of tiny indivisible particles, which Democritus named atoms20
(Fig. 4.29). Nevertheless, the atomic theory of Democritus, just like the ideas of
Philoponus about motion, quickly fell into obscurity, because it contradicted the
authority of the great Greek natural philosopher Aristotle. In any case, the question
regarding the microscopic structure of matter had for many centuries no practical
significance, since none of the observed phenomena seemed to support unambiguously either theory on the nature of matter.

20 From the Greek word τομή (cut) and the negation ἀ- in front of it, i.e. something that cannot be divided (cut) into smaller parts.


Fig. 4.29 Democritus, from a 100-drachma Greek banknote, issued in 1967 (Bank of Greece)

The situation began to change in the 17th and 18th century with two types of
experiments that renewed interest in the question of the microscopic structure of
matter. The first type of experiments, conducted by the English scholar Boyle and the Frenchman Mariotte, investigated the relationship between pressure and volume of
gases at a constant temperature. The second type of experiments, conducted by the
French chemist Lavoisier, studied the stoichiometric ratio of reactants and products in chemical reactions. The results from both experiments could be interpreted
by assuming that matter is composed of elementary particles and, as a consequence, the theory of Democritus resurfaced again in the early 19th century. The
scientist who formulated the modern atomic theory was John Dalton (1766–1844), an English chemist who also had an active interest in physics and meteorology.
Scientists in the 19th century concluded that the particles constituting each chemical
element and participating in chemical reactions without losing their identity are the
very same particles whose existence was hypothesized by Democritus.
However, in the beginning of the 20th century, scientists realized that in order
to understand properly the microscopic structure of matter they had to take into
account two important points:
The first point is that there is a difference between the particles constituting a
gas and those that retain their identity during chemical reactions. The elementary
particles of gases are molecules, that is, the smallest quantity of matter that retains
the properties of a substance, whether a chemical element or a chemical compound. On the other hand, an atom is the smallest quantity of matter that retains the
chemical properties of an element. The results of Boyle and Mariotte concerned molecules, while those of Lavoisier concerned atoms. Only in noble gases do molecules and atoms coincide.
The second point is that atoms consist of even smaller (subatomic) particles,
namely, protons, neutrons and electrons. As a result scientists assumed that these
subatomic particles are actually the indivisible components of matter that Democritus had proposed; but since the nomenclature had already been established,
the term atom was reserved for those particles that consist of protons, neutrons and
electrons, and retain the chemical properties of a chemical element.


In the late 20th century, as the experimental methods for investigating the
structure of matter became more efficient, it was discovered that two of the three
subatomic particles (protons and neutrons) are not, in fact, elementary, since each
one consists of even smaller particles, called quarks. So, once again, the same old
question emerged: do elementary particles really exist that cannot be divided into smaller particles or, as experimental instruments and methods become more and more sophisticated, will we continue to discover new elementary particles? The
second prospect, if true, has an important consequence; more specifically, it leads to
a conclusion contradicting the view that has prevailed since the time of Dalton: How
can we maintain that matter consists of particles of different sizes and, hence, that it
is not continuous, if there is no particle that cannot be divided into smaller ones21?

4.6.3 Distribution Function: Maxwell


The first scientist of the modern era who proposed the idea that gases consist of
molecules was the Swiss mathematician Daniel Bernoulli (1700–1782). In 1738, Bernoulli showed that the law P·V = constant at constant temperature, discovered by Boyle and Mariotte, could be interpreted theoretically by assuming that
a gas consists of individual particles. In particular he showed that the pressure of a
gas is related to the volume it occupies and the mean square velocity of these
particles (the ones we call today molecules) through the equation
P·V = (1/3)·N·m·⟨v²⟩
where N is the total number of molecules and m their individual mass. For the lay reader, we note that nowadays the mean square velocity of molecules, ⟨v²⟩, is defined by the relation

⟨v²⟩ = ∫₀^∞ f(v)·v²·dv

where f(v) is the distribution function of velocities introduced by Maxwell (see below). But, in Bernoulli's time, the distribution function was an unknown concept and the nature of heat was not yet determined. So Bernoulli could not link the mean square velocity with temperature, for the additional reason that he believed that temperature depends on the momentum rather than on the energy of the particles, as we know today thanks to Maxwell's research work in statistical

21 In a new mathematical model of theoretical physics, named string theory, the elementary particles are strings, i.e., one-dimensional objects with a length of the order of 10^-35 m, or twenty orders of magnitude (i.e., a hundred billion billion times) smaller than a proton. This leaves ample room for quarks to have a structure.


Fig. 4.30 Maxwell-Boltzmann probability density function (usually referred to, incorrectly, as
distribution function) for three different temperatures (drawing by author)

physics. This work is the second major contribution of the great theoretical
physicist to classical physics. Maxwell was able to calculate the basic law of the
theory of perfect gases, whereby the kinetic state of the gas molecules, in thermodynamic equilibrium, is described uniquely by a particular distribution of
velocities, f(v), which he estimated in principle.22 This function gives the probability for a randomly chosen gas molecule to have a velocity between two values
differing by an infinitesimal quantity dv, for example, between v and v + dv
(Fig. 4.30). From this function, f(v), it is possible to derive theoretically, through
simple mathematics, all the experimentally known laws of gases and, in particular,
the basic law of perfect gases
pV = NkT
where k is Boltzmann's constant, N the total number of molecules and T the
absolute temperature. Combining the law of perfect gases with Bernoulli's result,

22 It is worth noting that this result, pioneering for its time, was received with skepticism. Many physicists, including Clausius, thought that all molecules in a gas are moving with the same speed. As we mention in the next paragraph, the correct mathematical form of this function was proved shortly after by Boltzmann, and from then on the function f(v) is known as the Maxwell–Boltzmann distribution function.


mentioned above, we find that temperature is related to the mean square velocity
through the equation
T = (m/3k)·⟨v²⟩

and, therefore, it is related to the mean kinetic energy per molecule, m⟨v²⟩/2. In
addition to the work that led to this crucial theoretical result, Maxwell, although a
theoretician, conducted very important experiments in the field of statistical
physics. His motivation was an article by Clausius, where the German physicist
introduced the concept of the mean free path of a gas. According to the model of a
perfect gas, gas molecules behave like hard spheres that collide elastically with
each other and with the walls of the gas container. These collisions are random
and, between two successive collisions, each molecule moves in a straight line
with constant speed. Because the collisions are random, each molecule covers a
line segment of different length between two consecutive collisions. Clausius
realized that the mean of these segments, called the mean free path and denoted by ℓ, is a useful statistical quantity in the mathematical description of the properties of
a gas. The mean free path can be calculated, provided that the velocity distribution,
f(v) is known. Since Maxwell had calculated the velocity distribution function of
the molecules of a gas, he was able to calculate the mean free path and to
determine its dependence on other physical parameters of the model. For example
he showed that the friction coefficient, μ, of a gas, called viscosity, depends on the mean free path, ℓ, the gas density, ρ, and the average speed of molecules, ⟨v⟩, through the relation
 
μ = (1/3)·ℓ·ρ·⟨v⟩, where ⟨v⟩ = ∫₀^∞ f(v)·v·dv

But it can easily be shown that the mean free path depends on the cross section23 of the molecules, κ, the density of the gas, ρ, and the mass of the molecules, m, according to the relation

ℓ = m/(√2·κ·ρ)
Eventually, we arrive at the relation

μ = (1/3)·m·⟨v⟩/(√2·κ)
according to which, viscosity is independent of density! If we combine this result
with the law of perfect gases, we conclude that the friction exerted by a gas is the

23 In this case, the cross section is the area of the section of a sphere defined by a plane passing through the center of the sphere.


same both at high and at low pressures! Maxwell was able to confirm this paradox experimentally, although his area of expertise was that of theoretical calculations
and not experimentation. After the experimental corroboration of this unexpected
result, the phenomenon was also understood theoretically. In a dense gas, molecules that impinge on a moving surface acquire momentum in the direction of the
surface's motion and then collide after a short time interval with other molecules, transferring, in this way, momentum over short distances. In a dilute gas, molecules that collide with a moving surface and acquire momentum collide with other molecules after longer time intervals. During these time intervals, they move further away from the surface and, thus, carry momentum over long distances. Since in the
first case many molecules carry momentum over short distances, while in the
second case few molecules carry momentum over long distances, the end result is
the same in both cases! This result confirmed Maxwell's calculations regarding the velocity distribution in a perfect gas. Moreover, because viscosity and, hence, mean free path depend on the cross section of the gas molecules, it became possible for the first time to measure experimentally the cross section of molecules, their mass and Avogadro's number.
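
As a rough modern illustration of this relation, here is a minimal Python sketch (the constants are standard approximate values):

    import math

    k = 1.380649e-23   # Boltzmann's constant, J/K
    m_N2 = 4.65e-26    # approximate mass of a nitrogen molecule, kg

    def v_rms(T: float, m: float) -> float:
        # Root-mean-square speed from T = (m/3k)<v^2>.
        return math.sqrt(3.0 * k * T / m)

    print(v_rms(300.0, m_N2))  # about 517 m/s at room temperature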
The same experiment, however, indicated a second anomaly. Because, as
already mentioned, temperature is related to the mean square velocity through the
relation
T = (m/3k)·⟨v²⟩

it follows that viscosity should depend on the square root of temperature.


Experiments, however, did not agree with this result, since the dependence was found to be rather linear. Maxwell was able to explain this anomaly as well: it is due to the fact that molecules are not hard spheres, as assumed in the theory of perfect gases, but charged particles, interacting through repulsive Coulomb forces (the interaction is due to their outer electron layers). The higher the molecules' velocity, the closer to each other they come and, thus, their cross section does not remain constant but is a decreasing function of their speed.
Another important theoretical result by Maxwell was the principle of equipartition of energy among all degrees of freedom of the molecules. In particular,
according to Maxwell's theory, each species in a gas mixture has the same mean
energy per molecule and, as a result, lighter molecules have, on average, larger
velocities (so that their kinetic energy equals that of the heavier ones).
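
A minimal Python sketch of this consequence (the molecular masses are approximate):

    import math

    # Equal mean kinetic energy per molecule implies v_rms ~ 1/sqrt(m),
    # so hydrogen molecules move about four times faster than oxygen ones.
    m_H2, m_O2 = 2.0, 32.0  # molecular masses in atomic mass units
    print(math.sqrt(m_O2 / m_H2))  # 4.0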

4.6.4 Entropy and the Arrow of Time: Boltzmann


Maxwell's work on thermodynamics and the kinetic theory of gases was continued by the Austrian Ludwig Boltzmann (1844–1906). Boltzmann, from his early years in research, admired Maxwell, so he worked for the promotion of Maxwell's
electromagnetic theory, which at that time had not yet been generally accepted.


Along with his teacher, Josef Stefan (1835–1893), Boltzmann put forward the law of black body radiation, the well-known Stefan–Boltzmann law

E = σ·T⁴

where E is the energy emission rate per unit surface, σ is a constant and T is the
absolute temperature of the surface. However, Boltzmann's main field of research was related to Maxwell's work on thermodynamics and perfect gases, a field where he broke new ground in physics. One of his first results was the rigorous demonstration of the form of the velocity distribution function for perfect gases, which was derived by Maxwell in a rather heuristic way; for this reason, this function is usually called the Maxwell–Boltzmann distribution function. His most important
result, however, was the relationship between the entropy of a thermodynamic
system and the probability for the system to be in a particular energy state. The
result was incredibly simple and elegant:
S = k·log w
where S is the entropy, w the probability and k Boltzmann's constant. This law had such an impact on the scientific community at that time that it has been inscribed on Boltzmann's tombstone in Vienna (Fig. 4.31).
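
To convey the content of this relation, here is a toy Python sketch, following the usual modern reading in which w counts the microstates compatible with a macrostate:

    import math

    k = 1.380649e-23  # Boltzmann's constant, J/K

    # Toy model: 100 two-state particles; macrostate "half up, half down".
    w = math.comb(100, 50)  # number of compatible microstates, about 1.0e29
    S = k * math.log(w)     # Boltzmann's relation S = k log w
    print(S)                # about 9.2e-22 J/K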
Boltzmann's greatest research effort was devoted to a problem that unfortunately has not been solved yet, although the contribution of this great scientist towards its solution was very important. The problem is the relation of thermodynamics with the rest of physics and the so-called arrow of time. More specifically, we know that on the microscopic level the only forces acting on gas particles are
the known forces of physics. But the laws that describe the effect of these forces on
matter are reversible with respect to time. For example, suppose that a body moves
along a straight line with constant velocity. Then, the differential equation of
motion is
dx/dt = v
The solution of the above differential equation gives the position of the body on
the straight line through the general formula
x = v·t + x_0
For a special pair of initial conditions, x_0 = 0 and (dx/dt)_0 = 5, the solution is

x = 5t

For another pair of initial conditions, x_0 = 0 and (dx/dt)_0 = -5, the corresponding special solution is

x = -5t
Notice that if we substitute t with (-t) the second special solution becomes
identical to the first, that is, the second solution is essentially the first, but describes


Fig. 4.31 Boltzmann's grave, with the inscription of the famous relation S = k·log w (photo by author)

motion backwards in time. Both solutions are acceptable, which means that for
every solution of the equation that describes motion towards larger values of x (to
the right) there is a solution that describes motion along the same trajectory but
towards smaller values of x (to the left); this solution is nothing more than the
initial trajectory described backwards in time. Therefore, no one can tell if a
movie showing this body moving along its trajectory is played forward or
backward based solely on the image shown on the screen. For the same reason,
the differential equations of motion of a planet cannot determine if it revolves
clockwise or counterclockwise around the Sun, since both motions are solutions of
the same equations, and therefore equally acceptable, differing only in their initial
conditions.
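
A minimal numerical illustration in Python: integrating the motion forward and then reversing the velocity retraces the trajectory, exactly as the time-reversed solution describes:

    # Time-reversibility of dx/dt = v: run forward, flip the sign of v,
    # and the body retraces its path back to the starting point.
    def integrate(x0: float, v: float, dt: float, steps: int) -> float:
        x = x0
        for _ in range(steps):
            x += v * dt
        return x

    x_end = integrate(0.0, 5.0, 0.01, 100)      # forward: ends near x = 5
    x_back = integrate(x_end, -5.0, 0.01, 100)  # reversed: back near x = 0
    print(x_end, x_back)                        # equal up to rounding errors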
The above conclusion contradicts everyday experience, which tells us that most
events have a specific time direction. For example, it is easy to determine whether a
movie that shows a glass falling to the floor is played forward or backward, as
the view of fragments sticking together to form a glass that is subsequently lifted
from the floor seems utterly unrealistic. The same applies to gases. If in a closed
room one opens a bottle containing a gas, for example hydrogen sulfide, the gas will
disperse throughout the room; the opposite has never been observed, that is, hydrogen sulfide already dispersed in a room gathering again in the bottle from which it came in the first place. Thus, statistical phenomena seem to define the so-called arrow of time, indicating that time always flows forward and never backward.
What, really, is the reason for this difference? Maybe the fantastically large number


of gas molecules in a room? Modern theory of chaos (see Sect. 5.1.3) points towards
the conclusion that the answer to this question is probably yes, but this remains to be
proved in a rigorous mathematical way. However, of all known physical quantities, only one has a monotonic evolution and, for that reason, is increasing: the
entropy of a system. So, it is reasonable to assume that the arrow of time is connected
to the second axiom of thermodynamics. There are two possibilities:
• either the second axiom of thermodynamics can be deduced from the axioms of the rest of physics, and in particular mechanics, so that the laws of thermodynamics, particularly the arrow of time, can be demonstrated through these laws, or
• it cannot be deduced from the axioms of the rest of physics, in which case thermodynamics is a branch of physics independent from the other branches.
Boltzmann devoted most of his research career to this problem. Several times
he thought he had solved it, only to discover later that he had made an assumption
that could not be proven. As mentioned previously, thermodynamics today is
considered as a singular point in physics, in the sense that it cannot be integrated into the unified edifice of the other branches. However, we believe that mechanics
and thermodynamics should be linked through chaos theory (see Sect. 5.1.3),
which is based on the fact that the majority of solutions of dynamical systems
described by nonlinear differential equations are chaotic, in the sense that two
neighboring orbits diverge exponentially away from each other over time. Because the position and velocity of a body are inevitably recorded with an accuracy of a
certain number of significant (or decimal) digits (depending on the accuracy of the
measuring instrument), it is obvious that we cannot distinguish between two initial
conditions differing less than the accuracy of the instrument. For example, if our
measuring tape has an accuracy of a millimeter, the positions x = 1.9999 m and
x = 2.0001 m will be recorded as x = 2.000 m. But the real trajectory, with an
initial position x = 1.9999 or 2.0001 m for example, will diverge exponentially
from the one with x = 2.000 m, which we calculate based on the equations of
motion, and after a certain time the true trajectories will differ significantly from
the computed one by so much that we will be forced to believe that the true
trajectory follows a chaotic evolution. If the trajectories we measure or calculate
describe the motions of the molecules of a gas, this means that at this point we no
longer know the real kinetic state of these molecules. This scenario seems
reasonable, but it must be demonstrated mathematically, something that unfortunately has yet to be achieved.

Chapter 5

Physics of the 20th Century

The evolution of physics did not come to an end at the dawn of the 20th century, as many physicists believed at that time and as school textbooks sometimes seem to imply. Simply, the new avenue opened by the Galilean transformations had come to an end at the moment when the Lorentz transformations were introduced. Experimental results accumulated in the early 20th century which could not be included in the edifice created along the way, and these gave rise to new questions and problems, which required a new approach to physics. The change in mainstream physics took place
through the establishment of two novel theories: quantum mechanics and theory of
relativity. What were, however, these new questions and problems? Knowing them allows us to better appreciate the revolutionary and, at the same time, effective nature of both these new theories.
• Lorentz transformations could not be integrated easily into physics, because their introduction was completely arbitrary.
• There wasn't any experimental evidence for the existence of aether, which was believed to have both bizarre and contradictory physical properties.
The solution to these two problems was given by Einstein, who introduced the
second axiom of the special theory of relativity, namely that the speed of light in
vacuum is constant for all inertial observers. This axiom, together with the first
axiom of special relativity discussed in the next section, led naturally to the
Lorentz transformations and rendered the assumption about the existence of aether
unnecessary.
• The computation of the specific heat of gases showed that diatomic molecules have fewer degrees of freedom than those anticipated by classical mechanics (three translational, three rotational and one oscillatory). The same was true for triatomic molecules.
• Experimental studies on the photoelectric effect showed that the emission of electrons from a material, irradiated by electromagnetic waves of short wavelength (light), is proportional to the flux of radiation, but depends strongly on its frequency as well. When the frequency is less than a threshold value, no electrons are emitted, no matter how large the radiation flux is (Fig. 5.1).

[Fig. 5.1 panel labels: photons of wavelength 0.70 μm (1.77 eV), 0.55 μm (2.25 eV) and 0.45 μm (2.76 eV) strike a caesium plate (work function 1.96 eV); the two shorter wavelengths eject electrons with kinetic energies 0.29 eV and 0.80 eV respectively, while the 0.70 μm photons eject no electrons.]

Fig. 5.1 Photoelectric effect. Light illuminates a plate made of caesium, which has a work function of 1.96 eV, i.e. an electron is emitted from the plate if the electron's energy is increased by 1.96 eV. Electrons are emitted when the light is blue or yellow-green, but not when it is red. This result cannot be interpreted classically. The introduction of the concept of photons by Einstein solves the problem, since red photons have energy below the work function of caesium. The difference between the photon's energy and the work function appears as kinetic energy of the electrons (drawing by author)
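
A quick check of the numbers in the figure, as a minimal Python sketch (hc ≈ 1.24 eV·μm is a standard approximate value):

    h_c = 1.2398  # h*c in eV·micrometers (approximate)
    W = 1.96      # work function of caesium, eV

    for wavelength in (0.70, 0.55, 0.45):  # micrometers
        E_photon = h_c / wavelength        # 1.77, 2.25, 2.76 eV
        KE = E_photon - W                  # kinetic energy of the emitted electron
        print(wavelength, "no electrons" if KE < 0 else round(KE, 2))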

• The theoretical interpretation of the black body radiation, based on the existence
of some, unspecified in principle, oscillators inside the black body, led to the
conclusion that the intensity of the emitted radiation should increase without
bound as frequency increases. Experiments, however, showed that the intensity
reaches a maximum and subsequently, in the ultraviolet spectral range and
beyond, decreases with frequency. This theoretical monotonic, continuous
increase of the intensity with frequency, which was not in agreement with the
experimental results, was called ultraviolet catastrophe (Fig. 5.2).
• The existence of stable atoms was a mystery in classical physics. According to Maxwell's theory, an accelerated electron radiates electromagnetic waves. However, in Rutherford's model of the atom, prevailing at the beginning of the 20th century, electrons revolve around the positively charged nucleus. But they do not radiate electromagnetic waves, even though they are subjected to centripetal acceleration; otherwise, they would lose energy and eventually fall onto the nucleus!
• The discovery of radioactivity by Becquerel was a powerful blow to thermodynamics. Less than 50 years after the universal acceptance of the first axiom of
thermodynamics (according to which total energy is a conserved quantity), a
phenomenon called radioactivity was experimentally observed, during which
heat and radiation are apparently produced without the consumption of another
type of energy.
All five phenomena described above were explained by quantum mechanics.
Finally, the problem of integration of the second axiom of thermodynamics in
the edifice of physics has remained unsolved so far.


Fig. 5.2 Energy emitted by a black body at various temperatures. Ultraviolet catastrophe is the name given to the discrepancy observed at short wavelengths between the Rayleigh–Jeans law (black curve, labeled classical theory) and the experimental result (blue curve) for the intensity of radiation emitted by a black body. Planck's law predicts the correct curve

5.1 Quantum Mechanics


Quantum mechanics was founded in 1900 on the idea of the German physicist Max Planck (1858–1947) that light, and electromagnetic radiation in general, is not emitted continuously but in the form of packets, which he named quanta. The value of a quantum of energy depends on the frequency of radiation according to the relation

E = hν
Planck had no particular theoretical reason to believe in the concept of quanta.
He introduced the idea simply because the distribution function he managed to
calculate using this concept could explain the experimentally observed distribution
of black body radiation avoiding, in this way, the so-called ultraviolet catastrophe.
Apart from that, neither he nor anyone else seriously believed in the idea of
quanta. Soon, however, in 1905, Einstein, using the concept of quanta, was able to
interpret the photoelectric phenomenon. Still, most of the physicists of that time
were not ready to accept a new model in optics. We should remember that, in
Newton's time, it was believed that light consists of particles. It took 200 years, and hard work by Huygens, Young and Fresnel, to establish the wave nature of light. And now, just 70 years after Fresnel's death, the concept of quanta
brought the theory of light back to its starting point: that light consists of particles!
The confrontation between Poincaré and the great mathematician and astronomer Sir James Hopwood Jeans (1877–1946) during the first Solvay conference, in 1911, is characteristic of the atmosphere that prevailed at that time. Among the topics discussed in this conference were black body radiation, the ultraviolet catastrophe and the theory of quanta. Jeans attempted to explain the discrepancy
between the theoretical ultraviolet catastrophe and the experimental results by


proposing that, in nature, radiating bodies never attain thermodynamic equilibrium, as is assumed in the theoretical models. He proposed a model in which the heat capacity of each body is represented by a container connected to the containers of other bodies through a system of pipes and outlets. When Jeans concluded his presentation, Poincaré made the following brief comment:
It is evident that by a suitable choice of the dimensions of those connecting pipes between
the vessels, on the one hand, and the magnitude of the losses, on the other, Mr. Jeans can
account for any experimental result. However, that is not the role of physical theories. One
must not introduce as many arbitrary constants as there are phenomena to be explained.
The goal of physical theory is to establish a connection between diverse experimental
facts, and above all to predict.

Poincaré's observation summarizes a basic principle of the modern scientific method of paramount importance, which has evolved from the principle of Ockham's razor and lies at the roots of the current explosive growth of research. From this philosophical point of view, Planck's theory was better than Jeans', because it was based on just one hypothesis.
The theoretical and experimental progress that followed was rapid, and finally
the quantum theory was generally accepted and related to the structure of atoms
through the so-called old quantum theory of the great Danish physicist Niels Bohr
(Niels Henrik David Bohr, 1885–1962). This theory is based on a model of the
atom similar to that of the Solar System. In this theory, the fact that the electron
orbiting the nucleus does not radiate electromagnetic waves is interpreted through
the hypothesis that this absence of radiation occurs only at certain special
orbits, the energy and angular momentum of which are characterized by integer
values in a specific system of units (see below). In view of the fact that in the
theory of the two-body problem, where two bodies are attracted by electrostatic
forces, the energy is uniquely related to the radius of a circular orbit, it was
assumed that electrons follow circular orbits, each having (in an appropriate unit) a
radius equal to the square of a natural number (positive integer), that corresponds
to one of the special orbits of the above hypothesis. Using this model, it was
finally possible to interpret theoretically Kirchhoff's three laws of spectroscopy
and understand the creation of line spectra.
As, however, technology progressed, it was discovered that most spectral lines
consisted of many finer ones, lying very close to each other. This phenomenon
is called fine structure of spectra. Later, more detailed analysis showed that fine
structure lines consisted of other lines as well, of even finer width, a phenomenon
characterized as hyperfine structure. To interpret these phenomena, scientists had
to introduce more variables in order to characterize electrons, besides the radius of
their (circular) orbit in the atom. So, at first it was assumed that electrons can
also follow elliptical orbits, the eccentricity of which, given by another non-negative integer, could explain the fine structure. Then it was assumed that
electrons rotate about their axis, so that their spin, which is either +1/2 or -1/2 in a
special system of units, could explain the hyperfine structure. This complex model,
attributed largely to the research work of Bohr and Sommerfeld (Arnold Johannes


Wilhelm Sommerfeld, 1868–1951), explains the spectrum of hydrogen-like atoms, i.e. all atoms and ions that have a single electron. But the spectral lines of other
atoms could not be fully explained by this theory.
Apart from the fact that the Bohr-Sommerfeld theory could not explain the
spectra of atoms with more than one electron, its more general inadequacy became
clear when Heisenberg put forward the uncertainty principle.1 According to this
principle, it is not possible to measure simultaneously the position and the velocity
of an electron with infinite accuracy and, as a result, the concept of trajectories
used in the Bohr-Sommerfeld old quantum theory becomes meaningless: if we
know exactly the energy of the electron, as assumed by this theory, we cannot
know its position! The new quantum theory introduced by Heisenberg in 1925
overcomes this problem. In this theory, which was named matrix mechanics,
electrons are characterized not by numbers (such as the values of radius and
eccentricity of the orbit or the electron spin) but by matrices, i.e. groups of
numbers arranged in rows and columns. The entries in the matrices are related to
physical quantities of the moving electrons in their initial and final states. This
theory, although it was completely mathematically oriented and could not provide
any insight into the physical properties of electrons, could explain the experimental
results. The next year (in January 1926), Schrödinger proposed another theory, also
mathematically oriented, in which the motion of electrons is described not as a
trajectory but, this time, as the probability of finding the electrons at a specific
position. This probability is given by the solution of a differential equation with
partial derivatives, resulting from the context of Hamilton's formulation of the
classical model of electron trajectories. The solution of the differential equation
has the form of a wave and the above-mentioned probability is calculated as the
square of the wave's amplitude at each position. Due to the form of the solution,
this new theory was named wave mechanics. A few months later (in May 1926) Schrödinger proved that Heisenberg's matrix mechanics and his wave mechanics are completely equivalent. The new Heisenberg–Schrödinger theory is usually called the new quantum theory, to distinguish it from the Bohr–Sommerfeld old quantum theory. Today, Schrödinger's theory is routinely used instead of Heisenberg's theory, because it is better structured mathematically and is linked naturally with classical mechanics, since both make use of Hamilton's function.
Finally, the structure of the nuclei themselves is today interpreted by yet
another theory, which is virtually a generalization of classical quantum
mechanics; this theory is called quantum chromodynamics.

1 Heisenberg published an early form of the uncertainty principle in 1927 based on heuristic arguments. In this initial form, the principle reads Δx·Δp ≥ h, where x is the position and p the corresponding momentum. Soon afterwards, however, a rigorous mathematical proof was published, according to which, in a series of measurements, the standard deviations of the position, σ_x, and of momentum, σ_p, satisfy the relation σ_x·σ_p ≥ h/4π.
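
As an illustrative order-of-magnitude estimate, here is a minimal Python sketch (standard values of the constants):

    h = 6.626e-34    # Planck's constant, J·s
    m_e = 9.109e-31  # electron mass, kg
    pi = 3.141592653589793

    sigma_x = 1e-10                    # confinement to about one atomic diameter, m
    sigma_p = h / (4 * pi * sigma_x)   # minimum momentum spread, kg·m/s
    print(sigma_p, sigma_p / m_e)      # ~5.3e-25 kg·m/s, a speed spread of ~5.8e5 m/s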


5.2 Theory of Relativity


5.2.1 Special Theory of Relativity
As already mentioned in Sect. 4.5.2, the concept of non-ponderable fluids was
introduced in the 18th century to model a number of physical phenomena, such as
heat, electricity, etc. As physics advanced, during the 19th century, it turned out
that most of these fluids do not really exist, but they were just ad hoc hypotheses
on which theories were developed. A classic example is the disproof of phlogiston
by Lavoisier and of caloric by Mayer and Helmholtz. Of all non-ponderable fluids,
the most successful in physical theories and widely accepted by the majority of
scientists in the late 19th century was aether.2 According to the prevailing ideas at
the end of the 19th century, aether was a transparent, weightless, elastic medium, which filled space uniformly and through which light and other electromagnetic waves propagated, maybe even the medium through which gravitational and electrostatic forces propagated as well. According to the prevailing views of
this era, aether was a fixed preferential inertial frame of reference, with respect to
which one could measure the speed of all other Galilean inertial systems of reference. Was aether really a natural element or simply another unreasonable
assumption of physicists, just as phlogiston and caloric were?
In the 1880s, the German-American physicist Albert Abraham Michelson (1852–1931) designed an experimental device, the interferometer, which would allow him to conduct an experiment proposed several years earlier by Maxwell: the measurement of Earth's speed with respect to aether. With the help of the American chemist Edward Morley (Edward Williams Morley, 1838–1923), Michelson conducted the experiment and in 1887 they published the result, which was negative. Michelson and Morley were not able to measure the Earth's speed with respect to aether, simply because they could not detect any change in the speed of light along, or perpendicular to, the motion of the Earth with respect to aether, contrary to what the Galilean transformation of velocities predicted (see Sect. 4.1.1) (Fig. 5.3a and b). This experimental result came as a shock to the
physicists of the time who, consequently, were divided into two groups. There
were those scientists who inferred immediately that there is no aether, the most
2 The concept of aether, and its modifications and improvements, is reminiscent of Aristotle's theory of motion. In order to save an inadequate theory one has to devise more and more improvements, which eventually complicate the theory and render it unreliable. Aether was initially assumed to be simply transparent and weightless, like air; so, it was logical to presume that only longitudinal waves could propagate through it. When it was determined experimentally that light waves are transverse, another property was attributed to aether, that of rigidity. But it turned out that, in order to explain the extremely high value of the speed of light, aether's rigidity
length contraction, aether should exert a high pressure to bodies that move fast, but not to those
that move slowly! These contradicting qualities introduced serious questions regarding the
existence of aether.


Fig. 5.3 a In classical theory, light propagates in a transparent medium named aether. The aim of the Michelson-Morley experiment was to measure the Earth's motion relative to aether (Cronholm144/Wikimedia Commons). b Earth's motion could be measured by detecting variations in the interference pattern observed by using the apparatus. The apparatus was mounted on a stone block floating on mercury, in order to isolate it from spurious motions of Earth's surface. (From the original Michelson-Morley publication On the Relative Motion of the Earth and the Luminiferous Ether, American Journal of Science 34, 333–345, 1887)

ardent supporter of this view being the Austrian physicist Ernst Mach (1838–1916), and those who attempted to explain the negative experimental result by introducing new hypotheses. Among the second group of scientists was the Irish physicist George FitzGerald (George Francis FitzGerald, 1851–1901), who
suggested that all objects interact with aether as they move through it and, as a
consequence, their length shrinks in the direction of motion. If this really was the
case, then the negative result of the Michelson-Morley experiment could be
explained in a rather straightforward way.
FitzGerald's idea was developed mathematically by Lorentz, who concluded that, apart from the phenomenon of length contraction, the interaction of matter with aether should have two additional effects: time dilation and increase of mass with speed. The final form of this theory was presented on the 5th of June 1905 by Poincaré at a convention of the French Academy of Sciences (see Sect. 4.2.6 as well). Poincaré had developed a theory of spacetime transformations, from which emerged a new law of velocity transformations, different from that given by the Galilean transformations. According to this law, the speed of light is the same in all inertial frames of reference and, therefore, the negative result of the Michelson-Morley experiment could be explained in a rigorous mathematical way. Poincaré's theory was based on


three postulates, one of which was the existence of aether, as a preferred reference
system. A few weeks later, on the 30th of June, Einstein, in a big scientific leap,
united mechanics and electromagnetism, with the proposition of an entirely new
theory: the special theory of relativity (STR). This theory relates the motion of a
body, as perceived by an observer in a reference frame A, to its motion as perceived
by another observer in a reference frame B, moving in a uniform way (that is, in a
straight line and with constant velocity) with respect to A. The STR theory, as
formulated by Einstein, is based on two postulates:
• The laws of physics are the same in all inertial reference frames.
• The speed of light in vacuum is the same in all inertial reference frames.
It is striking that the above axioms do not have any obvious mathematical
form.3 However, it can easily be shown that their combination results in a
mathematical relation, analogous to the Pythagorean theorem, which gives the
distance between two points A and B, not in ordinary three-dimensional space,
but in a special four-dimensional space in which the fourth dimension is time. If
the coordinates of A and B are x, y, z, t and x + dx, y + dy, z + dz, t + dt,
respectively, then the elementary proper length, that is, the distance between
points A and B in the four-dimensional spacetime, denoted by ds, is given by the
relation


ds² = c²dt² - (dx² + dy² + dz²)
where c is the speed of light. Thus, Einstein introduced in physics the completely
new idea that space and time are not separate entities, but closely linked through
the previous relation. Careful examination of this relation reveals the reason why
we put the word distance in quotes. The squares of the three spatial coordinates
have the same sign, just as in the Pythagorean Theorem, but the time coordinate
has a sign opposite to that of space. Therefore, this "distance" has different
properties than the usual one we are familiar with. In geometry, the Pythagorean
Theorem indicates that, no matter how a vector is oriented in space, its length
remains always the same. In STR theory, the corresponding relation indicates that,
no matter how an elementary four-dimensional spacetime vector is oriented, the
difference between the square of the three-dimensional length and the square of
the interval between observations does not change. One of the main consequences
of this result is that two events that are simultaneous in an inertial frame of
reference are not, in general, simultaneous in another frame of reference!
In addition to the dependence of the elementary length in four-dimensional spacetime on time, it is also relatively easy to prove that the combination of the two postulates of the STR theory leads to the Lorentz-Poincaré transformations.
Therefore, the two theories are mathematically equivalent, but Einsteins theory

3 Twenty years later, the Greek mathematician Constantine Caratheodory showed that, under very general (and maybe trivial) assumptions, one can arrive at a superset of theories of relativity, of which STR is a special case.


uses one less postulate, that is, it does not presuppose the existence of aether!
Furthermore, STR theory is based on axioms that have a physical meaning,
while Poincaré's theory was just a mathematical conception. Of course, it is impossible to present in a single paragraph all the changes the STR theory brought about to the ideas and concepts of classical physics. However, we should note one of the most important ones. Galileo's rule for transforming velocities is given by the relation
v_B = v_A + V_AB
According to this rule, the velocity of a body in an inertial reference frame B
equals the velocity of the body in an inertial reference frame A plus the velocity of
system B with respect to A. Therefore, if one of the velocities v_A or V_AB is large, v_B might turn out to be larger than the speed of light, c. However, in the frame of the STR, Galileo's rule of adding velocities is an approximate relationship, valid only for speeds much smaller than the speed of light. When one of the velocities v_A or V_AB approaches the speed of light, the exact relation of the STR


v_B = (v_A + V_AB)/(1 + v_A·V_AB/c²),
derived from Einstein's postulates, always gives a sum smaller than c. Thus, we see that the compatibility of mechanics with electromagnetism was achieved by sacrificing one of the oldest axioms of physics, the Galilean transformations.
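
A minimal numerical illustration in Python: composing 0.8c with 0.8c relativistically yields less than c, whereas the Galilean rule would give 1.6c:

    c = 299_792_458.0  # speed of light, m/s

    def add_velocities(v_a: float, v_ab: float) -> float:
        # Relativistic composition of collinear velocities.
        return (v_a + v_ab) / (1.0 + v_a * v_ab / c**2)

    print(add_velocities(0.8 * c, 0.8 * c) / c)  # 0.9756..., always below 1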

5.2.2 General Theory of Relativity


The special theory of relativity was formulated by Einstein in order to explain the
strange experimental results that had been accumulated in the early 20th century, mainly the one of Michelson and Morley. Einstein's basic idea, which led to
theory, namely that the laws of physics should look the same for all inertial
observers. However, despite the fact that the STR theory was not yet generally
accepted, Einstein, based only on his intuition and not on experimental indications,
moved to an even more revolutionary idea: starting from the belief that the laws of
nature are simple, he introduced the hypothesis that the laws of physics should
be the same for all observers, whether inertial or accelerated. The theory based
on this postulate was named general theory of relativity (GTR), in contrast to the
special theory of relativity (STR), because in the GTR theory the elementary distance between two events, as the points of the four-dimensional spacetime are called, is given by the general relation
ds² = Σ_{μ=0}^{3} Σ_{ν=0}^{3} g_μν dx^μ dx^ν


Fig. 5.4 The gravity of a luminous red galaxy (center) has gravitationally distorted the light from a much more distant blue galaxy. Since such a lensing effect was predicted by Albert Einstein's general theory of relativity, this kind of ring is now known as an Einstein Ring. (NASA-STScI)

where g_μν is a tensor of second rank (i.e. a matrix with special properties) and the indices μ and ν assume values from 0 to 3; the coordinate x⁰ represents time, while x¹, x² and x³ are the three spatial coordinates. Note that the elementary length, ds²,

ds² = c²dt² - (dx² + dy² + dz²),

appearing in the context of STR, is a special case of the GTR theory that results by setting g₀₀ = c², g₁₁ = g₂₂ = g₃₃ = -1 and g_ij = 0 for i ≠ j.
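
A minimal numerical check in Python: contracting the diagonal metric above with a displacement reproduces the STR interval:

    # Verify that g = diag(c^2, -1, -1, -1) gives ds^2 = c^2 dt^2 - dx^2 - dy^2 - dz^2.
    c = 1.0  # units with c = 1 for simplicity
    g = [[c**2, 0, 0, 0],
         [0, -1, 0, 0],
         [0, 0, -1, 0],
         [0, 0, 0, -1]]

    dx = [0.5, 0.1, 0.2, 0.3]  # (dt, dx, dy, dz), arbitrary example values
    ds2 = sum(g[m][n] * dx[m] * dx[n] for m in range(4) for n in range(4))
    print(ds2)  # 0.25 - (0.01 + 0.04 + 0.09) = 0.11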
In fact, the difference between Einstein's theories of relativity, STR and GTR,
is far greater than indicated by the difference in names or the difference in the
calculation of the elementary length. The STR theory is simply a correction of
the laws of physics, which applies for speeds approaching the speed of light. On
the other hand, GTR theory is a new theory of gravity, where the force given by
the law of universal gravitation, which Newton considered as an action at a
distance, is nothing more than the result of a local curvature of spacetime,
caused by the presence of a mass in this area. This new theory of gravity overcomes the problematic points of Newton's theory (presented in Sect. 4.1.2),
namely, the concept of action at a distance and the inability to describe the universe as a whole, due to the existence of physical quantities that assume infinite
values. Moreover, the GTR theory predicts several phenomena that were unknown
at the time of Einstein, for example that light is bent by massive objects (Fig. 5.4).
Today, the GTR theory is a generally accepted physical theory, because most of
the phenomena it predicts have been detected experimentally, except one: gravitational waves. At present (2013), several experiments are in progress for the
purpose of detecting gravitational waves; hopefully, the first positive results will
be recorded soon, verifying fully the GTR theory (Fig. 5.5).
It is important to emphasize that at the time Einstein put forward his GTR, there weren't any experimental results that could justify such a revolutionary idea. Einstein based his proposed theory solely on philosophical arguments, just as the ancient Greek natural philosophers did. Therefore, we see that modern science has not abolished the ancient Greeks' way of thinking; instead, the method of


Fig. 5.5 The VIRGO interferometric antenna of the European Gravitational Observatory in Cascina, Italy. This type of antenna is essentially a scaled-up model of the original Michelson–Morley apparatus, and is used for the detection of gravitational waves by observing changes in an interference pattern (INFN)

the ancient Greek philosophers, to predict experimental results from theories conceived on general principles, was supplemented by the modern method of formulating theories to interpret existing experimental results.

5.3 Theory of Chaos


The French mathematician Laplace (Pierre Simon, Marquis de Laplace, 1749–1827), in the preface of his book on the theory of probability, writes the following:
We may regard the present state of the universe as the effect of its past and the cause of its
future. An intellect which at a certain moment would know all forces that set nature in
motion, and all positions of all items of which nature is composed, if this intellect were
also vast enough to submit these data to analysis, it would embrace in a single formula the
movements of the greatest bodies of the universe and those of the tiniest atom; for such an
intellect nothing would be uncertain and the future just like the past would be present
before its eyes.4

He based his conclusion on the belief that the validity of the laws of physics is
universal and that the mathematical equations describing these laws can be solved
exactly. Therefore, according to Laplace, the concept of probability is due only to our imperfect knowledge of the laws of physics and of the initial conditions at the moment the universe was created. In this case, the basic law that Laplace had in mind was Newton's second axiom, according to which the motion of a body can be described by an equation that relates its acceleration to the force applied to it.
The importance of the conclusion in philosophy and ethics is obvious: if the
motion of all bodies, from the smallest atoms in our body to the biggest stars in a

4
Laplace, Pierre Simon, A Philosophical Essay on Probabilities, translated from the 6th French
edition by Frederick Wilson Truscott and Frederick Lincoln Emory, Dover Publications (New
York, 1951) p. 4.


galaxy, is dictated by strict mathematical laws, then the concept of free will is
meaningless!
A hundred years or so after Laplace, the great French mathematician and astronomer Poincaré discovered a phenomenon that could restore the value of free will.
This was the existence of chaotic motions even in the simplest systems of classical
mechanics, such as the famous three-body problem (see Sect. 4.1.2). Because of
this property, the solutions of the equations of motion are not analytic with respect to the initial conditions; for example, we cannot write a solution that has initial conditions for position and velocity $x = x_0 + \Delta x_0$ and $v = v_0 + \Delta v_0$, respectively, as a Taylor series about the solution with initial conditions $x_0$ and $v_0$. At the heart of the problem lies the extreme complexity of the solutions, which Poincaré realized when he attempted to solve the three-body problem, and which he described in this way:
When we try to represent the figure formed by these two curves and their infinitely many
intersections, each corresponding to a doubly asymptotic solution, these intersections form
a type of trellis, tissue, or grid with infinitely fine mesh. Neither of the two curves must
ever cut across itself again, but it must bend back upon itself in a very complex manner in
order to cut across all of the meshes in the grid infinitely many times. The complexity of this figure is striking, and I shall not even try to draw it.5

Unfortunately, this important result, at which the French mathematician arrived based solely on qualitative geometric arguments and without making use of any drawings, could not be assessed properly at that time, since the method through which one could perceive this phenomenon quantitatively was not available.6 So, Poincaré's discovery remained in the history of mechanics as a theoretical paradox, with no obvious connection to the main edifice of 19th century physics.
Fifty years after Poincaré's death, electronic computers were invented. Many scientists decided to take advantage of this new tool and to use it for solving numerically problems which until then could not be solved analytically. In 1963, the American mathematician and meteorologist Edward Lorenz (Edward Norton Lorenz, 1917–2008), following the generally accepted strategy in modern science for solving complex problems, attempted to solve numerically the set of differential equations of the simplest possible model of the atmosphere. According
to this strategy, first we solve a simplified version of our target problem and subsequently we introduce more and more details. For example, if we wish to calculate the actual orbit of the Earth, we first assume that Earth is attracted only by the Sun, which is the largest body in the solar system, and that all other bodies cause only small perturbations. This first-order result is of course the known solution of the two-body problem, an ellipse. Then, we calculate the corrections to the initially obtained simplified orbit by adding the attraction of the Moon, of

5 H. Poincaré, New Methods of Celestial Mechanics (transl. D.L. Goroff), vol. III, AIP (New York), 1992, p. 1059.
6 It became possible to draw this complex figure only after the invention of computers and the ensuing possibility of calculating numerically the solutions of any dynamical system.


other planets, etc. Usually, we assume that other small bodies in the solar system
(such as asteroids and comets) and all distant stars have negligible influence on
Earth. This is the method of successive approximations. Of course, in following this method it is assumed (although not clearly stated) that the small perturbations caused by the forces of the Moon and the other planets, when applied to the simplified orbit of the Earth, do not alter significantly the orbit that Earth would follow if it were only under the influence of the attracting force of the Sun. This means that we accept that the actual orbit of the Earth approximates the orbit calculated by solving the two-body problem or, in other words, that the actual solution can be written as a Taylor series about the solution of the two-body problem. As mentioned earlier, Poincaré had shown that this is not possible, but scientists of his time did not realize the importance of his discovery and continued to work with the traditional method of successive approximations.
Examining carefully the results calculated by the computer, Lorenz found that something was wrong with the solutions: small changes in the initial conditions gave rise to completely different states of the atmosphere. This contradicted the view prevailing at the time, namely that small changes in the initial conditions lead also to small changes in the solutions. After various attempts to correct any possible errors in the numerical code used to solve the set of equations, Lorenz arrived at a revolutionary conclusion: the classical method of successive approximations for solving complex systems, which had been introduced by Newton 300 years earlier, could not be used to address the problem of weather forecasting, since minor changes in the initial conditions lead to radically different solutions!
Lorenz used this conclusion, stated in a poetic manner, in the title of the lecture he delivered at a conference in 1972: Does the flap of a butterfly's wings in Brazil set off a tornado in Texas? This concept, known today as the butterfly effect, marked the beginning of the theory of chaos in modern times. It should be noted that the anomaly discovered by Lorenz does not characterize all solutions of a complex system. There are regions of initial conditions in position and velocity where a clear sensitivity to the initial conditions results in chaotic trajectories. However, there are other regions of initial conditions, corresponding to regular orbits, where this sensitivity does not appear. And these two kinds of regions are intermingled in a complicated way.
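Lorenz's numerical experiment is easy to reproduce today. The sketch below (in Python, with a plain fixed-step Runge–Kutta integrator so that no external library is needed) integrates his classic three-variable model twice, from initial conditions differing by one part in a million, and prints the growing separation. The parameter values sigma = 10, rho = 28 and beta = 8/3 are Lorenz's standard choices; the step size and print interval are ours:

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of Lorenz's 1963 three-variable atmospheric model."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    """One fixed-step fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)        # reference initial condition
b = (1.000001, 1.0, 1.0)   # the same, perturbed by one part in a million

dt = 0.01
for n in range(4001):
    if n % 800 == 0:
        d = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(f"t = {n * dt:5.1f}   separation = {d:.3e}")
    a = rk4_step(lorenz, a, dt)
    b = rk4_step(lorenz, b, dt)
```

The separation grows roughly exponentially until the two "atmospheres" bear no resemblance to each other, exactly the behavior that puzzled Lorenz.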
In the years following Lorenz's discovery, chaos theory evolved rapidly through experimentation (either actual experiments or computer simulations) and the development of new theoretical tools and concepts. Today, our understanding regarding nature and the solutions of the equations describing the temporal evolution of various phenomena is radically different from that of scientists 50 years ago. Chaotic phenomena, similar to those discovered by Lorenz, are observed in almost any dynamical system, from the motion of planets around the Sun to the fluctuations of stock indices (Fig. 5.6). Thus, in line with what we mentioned in Sect. 4.1.5, it turns out that the accurate solution of the differential equations describing the evolution of a phenomenon requires infinite precision in the initial conditions, something that is not feasible. This phenomenon generates chaotic solutions, and free will becomes meaningful again.


Fig. 5.6 The Kirkwood gaps are a manifestation of chaos in our Solar System. Due to the chaotic nature of the orbits of asteroids, gaps are observed in the distribution of the asteroids' mean distances from the Sun at locations where, according to Kepler's third law, the ratio of the period of their revolution to that of Jupiter is equal to a simple rational number (NASA)

Fig. 5.7 The complexity that lies at the root of chaos. The red and blue lines are the two curves referred to by Poincaré. They represent trajectories of a simple dynamical system (known as the McMillan map) near an unstable periodic orbit, lying at the central intersection of the lines (drawing by author)

The basic attributes of all chaotic dynamical systems, as systems with chaotic solutions are called, are two (a numerical illustration is sketched after this list):
the existence of geometric complexity, which did not allow Poincaré to draw any figures in his time, and
the sensitive dependence on the initial conditions (Fig. 5.7).
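Both attributes can be seen in a few lines of code. The sketch below iterates not the McMillan map of Fig. 5.7 but the closely related and perhaps better known Chirikov standard map, an area-preserving system that displays the same intermingling of regular and chaotic regions; the parameter value K and the chosen initial conditions are our own, picked only for illustration:

```python
import math

def standard_map(theta, p, K=0.9):
    """One iteration of the Chirikov standard map (area-preserving)."""
    p_new = (p + K * math.sin(theta)) % (2.0 * math.pi)
    theta_new = (theta + p_new) % (2.0 * math.pi)
    return theta_new, p_new

# Follow several orbits started at different momenta. Regular orbits trace
# smooth curves in the (theta, p) plane, chaotic ones fill two-dimensional
# regions, and the two kinds of region are interleaved.
orbits = []
for i in range(10):
    theta, p = 0.1, 0.1 + 0.6 * i
    points = []
    for _ in range(1000):
        theta, p = standard_map(theta, p)
        points.append((theta, p))
    orbits.append(points)
```

Plotting all the collected orbits reveals islands of regular motion embedded in a chaotic sea, the geometric complexity that Poincaré declined to draw.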
As a consequence of the latter property, the longer the period over which we want to calculate a solution, the more decimal digits are required in the initial values of the quantities describing a dynamical system (for example, pressure and temperature in the case of the atmosphere). The accuracy of a thermometer or a barometer, for example, is limited to a maximum of four significant digits and, as a result, weather can be forecast with some confidence only for a strongly limited time interval, usually up to two or three days. Thus, the paradoxical numerical result of Lorenz was finally explained.
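The connection between measurement precision and forecasting horizon can be illustrated with back-of-the-envelope arithmetic. If an initial error $\varepsilon_0$ grows roughly as $\varepsilon(t) = \varepsilon_0 e^{\lambda t}$, the forecast fails once $\varepsilon$ reaches some tolerance. The sketch below (in Python) uses a purely hypothetical growth rate $\lambda$, chosen only so that four significant digits give a horizon of about three days, as quoted above:

```python
import math

lam = 3.0        # hypothetical error-growth rate (per day), for illustration
tolerance = 1.0  # relative error at which the forecast becomes useless

for digits in (2, 4, 8, 16):
    eps0 = 10.0 ** (-digits)                    # initial measurement error
    horizon = math.log(tolerance / eps0) / lam  # time for the error to reach 1
    print(f"{digits:2d} significant digits -> horizon of {horizon:4.1f} days")
```

Note how the horizon grows only logarithmically with precision: doubling the number of digits merely doubles the forecasting time, which is why chaotic systems defeat the brute-force accumulation of decimal places.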
Today, the scientific community not only has realized the importance of Poincaré's discovery of the phenomenon of chaos, but attempts, through chaos


theory, to find the answer to one of the oldest problems in physics: the link of thermodynamics with the other main branches of physics. In the late 19th century, Lord Kelvin and Helmholtz tried unsuccessfully to incorporate thermodynamics in the classical physics of their time. The problem was eventually solved by Maxwell, who concluded that the second axiom of thermodynamics (which states that heat cannot be fully converted into mechanical work) cannot be proven as a theorem from the laws of Newtonian mechanics. This seems paradoxical, since heat is essentially a measure of how rapidly molecules move in a gas or atoms in a body, a motion which, of course, obeys Newton's three axioms of motion.
Today we believe that the property of sensitive dependence on the initial conditions can provide the solution to the above open problem. If a system consisting of a few objects can possess both regular and chaotic motions, then it is reasonable to expect that a system consisting of a very large number of bodies would possess practically no regular motions. Considering the fact that each cubic centimeter of air contains a billion trillion molecules, we realize the rationale of the above argument.
In recent years, there have been many attempts by experimental physicists, theoretical physicists and mathematicians to prove the above hypothesis. In 1970, the Russian mathematician Yakov Sinai (Yakov Grigorevich Sinai, born 1935) showed that a perfect (in the mathematical sense) gas possesses only chaotic motions. Unfortunately, this gas is only an ideal model, since it is assumed that its molecules have no dimensions and that they do not exert forces on each other except when they collide. Instead, all systems of physical importance that have been tested so far were found to possess, even in a small percentage, regular (i.e. non-chaotic) motions. The proof that the second axiom of thermodynamics is a consequence of chaos theory and, therefore, of the laws of classical mechanics will be a major step for physicists in their effort towards the unification of the laws of physics.

Chapter 6

Lessons From Three Centuries of Physics

After the detailed presentation of the evolution of concepts and ideas in various
branches of physics, we can arrive at some interesting general conclusions concerning the dependence of this evolution on various parameters, such as the
geographical area, the model for the development of research and the personality
of scientists.

6.1 Geographical Area


First of all, we notice that some countries had, and still have, a long tradition in research and in opening new scientific avenues. Such countries were originally Italy, England and Holland, and later France and Germany. This tradition in England and Holland was characterized by a continuity that lasts until today, possibly because of the independent method of organization of teaching and research in universities and research centers. In Italy, the shock caused by Galileo's persecution by the Catholic Church hindered scientific research, but the country regained its position in the early 19th century with Volta. France, on the other hand, after the revolution of 1789, experienced an impressive scientific boom, mainly because universities and research institutions became available to all citizens, not just nobles and clergymen, as had been the case under the previous
regime. Germany initially lagged in scientific progress, because the governing class in the various states, into which the nation was originally divided, valued military organization more than scientific education. However, the establishment of the German Confederation (Deutscher Bund) in 1815, and especially the proclamation of the German Empire in 1871, allowed the organizational attributes of German culture to take over and to promote scientific development through central planning.
Consequently, as if touched by a magic hand, scientific research flourished in
Europe throughout the 19th century, completing the structure we now call classical physics. At the same time, in the United States research was an almost unknown concept and, with the exception of Franklin and Gibbs, no important scientists
appeared. This situation changed radically in the 20th century, when the United States government realized the importance of research for achieving economic growth and military development and followed a policy of generous funding of science. During the same period, the organization of research in the USSR was based on the concept of central planning, but with considerably less success than this policy had had in Germany at the end of the 19th century. As a result, today, at the dawn of the 21st century, the best European researchers are attracted by the salaries, resources and organization offered by the United States; so, we notice that the situation regarding scientific research and development in the 21st century has been completely reversed from what it was 100 years earlier.

6.2 Methods of Organization


Apart from the above big picture of the evolution of physics, one notices that the path followed by the development of research in various countries was not uniform. For example, in England the development of research was based on the amateur efforts of members of the aristocracy and other wealthy people, with the Royal Society playing a central role. In France, scientific development was due to the inflow of competent, lower working class students into higher education institutions after the revolution of 1789. In Germany, scientific progress was based on central planning, aiming at a balanced development of universities. In the Netherlands, in the absence of any particular cultural characteristic, scientific development can be attributed to the appearance of an unusually large (for such a small country) number of distinguished scientists; this was probably due to the liberal spirit of the Protestant religion in the Netherlands, contrary to the intolerant attitude of the Catholic Church in countries like Italy. In the Netherlands, an important role was also played by the economic boom that followed the development of industry and commerce, which in turn relied on the products imported from the colonies.

6.3 Personality of Researchers


Apart from any cultural differences, it is obvious that great scientists had different backgrounds as well. Some were poor and managed to study with difficulty, through sheer hard work (for example, Benjamin Franklin); some were wealthy and followed an uneventful and effortless course of studies, aiming right from the beginning of their career at a professorial chair in a university (for example, Lord Kelvin), while others did not even manage to get a high school degree (for example, Faraday). Some encountered difficulties, were discouraged and fell into obscurity, while others, although confronted with adversities, were not disappointed and pursued their goals (for example, Fresnel). Some solved many practical problems, correctly or not, and became famous in their time, but their work


does not impress us today (Lord Kelvin, Helmholtz). Some were involved in establishing new scientific fields and, although their pioneering efforts were not recognized in their time (since few could understand them), are today considered founders of physics (such as Maxwell). Many are known primarily for the part of their work which was ultimately less important in the evolution of physics (Galileo, Newton). Finally, there were direct and significant collaborations between experimentalists and theoreticians (Kirchhoff-Bunsen, Faraday-Maxwell).

6.4 Conclusions
The final conclusion is therefore that, in science, the development of new ideas and concepts does not necessarily follow a specific preferred path. This versatility provides the necessary dynamics and vivacity to continuously produce new results, open new roads, and widen existing ones. This feature, along with the generous funding of research projects and strategic cooperation within research groups, gives today the general impression that science is constantly advancing. At the same time, it creates the illusion, especially to school children and students, that the edifice of physics is complete and uniform and that scientists simply fill in some details here and there and perfect it. The lesson that should be learned from studying the evolution of ideas and concepts in physics up to the end of the 20th century is that, if there is anything certain in research today, it is that our ideas about the world and the laws governing it will change in the future. In order to appreciate this conclusion fully, we simply have to remember what happened to the ideas and concepts that prevailed in classical physics in the late 19th century:
Newton's gravity proved to be merely an approximation of Einstein's general theory of relativity.
Galilean transformations are an approximation of the Lorentz transformations.
Newton's law, F = ma, does not apply within the context of the special theory of relativity, unless it is assumed that mass depends on velocity.
The photoelectric effect showed the dual nature of light: namely, that light can be thought of as consisting either of waves or of particles, depending on the phenomenon being observed.
Maxwell's electromagnetism turned out to be a subset of a more general theory, which includes a force unknown in the 19th century, the weak nuclear force.
Finally, thermodynamics remains until today a singular point in physics. It finds applications in classical physics, the general theory of relativity and quantum mechanics, but no one knows why it describes so successfully phenomena in practically all branches of physics.
One should avoid the misconception that all problems of physics, except the one concerning the second axiom of thermodynamics, have been solved within the framework of the theory of relativity and quantum mechanics. These two theories


were simply put forward in response to problems that remained unresolved at the dawn of the 20th century. Unfortunately, or fortunately, the development of both these theories, as well as the experiments conducted during the 20th century, resulted in more unresolved problems, some of which are the following:
General relativity and quantum mechanics are not compatible.
Astronomical observations suggest the existence of dark matter, i.e. matter of unknown origin, which cannot be observed directly but can be detected by the gravitational attraction it exerts on neighboring bodies. Dark matter might consist of exotic particles or of a class of celestial bodies not yet observed.
Another puzzling fact is that, according to recent astronomical observations, distant galaxies are receding with velocities larger than predicted by standard cosmology, a fact that is presently interpreted by invoking a new force in the universe, related to the existence of the so-called dark energy. If this is true, then the existing forces in nature are five and not four, as stated in all physics textbooks.
So, at the dawn of the 21st century, we are expecting a new revolution in science, similar to the one brought about by quantum mechanics and the theory of relativity. We hope that this revolution will provide a deeper understanding of nature and physics through the integration of all physical theories into a unified theory, which has already been given the appropriate name theory of everything. Will we ever manage to accumulate all the knowledge required to fully understand nature? The question is basically a philosophical one, since it cannot be answered with an experiment and, consequently, it does not belong in the realm of science. However, all the scientific experience accumulated so far makes many scientists believe that the journey of knowledge will never end.

Chapter 7

Organization of Teaching and Research

7.1 Universities Throughout the Centuries


7.1.1 Ancient Greece-Byzantium
Modern universities are the result of many centuries of evolution. We can say that higher education has its roots in the teachings of the natural philosophers of Ionia and Magna Graecia (Fig. 7.1). However, strictly speaking, a master's teaching to his students (for example, Socrates to Plato or Plato to Aristotle) cannot be regarded as a university course, since this teaching was not contained in a formal curriculum. The first institutions that can be thought of as offering university-level education were the philosophical schools of Hellenistic Antiquity. In Athens, during the second century AD, there were four philosophical schools, financed by the Roman emperor of the time, Marcus Aurelius (121 AD–180 AD). The most important of them were Aristotle's Lyceum and Plato's Academy. In the latter, several centuries later, taught Simplicius, who was mentioned in the first part of this book regarding his debate with John Philoponus. These schools had both similarities to and differences from modern universities. Indicatively, we may note the following:
Teaching covered a wide range of subjects, especially in the Lyceum, much wider than that of modern university departments.
Very few books were available, and the existing ones often were written by the founders of the schools.
Research activity, in the sense of generating new knowledge, was nonexistent.
Access to these schools was not free.
These schools did not award any sort of degree.
Those graduating from these schools did not acquire any kind of professional rights.
Philosophical schools were abolished by decree of the Byzantine emperor Justinian in 529 AD.
In Alexandria, during the Hellenistic era, there was a learning institution called
Museum, founded by Ptolemy I in 300 BC upon the recommendation of Demetrios

Fig. 7.1 Digital reconstruction of the Gymnasium of Eudemus, the central gymnasium of the ancient city of Miletus, based on material brought to light by excavations (FHW)

Phalereus (ca. 350 BC–ca. 280 BC), one of the most eminent of Aristotle's disciples. After successive periods of acme and decline, the Museum ceased to exist shortly after 391 AD, when the Byzantine emperor Theodosius I abolished by imperial decree all polytheistic temples. The last Director of the Museum was Theon, father of Hypatia, the first eminent woman mathematician in the history of science. The Museum was the first institution with an organization similar to that of modern universities, as the scientists working there were paid by the state and had access to the hundreds of thousands of books kept in the Library of Alexandria. Until Roman times, the Museum functioned more or less like a contemporary research institute, analogous, strangely enough, to the Institute for Advanced Study in Princeton. The king gathered in the Museum the greatest scientists of the time, to help them pursue their various research activities undistracted. In parallel, however, with their research activities, the Museum's scientists were also teaching classes of students.
During the transitional period of Late Antiquity, the Byzantine emperor Theodosius II (401–450 AD) founded in Constantinople, in 425 AD, a new higher education institution, which originally had 30 professors and a library with 100,000 books. The institution was known by various names, such as the University or Academy of Constantinople, but it was usually called Pandidakterion (Fig. 7.2). Attendance at the Pandidakterion was at some periods free and at others by tuition; the Pandidakterion was reformed several times until the fall of Constantinople to the Turks, when it was abolished by Sultan Mehmed II (1432–1481). Constantine VII Porphyrogenitus (905–959) was one of the Byzantine emperors who, in the mid 10th century AD, expressed an interest in developing the Academy (as the Pandidakterion was called at that time); according to the historians, he


Fig. 7.2 A class assumed to be in the Pandidakterion; miniature from the 12th century John Skylitzes Chronicle (Spanish National Library, photo by V. Tsamakda)

had appointed four professors: Constantine Protospatharios in the chair of philosophy, Alexander, archbishop of Nicaea, in the chair of rhetoric, the patrician Nicephorus in the chair of geometry, and the secretary Gregory in the chair of astronomy.
Many regard the philosophical schools of Athens as the first universities in history, others the Museum of Alexandria, and others the Pandidakterion. The prevailing view, however, is that today's universities should be seen as direct descendants of the medieval universities of Western Europe, since the institutions mentioned above lacked some important characteristics that, as we believe, modern universities have. For example, there wasn't any specialization in these institutions, such as schools, departments or faculties; moreover, they did not award any titles granting professional rights.

7.1.2 Western Europe


In the Roman Empire there weren't higher education institutions similar to those of ancient Greece and Byzantium. The first higher education colleges appeared in Western Europe after the 6th century AD and were founded by the Church, mainly in metropolises and monasteries, for the education of priests and monks. This trend was encouraged by the reform of Pope Gregory VII in the 11th century. At first, these ecclesiastic schools were using as textbooks religious books, especially the Bible, as well as Aristotle's writings. However, the situation changed after the 12th century AD, when the writings of other ancient Greek scholars became available in the West,


in Greek or Arabic. These writings were translated into Latin, then the language of science and religion, and became the official textbooks of the universities of that era. This led to three cycles of studies, organized in a manner similar to that of the ancient Greek philosophical schools. In the first two cycles, seven core courses1 were taught under the title liberal arts,2 because they were addressed to free citizens and not slaves. The study of those seven basic subjects was mandatory for a student in order to be able to proceed to the third, graduating cycle of studies.
The first cycle, called trivium, included grammar, rhetoric and logic (or dialectic). By today's standards, this cycle could be classified as the humanities.
The second cycle, called quadrivium, included arithmetic, geometry, music and astronomy. By today's standards, this cycle could be classified as science.
Finally, after the successful conclusion of the basic studies in liberal arts, the student could proceed to higher level subjects and study philosophy and theology (today we would refer to these as graduate studies).
In addition to these courses, which were directly taken from the curricula of the schools of Late Antiquity, medieval universities soon expanded the scope of the courses offered, introducing three new basic sciences: law, theology and medicine.3
It is known that, perhaps as early as the 9th century, a medical school had been established in Salerno (near the city of Naples in Italy). Universities in the modern sense, offering degrees in law, theology and medicine, were first founded in the 12th century; the first two of them were those of Bologna and Paris. The University of Oxford was founded in the 13th century by professors and students of the University of Paris and, soon after, other universities were also established. The first universities did not have buildings or classrooms. They were simply a group of professors and students, organized, according to the medieval practice, in guilds, called universitas magistrorum et scholarium (group of teachers and students). After a while, this title was reduced to universitas, from which came the modern name university (Fig. 7.3).
Students were living in communes, called colleges (collegia). Originally, teachers bore religious titles: master (in Latin, magister), lecturer and reader (in Latin, lector), regent (in Latin, regens), and the degrees were awarded by the chancellor, who was the representative of the church. In many countries these titles are still in use. The title of professor was similar to the current title doctor and characterized someone with a higher degree; this title was initially awarded
1 A difference between American and British terminology should be noted here: the equivalent of the American term course is, in the UK, a subject. In British terminology a course is a superset of a subject.
2 The term liberal arts is still in use in American universities to differentiate the set of natural sciences and humanities from medicine, agriculture and engineering.
3 Therefore, it should not come as a surprise that many of the first physicists held a degree in medicine, since there was no other relevant degree for those who wanted to study science.


Fig. 7.3 King's College of the University of Cambridge in 1966 (photo by author)

only for theological studies, in which case the full title was professor of sacred theology (sacrae theologiae professor, STP). Then, in the 16th century, the use of this title extended to other subjects and, since the most diligent students remained in universities to teach the younger ones, it became mainly associated with teaching. Only at the end of the 18th century did the title professor acquire the meaning it has today (Table 7.1).

7.2 Research in Europe and the United States


During the last 100 years, the organization and conduct of research has changed radically. Scientists in the 18th and 19th century had a very limited number of collaborators, mainly technicians. For example, Newton worked alone, and so did his rivals, Leibniz in mathematics and Huygens in optics. Faraday's assistant was a retired sergeant, while Maxwell used the help of his wife. But this situation soon began to change. Science became more complex and the need for teamwork imperative. Moreover, after World War I, a significant change took place in science. Until then, the center of the global scientific community was Europe and especially Germany, while in the United States research was considered a waste of time, since American society was mainly focused on economic activities. The war, however, showed that research and entrepreneurial activity are closely linked. For example, by investing in the development and production of new materials a company can realize significant profits. The same is true for the war industry, the food industry, telecommunications, etc. It is not accidental that the largest research center of the 20th century in the United States, Bell Laboratories, was owned by the national telephone company, AT&T, before the government decided to split the company in order to end its monopoly. Up to 2012, seven Nobel prizes in physics had been awarded to thirteen scientists for research work conducted in this research center. Bell Laboratories focused, of course, on applied research, with achievements such as the transistor, the laser, the UNIX operating system, and


Table 7.1 Comparison of medieval universities with modern universities

1. Medieval universities: they were international, in the sense that the teaching language was Latin, something that allowed teachers and students to move within Europe regardless of their nationality.
   Universities from the Renaissance up to the 19th century: the Latin language was gradually replaced by local, national languages; colleges recruited teachers and admitted students mainly from their founding country.

2a. Medieval universities: they were under the trusteeship of the church.
   From the Renaissance onwards: universities are separated from the church.

2b. Medieval universities: there was no freedom of teaching and research; the purpose of studies was to convey the established doctrines and recognized teachings.
   From the Renaissance onwards: the long and harsh religious struggles resulted in the predominance of a liberal spirit and religious tolerance, from which research benefited significantly (see, for example, the cases of Holland and France, in contrast to that of Italy).

2c. From the Renaissance onwards: universities become centers of free teaching and free scientific research.

2d. From the Renaissance onwards: universities become institutions controlled by the state (either public or privately owned), but financially and administratively independent.

3. Medieval universities: teaching was based on scholastic philosophy, following the method of scholastic commentary on the texts and dialectical debate (this explains the form of several of Galileo's books, which were written as dialogues).
   From the Renaissance onwards: scholastic philosophy and the scholastic method were abandoned and new textbooks were introduced, written by contemporary scientists (primarily by the professors themselves).

4. Medieval universities: there were no studies of natural sciences, in the sense that there was no objective recording of natural phenomena or experimental research, nor any attempt to determine the sequence of events (causal relationship) and the laws that govern them.
   From the Renaissance onwards: new subjects are introduced and the old ones gradually lose their importance, resulting in curriculum changes as well. Also, the method of teaching changes, from commenting on old texts and dialectical discussion to lecturing by professors and tutoring by assistants.

the programming language C. There were, however, very important results in basic research as well, one of which was the discovery of the microwave background radiation, the most important experimental evidence supporting the big bang theory. Thus, after World War I the United States funded research enthusiastically, motivating many talented American scientists to engage in research and attracting to the USA some of the best European scientists. As a result, in recent decades, the USA has become an international center of research.
However, during the last 100 years, apart from the organization of research, the structure of universities has changed as well. Up to the late 19th century, the international center of science was Europe; the organization of research and teaching in European universities followed the so-called German system, as in the late 19th century Germany was leading research at an international level. The


key building block of this system was the university laboratory, whose director was a professor with increased administrative responsibilities. The professor was directing and supervising the work of the other members of the laboratory, namely the dozents, assistants and chief-assistants; he was setting the research directions of the team, managing the laboratory's financial matters and, furthermore, he had the exclusive right to teach (together with the dozents) and to supervise doctoral theses. In addition, only professors were entitled to university administrative posts, such as those of a rector or a dean. During the 20th century, and in particular after World War II, the organization of universities changed almost everywhere, and most countries followed the model presented below.
In the early 20th century, the American university model began to influence academic institutions and research in Europe, first in countries with a tradition of liberal ideas, such as the Netherlands and Denmark, and later in countries dominated by bureaucracy, such as the former Soviet Union. The following event, which happened in 1961 during the visit of the great Danish physicist Niels Bohr (Nobel Prize in Physics, 1922) to the Soviet Union, demonstrates the difference in mentality between the scientists of the two countries. During one of the lectures he delivered there, a Soviet physicist asked Bohr how he managed to set up, in a small country such as Denmark, a brilliant school of physicists. Bohr's answer was: Presumably because I was never embarrassed to confess to my students that I'm a fool. By mistake, however, his response was translated in the Russian proceedings as: Presumably because I was never embarrassed to declare to my students that they are fools. When, later, the Russian theoretical physicist Evgeny Lifshitz (Evgeny Mikhailovich Lifshitz, 1915–1985) read Bohr's reply from the official transcript to a gathering of colleagues, a sensation was created in the audience. Lifshitz, then, referring to the original English text, translated Bohr's reply correctly and said that the error was an oversight of the translator. But the great physicist Pyotr Kapitza (Pyotr Leonidovich Kapitza, 1894–1984, Nobel Prize in Physics 1978), who attended the event, commented that the initial translation was not an oversight; it reflected accurately the main difference between the school of Bohr and that of Lev Landau (Lev Davidovich Landau, 1908–1968, Nobel Prize in Physics 1962), of which Lifshitz was a member: while in Denmark Bohr had associates, in the Soviet Union Landau had assistants.
The model upon which teaching and research were organized in the United States became the generally accepted model around the world. The building block of the university is the department, which awards degrees in a specific field. A department may have professors of various ranks, their differences being primarily in salary and tenure and not in teaching and research rights. For example, in Greek universities, faculty members of all levels can teach and supervise doctoral theses, while professors of the upper two levels can be elected to administrative positions, such as department chairman, dean, chancellor and rector. Research is conducted by groups, large or small. The group leader is usually a scientist who has original and interesting ideas for research. Based on these ideas, research proposals are prepared and then submitted for funding to public or private agencies; these agencies periodically issue calls to fund research


proposals, usually in a more or less specific direction. If a research proposal is accepted and funded, graduate students work on the topic of the proposal to prepare their master's and Ph.D. theses.
The research group has a pyramidal organizational structure. The leader is at the top, above the post-docs, who act as intermediaries between the leader and the Ph.D. students, who carry out the bulk of the work. For example, post-docs train Ph.D. students in new research techniques or in the use of equipment and computers, and check their progress regularly. Ph.D. students, in turn, train undergraduate and graduate students, who are working towards their diploma theses.
The authors' list of the publications resulting from the research work of the group contains several names. The order of the names is meaningful. If it is not alphabetical, it suggests that most of the work, and possibly the writing of the paper, has been done by the first author. The rest of the names appear in order of importance of contribution. In most cases, the name of the team leader does not appear first; in its place appear the names of younger researchers, who need support for their future career. The order of names is not the same in all publications of the research group, since, due to the division of the work, each member undertakes specific tasks.
Sometimes a single group is not sufficient to carry out a complex research
program, especially when difficult experiments are involved, requiring the use of
very large instruments, such as modern particle accelerators, large terrestrial or
orbital telescopes, etc. In such a case, dozens, hundreds or even thousands of
names may appear in the authors' list! In this case, it is difficult for someone not familiar with the subject to appreciate fully the contribution of each author to the final result.

7.3 Dissemination of Research Results


7.3.1 Publications
The research results of a scientist may be communicated to the scientific community in many ways. The oldest method, dating from the time of the ancient Greeks, is the publication of a book. This method continued to be used in the Hellenistic period and in the Middle Ages, both in Europe and in the Arab world. So the first modern scientists, who lived during the Renaissance, followed this model. For example, Galileo, who is considered the first scientist of the modern era because he realized that experimentation is a basic research tool, published his results in books. Newton published his results in books as well, but not because of a lack of other methods, since by Newton's time another means to publish research results had appeared, namely the scientific journal. Newton published his results in books because the first scientific article he had submitted to a journal was subjected to harsh criticism from the editor of the journal, who was Hooke. This was mainly the reason why, for a long time, Newton did not submit articles to scientific


journals; instead, he preferred to keep the intermediate results of his work to himself and publish them, when they had attained a level of completeness, in book form. Although many of Newton's contemporaries published their findings in books, several began to use scientific journals. Over time, the second method prevailed and, as a result, from the early 20th century onwards books have been used mainly to summarize the culmination of a scientist's research work and not as a means to present primary information. Instead, the most appropriate and generally accepted way to announce new results became the publication of papers in scientific journals. Still, however, in some cases the established method of publication in refereed journals, which is analyzed in the following paragraphs, leads to injustices similar to the one done to Newton by Hooke.
There are two types of scientific journals: those that publish all papers they receive and those that publish only papers that have been evaluated positively by one or more referees. The referees are not employees of the journal; they are simply scientists working in the same field as the author of the submitted paper, willing to do this evaluation without any fee, for two reasons: first, because they know that in this way they provide a service to the publication system and, second, because it is an honor for a scientist to belong to the body of referees of an acclaimed scientific journal. Refereeing in an acclaimed journal is something that deserves to be written in the curriculum vitae of any scientist. In what follows we explain what we mean by the phrase acclaimed journal.
In assessing the quality of a submitted paper, referees typically consider four
things:
Is the content original, or has it been previously published by other scientists?
Is it correct from a scientific standpoint?
Does it contain enough new knowledge or information?
Is the information presented interesting for other researchers?

For example, it is possible to write a paper on the daily temperature recorded by a thermometer in the garden of my house during the last 30 years. Certainly, no one has presented this information before, it cannot be wrong because it is a simple record of numerical measurements, and it contains a large amount of information. But it is useless to other researchers, since they cannot use my results in the field of their interest, unless I somehow link my results to a field of general interest, such as the climate of the city or region in which I live, global warming, etc.
The quality of a journal is assessed by its impact factor (see below); the quality of a paper is presumed from the impact factor of the journal in which it has been published, as well as from the number of citations to it (by the authors of other papers). Let's start with the second. A publication is considered important if its results have been used by other authors, a fact that can be measured by the number of citations of this publication by other authors. For example, a publication may state that using the formula proposed by Smith (2012), we obtain the following results or Smith (2010) and Jones (2012) worked on one aspect of a certain issue, while we will work on another. For each citation of a paper we have included in our own paper, we provide the information necessary for the

reader to refer to the cited work: the author's name and the year of publication. At the end of our paper we include a complete list of all references contained therein, starting with the name of the author and continuing with the journal title, volume, page and year of publication. Each entry in this list is a citation to a publication of another author. The number of citations to the work of a scientist that is considered small, sufficient or large varies from one discipline to another. It could, however, be said generally that a few hundred citations are definitely adequate, fewer than one hundred are surely not enough, and more than a thousand are certainly many. Some scientists have tens of thousands of citations to their work.
The evaluation of a journal is based on the citations to the papers published in it. This information is available in various electronic databases, such as the following:

Web of Knowledge (http://www.isiwebofknowledge.com)


Scopus (http://www.scopus.com)
ADS (http://adswww.harvard.edu)
Inspire (http://inspirehep.net)

The first two of these are accessible only to subscribers, while the latter two are freely available. Each of the electronic databases consists of a list including the following information:
The data of all papers published each year (authors, journal, volume and page).
All citations to these papers, sorted by the name of the (first) author of the publication being cited.
The so-called impact factor (IF) of each scientific journal published in the world, that is, the average number of citations received by the papers published in this journal during the last 2 years (a minimal sketch of this arithmetic follows the list). Obviously, the higher this number, the higher the standard of the journal, since papers published in this journal have a higher impact than those published in other journals with a lower IF.
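Here is that sketch (in Python). The paper and citation counts below are invented purely for illustration, and real databases apply further refinements to what exactly is counted:

```python
# Simplified IF of a journal for 2013: citations received in 2013 by the
# papers it published in 2011-2012, divided by the number of those papers.
# All numbers below are invented for illustration.
papers_published = {2011: 120, 2012: 130}
citations_received_2013 = {2011: 310, 2012: 250}

impact_factor = (sum(citations_received_2013.values())
                 / sum(papers_published.values()))
print(f"IF(2013) = {impact_factor:.2f}")  # 560 / 250 = 2.24
```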
Therefore, authors obviously prefer to publish their results in journals with a high IF, firstly because it adds points to their curriculum vitae and secondly because their work will be read by more scientists, since these journals are considered by the international scientific community as publishing the most interesting research results. As a result, journals with a high IF receive many papers to be published, many more than can be included in the limited number of pages of each issue; so, the journal editors require their referees to be strict in their evaluations. Consequently, journals with a high IF reject most of the papers submitted to them.

7.3.2 Conferences
Another way of publicizing research results is the presentation of papers at conferences. A hundred years ago, this method was far more effective, because the publication of a paper in a journal was a time-consuming process: the time taken from


the moment of submission of the work to the journal until the arrival of the printed issue in a library was typically more than a year! One has only to contemplate the procedure. In the beginning, the author mailed the manuscript to the journal. There, the editor, a distinguished scientist at a university or research center who often worked without pay, read the manuscript and forwarded it to a suitable referee (or referees; some journals used one referee, others two and a few journals three for each submitted manuscript). Then the manuscript was mailed to the referee, who had to evaluate it within 1 or 2 months. If the referee decided that the manuscript could be published as it was, the paper was accepted without further delays. If the referee decided that the manuscript was of poor quality and had to be rejected, the paper was returned to the author and that was the end of the line. But if the referee proposed certain corrections or improvements, which was usually the case, the editor had to mail the referee's comments to the author, who responded by writing a new, improved (in principle) version of the paper. If the referee judged that the new version was satisfactory, he would consent to its publication. Then, the manuscript was sent to the printing office for typesetting the text (which usually included complex mathematical or chemical formulas and diagrams or photographs) and, after this step was also completed, the journal mailed the proofs to the author, in order to make sure that there weren't any typographical errors. After the final corrections by the author and the approval of the printed version, the article was sent to press, to be included in one of the future issues of the journal. Finally, the issue was mailed to university libraries and research centers. It is obvious that news circulated much faster at conferences, which were often organized during summer vacations. At conferences, any scientist could present the recent results of his research work, in the early years only verbally but later in the form of a poster as well.
In recent decades the situation began to change because, on the one hand, the development of telecommunications and air transport made communication between scientists much easier. On the other hand, the time between the submission of a manuscript and the final release of the journal issue has been shortened significantly, because submission of the manuscript can now be done electronically, and so can the answer of the referee(s). Furthermore, typesetting time has shrunk to zero, because authors send the manuscript written in special electronic forms supplied by the various journals; so, if a manuscript is approved for publication, it is practically ready for printing. Having said that, one might think that the importance of conferences has shrunk. However, exactly the opposite has happened, for a reason that is not obvious from the outset. With the rapid development of science, especially after World War II, the number of scientists, and hence the number of publications produced each year, increased dramatically. To address this extremely large output of scientific publications, and to satisfy the need for publishing papers in completely new areas such as biotechnology, materials science, computing, etc., many new journals appeared on the market. Today (2013), at least one hundred times more papers are published each year than before 1940. This flood of publications reversed the situation that prevailed a century ago. Then, there was a lack of sufficient or timely information, so scientists


participated in conferences to compensate for this problem. Today, the communication of information is both efficient and timely (most journals are available in electronic form, so they can be accessed through the World Wide Web), but scientists do not even have the time to read all the papers related to their narrow scientific field. Conferences now play another role: they allow scientists to advertise their results to their colleagues, who have not had time to read all the relevant publications in the journals! Moreover, personal contact always plays an important role in human relationships. Naturally, collaborations and citations focus more on the work of people we know personally than on the work of people we only know by their name printed at the top of an article.
The information presented at conferences used to be published in conference proceedings. Today, this practice is not so widespread, since every year dozens of conferences are organized on the same subject area and it is practically impossible for all the corresponding presentations to be read by scientists working in the same field. Besides, the effort to write a paper that very few will read is sometimes considered a waste of time. Today, papers presented at conferences are usually published a little later, and in better form, in scientific journals, as a publication in a refereed journal is considered more important than one in conference proceedings, even if the proceedings are refereed.

Further Reading

The interested reader might consult some of the following books for information beyond what is contained in this book.
Asimov, I. (1982). Biographical Encyclopedia of Science & Technology. New York: Doubleday (Online form: OCLC 523479).
Asimov, I. (1985). The History of Physics. New York: Walker Publishing Company.
Butterfield, H. (1997). The Origins of Modern Science, 1300–1800 (revised ed.). New York: Free Press.
Crombie, A. C. (1969). Augustine to Galileo: The History of Science A.D. 400–1650 (revised ed.). London: Penguin.
Duhem, P. (1969). To Save the Phenomena: An Essay on the Idea of Physical Theory from Plato to Galileo. Chicago: University of Chicago Press (Online form: OCLC 681213472).
Einstein, A., & Infeld, L. (1967). The Evolution of Physics. New York: Touchstone (also available in electronic form, free of charge, from: http://archive.org/stream/evolutionofphysi033254mbp/evolutionofphysi033254mbp_djvu.txt).
Farrington, B., & Needham, J. (2000). Greek Science: Its Meaning for Us. Nottingham: Spokesman Books.
Feynman, R. (1994). The Character of Physical Law. New York: Modern Library (seven lectures, also available on YouTube).
Gamow, G. (1985). Thirty Years that Shook Physics: The Story of Quantum Theory. New York: Dover.
Gillispie, C. C. (1966). The Edge of Objectivity. Princeton: Princeton University Press.
Gleick, J. (2008). Chaos: Making a New Science. London: Penguin.
Heath, T. L. (1991). Greek Astronomy. New York: Dover.
Heisenberg, W. (1958). The Physicist's Conception of Nature. Westport: Greenwood Press.
Heisenberg, W., & Davies, P. (2000). Physics and Philosophy: The Revolution in Modern Science. London: Penguin Classics.
Kirk, G. S., Raven, J. E., & Schofield, M. (1983). The Presocratic Philosophers: A Critical History with a Selection of Texts (2nd ed.). Cambridge: Cambridge University Press.
Neugebauer, O. (1969). The Exact Sciences in Antiquity. New York: Dover.
Rossi, P. (2001). The Birth of Modern Science. Hoboken: Wiley-Blackwell.
Schneer, C. J. (1984). The Evolution of Physical Science: Major Ideas from Earliest Times to the Present. Lanham: University Press of America.
Segrè, E. (2007). From Falling Bodies to Radio Waves: Classical Physicists and Their Discoveries. New York: Dover.
Segrè, E. (2007). From X-rays to Quarks: Modern Physicists and Their Discoveries. New York: Dover.
Sorabji, R. (Ed.). (1987). Philoponus and the Rejection of Aristotelian Science. Ithaca: Cornell University Press.


For those with a good background in mathematics, a very interesting book is the classic:
Whittaker, E. T. (1989). A History of the Theories of Aether & Electricity: The Classical Theories / The Modern Theories (Vol. 2). New York: Dover (Vol. 1 is also available in electronic form, free of charge, from: http://archive.org/details/historyoftheorie00whitrich).
