Contents

1 Lev Landau 1
1.1 Life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 Early years . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Leningrad and Europe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.3 National Scientific Center Kharkiv Institute of Physics and Technology, Kharkiv . . . . . . 1
1.1.4 Institute for Physical Problems, Moscow . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.5 Scientific achievements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.6 Personal life and views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.7 Last years . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.8 Death . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Legacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Landau's List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 In popular culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5.1 Landau and Lifshitz Course of Theoretical Physics . . . . . . . . . . . . . . . . . . . . . . 4
1.5.2 Other . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 Felix Bloch 7
2.1 Life and work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Footnotes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.5 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

3 Quantum thermodynamics 9
3.1 A dynamical view of quantum thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1.1 The emergence of time derivative of first law of thermodynamics . . . . . . . . . . . . . . 9
3.1.2 The emergence of the second law of thermodynamics . . . . . . . . . . . . . . . . . . . . 10


3.1.3 The Quantum and Thermodynamic Adiabatic Conditions and Quantum Friction . . . . . . 11
3.1.4 The emergence of the dynamical version of the third law of thermodynamics . . . . . . . . 11
3.2 Typicality as a source of emergence of thermodynamical phenomena . . . . . . . . . . . . . . . . 12
3.3 Quantum thermodynamics resource theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.5 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

4 Master equation 14
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.1.1 Detailed description of the matrix A, and properties of the system . . . . . . . . . . . . . 14
4.1.2 Examples of master equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2 Quantum master equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.5 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

5 Markov property 17
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.3 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.4 Alternative formulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.5 Strong Markov property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.6 In forecasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.7 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

6 Lindblad equation 19
6.1 Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6.2 Harmonic oscillator example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6.5 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

7 Amir Caldeira 21
7.1 Selected Scientific Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
7.2 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

8 Anthony James Leggett 22
8.1 Early life and education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
8.2 Career . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
8.3 Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
8.4 Awards and honours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
8.5 Personal life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
8.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

9 Nitrogen-vacancy center 25
9.1 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
9.2 Production . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
9.3 Basic optical properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
9.4 Energy level structure and its manipulation by external fields . . . . . . . . . . . . . . . . . . . 26
9.5 Spin dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
9.6 Potential applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
9.7 Historical remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
9.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
9.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

10 Quantum mechanics 32
10.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
10.2 Mathematical formulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
10.3 Mathematically equivalent formulations of quantum mechanics . . . . . . . . . . . . . . . . . . . . 36
10.4 Interactions with other scientific theories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
10.4.1 Quantum mechanics and classical physics . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
10.4.2 Copenhagen interpretation of quantum versus classical kinematics . . . . . . . . . . . . . . 38
10.4.3 General relativity and quantum mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . 38
10.4.4 Attempts at a unified field theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
10.5 Philosophical implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
10.6 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
10.6.1 Electronics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
10.6.2 Cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
10.6.3 Quantum computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
10.6.4 Macroscale quantum effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
10.6.5 Quantum theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
10.7 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
10.7.1 Free particle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
10.7.2 Step potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
10.7.3 Rectangular potential barrier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
10.7.4 Particle in a box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
10.7.5 Finite potential well . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
10.7.6 Harmonic oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
10.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
10.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
10.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
10.11 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
10.12 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

11 Markov chain 49
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
11.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
11.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
11.3.1 Gambling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
11.3.2 A birth-death process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
11.3.3 A non-Markov example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
11.4 Markov property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
11.4.1 The general case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
11.4.2 For discrete-time Markov chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
11.5 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
11.5.1 Discrete-time Markov chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
11.5.2 Continuous-time Markov chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
11.6 Transient evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
11.7 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
11.7.1 Reducibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
11.7.2 Periodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
11.7.3 Transience and recurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
11.7.4 Ergodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
11.7.5 Steady-state analysis and limiting distributions . . . . . . . . . . . . . . . . . . . . . . . . 56
11.8 Finite state space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
11.8.1 Stationary distribution relation to eigenvectors and simplices . . . . . . . . . . . . . . . . . 57
11.8.2 Time-homogeneous Markov chain with a finite state space . . . . . . . . . . . . . . . . . 57
11.8.3 Convergence speed to the stationary distribution . . . . . . . . . . . . . . . . . . . . . . . 58
11.9 Reversible Markov chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
11.9.1 Closest reversible Markov chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
11.10 Bernoulli scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
11.11 General state space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
11.11.1 Harris chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
11.11.2 Locally interacting Markov chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
11.12 Markovian representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
11.13 Transient behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
11.14 Stationary distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
11.14.1 Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
11.14.2 Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
11.15 Hitting times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
11.15.1 Expected hitting times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
11.16 Time reversal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
11.17 Embedded Markov chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
11.18 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
11.18.1 Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
11.18.2 Chemistry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
11.18.3 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
11.18.4 Speech recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
11.18.5 Information and computer science . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
11.18.6 Queueing theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
11.18.7 Internet applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
11.18.8 Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
11.18.9 Economics and nance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
11.18.10 Social sciences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
11.18.11 Mathematical biology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
11.18.12 Genetics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
11.18.13 Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
11.18.14 Music . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
11.18.15 Baseball . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
11.18.16 Markov text generators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
11.18.17 Bioinformatics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
11.19 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
11.20 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
11.21 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
11.22 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
11.23 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

12 Density matrix 71
12.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
12.2 Pure and mixed states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
12.2.1 Example: Light polarization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
12.2.2 Mathematical description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
12.3 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
12.4 Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
12.5 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
12.6 The von Neumann equation for time evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
12.7 Quantum Liouville, Moyal's equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
12.8 Composite systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
12.9 C*-algebraic formulation of states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
12.10 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
12.11 Notes and references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

13 Matrix (mathematics) 77
13.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
13.1.1 Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
13.2 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
13.3 Basic operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
13.3.1 Addition, scalar multiplication and transposition . . . . . . . . . . . . . . . . . . . . . . . 79
13.3.2 Matrix multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
13.3.3 Row operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
13.3.4 Submatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
13.4 Linear equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
13.5 Linear transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
13.6 Square matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
13.6.1 Main types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
13.6.2 Main operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
13.7 Computational aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
13.8 Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
13.9 Abstract algebraic aspects and generalizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
13.9.1 Matrices with more general entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
13.9.2 Relationship to linear maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
13.9.3 Matrix groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
13.9.4 Infinite matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
13.9.5 Empty matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
13.10 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
13.10.1 Graph theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
13.10.2 Analysis and geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
13.10.3 Probability theory and statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
13.10.4 Symmetries and transformations in physics . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.10.5 Linear combinations of quantum states . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.10.6 Normal modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.10.7 Geometrical optics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13.10.8 Electronics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
13.11 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
13.11.1 Other historical usages of the word "matrix" in mathematics . . . . . . . . . . . . . . . 90
13.12 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
13.13 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
13.14 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
13.14.1 Physics references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
13.14.2 Historical references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
13.15 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

14 Eigenvalues and eigenvectors 96
14.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
14.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
14.3 Eigenvalues and eigenvectors of matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
14.3.1 Eigenvalues and the characteristic polynomial . . . . . . . . . . . . . . . . . . . . . . . . 98
14.3.2 Algebraic multiplicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
14.3.3 Eigenspaces, geometric multiplicity, and the eigenbasis for matrices . . . . . . . . . . . . . 99
14.3.4 Additional properties of eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
14.3.5 Left and right eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
14.3.6 Diagonalization and the eigendecomposition . . . . . . . . . . . . . . . . . . . . . . . . . 101
14.3.7 Variational characterization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
14.3.8 Matrix examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
14.4 Eigenvalues and eigenfunctions of differential operators . . . . . . . . . . . . . . . . . . . . . 104
14.4.1 Derivative operator example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
14.5 General definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
14.5.1 Eigenspaces, geometric multiplicity, and the eigenbasis . . . . . . . . . . . . . . . . . . . . 104
14.5.2 Zero vector as an eigenvector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
14.5.3 Spectral theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
14.5.4 Associative algebras and representation theory . . . . . . . . . . . . . . . . . . . . . . . . 105
14.6 Dynamic equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
14.7 Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
14.7.1 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
14.7.2 Eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
14.8 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
14.8.1 Eigenvalues of geometric transformations . . . . . . . . . . . . . . . . . . . . . . . . . . 106
14.8.2 Schrödinger equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
14.8.3 Molecular orbitals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
14.8.4 Geology and glaciology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
14.8.5 Principal component analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
14.8.6 Vibration analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
14.8.7 Eigenfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
14.8.8 Tensor of moment of inertia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
14.8.9 Stress tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
14.8.10 Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
14.8.11 Basic reproduction number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
14.9 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
14.10 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
14.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
14.12 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
14.12.1 Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
14.12.2 Demonstration applets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

15 Positive-definite matrix 114
15.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
15.2 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
15.3 Characterizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
15.4 Quadratic forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
15.5 Simultaneous diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
15.6 Negative-definite, semidefinite and indefinite matrices . . . . . . . . . . . . . . . . . . . . . . 116
15.6.1 Negative-definite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
15.6.2 Positive-semidefinite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
15.6.3 Negative-semidefinite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
15.6.4 Indefinite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
15.7 Further properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
15.8 Block matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
15.9 On the definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
15.9.1 Consistency between real and complex definitions . . . . . . . . . . . . . . . . . . . . . 117
15.9.2 Extension for non-symmetric matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
15.10 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
15.11 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
15.12 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
15.13 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

16 Cambridge University Press 120
16.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
16.2 Governance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
16.3 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
16.3.1 Academic publishing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
16.3.2 Cambridge English Language Teaching (ELT) . . . . . . . . . . . . . . . . . . . . . . . . 122
16.3.3 Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
16.4 Electronic and digital developments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
16.5 Controversies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
16.5.1 Alms for Jihad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
16.5.2 Cambridge University Press, et al. v. Becker et al. . . . . . . . . . . . . . . . . . . . . . . 123
16.6 Community work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
16.7 Open access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
16.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
16.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
16.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
16.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

17 Hamiltonian (quantum mechanics) 125
17.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
17.2 The Schrödinger Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
17.2.1 One particle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
17.2.2 Many particles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
17.3 Schrödinger equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
17.4 Dirac formalism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
17.5 Expressions for the Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
17.5.1 General forms for one particle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

17.5.2 Free particle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127


17.5.3 Constant-potential well . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
17.5.4 Simple harmonic oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
17.5.5 Rigid rotor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
17.5.6 Electrostatic or coulomb potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
17.5.7 Electric dipole in an electric field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
17.5.8 Magnetic dipole in a magnetic field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
17.5.9 Charged particle in an electromagnetic field . . . . . . . . . . . . . . . . . . . . . . . . . 129
17.6 Energy eigenket degeneracy, symmetry, and conservation laws . . . . . . . . . . . . . . . . . . . . 129
17.7 Hamilton's equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
17.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
17.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

18 Quantum state 131


18.1 Conceptual description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
18.1.1 Pure states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
18.1.2 Schrödinger picture vs. Heisenberg picture . . . . . . . . . . . . . . . . . . . . . . . . . . 132
18.2 Formalism in quantum physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
18.2.1 Pure states as rays in a Hilbert space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
18.2.2 Bra–ket notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
18.2.3 Spin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
18.2.4 Many-body states and particle statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
18.2.5 Basis states of one-particle systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
18.2.6 Superposition of pure states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
18.2.7 Mixed states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
18.3 Interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
18.4 Mathematical generalizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
18.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
18.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
18.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
18.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

19 Hermitian matrix 138


19.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
19.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
19.3 Further properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
19.4 Rayleigh quotient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
19.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
19.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
19.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

20 Trace (linear algebra) 141



20.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141


20.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
20.2.1 Basic properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
20.2.2 Trace of a product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
20.2.3 Other properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
20.3 Exponential trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
20.4 Trace of a linear operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
20.4.1 Eigenvalue relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
20.4.2 Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
20.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
20.6 Lie algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
20.6.1 Bilinear forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
20.7 Inner product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
20.8 Generalizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
20.9 Coordinate-free definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
20.9.1 Dual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
20.10 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
20.11 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
20.12 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

21 Symmetric matrix 147


21.1 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
21.1.1 Real symmetric matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
21.1.2 Complex symmetric matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
21.2 Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
21.3 Hessian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
21.4 Symmetrizable matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
21.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
21.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
21.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
21.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
21.9 Text and image sources, contributors, and licenses . . . . . . . . . . . . . . . . . . . . . . . . . . 150
21.9.1 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
21.9.2 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
21.9.3 Content license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Chapter 1

Lev Landau

Lev Davidovich Landau (Russian: Лев Давидович Ландау; IPA: [ˈlʲef dɐˈvʲidəvʲɪtɕ lɐnˈda.u]; January 22 [O.S. January 9] 1908 – 1 April 1968) was a Soviet physicist who made fundamental contributions to many areas of theoretical physics. His accomplishments include the independent co-discovery of the density matrix method[1] in quantum mechanics (alongside John von Neumann), the quantum mechanical theory of diamagnetism, the theory of superfluidity, the theory of second-order phase transitions, the Ginzburg–Landau theory of superconductivity, the theory of Fermi liquid, the explanation of Landau damping in plasma physics, the Landau pole in quantum electrodynamics, the two-component theory of neutrinos, and Landau's equations for S-matrix singularities.[2] He received the 1962 Nobel Prize in Physics for his development of a mathematical theory of superfluidity that accounts for the properties of liquid helium II at a temperature below 2.17 K (−270.98 °C).[3]

1.1 Life

1.1.1 Early years

Landau was born on 22 January 1908 to Jewish parents[3][4][5][6] in Baku, Azerbaijan, in what was then the Russian Empire. Landau's father was an engineer with the local oil industry and his mother was a doctor. He learned to differentiate at age 12 and to integrate at age 13. Landau graduated in 1920 at age 13 from gymnasium. His parents considered him too young to attend university, so for a year he attended the Baku Economical Technical School. In 1922, at age 14, he matriculated at the Baku State University, studying in two departments simultaneously: the Departments of Physics and Mathematics, and the Department of Chemistry. Subsequently, he ceased studying chemistry, but remained interested in the field throughout his life.

1.1.2 Leningrad and Europe

In 1924, he moved to the main centre of Soviet physics at the time: the Physics Department of Leningrad State University. In Leningrad, he first made the acquaintance of theoretical physics and dedicated himself fully to its study, graduating in 1927. Landau subsequently enrolled for post-graduate studies at the Leningrad Physico-Technical Institute, where he eventually received a doctorate in Physical and Mathematical Sciences in 1934.[7]

Landau got his first chance to travel abroad during the period 1929–1931, on a Soviet government (People's Commissariat for Education) travelling fellowship supplemented by a Rockefeller Foundation fellowship. By that time he was fluent in German and French and could communicate in English.[8] He later improved his English and learned Danish.[9]

After brief stays in Göttingen and Leipzig, he went to Copenhagen on 8 April 1930 to work at Niels Bohr's Institute for Theoretical Physics. He stayed there till 3 May of the same year. After the visit, Landau always considered himself a pupil of Niels Bohr, and Landau's approach to physics was greatly influenced by Bohr. After his stay in Copenhagen, he visited Cambridge (mid-1930), where he worked with P. A. M. Dirac,[10] Copenhagen (20 September to 22 November 1930),[11] and Zurich (December 1930 to January 1931), where he worked with Wolfgang Pauli.[10] From Zurich Landau went back to Copenhagen for the third time[12] and stayed there from 25 February till 19 March 1931 before returning to Leningrad the same year.[13]

1.1.3 National Scientific Center Kharkiv Institute of Physics and Technology, Kharkiv

Between 1932 and 1937 he headed the Department of Theoretical Physics at the National Scientific Center Kharkiv Institute of Physics and Technology, and lectured at the University of Kharkiv and the Kharkiv Polytechnical Institute. Apart from his theoretical accomplishments, Landau was the principal founder of a great tradition of theoretical physics in Kharkiv, Soviet Union, sometimes referred to as the Landau school. In Kharkiv, he and his friend and former student, Evgeny Lifshitz, began writing the Course of Theoretical Physics, ten volumes that together span the whole of the subject and are still widely used as graduate-level physics texts.
During the Great Purge, Landau was investigated within the UPTI Affair in Kharkiv, but he managed to leave for Moscow to take up a new post.[14]

Landau developed a famous comprehensive exam called the "Theoretical Minimum" which students were expected to pass before admission to the school. The exam covered all aspects of theoretical physics, and between 1934 and 1961 only 43 candidates passed, but those who did later became quite notable theoretical physicists.[15][16]

In 1932, he computed the Chandrasekhar limit;[17] however, he did not apply it to white dwarf stars.

1.1.4 Institute for Physical Problems, Moscow

Landau was the head of the Theoretical Division at the Institute for Physical Problems from 1937 until 1962.[18] Landau was arrested on 27 April 1938, because he had compared the Stalinist dictatorship with that of Hitler,[14][19] and was held in the NKVD's Lubyanka prison until his release on 29 April 1939, after the head of the institute, Pyotr Kapitsa, an experimental low-temperature physicist, wrote a letter to Joseph Stalin, personally vouching for Landau's behavior and threatening to quit the institute if Landau were not released.[20] After his release, Landau discovered how to explain Kapitsa's superfluidity using sound waves, or phonons, and a new excitation called a roton.[14]

Landau led a team of mathematicians supporting Soviet atomic and hydrogen bomb development. Landau calculated the dynamics of the first Soviet thermonuclear bomb, including predicting the yield. For this work he received the Stalin Prize in 1949 and 1953, and was awarded the title "Hero of Socialist Labour" in 1954.[14]

His students included Lev Pitaevskii, Alexei Abrikosov, Evgeny Lifshitz, Lev Gor'kov, Isaak Khalatnikov, Roald Sagdeev and Isaak Pomeranchuk.

1.1.5 Scientific achievements

Landau's accomplishments include the independent co-discovery of the density matrix method in quantum mechanics (alongside John von Neumann), the quantum mechanical theory of diamagnetism, the theory of superfluidity, the theory of second-order phase transitions, the Ginzburg–Landau theory of superconductivity, the theory of Fermi liquid, the explanation of Landau damping in plasma physics, the Landau pole in quantum electrodynamics, the two-component theory of neutrinos, and Landau's equations for S-matrix singularities.

He received the 1962 Nobel Prize in Physics for his development of a mathematical theory of superfluidity that accounts for the properties of liquid helium II at a temperature below 2.17 K (−270.98 °C).[21]

1.1.6 Personal life and views

[Image: Landau family in 1910]

In 1937 Landau married Kora T. Drobanzeva, from Kharkiv;[22] their son Igor was born in 1946. Landau believed in "free love" rather than monogamy, and encouraged his wife and his students to practice "free love"; his wife was not enthusiastic.[14] During his life, Landau was admitted involuntarily six times to the Kashchenko psychiatric hospital.[23]

He was an atheist.[24][25] In 1957 a lengthy report to the CPSU Central Committee by the KGB recorded Landau's views on the 1956 Hungarian Uprising, Lenin, and what he termed "red fascism".[26]

1.1.7 Last years

On 7 January 1962, Landau's car collided with an oncoming truck. He was severely injured and spent two months in a coma. Although Landau recovered in many ways, his scientific creativity was destroyed,[18] and he never
returned fully to scientific work. His injuries prevented him from accepting the 1962 Nobel Prize for physics in person.[27]

Throughout his whole life Landau was known for his sharp humor, which can be illustrated by the following dialogue with a psychiatrist (P), who tried to test for possible brain damage while Landau (L) was recovering from the car crash:[9]

    P: "Please draw me a circle."
    L draws a cross.
    P: "Hm, now draw me a cross."
    L draws a circle.
    P: "Landau, why don't you do what I ask?"
    L: "If I did, you might come to think I've become mentally retarded."

In 1965 former students and co-workers of Landau founded the Landau Institute for Theoretical Physics, located in the town of Chernogolovka near Moscow, and led for the following three decades by Isaak Markovich Khalatnikov.

In June 1965, Lev Landau and Yevsei Liberman published a letter in the New York Times, stating that as Soviet Jews they opposed U.S. intervention on behalf of the Student Struggle for Soviet Jewry.[28]

1.1.8 Death

Landau died on 1 April 1968, aged 60, from complications of the injuries sustained in the car accident he was involved in six years earlier. He was buried at the Novodevichy cemetery.[29][30]

1.2 Legacy

[Image: A commemorative Russian silver coin dedicated to the 100th anniversary of Landau's birth]

Two celestial objects are named in his honour:

• the minor planet 2142 Landau.[31]
• the lunar crater Landau.

1.3 Landau's List

[Image: Landau in 1962,[9] on a 2010 Ukrainian stamp]

Landau kept a list of names of physicists which he ranked on a logarithmic scale of productivity ranging from 0 to 5.[32] The highest ranking, 0, was assigned to Isaac Newton. Albert Einstein was ranked 0.5. A rank of 1 was awarded to the founding fathers of quantum mechanics, Niels Bohr, Werner Heisenberg, Paul Dirac and Erwin Schrödinger, and others. Landau ranked himself as a 2.5 but later promoted himself to a 2. David Mermin, writing about Landau, referred to the scale, and ranked himself in the fourth division, in the article "My Life with Landau: Homage of a 4.5 to a 2".[32][33]
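Because the scale is logarithmic, a step of one unit is usually read as a factor of ten in productivity, so a rank-2 physicist would be credited with roughly one hundredth of Newton's output. The sketch below is purely illustrative; the function name and the factor-of-ten reading are assumptions, not something stated in the article:

```python
# Illustrative sketch of Landau's logarithmic scale (assumption: one unit = 10x).
landau_scale = {
    "Isaac Newton": 0.0,
    "Albert Einstein": 0.5,
    "Niels Bohr": 1.0,
    "Lev Landau (initially)": 2.5,
    "Lev Landau (later)": 2.0,
    "David Mermin (self-ranked)": 4.5,
}

def relative_productivity(rank, reference_rank=0.0):
    """Productivity relative to a reference rank, one unit being a factor of ten."""
    return 10.0 ** (reference_rank - rank)

for name, rank in sorted(landau_scale.items(), key=lambda kv: kv[1]):
    print(f"{name}: rank {rank}, about {relative_productivity(rank):.6f} of Newton")
```

On this reading, Landau's self-promotion from 2.5 to 2 claims roughly a threefold (10^0.5) increase in relative standing.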
1.4 In popular culture

• The Russian television film My Husband the Genius (an unofficial translation of the Russian title), released in 2008, tells the biography of Landau (played by Daniil Spivakovsky), mostly focusing on his private life. It was generally panned by critics. People who had personally met Landau, including the famous Russian scientist Vitaly Ginzburg, said that the film was not only terrible but also false in historical facts.

• Another film about Landau, Dau, is directed by Ilya Khrzhanovsky with non-professional actor Teodor Currentzis (an orchestra conductor) as Landau.

1.5 Works

1.5.1 Landau and Lifshitz Course of Theoretical Physics

Main article: Course of Theoretical Physics

• L.D. Landau, E.M. Lifshitz (1976). Mechanics. Vol. 1 (3rd ed.). Butterworth–Heinemann. ISBN 978-0-7506-2896-9.
• L.D. Landau; E.M. Lifshitz (1975). The Classical Theory of Fields. Vol. 2 (4th ed.). Butterworth–Heinemann. ISBN 978-0-7506-2768-9.
• L.D. Landau; E.M. Lifshitz (1977). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (3rd ed.). Pergamon Press. ISBN 978-0-08-020940-1. 2nd ed. (1965) at archive.org
• V.B. Berestetskii, E.M. Lifshitz, L.P. Pitaevskii (1982). Quantum Electrodynamics. Vol. 4 (2nd ed.). Butterworth–Heinemann. ISBN 978-0-7506-3371-0.
• L.D. Landau; E.M. Lifshitz (1980). Statistical Physics, Part 1. Vol. 5 (3rd ed.). Butterworth–Heinemann. ISBN 978-0-7506-3372-7.
• L.D. Landau; E.M. Lifshitz (1987). Fluid Mechanics. Vol. 6 (2nd ed.). Butterworth–Heinemann. ISBN 978-0-08-033933-7.
• L.D. Landau; E.M. Lifshitz (1986). Theory of Elasticity. Vol. 7 (3rd ed.). Butterworth–Heinemann. ISBN 978-0-7506-2633-0.
• L.D. Landau; E.M. Lifshitz; L.P. Pitaevskii (1984). Electrodynamics of Continuous Media. Vol. 8 (1st ed.). Butterworth–Heinemann. ISBN 978-0-7506-2634-7.
• L.P. Pitaevskii; E.M. Lifshitz (1980). Statistical Physics, Part 2. Vol. 9 (1st ed.). Butterworth–Heinemann. ISBN 978-0-7506-2636-1.
• L.P. Pitaevskii; E.M. Lifshitz (1981). Physical Kinetics. Vol. 10 (1st ed.). Pergamon Press. ISBN 978-0-7506-2635-4.

1.5.2 Other

• L.D. Landau, A.J. Akhiezer, E.M. Lifshitz (1967). General Physics, Mechanics and Molecular Physics. Pergamon Press. ISBN 978-0-08-009106-8.
• L.D. Landau; A.I. Kitaigorodsky (1978). Physics for Everyone. Mir Publishers Moscow.
• L.D. Landau; Ya. Smorodinsky (2011). Lectures on Nuclear Theory. Dover Publications.

A complete list of Landau's works appeared in 1998 in the Russian journal Physics-Uspekhi.[34]

1.6 See also

• Landau–Hopf theory of turbulence
• Landau–Lifshitz–Gilbert equation
• Landau–Lifshitz model
• Landau (crater)
• Landau theory of second order phase transitions
• Ginzburg–Landau theory of superconductivity
• Landau quantization, Landau levels
• Landau damping
• List of Jewish Nobel laureates

1.7 References

[1] Schlüter, Michael; Lu Jeu Sham (1982). "Density functional theory". Physics Today. 35 (2): 36. Bibcode:1982PhT....35b..36S. doi:10.1063/1.2914933.

[2] Shifman, M., ed. (2013). Under the Spell of Landau: When Theoretical Physics was Shaping Destinies. World Scientific. ISBN 978-981-4436-56-4. doi:10.1142/8641.

[3] Kapitza, P. L.; Lifshitz, E. M. (1969). "Lev Davydovitch Landau 1908–1968". Biographical Memoirs of Fellows of the Royal Society. 15: 140–158. doi:10.1098/rsbm.1969.0007.

[4] Martin Gilbert, The Jews in the Twentieth Century: An Illustrated History, Schocken Books, 2001, ISBN 0805241906, p. 284
[5] Frontiers of physics: proceedings of the Landau Memorial Conference, Tel Aviv, Israel, 6–10 June 1988 (Pergamon Press, 1990) ISBN 0080369391, pp. 13–14

[6] Edward Teller, Memoirs: A Twentieth Century Journey In Science And Politics, Basic Books 2002, ISBN 0738207780, p. 124

[7] František Janouch, Lev Landau: A Portrait of a Theoretical Physicist, 1908–1988, Research Institute for Physics, 1988, p. 17.

[8] Rumer, Yuriy. berkovich-zametki.com

[9] Bessarab, Maya (1971). Moscow.

[10] Mehra, Jagdish (2001) The Golden Age of Theoretical Physics, Boxed Set of 2 Volumes, World Scientific, p. 952. ISBN 9810243421.

[11] During this period Landau visited Copenhagen three times: 8 April to 3 May 1930, from 20 September to 22 November 1930, and from 25 February to 19 March 1931 (see Landau Lev biography, MacTutor History of Mathematics).

[12] Sykes, J. B. (2013) Landau: The Physicist and the Man: Recollections of L. D. Landau, Elsevier, p. 81. ISBN 9781483286884.

[13] Haensel, P.; Potekhin, A.Y. and Yakovlev, D.G. (2007) Neutron Stars 1: Equation of State and Structure, Springer Science & Business Media, p. 2. ISBN 0387335439.

[14] Gennady Gorelik, Scientific American 1997, The Top Secret Life of Lev Landau

[15] Blundell, Stephen J. (2009). Superconductivity: A Very Short Introduction. Oxford U. Press. p. 67. ISBN 9780191579097.

[16] Ioffe, Boris L. (25 April 2002). "Landau's Theoretical Minimum, Landau's Seminar, ITEP in the beginning of the 1950s". arxiv.org. arXiv:hep-ph/0204295.

[17] On the Theory of Stars, in Collected Papers of L. D. Landau, ed. and with an introduction by D. ter Haar, New York: Gordon and Breach, 1965; originally published in Phys. Z. Sowjet. 1 (1932), 285.

[18] Dorozynski, Alexander (1965). The Man They Wouldn't Let Die.

[19] Peter Kapitza Memorial Museum-Study: a biographical sketch of Academician Kapitza.

[20] Richard Rhodes, Dark Sun: The Making of the Hydrogen Bomb, pub Simon & Schuster, 1995, ISBN 0684824140, p. 33.

[21] "Lev Davidovich Landau, Soviet physicist and Nobel laureate". Physics Today. 57 (2): 62. 2004. doi:10.1063/1.2408530.

[22] Petr Leonidovich Kapitsa, Experiment, Theory, Practice: Articles and Addresses, Springer, 1980, ISBN 9027710619, p. 329.

[23] Mishina, Irina (17 December 2012). "[Dual personalities]". Versiya (in Russian). Retrieved 3 March 2014.

[24] Schaefer, Henry F. (2003). Science and Christianity: Conflict Or Coherence?. The Apollos Trust. p. 9. ISBN 9780974297507. "I present here two examples of notable atheists. The first is Lev Landau, the most brilliant Soviet physicist of the twentieth century."

[25] "Lev Landau". Soylent Communications. 2012. Retrieved 7 May 2013.

[26] "19 December 1957* (no number)". The Bukovsky Archives.

[27] Nobel Presentation speech by Professor I. Waller, member of the Swedish Academy of Sciences. Nobelprize.org. Retrieved on 28 January 2012.

[28] Yaacov Ro'i, The Struggle for Soviet Jewish Emigration, 1948–1967, Cambridge University Press 2003, ISBN 0521522447, p. 199

[29] "Lev Davidovich Landau". Find a Grave. Retrieved 28 January 2012.

[30] Obelisk at the Novodevichye Cemetery. novodevichye.com (26 October 2008). Retrieved on 28 January 2012.

[31] Schmadel, Lutz D. (2003). Dictionary of Minor Planet Names (5th ed.). Springer Verlag. p. 174. ISBN 3-540-00238-3.

[32] Hey, Tony (1997). Einstein's Mirror. Cambridge University Press. p. 1. ISBN 0-521-43532-3.

[33] Mitra, Asoke; Ramlo, Susan; Dharamsi, Amin; Dolan, Richard; Smolin, Lee (2006). "New Einsteins Need Positive Environment, Independent Spirit". Physics Today. 59 (11): 10. Bibcode:2006PhT....59k..10H. doi:10.1063/1.2435630.

[34] "Complete list of L D Landau's works". Phys. Usp. 41 (6): 621–623. June 1998. Bibcode:1998PhyU...41..621. doi:10.1070/PU1998v041n06ABEH000413.

1.8 Further reading

Books

• Dorozynski, Alexander (1965). The Man They Wouldn't Let Die. Secker and Warburg. ASIN B0006DC8BA. (After Landau's 1962 car accident, the physics community around him rallied to attempt to save his life. They managed to prolong his life until 1968.)

• Janouch, Frantisek (1979). Lev D. Landau: His life and work. CERN. ASIN B0007AUCL0.
• Khalatnikov, I.M., ed. (1989). Landau. The physicist and the man. Recollections of L.D. Landau. Sykes, J.B. (trans.). Pergamon Press. ISBN 0-08-036383-0.

• Kojevnikov, Alexei B. (2004). Stalin's Great Science: The Times and Adventures of Soviet Physicists. History of Modern Physical Sciences. Imperial College Press. ISBN 1-86094-420-5.

• Landau-Drobantseva, Kora (1999). Professor Landau: How We Lived (in Russian). AST. ISBN 5-8159-0019-2.

• Shifman, M., ed. (2013). Under the Spell of Landau: When Theoretical Physics was Shaping Destinies. World Scientific. ISBN 978-981-4436-56-4. doi:10.1142/8641.

Articles

• Karl Hufbauer, "Landau's youthful sallies into stellar theory: Their origins, claims, and receptions", Historical Studies in the Physical and Biological Sciences, 37 (2007), 337–354.

• "As a student, Landau dared to correct Einstein in a lecture". Global Talent News.

• O'Connor, John J.; Robertson, Edmund F., "Lev Landau", MacTutor History of Mathematics archive, University of St Andrews.

• "Lev Davidovich Landau". Nobel-Winners.

• "Landau's Theoretical Minimum, Landau's Seminar, ITEP in the Beginning of the 1950s" by Boris L. Ioffe, Concluding talk at the workshop QCD at the Threshold of the Fourth Decade/Ioffefest.

• EJTP Landau Issue 2008.

• Ammar Sakaji and Ignazio Licata (eds), Lev Davidovich Landau and his Impact on Contemporary Theoretical Physics, Nova Science Publishers, New York, 2009, ISBN 978-1-60692-908-7.

• Gennady Gorelik, "The Top Secret Life of Lev Landau", Scientific American, Aug. 1997, vol. 277(2), 53–57.

1.9 External links

• Media related to Lev Landau at Wikimedia Commons
Chapter 2

Felix Bloch

This article is about the Swiss physicist. For the man accused of espionage, see Felix Bloch (diplomatic officer).

Felix Bloch (23 October 1905 – 10 September 1983) was a Swiss physicist, working mainly in the U.S.[1] He and Edward Mills Purcell were awarded the 1952 Nobel Prize for their development of new ways and methods for nuclear magnetic precision measurements.[2] In 1954–1955, he served for one year as the first Director-General of CERN.

2.1 Life and work

[Image: Felix Bloch in the lab, 1950s]

Bloch was born in Zürich, Switzerland, to Jewish[3] parents Gustav and Agnes Bloch.

He was educated at the Cantonal Gymnasium in Zurich and at the Eidgenössische Technische Hochschule (ETHZ), also in Zürich. Initially studying engineering, he soon changed to physics. During this time he attended lectures and seminars given by Peter Debye and Hermann Weyl at ETH Zürich and Erwin Schrödinger at the neighboring University of Zürich. A fellow student in these seminars was John von Neumann. Graduating in 1927, he continued his physics studies at the University of Leipzig with Werner Heisenberg, gaining his doctorate in 1928. His doctoral thesis established the quantum theory of solids, using Bloch waves to describe the electrons.

In 1940 he married Lore Misch.[4]

He remained in European academia, studying with Wolfgang Pauli in Zürich, Niels Bohr in Copenhagen and Enrico Fermi in Rome before he went back to Leipzig, assuming a position as privatdozent (lecturer). In 1933, immediately after Hitler came to power, he left Germany because he was Jewish.[5] He emigrated to work at Stanford University in 1934. In the fall of 1938, Bloch began working with the University of California at Berkeley 37-inch cyclotron to determine the magnetic moment of the neutron.[6] Bloch went on to become the first professor for theoretical physics at Stanford. In 1939, he became a naturalized citizen of the United States. During WWII he worked on nuclear power at Los Alamos National Laboratory, before resigning to join the radar project at Harvard University.

After the war he concentrated on investigations into nuclear induction and nuclear magnetic resonance, which are the underlying principles of MRI.[7][8][9] In 1946 he proposed the Bloch equations, which determine the time evolution of nuclear magnetization. When CERN was being set up in the early 1950s, its founders were searching for someone of the stature and international prestige to head the fledgling international laboratory, and in 1954 Professor Bloch became CERN's first Director-General,[10] at the time when construction was getting under way on the present Meyrin site and plans for the first machines were being drawn up. After leaving CERN, he returned to Stanford University, where in 1961 he was made Max Stein Professor of Physics.

At Stanford, he was the advisor of Carson D. Jeffries, who became a professor of Physics at the University of California, Berkeley.

In 1964, he was elected a foreign member of the Royal Netherlands Academy of Arts and Sciences.[11]

He died in Zurich.[4]
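The Bloch equations mentioned in this chapter describe the time evolution of the nuclear magnetization M in a magnetic field B: a precession term γ M × B plus relaxation with longitudinal time T1 (toward the equilibrium value M0 along the field) and transverse time T2. A minimal explicit-Euler sketch follows; every parameter value here is an arbitrary illustrative choice, not data from this chapter:

```python
import numpy as np

# Minimal illustrative integration of the Bloch equations:
# dM/dt = gamma * (M x B) - relaxation terms with times T1 (longitudinal)
# and T2 (transverse). All numerical values below are arbitrary choices.
def bloch_step(M, B, gamma, T1, T2, M0, dt):
    """One explicit-Euler step of the Bloch equations."""
    dM = gamma * np.cross(M, B)   # precession about the field
    dM[0] -= M[0] / T2            # transverse relaxation of Mx
    dM[1] -= M[1] / T2            # transverse relaxation of My
    dM[2] -= (M[2] - M0) / T1     # longitudinal relaxation toward M0
    return M + dt * dM

M = np.array([1.0, 0.0, 0.0])     # start fully tipped into the x-y plane
B = np.array([0.0, 0.0, 1.0])     # static field along z
gamma, T1, T2, M0, dt = 1.0, 5.0, 1.0, 1.0, 1e-3

for _ in range(20000):            # integrate for 20 time units
    M = bloch_step(M, B, gamma, T1, T2, M0, dt)

# After many T1/T2 periods, M has relaxed back toward equilibrium (0, 0, M0).
print(M)
```

The transverse components spiral inward as they precess and decay with T2, while the longitudinal component grows back toward M0 with T1, which is the qualitative behavior seen in nuclear induction experiments.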
2.2 See also

• List of Jewish Nobel laureates
• List of things named after Felix Bloch

2.3 Footnotes

[1] Hofstadter, Robert (March 1984). "Obituary: Felix Bloch". Physics Today. 37 (3): 115–116. Bibcode:1984PhT....37c.115H. doi:10.1063/1.2916128.

[2] Sohlman, M (Ed.) Nobel Foundation directory 2003. Vastervik, Sweden: AB CO Ekblad; 2003.

[3] Fraser, Gordon (2012). "Chapter 7". The Quantum Exodus. Oxford University Press. p. 182. ISBN 978-0-19-959215-9.

[4] "Former Fellows of The Royal Society of Edinburgh 1783–2002". royalsoced.org.uk

[5] "Bloch, Felix", Current Biography, H. W. Wilson Company, 1954. Accessed 24 February 2013. "Because of his Jewish faith, his position soon became uncomfortable and he went to Paris, where he lectured at the Institut Henri Poincaré."

[6] "Felix Bloch, Nuclear Induction, Bloch Equations, Bloch Theorem, Bloch States". Osti.gov. Retrieved on 26 June 2015.

[7] Alvarez, Luis W.; Bloch, F. (1940). "A Quantitative Determination of the Neutron Moment in Absolute Nuclear Magnetons". Physical Review. 57 (2): 111–122. Bibcode:1940PhRv...57..111A. doi:10.1103/PhysRev.57.111.

[8] Bloch, F.; Hansen, W. W.; Packard, Martin (1946-02-01). "Nuclear Induction". Physical Review. 69: 127. Bibcode:1946PhRv...69..127B. doi:10.1103/PhysRev.69.127.

[9] Shampo, M A; Kyle R A (September 1995). "Felix Bloch—developer of magnetic resonance imaging". Mayo Clin. Proc. 70 (9): 889. PMID 7643644. doi:10.4065/70.9.889.

[10] "People and things: Felix Bloch". CERN Courier. CERN. Retrieved 1 September 2015.

[11] "F. Bloch (1905 - 1983)". Royal Netherlands Academy of Arts and Sciences. Retrieved 22 May 2016.

2.4 References

• "Nobel Prize for Physics, 1952". Nature. 170 (4335): 911–912. 1952. Bibcode:1952Natur.170R.911. doi:10.1038/170911b0.

• "Deputy Director-General: Prof. E. Amaldi". Nature. 174 (4434): 774–775. 1954. Bibcode:1954Natur.174R.774. doi:10.1038/174774c0.

• McGraw-Hill Modern Men of Science. 1. McGraw-Hill. 1966. pp. 45–46. ISBN 978-0-07-045217-6.

• National Cyclopaedia of American Biography. 61. James T. White & Co. 1984. pp. 310–312. ISBN 0-88371-040-4.

2.5 Further reading

• Bloch, F.; Staub, H. "Fission Spectrum", Los Alamos National Laboratory (LANL) (through predecessor agency Los Alamos Scientific Lab), United States Department of Energy (through predecessor agency the US Atomic Energy Commission), (August 18, 1943).

2.6 External links

• Felix Bloch: Pioneer of Electricity & Magnetism at the National High Magnetic Field Laboratory

• Biography and Bibliographic Resources, from the Office of Scientific and Technical Information, United States Department of Energy

• http://nobelprize.org/physics/laureates/1952/bloch-bio.html

• http://www-sul.stanford.edu/depts/spc/xml/sc0303.xml

• Oral History interview transcript with Felix Bloch 14 May 1964, American Institute of Physics, Niels Bohr Library and Archives

• Oral History interview transcript with Felix Bloch 15 August 1968, American Institute of Physics, Niels Bohr Library and Archives

• Oral History interview transcript with Felix Bloch 15 December 1981, American Institute of Physics, Niels Bohr Library and Archives

• Felix Bloch Papers, 1931–1987 (33 linear ft.) are housed in the Department of Special Collections and University Archives at Stanford University Libraries

• National Academy of Sciences Biographical Memoir

• Nobel Prize for Physics, 1952. Felix Bloch Papers
Chapter 3

Quantum thermodynamics

Quantum thermodynamics is the study of the relations between two independent physical theories: thermodynamics and quantum mechanics. The two independent theories address the physical phenomena of light and matter. In 1905 Einstein argued that the requirement of consistency between thermodynamics and electromagnetism[1] leads to the conclusion that light is quantized, obtaining the relation E = hν. This paper is the dawn of quantum theory. In a few decades quantum theory became established with an independent set of rules.[2] Currently quantum thermodynamics addresses the emergence of thermodynamic laws from quantum mechanics. It differs from quantum statistical mechanics in its emphasis on dynamical processes out of equilibrium. In addition, there is a quest for the theory to be relevant for a single individual quantum system.

3.1 A dynamical view of quantum thermodynamics

There is an intimate connection of quantum thermodynamics with the theory of open quantum systems.[3] Quantum mechanics inserts dynamics into thermodynamics, giving a sound foundation to finite-time thermodynamics. The main assumption is that the entire world is a large closed system, and therefore, time evolution is governed by a unitary transformation generated by a global Hamiltonian. For the combined system-bath scenario, the global Hamiltonian can be decomposed into:

H = H_S + H_B + H_SB

where H_S is the system's Hamiltonian, H_B is the bath Hamiltonian and H_SB is the system-bath interaction. The state of the system is obtained from a partial trace over the combined system and bath: ρ_S(t) = Tr_B(ρ_SB(t)). Reduced dynamics is an equivalent description of the system's dynamics utilizing only system operators. Assuming the Markov property for the dynamics, the basic equation of motion for an open quantum system is the Lindblad equation (L-GKS):[4][5]

dρ_S/dt = -(i/ħ)[H_S, ρ_S] + L_D(ρ_S)

H_S is a (Hermitian) Hamiltonian part and L_D:

L_D(ρ_S) = Σ_n ( V_n ρ_S V_n† - ½(ρ_S V_n† V_n + V_n† V_n ρ_S) )

is the dissipative part, describing implicitly, through the system operators V_n, the influence of the bath on the system. The Markov property imposes that the system and bath are uncorrelated at all times: ρ_SB = ρ_S ⊗ ρ_B. The L-GKS equation is unidirectional and leads any initial state ρ_S to a steady-state solution, which is an invariant of the equation of motion: dρ_S/dt = 0.[3]

The Heisenberg picture supplies a direct link to quantum thermodynamic observables. The dynamics of a system observable represented by the operator O has the form:

dO/dt = (i/ħ)[H_S, O] + L_D*(O) + ∂O/∂t

where the possibility that the operator O is explicitly time-dependent is included.

3.1.1 The emergence of the time derivative of the first law of thermodynamics

When O = H_S the first law emerges:

dE/dt = ⟨∂H_S/∂t⟩ + ⟨L_D*(H_S)⟩

where power is interpreted as P = ⟨∂H_S/∂t⟩ and the heat current as J = ⟨L_D*(H_S)⟩.[6][7][8]

Additional conditions have to be imposed on the dissipator L_D to be consistent with thermodynamics. First, the invariant ρ_S(∞) should become an equilibrium Gibbs state. This implies that the dissipator L_D should commute with the unitary part generated by H_S.[3] In addition an equilibrium state is stationary and stable. This


assumption is used to derive the Kubo-Martin-Schwinger stability criterion for thermal equilibrium, i.e. the KMS state.

A unique and consistent approach is obtained by deriving the generator, L_D, in the weak system-bath coupling limit.[9] In this limit, the interaction energy can be neglected. This approach represents a thermodynamic idealization: it allows energy transfer, while keeping a tensor product separation between the system and bath, i.e., a quantum version of an isothermal partition.

Markovian behavior involves a rather complicated cooperation between system and bath dynamics. This means that in phenomenological treatments, one cannot combine arbitrary system Hamiltonians, H_S, with a given L-GKS generator. This observation is particularly important in the context of quantum thermodynamics, where it is tempting to study Markovian dynamics with an arbitrary control Hamiltonian. Erroneous derivations of the quantum master equation can easily lead to a violation of the laws of thermodynamics.

An external perturbation modifying the Hamiltonian of the system will also modify the heat flow. As a result, the L-GKS generator has to be renormalized. For a slow change, one can adopt the adiabatic approach and use the instantaneous system Hamiltonian to derive L_D. An important class of problems in quantum thermodynamics is periodically driven systems. Periodic quantum heat engines and power-driven refrigerators fall into this class.

A reexamination of the time-dependent heat current expression using quantum transport techniques has been proposed.[10]

A derivation of consistent dynamics beyond the weak coupling limit has been suggested.[11]

3.1.2 The emergence of the second law of thermodynamics

The second law is a statement on the irreversibility of dynamics or, the breakup of time-reversal symmetry (T-symmetry). This should be consistent with the empirical direct definition: heat will flow spontaneously from a hot source to a cold sink.

From a static viewpoint, for a closed quantum system, the II-law of thermodynamics is a consequence of the unitary evolution.[12] In this approach, one accounts for the entropy change before and after a change in the entire system. A dynamical viewpoint is based on local accounting for the entropy changes in the subsystems and the entropy generated in the baths.

Entropy

In thermodynamics, entropy is related to a concrete process. In quantum mechanics, this translates to the ability to measure and manipulate the system based on the information gathered by measurement. An example is the case of Maxwell's demon, which has been resolved by Leó Szilárd.[13][14][15]

The entropy of an observable is associated with the complete projective measurement of an observable, ⟨A⟩, where the operator A has a spectral decomposition A = Σ_j α_j P_j, where P_j are the projection operators of the eigenvalue α_j. The probability of outcome j is p_j = Tr(ρ P_j). The entropy associated with the observable A is the Shannon entropy with respect to the possible outcomes:

S_A = -Σ_j p_j ln p_j

The most significant observable in thermodynamics is the energy, represented by the Hamiltonian operator H, and its associated energy entropy, S_E.[16]

John von Neumann suggested singling out the most informative observable to characterize the entropy of the system. This invariant is obtained by minimizing the entropy with respect to all possible observables. The most informative observable operator commutes with the state of the system. The entropy of this observable is termed the von Neumann entropy and is equal to:

S_vn = -Tr(ρ ln ρ)

As a consequence, S_A ≥ S_vn for all observables. At thermal equilibrium the energy entropy is equal to the von Neumann entropy: S_E = S_vn.

S_vn is invariant under a unitary transformation changing the state. The von Neumann entropy S_vn is additive only for a system state that is composed of a tensor product of its subsystems:

ρ = ⊗_j ρ_j

Clausius version of the II-law

No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature.

This statement for N coupled heat baths in steady state becomes:

-Σ_n J_n/T_n ≥ 0

A dynamical version of the II-law can be proven, based on Spohn's inequality[17]

Tr(L_D ρ [ln ρ - ln ρ(∞)]) ≥ 0

which is valid for any L-GKS generator with a stationary state ρ(∞).[3]

Consistency with thermodynamics can be employed to verify quantum dynamical models of transport. For example, local models for networks, where local L-GKS equations are connected through weak links, have been shown to violate the second law of thermodynamics.[18]

3.1.3 The Quantum and Thermodynamic Adiabatic Conditions and Quantum Friction

Thermodynamic adiabatic processes have no entropy change. Typically, an external control modifies the state. A quantum version of an adiabatic process can be modeled by an externally controlled time-dependent Hamiltonian H(t). If the system is isolated, the dynamics are unitary, and therefore S_vn is a constant. A quantum adiabatic process is defined by the energy entropy S_E being constant. The quantum adiabatic condition is therefore equivalent to no net change in the population of the instantaneous energy levels. This implies that the Hamiltonian should commute with itself at different times: [H(t), H(t′)] = 0.

When the adiabatic conditions are not fulfilled, additional work is required to reach the final control value. For an isolated system, this work is recoverable, since the dynamics are unitary and can be reversed. The coherence stored in the off-diagonal elements of the density operator carries the required information to recover the extra energy cost and reverse the dynamics. Typically, this energy is not recoverable, due to interaction with a bath that causes energy dephasing. The bath, in this case, acts like a measuring apparatus of energy. This lost energy is the quantum version of friction.[19][20]

3.1.4 The emergence of the dynamical version of the third law of thermodynamics

There are seemingly two independent formulations of the third law of thermodynamics, both originally stated by Walther Nernst. The first formulation is known as the Nernst heat theorem, and can be phrased as:

The entropy of any pure substance in thermodynamic equilibrium approaches zero as the temperature approaches zero.

The second formulation is dynamical, known as the unattainability principle:[21]

It is impossible by any procedure, no matter how idealized, to reduce any assembly to absolute zero temperature in a finite number of operations.

At steady state the second law of thermodynamics implies that the total entropy production is non-negative. When the cold bath approaches absolute zero temperature, it is necessary to eliminate the entropy production divergence at the cold side when T_c → 0, therefore

Ṡ_c ∝ T_c^α,  α ≥ 0.

For α = 0 the fulfillment of the second law depends on the entropy production of the other baths, which should compensate for the negative entropy production of the cold bath. The first formulation of the third law modifies this restriction. Instead of α ≥ 0 the third law imposes α > 0, guaranteeing that at absolute zero the entropy production at the cold bath is zero: Ṡ_c = 0. This requirement leads to the scaling condition of the heat current, J_c ∝ T_c^(ζ+1).

The second formulation, known as the unattainability principle, can be rephrased as:[22]

No refrigerator can cool a system to absolute zero temperature at finite time.

The dynamics of the cooling process are governed by the equation

J_c(T_c(t)) = -c_V(T_c(t)) dT_c(t)/dt,

where c_V(T_c) is the heat capacity of the bath. Taking J_c ∝ T_c^(ζ+1) and c_V ∝ T_c^η with η ≥ 0, we can quantify this formulation by evaluating the characteristic exponent ζ′ of the cooling process:

dT_c(t)/dt ∝ -T_c^ζ′,  T_c → 0,  ζ′ = ζ - η + 1.

This equation introduces the relation between the characteristic exponents ζ and η. When ζ′ < 0 the bath is cooled to zero temperature in a finite time, which implies a violation of the third law. It is apparent from the last equation that the unattainability principle is more restrictive than the Nernst heat theorem.

3.2 Typicality as a source of emergence of thermodynamical phenomena

The basic idea of quantum typicality is that the vast majority of all pure states featuring a common expectation value of some generic observable at a given time will yield very similar expectation values of the same observable at any later time. This is meant to apply to Schrödinger-type dynamics in high-dimensional Hilbert spaces. As a consequence, individual dynamics of expectation values are then typically well described by the ensemble average.[23]

The quantum ergodic theorem originated by John von Neumann is a strong result arising from the mere mathematical structure of quantum mechanics. The QET is a precise formulation of so-called normal typicality, i.e. the statement that, for typical large systems, every initial wave function ψ₀ from an energy shell is "normal": it evolves in such a way that ψ_t, for most t, is macroscopically equivalent to the micro-canonical density matrix.[24]

3.3 Quantum thermodynamics resource theory

The second law of thermodynamics can be interpreted as quantifying state transformations which are statistically unlikely so that they become effectively forbidden. The second law typically applies to systems composed of many interacting particles; quantum thermodynamics resource theory is a formulation of thermodynamics in the regime where it can be applied to a small number of particles interacting with a heat bath. For processes which are cyclic or very close to cyclic, the second law for microscopic systems takes on a very different form than it does at the macroscopic scale, imposing not just one constraint on what state transformations are possible, but an entire family of constraints. These second laws are not only relevant for small systems, but also apply to individual macroscopic systems interacting via long-range interactions, which only satisfy the ordinary second law on average. By making precise the definition of thermal operations, the laws of thermodynamics take on a form with the first law defining the class of thermal operations, the zeroth law emerging as a unique condition ensuring the theory is nontrivial, and the remaining laws being a monotonicity property of generalised free energies.[25][26]

3.4 References

[1] Einstein, Albert. "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt". Annalen der Physik 322, no. 6 (1905): 132-148.

[2] John von Neumann. Mathematical Foundations of Quantum Mechanics. No. 2. Princeton University Press, 1955.

[3] Kosloff, Ronnie. "Quantum thermodynamics: A dynamical viewpoint". Entropy 15, no. 6 (2013): 2100-2128.

[4] Lindblad, G. "On the generators of quantum dynamical semigroups". Comm. Math. Phys. 1976, 48, 119-130.

[5] Gorini, V.; Kossakowski, A.; Sudarshan, E.C.G. "Completely positive dynamical semigroups of N-level systems". J. Math. Phys. 1976, 17, 821-825.

[6] Spohn, H.; Lebowitz, J. "Irreversible thermodynamics for quantum systems weakly coupled to thermal reservoirs". Adv. Chem. Phys. 1979, 38, 109.

[7] Alicki, R. "Quantum open systems as a model of a heat engine". J. Phys. A: Math. Gen. 1979, 12, L103-L107.

[8] Kosloff, R. "A quantum mechanical open system as a model of a heat engine". J. Chem. Phys. 1984, 80, 1625-1631.

[9] Davies, E.B. "Markovian master equations". Comm. Math. Phys. 1974, 39, 91-110.

[10] Maria Florencia Ludovico, Jong Soo Lim, Michael Moskalets, Liliana Arrachea, and David Sánchez. "Dynamical energy transfer in ac driven quantum systems". Phys. Rev. B 89, 161306 (2014).

[11] Esposito, Massimiliano, Maicol A. Ochoa, and Michael Galperin. "Quantum Thermodynamics: A Nonequilibrium Green's Function Approach". Physical Review Letters 114, no. 8 (2015): 080602.

[12] Lieb, E.H.; Yngvason, J. "The physics and mathematics of the second law of thermodynamics". Phys. Rep. 1999, 310, 1-96.

[13] Szilard, L. "On the minimization of entropy in a thermodynamic system with interferences of intelligent beings". Z. Phys. 1929, 53, 840-856.

[14] Brillouin, L. Science and Information Theory; Academic Press: New York, NY, USA, 1956. p. 107.

[15] Maruyama, K.; Nori, F.; Vedral, V. "Colloquium: The physics of Maxwell's demon and information". Rev. Mod. Phys. 2009, 81, 1-23.

[16] Polkovnikov, A. "Microscopic diagonal entropy and its connection to basic thermodynamic relations". Ann. Phys. 2011, 326, 486-499.

[17] Spohn, H.; Lebowitz, J. "Irreversible thermodynamics for quantum systems weakly coupled to thermal reservoirs". Adv. Chem. Phys. 1978, 109, 38.

[18] Levy, Amikam, and Ronnie Kosloff. "The local approach to quantum transport may violate the second law of thermodynamics". EPL (Europhysics Letters) 107, no. 2 (2014): 20004.

[19] Kosloff, R.; Feldmann, T. "A discrete four-stroke quantum heat engine exploring the origin of friction". Phys. Rev. E 2002, 65, 055102.

[20] Plastina, F., A. Alecce, T. J. G. Apollaro, G. Falcone, G. Francica, F. Galve, N. Lo Gullo, and R. Zambrini. "Irreversible work and inner friction in quantum thermodynamic processes". Physical Review Letters 113, no. 26 (2014): 260601.

[21] Landsberg, P. T. "Foundations of thermodynamics". Reviews of Modern Physics 28, no. 4 (1956): 363.

[22] Levy, Amikam, Robert Alicki, and Ronnie Kosloff. "Quantum refrigerators and the third law of thermodynamics". Physical Review E 85, no. 6 (2012): 061126.

[23] Bartsch, Christian, and Jochen Gemmer. "Dynamical typicality of quantum expectation values". Physical Review Letters 102, no. 11 (2009): 110403.

[24] Goldstein, Sheldon, Joel L. Lebowitz, Christian Mastrodonato, Roderich Tumulka, and Nino Zanghì. "Normal typicality and von Neumann's quantum ergodic theorem". In Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, vol. 466, no. 2123, pp. 3203-3224. The Royal Society, 2010.

[25] Fernando Brandão, Michał Horodecki, Nelly Ng, Jonathan Oppenheim, and Stephanie Wehner. "The second laws of quantum thermodynamics". PNAS 2015 112 (11): 3275-3279, doi:10.1073/pnas.1411728112.

[26] John Goold, Marcus Huber, Arnau Riera, Lidia del Rio and Paul Skrzypczyk. "The role of quantum information in thermodynamics - a topical review". Journal of Physics A: Mathematical and Theoretical, 49, 143001, 2016.

3.5 Further reading


Gemmer, Jochen, M. Michel, and Günter Mahler. Quantum Thermodynamics: Emergence of Thermodynamic Behavior within Composite Quantum Systems. 2nd ed. (2009).

Petruccione, Francesco, and Heinz-Peter Breuer. The Theory of Open Quantum Systems. Oxford University Press, 2002.

3.6 External links


Go to "Concerning an Heuristic Point of View Toward the Emission and Transformation of Light" to read an English translation of Einstein's 1905 paper. (Retrieved: 2014 Apr 11)
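The L-GKS dynamics of section 3.1 can also be sketched numerically. The following illustrative example is not from the article: the two-level Hamiltonian, the single jump operator, the rates and the plain Euler integrator are all assumptions made for the demonstration. An excited qubit relaxes toward its ground state, and the heat current J = ⟨L_D*(H_S)⟩ stays negative, i.e. energy flows from the system to the zero-temperature bath:

```python
import numpy as np

hbar = 1.0                                       # work in units with hbar = 1
omega, gamma = 1.0, 0.2                          # illustrative frequency and damping rate
H = 0.5 * omega * np.array([[1, 0], [0, -1]], dtype=complex)    # H_S = (omega/2) sigma_z
V = np.sqrt(gamma) * np.array([[0, 0], [1, 0]], dtype=complex)  # jump operator |g><e|

def dissipator(rho):
    """L_D(rho) = V rho V+ - (1/2)(V+V rho + rho V+V), single jump operator."""
    VdV = V.conj().T @ V
    return V @ rho @ V.conj().T - 0.5 * (VdV @ rho + rho @ VdV)

def rhs(rho):
    """Full L-GKS right-hand side: unitary part plus dissipator."""
    return (-1j / hbar) * (H @ rho - rho @ H) + dissipator(rho)

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in the excited state
dt, steps = 1e-3, 20000                          # simple Euler integration to t = 20
for _ in range(steps):
    rho = rho + dt * rhs(rho)

ground_pop = rho[1, 1].real                        # population relaxes to the ground state
heat_current = np.trace(dissipator(rho) @ H).real  # J = <L_D*(H_S)>, negative while decaying
```

The Euler step conserves the trace because the L-GKS generator is traceless; a production calculation would use a positivity-preserving integrator or a dedicated open-systems package.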
Chapter 4

Master equation

For the master equation used in quantum physics, see Lindblad equation. For the classical and quantum master equations in quantum field theory, see Batalin-Vilkovisky formalism.

In physics, chemistry and related fields, master equations are used to describe the time evolution of a system that can be modelled as being in a probabilistic combination of states at any given time, with the switching between states determined by a transition rate matrix. The equations are a set of differential equations over time of the probabilities that the system occupies each of the different states.

4.1 Introduction

A master equation is a phenomenological set of first-order differential equations describing the time evolution of (usually) the probability of a system to occupy each one of a discrete set of states with regard to a continuous time variable t. The most familiar form of a master equation is a matrix form:

dP/dt = AP,

where P is a column vector (where element i represents state i), and A is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either

a d-dimensional system (where d is 1, 2, 3, ...), where any state is connected with exactly its 2d nearest neighbors, or

a network, where every pair of states may have a connection (depending on the network's properties).

When the connections are time-independent rate constants, the master equation represents a kinetic scheme, and the process is Markovian (any jumping-time probability density function for state i is an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. matrix A depends on the time, A → A(t)), the process is not stationary and the master equation reads

dP/dt = A(t)P.

When the connections represent multi-exponential jumping-time probability density functions, the process is semi-Markovian, and the equation of motion is an integro-differential equation termed the generalized master equation:

dP/dt = ∫_0^t A(t - τ)P(τ) dτ.

The matrix A can also represent birth and death, meaning that probability is injected into (birth) or taken from (death) the system; in that case, the process is not in equilibrium.

4.1.1 Detailed description of the matrix A, and properties of the system

Let A be the matrix describing the transition rates (also known as kinetic rates or reaction rates). As always, the first subscript represents the row, the second subscript the column. That is, the source is given by the second subscript, and the destination by the first subscript. This is the opposite of what one might expect, but it is technically convenient.

For each state k, the increase in occupation probability depends on the contribution from all other states to k, and is given by:

Σ_ℓ A_kℓ P_ℓ,

where P_ℓ is the probability for the system to be in the state ℓ, while the matrix A is filled with a grid of transition-rate constants. Similarly, P_k contributes to the occupation of all other states P_ℓ:


Σ_ℓ A_ℓk P_k.

In probability theory, this identifies the evolution as a continuous-time Markov process, with the integrated master equation obeying a Chapman-Kolmogorov equation.

The master equation can be simplified so that the terms with ℓ = k do not appear in the summation. This allows calculations even if the main diagonal of A is not defined or has been assigned an arbitrary value:

dP_k/dt = Σ_ℓ (A_kℓ P_ℓ) = Σ_{ℓ≠k} (A_kℓ P_ℓ) + A_kk P_k = Σ_{ℓ≠k} (A_kℓ P_ℓ - A_ℓk P_k).

The final equality arises from the fact that

Σ_ℓ A_ℓk = 0,

because each column of A sums to zero: the probability to go from state k to some state (or to remain in k) is a sure event, so total probability is conserved and Σ_ℓ P_ℓ = 1 at all times. Using this we can write the diagonal elements as

A_kk = -Σ_{ℓ≠k} A_ℓk,  so that  A_kk P_k = -Σ_{ℓ≠k} A_ℓk P_k.

The master equation exhibits detailed balance if each of the terms of the summation disappears separately at equilibrium, i.e. if, for all states k and ℓ having equilibrium probabilities π_k and π_ℓ,

A_kℓ π_ℓ = A_ℓk π_k.

These symmetry relations were proved on the basis of the time reversibility of microscopic dynamics (microscopic reversibility) as Onsager reciprocal relations.

4.1.2 Examples of master equations

Many physical problems in classical and quantum mechanics, and problems in other sciences, can be reduced to the form of a master equation, thereby performing a great simplification of the problem (see mathematical model).

The Lindblad equation in quantum mechanics is a generalization of the master equation describing the time evolution of a density matrix. Though the Lindblad equation is often referred to as a master equation, it is not one in the usual sense, as it governs not only the time evolution of probabilities (diagonal elements of the density matrix), but also of variables containing information about quantum coherence between the states of the system (non-diagonal elements of the density matrix).

Another special case of the master equation is the Fokker-Planck equation, which describes the time evolution of a continuous probability distribution.[1] Complicated master equations which resist analytic treatment can be cast into this form (under various approximations), by using approximation techniques such as the system size expansion.

4.2 Quantum master equations

A quantum master equation is a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of a density matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process; therefore such an ordinary master equation is considered "classical". Off-diagonal elements represent quantum coherence, which is a physical characteristic that is intrinsically quantum mechanical.

The Redfield equation and Lindblad equation are examples of approximate quantum master equations assumed to be Markovian. More accurate quantum master equations for certain applications include the polaron transformed quantum master equation, and the variational polaron transformed quantum master equation.[2]

4.3 See also

Kolmogorov equations (Markov jump process)
Continuous-time Markov process
Quantum master equation
Fermi's golden rule
Detailed balance
Boltzmann's H-theorem

4.4 References

[1] Honerkamp, Josef (1998). Statistical Physics: An Advanced Approach with Applications; with 7 Tables and 57 Problems with Solutions. Berlin [u.a.]: Springer. p. 173. ISBN 3-540-63978-0.

[2] McCutcheon, D.; Dattani, N. S.; Gauger, E.; Lovett, B.; Nazir, A. (25 August 2011). "A general approach to quantum dynamics using a variational master equation: Application to phonon-damped Rabi rotations in quantum dots". Physical Review B. 84: 081305(R). Bibcode:2011PhRvB..84h1305M. arXiv:1105.6015. doi:10.1103/PhysRevB.84.081305.

van Kampen, N. G. (1981). Stochastic Processes in Physics and Chemistry. North Holland. ISBN 978-0-444-52965-7.

Gardiner, C. W. (1985). Handbook of Stochastic Methods. Springer. ISBN 3-540-20882-8.

Risken, H. (1984). The Fokker-Planck Equation. Springer. ISBN 3-540-61530-X.

4.5 External links


Timothy Jones, A Quantum Optics Derivation (2006)
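As a concrete sketch of the matrix form dP/dt = AP discussed in section 4.1 (the two-state scheme and its rates are illustrative assumptions, not taken from the article):

```python
import numpy as np

# Two-state kinetic scheme. Convention from section 4.1.1: A[k][l] is the
# rate from state l to state k, so every column of A sums to zero.
a, b = 2.0, 1.0                      # illustrative rates: a = 1 -> 2, b = 2 -> 1
A = np.array([[-a,  b],
              [ a, -b]])

P = np.array([1.0, 0.0])             # start with all probability in state 1
dt = 1e-3
for _ in range(20000):               # Euler integration of dP/dt = A P
    P = P + dt * (A @ P)

P_eq = np.array([b, a]) / (a + b)    # analytic equilibrium: P1/P2 = b/a
```

Because each column of A sums to zero, total probability is conserved by the iteration, and P relaxes to the equilibrium vector, which for a two-state scheme also satisfies detailed balance, A_12 π_2 = A_21 π_1.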
Chapter 5

Markov property

In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov.[1]

A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on the sequence of events that preceded it. A process with this property is called a Markov process. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time.

The term Markov assumption is used to describe a model where the Markov property is assumed to hold, such as a hidden Markov model.

A Markov random field[2] extends this property to two or more dimensions or to random variables defined for an interconnected network of items. An example of a model for such a field is the Ising model.

A discrete-time stochastic process satisfying the Markov property is known as a Markov chain.

[Figure: A single realisation of three-dimensional Brownian motion for times 0 ≤ t ≤ 2. Brownian motion has the Markov property, as the displacement of the particle does not depend on its past displacements.]

5.1 Introduction

A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the present state; that is, given the present, the future does not depend on the past. A process with this property is said to be Markovian or a Markov process. The most famous Markov process is a Markov chain. Brownian motion is another well-known Markov process.

5.2 History

Main article: Markov chain § History

5.3 Definition

Let (Ω, F, P) be a probability space with a filtration (F_s, s ∈ I), for some (totally ordered) index set I; and let (S, Σ) be a measurable space. An (S, Σ)-valued stochastic process X = (X_t, t ∈ I) adapted to the filtration is said to possess the Markov property if, for each A ∈ Σ and each s, t ∈ I with s < t,

P(X_t ∈ A | F_s) = P(X_t ∈ A | X_s).[3]

In the case where S is a discrete set with the discrete sigma algebra and I = N, this can be reformulated as follows:

P(X_n = x_n | X_{n-1} = x_{n-1}, ..., X_0 = x_0) = P(X_n = x_n | X_{n-1} = x_{n-1}).


5.4 Alternative formulations This stochastic process of observed colors doesn't have
the Markov property. Using the same experiment above,
Alternatively, the Markov property can be formulated as if sampling without replacement is changed to sampling
follows. with replacement, the process of observed colors will
have the Markov property.[5]
An application of the Markov property in a generalized
E[f (Xt )|Fs ] = E[f (Xt )|(Xs )] form is in Markov chain Monte Carlo computations in
the context of Bayesian statistics.
for all t s 0 and f : S R bounded and
measurable.[4]
5.5 Strong Markov property

Suppose that X = (X_t : t ≥ 0) is a stochastic process on a probability space (Ω, F, P) with natural filtration {F_t}_{t≥0}. For any t ≥ 0, we can define the germ sigma algebra F_{t+} to be the intersection of all F_s for s > t. Then for any stopping time τ on Ω, we can define

    F_{τ+} = { A ∈ F : {τ = t} ∩ A ∈ F_{t+}, ∀ t ≥ 0 }.

Then X is said to have the strong Markov property if, for each stopping time τ, conditioned on the event {τ < ∞}, we have that for each t ≥ 0, X_{τ+t} is independent of F_{τ+} given X_τ.

The strong Markov property implies the ordinary Markov property: taking the stopping time τ = t, the ordinary Markov property can be deduced.

5.6 In forecasting

In the fields of predictive modelling and probabilistic forecasting, the Markov property is considered desirable; such a model is known as a Markov model.

5.7 Examples

Assume that an urn contains two red balls and one green ball. One ball was drawn yesterday, one ball was drawn today, and the final ball will be drawn tomorrow. All of the draws are without replacement.

Suppose you know that today's ball was red, but you have no information about yesterday's ball. The chance that tomorrow's ball will be red is 1/2. That's because the only two remaining outcomes for this random experiment are (listing the colors in the order yesterday, today, tomorrow): red, red, green and green, red, red.

On the other hand, if you know that both today's and yesterday's balls were red, then you are guaranteed to get a green ball tomorrow.

This discrepancy shows that the probability distribution for tomorrow's color depends not only on the present value, but is also affected by information about the past.

5.8 See also

Markov chain
Markov blanket
Markov decision process
Causal Markov condition
Markov model
Chapman–Kolmogorov equation

5.9 References

[1] Markov, A. A. (1954). Theory of Algorithms. [Translated by Jacques J. Schorr-Kon and PST staff.] Imprint Moscow, Academy of Sciences of the USSR, 1954 [Jerusalem, Israel Program for Scientific Translations, 1961; available from Office of Technical Services, United States Department of Commerce]. Added t.p. in Russian. Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS 60-51085.]

[2] Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-850994-4.

[3] Durrett, Rick. Probability: Theory and Examples. Fourth Edition. Cambridge: Cambridge University Press, 2010.

[4] Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin. ISBN 3-540-04758-1.

[5] "Example of a stochastic process which does not have the Markov property". Mathematics Stack Exchange.
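The urn reasoning above can be checked by brute-force enumeration. The following sketch (our illustration, not part of the original article) lists the equally likely orderings of the three balls and conditions on what has been observed:

```python
from itertools import permutations

# Urn: two red balls and one green ball, drawn without replacement
# over three days. The distinct color orderings (yesterday, today,
# tomorrow) are equally likely, since each arises from exactly two
# orderings of the three distinguishable balls.
orderings = set(permutations(["red", "red", "green"]))

# Condition only on today's ball being red.
today_red = [o for o in orderings if o[1] == "red"]
p_red_tomorrow = sum(o[2] == "red" for o in today_red) / len(today_red)
print(p_red_tomorrow)   # 0.5: cases (red, red, green) and (green, red, red)

# Condition on both yesterday's and today's balls being red.
both_red = [o for o in orderings if o[0] == "red" and o[1] == "red"]
p_red_tomorrow_2 = sum(o[2] == "red" for o in both_red) / len(both_red)
print(p_red_tomorrow_2)  # 0.0: the green ball must come tomorrow
```

The two conditional probabilities differ, which is exactly the failure of the Markov property described in the text: knowledge of the past (yesterday's ball) changes the prediction even though the present (today's ball) is fixed.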
Chapter 6

Lindblad equation

In quantum mechanics, the Gorini–Kossakowski–Sudarshan–Lindblad equation (GKSL equation, named after Vittorio Gorini, Andrzej Kossakowski, George Sudarshan and Göran Lindblad), or master equation in Lindblad form, is the most general type of Markovian and time-homogeneous master equation describing non-unitary evolution of the density matrix ρ that is trace-preserving and completely positive for any initial condition. The Schrödinger equation is a special case of the more general Lindblad equation, which has led to some speculation that quantum mechanics may be productively extended and expanded through further application and analysis of the Lindblad equation.[1]

The Lindblad master equation for an N-dimensional system's reduced density matrix ρ can be written

    ρ̇ = −(i/ħ)[H, ρ] + Σ_{n,m=1}^{N²−1} h_{n,m} ( L_n ρ L_m† − ½ (ρ L_m† L_n + L_m† L_n ρ) ),

where H is a (Hermitian) Hamiltonian part, the L_m are an arbitrary linear basis of the operators on the system's Hilbert space, and the h_{n,m} are constants which determine the dynamics. The coefficient matrix h = (h_{n,m}) must be positive to ensure that the equation is trace-preserving and completely positive. The summation only runs to N²−1 because we have taken L_{N²} to be proportional to the identity operator, in which case the summand vanishes. Our convention implies that the L_m are traceless for m < N². The terms in the summation where m = n can be described in terms of the Lindblad superoperator

    D(L)ρ = L ρ L† − ½ ( ρ L†L + L†L ρ ).

If the h_{n,m} terms are all zero, then this is the quantum Liouville equation (for a closed system), which is the quantum analog of the classical Liouville equation. A related equation describes the time evolution of the expectation values of observables; it is given by the Ehrenfest theorem.

Note that H is not necessarily equal to the self-Hamiltonian of the system. It may also incorporate effective unitary dynamics arising from the system–environment interaction.

Lindblad equations can also be expressed as the following equations for quantum observables:

    dA/dt = (i/ħ)[H, A] + Σ_{k=1}^{N²−1} γ_k ( L_k† A L_k − ½ (A L_k† L_k + L_k† L_k A) ),

where A is a quantum observable and we assumed a diagonal coefficient matrix h = (h_{n,m}), with diagonal entries γ_k.

6.1 Diagonalization

Since the matrix h = (h_{n,m}) is positive, it can be diagonalized with a unitary transformation u:

    u† h u = diag(γ_1, γ_2, …, γ_{N²−1}),

where the eigenvalues γ_i are non-negative. If we define another orthonormal operator basis

    A_i = Σ_{j=1}^{N²−1} u_{j,i} L_j,

we can rewrite the Lindblad equation in diagonal form:

    ρ̇ = −(i/ħ)[H, ρ] + Σ_{i=1}^{N²−1} γ_i ( A_i ρ A_i† − ½ ρ A_i† A_i − ½ A_i† A_i ρ ).

This equation is invariant under a unitary transformation of Lindblad operators and constants,

    √γ_i A_i → √γ_i′ A_i′ = Σ_{j=1}^{N²−1} v_{j,i} √γ_j A_j,

and also under the inhomogeneous transformation

    A_i → A_i′ = A_i + a_i,
    H → H′ = H + (1/2i) Σ_{j=1}^{N²−1} γ_j ( a_j* A_j − a_j A_j† ).

However, the first transformation destroys the orthonormality of the operators A_i (unless all the γ_i are equal) and the second transformation destroys the tracelessness. Therefore, up to degeneracies among the γ_i, the A_i of the diagonal form of the Lindblad equation are uniquely determined by the dynamics so long as we require them to be orthonormal and traceless.

6.2 Harmonic oscillator example

The most common Lindblad equation is that describing the damping of a quantum harmonic oscillator; it has

    L_1 = a,  L_2 = a†,
    h_{n,m} = (γ/2)(n̄ + 1) for n = m = 1;  (γ/2) n̄ for n = m = 2;  0 otherwise.

Here n̄ is the mean number of excitations in the reservoir damping the oscillator and γ is the decay rate. Additional Lindblad operators can be included to model various forms of dephasing and vibrational relaxation. These methods have been incorporated into grid-based density matrix propagation methods.

6.3 See also

Open quantum system
Quantum jump method
Quantum dynamical semigroup

6.4 References

[1] Weinberg, Steven. "The Trouble with Quantum Mechanics". The New York Review of Books. Retrieved 4 January 2017.

Kossakowski, A. (1972). "On quantum statistical mechanics of non-Hamiltonian systems". Rep. Math. Phys. 3 (4): 247. Bibcode:1972RpMP....3..247K. doi:10.1016/0034-4877(72)90010-9.

Lindblad, G. (1976). "On the generators of quantum dynamical semigroups". Commun. Math. Phys. 48 (2): 119. Bibcode:1976CMaPh..48..119L. doi:10.1007/BF01608499.

Gorini, V.; Kossakowski, A.; Sudarshan, E.C.G. (1976). "Completely positive semigroups of N-level systems". J. Math. Phys. 17 (5): 821. Bibcode:1976JMP....17..821G. doi:10.1063/1.522979.

Banks, T.; Susskind, L.; Peskin, M.E. (1984). "Difficulties for the evolution of pure states into mixed states". Nuclear Physics B. 244: 125–134. Bibcode:1984NuPhB.244..125B. doi:10.1016/0550-3213(84)90184-6.

Accardi, Luigi; Lu, Yun Gang; Volovich, I.V. (2002). Quantum Theory and Its Stochastic Limit. New York: Springer Verlag. ISBN 978-3-5404-1928-0.

Alicki, Robert; Lendi, Karl (1987). Quantum Dynamical Semigroups and Applications. Berlin: Springer Verlag. ISBN 978-0-3871-8276-6.

Attal, Stéphane; Joye, Alain; Pillet, Claude-Alain (2006). Open Quantum Systems II: The Markovian Approach. Springer. ISBN 978-3-5403-0992-5.

Breuer, Heinz-Peter; Petruccione, F. (2002). The Theory of Open Quantum Systems. Oxford University Press. ISBN 978-0-1985-2063-4.

Gardiner, C.W.; Zoller, Peter (2010). Quantum Noise. Springer Series in Synergetics (3rd ed.). Berlin Heidelberg: Springer-Verlag. ISBN 978-3-642-06094-6.

Ingarden, Roman S.; Kossakowski, A.; Ohya, M. (1997). Information Dynamics and Open Systems: Classical and Quantum Approach. New York: Springer Verlag. ISBN 978-0-7923-4473-5.

Lindblad, G. (1983). Non-Equilibrium Entropy and Irreversibility. Dordrecht: Delta Reidel. ISBN 1-4020-0320-X.

Tarasov, Vasily E. (2008). Quantum Mechanics of Non-Hamiltonian and Dissipative Systems. Amsterdam, Boston, London, New York: Elsevier Science. ISBN 978-0-0805-5971-1.

6.5 External links

"The Lindblad master equation"
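The damped-oscillator master equation of section 6.2 can be sanity-checked numerically. The sketch below (our illustration, not from the article) truncates the oscillator's Fock space, builds the two dissipators D(a) and D(a†) with rates γ(n̄+1) and γn̄ (factor-of-two conventions for γ vary; this choice makes ⟨a†a⟩ decay at rate γ), and integrates the equation with a crude Euler step. The trace stays at 1, and ⟨a†a⟩ relaxes towards the reservoir occupation n̄; the cutoff, rates and initial state are illustrative choices:

```python
import numpy as np

N = 30                      # Fock-space cutoff (illustrative)
gamma, nbar = 0.5, 2.0      # decay rate and reservoir occupation (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
ad = a.conj().T
H = ad @ a                                   # oscillator Hamiltonian (hbar = omega = 1)

def dissipator(L, rho):
    """Lindblad superoperator D(L)rho = L rho L† − ½(rho L†L + L†L rho)."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (rho @ LdL + LdL @ rho)

def drho(rho):
    # Diagonal-form master equation with jump operators a and a†.
    return (-1j * (H @ rho - rho @ H)
            + gamma * (nbar + 1) * dissipator(a, rho)
            + gamma * nbar * dissipator(ad, rho))

rho = np.zeros((N, N), dtype=complex)
rho[10, 10] = 1.0           # start in the Fock state |10>
dt = 0.005
for _ in range(8000):       # Euler integration to t = 40 (several lifetimes 1/gamma)
    rho = rho + dt * drho(rho)

print(np.trace(rho).real)           # stays ~1: the generator is traceless
print(np.trace(ad @ a @ rho).real)  # relaxes toward nbar = 2
```

With a positive but non-diagonal coefficient matrix h, one would first diagonalize it (e.g. with numpy.linalg.eigh) and feed the resulting non-negative eigenvalues γ_i and rotated operators A_i into the same dissipator, exactly as in section 6.1.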
Chapter 7

Amir Caldeira

Amir Ordacgi Caldeira (born 1950 in Rio de Janeiro) is a Brazilian physicist. He received his bachelor's degree in 1973 from the Pontifícia Universidade Católica do Rio de Janeiro, his M.Sc. degree in 1976 from the same university, and his Ph.D. in 1980 from the University of Sussex. His Ph.D. advisor was the Physics Nobel Prize winner Anthony James Leggett. He joined the faculty at Universidade Estadual de Campinas (UNICAMP) in 1980. In 1984 he did post-doctoral work at the Kavli Institute for Theoretical Physics (KITP) at the University of California, Santa Barbara and at the Thomas J. Watson Research Laboratory at IBM. In 1994–1995 he spent a sabbatical at the University of Illinois at Urbana-Champaign. He is currently a Full Professor at Universidade Estadual de Campinas. He was the recipient of the Wataghin Prize, from Universidade Estadual de Campinas, for his contributions to theoretical physics in 1986.

Caldeira's research interests are in theoretical condensed matter physics, in particular quantum dissipation and strongly correlated electron systems. His best known work is on the Caldeira–Leggett model, which is one of the first and most important treatments of decoherence in quantum mechanical systems.[1]

7.1 Selected Scientific Articles

A. O. Caldeira and A. J. Leggett, "Influence of damping on quantum interference: an exactly soluble model", Physical Review A 31, 1059 (1985).

A. O. Caldeira and A. J. Leggett, "Quantum tunnelling in a dissipative system", Annals of Physics 149, 374 (1983).

A. O. Caldeira and A. J. Leggett, "Path integral approach to quantum Brownian motion", Physica A 121, 587 (1983).

A. O. Caldeira and A. J. Leggett, "Influence of dissipation on quantum tunnelling in macroscopic systems", Physical Review Letters 46, 211 (1981).

A. H. Castro Neto and A. O. Caldeira, "New model for dissipation in quantum mechanics", Physical Review Letters 67, 1960 (1991).

7.2 References

[1] Amir O. Caldeira's page on the Brazilian Academy of Sciences website. Archived March 11, 2007, at the Wayback Machine.
Chapter 8

Anthony James Leggett

Sir Anthony James Leggett KBE FRS[4] (born 26 March 1938) has been a professor of physics at the University of Illinois at Urbana-Champaign since 1983.[5] Leggett is widely recognized as a world leader in the theory of low-temperature physics, and his pioneering work on superfluidity was recognized by the 2003 Nobel Prize in Physics.[6] He has shaped the theoretical understanding of normal and superfluid helium liquids and strongly coupled superfluids.[7] He set directions for research in the quantum physics of macroscopic dissipative systems and the use of condensed systems to test the foundations of quantum mechanics.[8][9]

8.1 Early life and education

Leggett was born in Camberwell, South London, and raised Catholic.[10] His father's forebears were village cobblers in a small village in Hampshire; his father broke with this tradition to become a greengrocer; his father would relate how he used to ride with him to buy vegetables at the Covent Garden market in London. His mother's parents were of Irish descent; her father had emigrated to England and worked as a clerk in the naval dockyard in Chatham.[10] His maternal grandmother, who survived into her eighties, was sent out to domestic service at the age of twelve. She eventually married his grandfather and raised a large family, then in her late sixties emigrated to Australia to join her daughter and son-in-law, and finally returned to the UK for her last years. His father and mother were each the first in their families to receive a university education; they met and became engaged while students at the Institute of Education at the University of London, but were unable to get married for some years because his father had to care for his own mother and siblings. His father worked as a secondary school teacher of physics, chemistry and mathematics. His mother also taught secondary school mathematics for a time, but had to give this up when he was born. He was eventually followed by two sisters, Clare and Judith, and two brothers, Terence and Paul, all raised in their parents' Roman Catholic faith. Leggett ceased to be a practising Catholic in his early twenties.[10]

Soon after he was born, his parents bought a house in Upper Norwood, south London. When he was 18 months old, WWII broke out and he was evacuated to Englefield Green, a small village in Surrey on the edge of the great park of Windsor Castle, where he stayed for the duration of the war. After the end of the war, he returned to the Upper Norwood house and lived there until 1950; his father taught at a school in north-east London and his mother looked after the five children full-time. He attended the local Catholic primary school and later, following a successful performance in the 11-plus, which he took rather earlier than most, transferred to Wimbledon College.

Leggett won a scholarship to Balliol College, Oxford, in December 1954 and entered the University the following year with the intention of reading the degree technically known as Literae Humaniores (classics). He completed a second undergraduate degree, this time in physics, at Merton College, Oxford.[11] One person who was willing to overlook his unorthodox credentials was Dirk ter Haar, then a reader in theoretical physics and a fellow of Magdalen College, Oxford, so he signed up for research under his supervision. As with all ter Haar's students in that period, the tentatively assigned thesis topic was "Some Problems in the Theory of Many-Body Systems", which left a considerable degree of latitude.

Dirk took a great interest in the personal welfare of his students and their families, and was meticulous in making sure they received adequate support; indeed, he encouraged Leggett to apply for a Prize Fellowship at Magdalen, which he held from 1963 to 1967. In the end Leggett's thesis consisted of studies of two somewhat disconnected problems in the general area of liquid helium, one on higher-order phonon interaction processes in superfluid ⁴He and the other on the properties of dilute solutions of ⁴He in normal liquid ³He (a system which unfortunately turned out to be much less experimentally accessible than the other side of the phase diagram, dilute solutions of ³He in ⁴He). The University of Oxford awarded Leggett an Honorary DLitt in June 2005.
8.2 Career

Leggett spent the period August 1964 – August 1965 as a postdoctoral research fellow at the University of Illinois at Urbana-Champaign (UIUC), where David Pines and his colleagues (John Bardeen, Gordon Baym, Leo Kadanoff and others) provided a fertile environment. He then spent a year in the group of Professor Takeo Matsubara at Kyoto University in Japan.

After one more postdoctoral year, which he spent in roving mode, spending time at Oxford, Harvard and Illinois, in the autumn of 1967 he took up a lectureship at the University of Sussex, where he was to spend the next fifteen years of his career.

In early 1982 he accepted an offer from the University of Illinois at Urbana-Champaign (UIUC) of the MacArthur Chair with which the university had recently been endowed. As he had already committed himself to an eight-month stay as a visiting scientist at Cornell in early 1983,[12] he finally arrived in Urbana in the early fall of that year, and has been there ever since.

Leggett's own research interests shifted away from superfluid ³He around 1980; he worked inter alia on the low-temperature properties of glasses, high-temperature superconductivity, the Bose–Einstein condensate (BEC) atomic gases and above all on the theory of experiments to test whether the formalism of quantum mechanics will continue to describe the physical world as we push it up from the atomic level towards that of everyday life.

In 2007 he accepted a position at the University of Waterloo, Canada. For the next five years, he will spend at least two months a year on campus at the Institute for Quantum Computing.

He currently serves as the chief scientist at the Institute for Condensed Matter Theory, a research institute hosted by the University of Illinois at Urbana-Champaign.

8.3 Research

His research focuses on cuprate superconductivity, conceptual issues in the foundations of quantum mechanics, superfluidity in highly degenerate atomic gases, low temperature properties of amorphous solids and topological quantum computation.

The 29 December 2005 edition of the International Herald Tribune printed an article, "New tests of Einstein's 'spooky' reality", which referred to Leggett's Autumn 2005 debate at a conference in Berkeley, California, with fellow Nobel laureate Norman Ramsey of Harvard University.[13] Both debated the worth of attempts to change quantum theory. Leggett thought attempts were justified; Ramsey opposed. Leggett believes quantum mechanics may be incomplete because of the quantum measurement problem.

8.4 Awards and honours

Leggett is a member of the National Academy of Sciences, the American Philosophical Society, the American Academy of Arts and Sciences, the Russian Academy of Sciences (foreign member), and was elected a Fellow of the Royal Society (FRS) in 1980,[4] a Fellow of the American Physical Society and the American Institute of Physics, and a Life Fellow of the Institute of Physics.

He was awarded the 2003 Nobel Prize in Physics (with V. L. Ginzburg and A. A. Abrikosov) for pioneering contributions to the theory of superconductors and superfluids. He is an Honorary Fellow of the Institute of Physics (U.K.). He was appointed Knight Commander of the Order of the British Empire (KBE) in the 2004 Queen's Birthday Honours for services to physics.[14] He also won the 2002/2003 Wolf Foundation Prize for research on condensed forms of matter (with B. I. Halperin). He was also honoured with the Eugene Feenberg Memorial Medal (1999). He was elected a Foreign Fellow of the Indian National Science Academy (2011).

8.5 Personal life

In June 1972 he married Haruko Kinase. They met at Sussex University, in Brighton, England. In 1978, they had a daughter, Asako.[10] His wife Haruko earned a Ph.D. in cultural anthropology from the University of Illinois at Urbana-Champaign and is currently doing research on the hospice system.[10] Their daughter, Asako, also graduated from UIUC, with a joint major in geography and chemistry. He holds dual US/UK citizenship.

8.6 References

[1] LEGGETT, Sir Anthony (James). ukwhoswho.com. Who's Who. 2015 (online Oxford University Press ed.). A & C Black, an imprint of Bloomsbury Publishing plc. (subscription required)

[2] Anthony James Leggett at the Mathematics Genealogy Project

[3] Freire Jr., Olival; Pessoa Jr., Osvaldo; Bromberg, Joan Lisa. Teoria quântica: estudos históricos e implicações culturais [Quantum theory: historical studies and cultural implications] (in Portuguese). Campina Grande, Brazil: SciELO - EDUEPB. p. 71. ISBN 8578791266.

[4] "Fellows of the Royal Society". London: Royal Society. Archived from the original on 2015-03-16.

[5] Anthony Leggett UIUC Faculty page.

[6] "Nobel Prize in Physics 2003".

[7] A. J. Leggett (1975). "A theoretical description of the new phases of liquid ³He". Rev. Mod. Phys. 47 (2): 331–414. Bibcode:1975RvMP...47..331L. doi:10.1103/RevModPhys.47.331.
[8] A. O. Caldeira and A. J. Leggett (1983). "Quantum tunneling in a dissipative system" (PDF). Ann. Phys. 149 (2): 374–456. Bibcode:1983AnPhy.149..374C. doi:10.1016/0003-4916(83)90202-6.

[9] "The Problems of Physics - A conversation with Tony Leggett", Ideas Roadshow, 2013.

[10] Anthony J. Leggett Autobiography.

[11] Levens, R.G.C., ed. (1964). Merton College Register 1900-1964. Oxford: Basil Blackwell. p. 511.

[12] https://www.bnl.gov/energy/ces/cv/leggett.asp

[13] "New tests of Einstein's 'spooky' reality".

[14] "No. 57315". The London Gazette (Supplement). 12 June 2004.

Quotations related to Anthony James Leggett at Wikiquote
Chapter 9

Nitrogen-vacancy center

The nitrogen-vacancy center (N-V center) is one of numerous point defects in diamond. Its most explored and useful property is photoluminescence, which can be easily detected from an individual N-V center, especially those in the negative charge state (N-V⁻). Electron spins at N-V centers, localized at atomic scales, can be manipulated at room temperature by applying a magnetic field, electric field, microwave radiation or light, or a combination, resulting in sharp resonances in the intensity and wavelength of the photoluminescence. These resonances can be explained in terms of electron spin related phenomena such as quantum entanglement, spin-orbit interaction and Rabi oscillations, and analysed using advanced quantum optics theory. An individual N-V center can be viewed as a basic unit of a quantum computer, and it has potential applications in novel, more efficient fields of electronics and computational science including quantum cryptography and spintronics.

9.1 Structure

[Figure: Primitive picture of the N-V center]

The nitrogen-vacancy center is a point defect in the diamond lattice. It consists of a nearest-neighbor pair of a nitrogen atom, which substitutes for a carbon atom, and a lattice vacancy.

Two charge states of this defect, neutral N-V⁰ and negative N-V⁻, are known from spectroscopic studies using optical absorption,[1][2] photoluminescence (PL),[3] electron paramagnetic resonance (EPR)[4][5][6] and optically detected magnetic resonance (ODMR),[7] which can be viewed as a hybrid of PL and EPR; most details of the structure originate from EPR. A nitrogen atom has five valence electrons. Three of them covalently bond to the carbon atoms and two remain non-bonded and are called a lone pair. The vacancy has three unpaired electrons. Two of them make a quasi covalent bond and one remains unpaired. The overall symmetry, however, is axial (trigonal C₃ᵥ); one can visualize this by imagining the three unpaired vacancy electrons continuously exchanging their roles.

The N-V⁰ thus has one unpaired electron and is paramagnetic. However, despite extensive efforts, electron paramagnetic resonance signals from N-V⁰ avoided detection for decades until 2008. Optical excitation is required to bring the N-V⁰ defect into the EPR-detectable excited state; the signals from the ground state are presumably too broad for EPR detection.[8]

In the negative charge state N-V⁻, an extra electron is located at the vacancy site, forming a spin S = 1 pair with one of the vacancy electrons. As in N-V⁰, the vacancy electrons are exchanging roles, preserving the overall trigonal symmetry. This N-V⁻ state is what is commonly, and somewhat incorrectly, called "the nitrogen-vacancy center". The neutral state has not yet been explored for spin manipulations.

The N-V centers are randomly oriented within a diamond crystal. Ion implantation techniques can enable their artificial creation in predetermined positions.[9]

9.2 Production

Main article: Crystallographic defects in diamond

Nitrogen-vacancy centers are typically produced from single substitutional nitrogen centers (called C or P1 centers in the diamond literature) by irradiation followed by annealing at temperatures above 700 °C.[1] A wide range of high-energy particles are suitable for such irradiation, including electrons, protons, neutrons, ions, and gamma photons. Irradiation produces lattice vacancies, which are a part of N-V centers. Those vacancies are immobile at room temperature, and annealing is required to move them. Single substitutional nitrogen produces strain in the diamond lattice;[10] it therefore efficiently captures moving vacancies,[11] producing the N-V centers.

During chemical vapor deposition of diamond, a small fraction of single substitutional nitrogen impurity (typically <0.5%) traps vacancies generated as a result of the plasma synthesis. Such nitrogen-vacancy centers are preferentially aligned to the growth direction.[12]

Diamond is notorious for having a relatively large lattice strain. Strain splits and shifts optical transitions from individual centers, resulting in broad lines in ensembles of centers.[1] Special care is taken to produce extremely sharp N-V lines (line width ~10 MHz)[13] required for most experiments: high-quality, pure natural or, better, synthetic diamonds (type IIa) are selected. Many of them already have sufficient concentrations of grown-in N-V centers and are suitable for applications. If not, they are irradiated by high-energy particles and annealed. Selection of a certain irradiation dose allows tuning the concentration of produced N-V centers such that individual N-V centers are separated by micrometre-large distances. Then, individual N-V centers can be studied with standard optical microscopes or, better, near-field scanning optical microscopes having sub-micrometre resolution.[7][14]

9.3 Basic optical properties

[Figure: Optical absorption and emission of the N-V center at room temperature]

N-V centers emit bright red light, which can be conveniently excited by visible light sources, such as argon or krypton lasers, frequency-doubled Nd:YAG lasers, dye lasers, or He-Ne lasers. Excitation can also be achieved at energies below that of zero phonon emission.[15] Laser illumination, however, also converts some N-V⁻ into N-V⁰ centers.[3] Emission is very quick (relaxation time ~10 ns).[16][17] At room temperature, no sharp peaks are observed because of the thermal broadening. However, cooling the N-V centers with liquid nitrogen or liquid helium dramatically narrows the lines down to a width of a few megahertz.

An important property of the luminescence from individual N-V centers is its high temporal stability. Whereas many single-molecular emitters bleach after emission of 10⁶–10⁸ photons, no bleaching is observed for the N-V centers at room temperature.[7][14]

Because of these properties, the ideal technique to address the N-V centers is confocal microscopy, both at room temperature and at low temperature. In particular, low-temperature operation is required to specifically address only the zero-phonon line (ZPL).

9.4 Energy level structure and its manipulation by external fields

[Figure: Schematic energy level structure of the N-V center. Electron transitions between the ground ³A and excited ³E states, separated by 1.945 eV (637 nm), produce absorption and luminescence. The ³A state is split by 1027 gauss[4][5] (~12 μeV) and the ³E state by 508 gauss[18] (~5.9 μeV). Numbers 0, ±1 indicate the spin value; splitting due to the orbital degeneracy is not shown.]

The energy level structure of the N-V center was established by combining optical, electron paramagnetic resonance and theoretical results, as shown in the figure. In particular, several theoretical works have been done, using the Linear Combination of Atomic Orbitals (LCAO)
approach, to build the electronic orbitals to describe the possible quantum states, looking at the NV center as a molecule. Moreover, group theory results are used to take into account the symmetry of the diamond crystal, and so the symmetry of the NV itself. The energy levels are labeled according to group theory, and in particular are labelled after the irreducible representations of the C₃ᵥ symmetry group of the defect center, A₁, A₂ and E. The numbers 3 in ³A and 1 in ¹A represent the number of allowable mₛ spin states, or the spin multiplicity, which range from −S to S for a total of 2S+1 possible states. If S = 1, mₛ can be −1, 0, or 1. The ¹A level is predicted by theory but not directly observed in experiment, and it is believed to play an important role in the quenching of photoluminescence.

In the absence of an external magnetic field, the ground and excited states are split by the magnetic interaction between the two unpaired electrons at the N-V center (see microscopic model): when the two electrons have parallel spins (mₛ = ±1), their energy is higher than when the spins are antiparallel (mₛ = 0). The farther apart the electrons are, the weaker their interaction energy D (roughly D ∝ 1/r³).[5] Thus the smaller splitting in the excited state can be viewed in terms of the larger electron-electron separation in the excited state. When an external magnetic field is applied to the N-V center, it does not affect the mₛ = 0 states nor the ¹A state (because it has S = 0), but it splits the mₛ = ±1 levels. If the magnetic field is oriented along the defect axis and reaches about 1027 G (or 508 G), then the mₛ = −1 and mₛ = 0 states in the ground (or excited) state become equal in energy; they strongly interact, resulting in so-called spin polarization, which strongly affects the intensity of optical absorption and luminescence transitions involving those states.[18]

This happens because transitions between electronic states are mediated by a photon, which cannot change the overall spin. Thus optical transitions must preserve the total spin and occur between levels of the same total spin. For this reason, the transitions ³E↔¹A and ¹A↔³A are non-radiative and quench the luminescence. Whereas the mₛ = −1 (excited state) ↔ mₛ = 0 (ground state) transition was forbidden in the absence of an external magnetic field, it becomes allowed when a magnetic field mixes the mₛ = −1 and mₛ = 0 levels in the ground state. As a measurable outcome of this phenomenon, the luminescence intensity can be strongly modulated by a magnetic field.

An important property of the non-radiative transition between ³E and ¹A is that it is stronger for mₛ = ±1 and weaker for mₛ = 0. This property results in a very useful manipulation of the N-V center, which is called optical spin-polarization. First, an off-resonance excitation is applied at a frequency (typically 2.32 eV, i.e. 532 nm) higher than the frequencies of all transitions, so that it lies in the vibronic bands of all transitions. By using a pulse of this wavelength, one can excite all spin states and create phonons as well. A spin state with mₛ = 0, due to conservation of spin in the transition, will be excited to the corresponding mₛ = 0 state in ³E and then go back to the original state. However, a spin state with mₛ = ±1 in ³A, after the excitation, has a relatively high probability to jump to the intermediate state ¹A by a non-radiative transition and to reach the ground state with mₛ = 0. After sufficient cycles, the state of the NV center can be regarded as being in the mₛ = 0 state. Such a process can be used for the initialization of the quantum state in quantum information processing.

There is an additional level splitting in the excited ³E state due to the orbital degeneracy and the spin-orbit interaction. Importantly, this splitting can be modulated by applying a static electric field,[13][19] in a similar fashion to the magnetic-field mechanism outlined above, though the physics of the splitting is somewhat more complex. Nevertheless, an important practical outcome is that the intensity and position of the luminescence lines can be modulated by applying electric and/or magnetic fields.

The energy difference between the mₛ = 0 and mₛ = ±1 states corresponds to the microwave region. Thus, by irradiating the N-V centers with microwave radiation, one can change the relative population of those levels, thereby again modulating the luminescence intensity.

There is an additional splitting of the mₛ = ±1 energy levels, which originates from the "hyperfine" interaction between the nuclear and electron spins. Thus finally, the optical absorption and luminescence from the N-V center consists of roughly a dozen sharp lines with a separation in the MHz–GHz range, and all those lines can be resolved, given proper sample preparation. The intensity and position of those lines can be modulated using the following tools:

1. Amplitude and orientation of a magnetic field, which splits the mₛ = ±1 levels in the ground and excited states.

2. Amplitude and orientation of an elastic field (strain), which can be applied by, e.g., squeezing the diamond. Similar effects can be induced by applying an electric field,[13][19] and the electric field can be controlled with much higher precision.

3. Continuous-wave microwave radiation, which changes the population of the sublevels within the ground and excited state.[19]

4. A tunable laser, which can selectively excite certain sublevels of the ground and excited state.[19][20]

5. In addition to those static perturbations, numerous dynamic effects (spin echo, Rabi oscillations, etc.) can be exploited by applying a carefully designed sequence of microwave pulses.[21][22][23][24][25] The first pulse coherently excites the electron spins, and this coherence is then manipulated and probed by the subsequent pulses. Those dynamic effects are rather important for the practical realization of quantum computers, which ought to work at high frequency.
28 CHAPTER 9. NITROGEN-VACANCY CENTER

As a final remark, it should be noted that the above-described energy structure is by no means exceptional for a defect in diamond or other semiconductors.[26] It was not this structure alone, but a combination of several favorable factors (previous knowledge, easy production and excitation, etc.) which suggested the use of the N-V center.

9.5 Spin dynamics

Diagram for spin dynamics in the N-V center in diamond. The primary transition between the triplet ground and excited states is predominantly spin conserving. Decay via the intermediate singlets gives rise to spin polarization by preferentially switching the spin from ms = ±1 to ms = 0. Both absorption and emission wavelengths are indicated,[27] since they differ due to the Stokes and anti-Stokes shifts. The diagram was mainly inspired by the one presented in Rogers et al.[28]

Thinking of the N-V center as a multielectronic system, we can draw the diagram in the figure at right, where the states are labeled according to their symmetry, with a left superscript of 3 if the state is a triplet (S = 1) and of 1 if it is a singlet (S = 0). It is well accepted today that there are two triplet states and two intermediate singlet states.[29]

The optical excitations conserve the spin state, but there is a high probability of the states |3E, ±1⟩ decaying non-radiatively to the singlet state 1A1, a phenomenon called intersystem crossing (ISC). This happens at an appreciable rate because the energy curve, as a function of the position of the atoms, for the |3E, ±1⟩ state intersects the curve for the 1A1 state. Therefore, for some instant during the vibrational relaxation that the ions undergo after the excitation, it is possible for the spin to flip with little or no energy required in the transition.[30] It is important to note that this mechanism also leads to a transition from |3E, 0⟩ to 1A1, but the rate of this ISC is much lower than that of the |3E, ±1⟩ states; this transition is therefore indicated with a thin line. The diagram also shows the non-radiative and infrared competing decay paths between the two singlet states, and the fine splitting in the triplet states, whose differences in energy correspond to microwave frequencies.

Some authors explain the dynamics of the N-V center by admitting that the transition from |3E, ±1⟩ to |3A2, ±1⟩ is small, but as Robledo et al. show,[31] the fact that the probability of decaying to 1A1 is smaller for |3E, 0⟩ than for |3E, ±1⟩ is by itself enough to polarize the spin to m = 0.

9.6 Potential applications

The spectral shape and intensity of the optical signals from the N-V centers are sensitive to external perturbations. This phenomenon has been explored in prototype ultrasensitive sensors of strain and of electric and magnetic fields. Such sensors can detect magnetic fields of a few nanotesla[32] or electric fields of about 10 V/cm[33] at kilohertz frequencies after 100 seconds of averaging. This sensitivity allows detecting a magnetic or electric field produced by a single electron located tens of nanometers away from an N-V center. Another possible use is as a detector to measure the full mechanical stress tensor in the bulk of the crystal. For this application, the stress-induced splitting of the zero-phonon line and its polarization properties are exploited.[34] A robust frequency-modulated radio receiver using the electron-spin-dependent photoluminescence, which operated up to 350 °C, demonstrates the possibility of use in extreme conditions.[35]

In addition to the quantum optical applications, luminescence from the N-V centers can be applied for imaging biological processes, such as fluid flow in living cells.[36] This application relies on the good compatibility of diamond nanoparticles with living cells and on the favorable properties of photoluminescence from the N-V centers (strong intensity, easy excitation and detection, temporal stability, etc.). Compared with large single-crystal diamonds, nanodiamonds are cheap (about 1 USD per gram) and available from various suppliers. N-V centers are produced in diamond powders with sub-micrometre particle size using the standard process of irradiation and annealing described above. Those nanodiamonds are introduced into a cell, and their luminescence is monitored using a standard fluorescence microscope.[37]

Further, the N-V center has been hypothesized to be a potential bio-mimetic system for emulating the radical-pair spin dynamics of the avian compass.[38][39]
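The magnetometry mentioned above can be illustrated with a short sketch that reads an axial magnetic field off the splitting of the two ground-state spin resonances. It uses the well-known zero-field splitting D ≈ 2.87 GHz and gyromagnetic ratio ≈ 28 GHz/T, in a simplified model that ignores strain and transverse field components:

```python
# Sketch: extracting an axial magnetic field from the two ODMR resonance
# frequencies of an N-V center, using f± = D ± γB for a field B along the
# N-V axis (simplified model: no strain or transverse-field terms).

D = 2.870e9       # zero-field splitting, Hz
GAMMA = 28.03e9   # N-V electron gyromagnetic ratio, Hz per tesla

def field_from_odmr(f_minus, f_plus):
    """Axial field (tesla) from the m=0 -> m=-1 and m=0 -> m=+1 resonances."""
    return (f_plus - f_minus) / (2.0 * GAMMA)

def odmr_lines(b_axial):
    """Inverse map: the two resonance frequencies for a given axial field."""
    return D - GAMMA * b_axial, D + GAMMA * b_axial

f1, f2 = odmr_lines(1e-3)          # simulate a 1 mT field along the axis
print(field_from_odmr(f1, f2))     # recovers approximately 1e-3 T
```

Since 1 nT shifts each resonance by only about 28 Hz, the few-nanotesla sensitivity quoted above requires narrow resonance lines and long averaging times.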
9.7 Historical remarks

The microscopic model and most optical properties of ensembles of the N-V centers were firmly established in the 1970s, based on optical measurements combined with uniaxial stress[1] and on electron paramagnetic resonance.[4][5] However, a minor error in the EPR results (it was assumed that illumination is required to observe N-V EPR signals) resulted in incorrect multiplicity assignments in the energy level structure. In 1991 it was shown that EPR can be observed without illumination,[6] which established the energy level scheme shown above. The magnetic splitting in the excited state has been measured only recently.[18]

The characterization of single N-V centers has become a very competitive field nowadays, with many dozens of papers published in the most prestigious scientific journals. One of the first results was reported back in 1997.[7] In that paper, it was demonstrated that the fluorescence of single N-V centers can be detected by room-temperature fluorescence microscopy and that the defect shows perfect photostability. Also, one of the outstanding properties of the N-V center was demonstrated, namely room-temperature optically detected magnetic resonance.

9.8 See also

• Crystallographic defects in diamond
• Crystallographic defect
• Material properties of diamond

9.9 References

[1] Davies, G.; Hamer, M. F. (1976). "Optical Studies of the 1.945 eV Vibronic Band in Diamond". Proceedings of the Royal Society of London A. 348 (1653): 285. Bibcode:1976RSPSA.348..285D. doi:10.1098/rspa.1976.0039.

[2] Mita, Y. (1996). "Change of absorption spectra in type-Ib diamond with heavy neutron irradiation". Physical Review B. 53 (17): 11360. Bibcode:1996PhRvB..5311360M. doi:10.1103/PhysRevB.53.11360.

[3] Iakoubovskii, K.; Adriaenssens, G. J.; Nesladek, M. (2000). "Photochromism of vacancy-related centres in diamond" (PDF). Journal of Physics: Condensed Matter. 12 (2): 189. Bibcode:2000JPCM...12..189I. doi:10.1088/0953-8984/12/2/308.

[4] Loubser, J. H. N.; van Wyk, J. A. (1977). "Electron Spin Resonance in Annealed Type 1b Diamond". Diamond Research. 11: 47. ISSN 0070-4679.

[5] Loubser, J. H. N.; van Wyk, J. A. (1978). "Electron spin resonance in the study of diamond". Reports on Progress in Physics. 41 (8): 1201. Bibcode:1978RPPh...41.1201L. doi:10.1088/0034-4885/41/8/002.

[6] Redman, D.; Brown, S.; Sands, R.; Rand, S. (1991). "Spin dynamics and electronic states of N-V centers in diamond by EPR and four-wave-mixing spectroscopy". Physical Review Letters. 67 (24): 3420–3423. Bibcode:1991PhRvL..67.3420R. PMID 10044729. doi:10.1103/PhysRevLett.67.3420.

[7] Gruber, A.; et al. (1997). "Scanning Confocal Optical Microscopy and Magnetic Resonance on Single Defect Centers". Science. 276 (5321): 2012. doi:10.1126/science.276.5321.2012.

[8] Felton, S.; et al. (2008). "Electron paramagnetic resonance studies of the neutral nitrogen vacancy in diamond". Physical Review B. 77 (8): 081201. Bibcode:2008PhRvB..77h1201F. doi:10.1103/PhysRevB.77.081201.

[9] Awschalom, D. D.; Epstein, R.; Hanson, R. (2007). "Diamond Age of Spintronics". Scientific American. 297 (4): 84. Bibcode:2007SciAm.297d..84A. doi:10.1038/scientificamerican1007-84.

[10] Lang, A. R.; et al. (1991). "On the Dilatation of Synthetic Type Ib Diamond by Substitutional Nitrogen Impurity". Philosophical Transactions of the Royal Society A. 337 (1648): 497. Bibcode:1991RSPTA.337..497L. doi:10.1098/rsta.1991.0135.

[11] Iakoubovskii, Konstantin; Adriaenssens, Guy J. (2001). "Trapping of vacancies by defects in diamond". Journal of Physics: Condensed Matter. 13 (26): 6015. Bibcode:2001JPCM...13.6015I. doi:10.1088/0953-8984/13/26/316.

[12] Edmonds, A.; d'Haenens-Johansson, U.; Cruddace, R.; Newton, M.; Fu, K.-M.; Santori, C.; Beausoleil, R.; Twitchen, D.; Markham, M. (2012). "Production of oriented nitrogen-vacancy color centers in synthetic diamond". Physical Review B. 86 (3). Bibcode:2012PhRvB..86c5201E. arXiv:1112.5757. doi:10.1103/PhysRevB.86.035201.

[13] Tamarat, Ph.; et al. (2006). "Stark Shift Control of Single Optical Centers in Diamond". Physical Review Letters. 97 (8): 083002. Bibcode:2006PhRvL..97h3002T. PMID 17026299. arXiv:quant-ph/0607170. doi:10.1103/PhysRevLett.97.083002.

[14] Kühn, S.; et al. (2001). "Diamond colour centres as a nanoscopic light source for scanning near-field optical microscopy". Journal of Microscopy. 202 (1): 26. PMID 11298860. doi:10.1046/j.1365-2818.2001.00829.x.

[15] De Weerdt, F.; Collins, A. T.; Zugik, M.; Connor, A. (2005). "Sub-threshold excitation of luminescence of defects in diamonds". Journal of Physics: Condensed Matter. 17 (50): 8005. Bibcode:2005JPCM...17.8005D. doi:10.1088/0953-8984/17/50/018.

[16] Collins, A. T.; Thomaz, M. F.; Jorge, M. I. B. (1983). "Luminescence decay time of the 1.945 eV centre in type Ib diamond". Journal of Physics C. 16 (11): 2177. Bibcode:1983JPhC...16.2177C. doi:10.1088/0022-3719/16/11/020.
[17] Hanzawa, H.; Nisida, Y.; Kato, T. (1997). "Measurement of decay time for the NV centre in Ib diamond with a picosecond laser pulse". Diamond and Related Materials. 6 (11): 1595. Bibcode:1997DRM.....6.1595H. doi:10.1016/S0925-9635(97)00037-X.

[18] Fuchs, G. D.; et al. (2008). "Excited-State Spectroscopy Using Single Spin Manipulation in Diamond". Physical Review Letters. 101 (11): 117601. Bibcode:2008PhRvL.101k7601F. PMID 18851332. arXiv:0806.1939. doi:10.1103/PhysRevLett.101.117601.

[19] Tamarat, Ph.; et al. (2008). "Spin-flip and spin-conserving optical transitions of the nitrogen-vacancy centre in diamond". New Journal of Physics. 10 (4): 045004. Bibcode:2008NJPh...10d5004T. doi:10.1088/1367-2630/10/4/045004.

[20] Santori, C.; et al. (2006). "Coherent Population Trapping of Single Spins in Diamond under Optical Excitation". Physical Review Letters. 97 (24): 247401. Bibcode:2006PhRvL..97x7401S. PMID 17280321. arXiv:quant-ph/0607147. doi:10.1103/PhysRevLett.97.247401.

[21] Hanson, R.; Gywat, O.; Awschalom, D. D. (2006). "Room-temperature manipulation and decoherence of a single spin in diamond". Physical Review B. 74 (16): 161203. Bibcode:2006PhRvB..74p1203H. arXiv:quant-ph/0608233. doi:10.1103/PhysRevB.74.161203.

[22] Dutt, M. V. G.; et al. (2007). "Quantum Register Based on Individual Electronic and Nuclear Spin Qubits in Diamond" (PDF). Science. 316 (5829): 1312–6. Bibcode:2007Sci...316.....D. PMID 17540898. doi:10.1126/science.1139831.

[23] Childress, L.; et al. (2006). "Coherent Dynamics of Coupled Electron and Nuclear Spin Qubits in Diamond". Science. 314 (5797): 281–5. Bibcode:2006Sci...314..281C. PMID 16973839. doi:10.1126/science.1131871.

[24] Batalov, A.; et al. (2008). "Temporal Coherence of Photons Emitted by Single Nitrogen-Vacancy Defect Centers in Diamond Using Optical Rabi-Oscillations". Physical Review Letters. 100 (7): 077401. Bibcode:2008PhRvL.100g7401B. PMID 18352594. doi:10.1103/PhysRevLett.100.077401.

[25] Jelezko, F.; et al. (2004). "Observation of Coherent Oscillations in a Single Electron Spin" (PDF). Physical Review Letters. 92 (7): 076401. Bibcode:2004PhRvL..92g6401J. PMID 14995873. doi:10.1103/PhysRevLett.92.076401.

[26] Aharonovich, I.; et al. (2009). "Enhanced single-photon emission in the near infrared from a diamond color center". Physical Review B. 79 (23): 235316. Bibcode:2009PhRvB..79w5316A. doi:10.1103/PhysRevB.79.235316.

[27] Gordon, Luke; Weber, Justin R.; Varley, Joel B.; Janotti, Anderson; Awschalom, David D.; Van de Walle, Chris G. (2013-10-01). "Quantum computing with defects". MRS Bulletin. 38 (10): 802–807. ISSN 1938-1425. doi:10.1557/mrs.2013.206.

[28] Rogers, L. J.; Doherty, M. W.; Barson, M. S. J.; Onoda, S.; Ohshima, T.; Manson, N. B. (2015-01-01). "Singlet levels of the NV− centre in diamond". New Journal of Physics. 17 (1): 013048. Bibcode:2015NJPh...17a3048R. ISSN 1367-2630. arXiv:1407.6244. doi:10.1088/1367-2630/17/1/013048.

[29] Doherty, Marcus W.; Manson, Neil B.; Delaney, Paul; Jelezko, Fedor; Wrachtrup, Jörg; Hollenberg, Lloyd C. L. (2013-07-01). "The nitrogen-vacancy colour centre in diamond". Physics Reports. 528 (1): 1–45. Bibcode:2013PhR...528....1D. arXiv:1302.3288. doi:10.1016/j.physrep.2013.02.001.

[30] Choi, SangKook (2012-01-01). "Mechanism for optical initialization of spin in NV". Physical Review B. 86 (4). Bibcode:2012PhRvB..86d1202C. doi:10.1103/PhysRevB.86.041202.

[31] Robledo, Lucio; Bernien, Hannes; Sar, Toeno van der; Hanson, Ronald (2011-01-01). "Spin dynamics in the optical cycle of single nitrogen-vacancy centres in diamond". New Journal of Physics. 13 (2): 025013. Bibcode:2011NJPh...13b5013R. ISSN 1367-2630. arXiv:1010.1192. doi:10.1088/1367-2630/13/2/025013.

[32] Maze, J. R.; Stanwix, P. L.; Hodges, J. S.; Hong, S.; Taylor, J. M.; Cappellaro, P.; Jiang, L.; Dutt, M. V. G.; Togan, E.; Zibrov, A. S.; Yacoby, A.; Walsworth, R. L.; Lukin, M. D. (2008). "Nanoscale magnetic sensing with an individual electronic spin in diamond" (PDF). Nature. 455 (7213): 644–647. Bibcode:2008Natur.455..644M. PMID 18833275. doi:10.1038/nature07279.

[33] Dolde, F.; Fedder, H.; Doherty, M. W.; Nöbauer, T.; Rempp, F.; Balasubramanian, G.; Wolf, T.; Reinhard, F.; Hollenberg, L. C. L.; Jelezko, F.; Wrachtrup, J. (2011). "Electric-field sensing using single diamond spins". Nature Physics. 7 (6): 459. Bibcode:2011NatPh...7..459D. arXiv:1103.3432. doi:10.1038/nphys1969.

[34] Grazioso, F.; Patton, B. R.; Delaney, P.; Markham, M. L.; Twitchen, D. J.; Smith, J. M. (2013). "Measurement of the full stress tensor in a crystal using photoluminescence from point defects: The example of nitrogen vacancy centers in diamond". Applied Physics Letters. 103 (10): 101905. Bibcode:2013ApPhL.103j1905G. arXiv:1110.3658. doi:10.1063/1.4819834.

[35] Shao, Linbo; Zhang, Mian; Markham, Matthew; Edmonds, Andrew; Lončar, Marko (15 December 2016). "Diamond Radio Receiver: Nitrogen-Vacancy Centers as Fluorescent Transducers of Microwave Signals". Physical Review Applied. American Physical Society. 6 (6): 064008. doi:10.1103/PhysRevApplied.6.064008. Retrieved 23 December 2016.

[36] Chang, Y.-R.; et al. (2008). "Mass production and dynamic imaging of fluorescent nanodiamonds" (PDF). Nature Nanotechnology. 3 (5): 284–8. PMID 18654525. doi:10.1038/nnano.2008.99.

[37] Aharonovich, I.; Greentree, A. D.; Prawer, S. (2011). "Diamond photonics". Nature Photonics. 5 (7): 397. Bibcode:2011NaPho...5..397A. doi:10.1038/nphoton.2011.54.

[38] "Cryptochrome and Magnetic Sensing", University of Illinois at Urbana-Champaign.

[39] Cai, Jianming; Guerreschi, Gian Giacomo; Briegel, Hans J. (2010-06-04). "Quantum Control and Entanglement in a Chemical Compass". Physical Review Letters. 104 (22): 220502. Bibcode:2010PhRvL.104v0502C. arXiv:0906.2383. doi:10.1103/PhysRevLett.104.220502.
Chapter 10

Quantum mechanics

Not to be confused with Quantum field theory.

For a more accessible and less technical introduction to this topic, see Introduction to quantum mechanics.

Quantum mechanics (QM; also known as quantum physics or quantum theory), including quantum field theory, is a branch of physics which is the fundamental theory of nature at small scales and low energies of atoms and subatomic particles.[1] Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large (macroscopic) scales. Quantum mechanics differs from classical physics in that energy, momentum and other quantities are often restricted to discrete values (quantization), objects have characteristics of both particles and waves (wave–particle duality), and there are limits to the precision with which quantities can be known (uncertainty principle).

Solution to Schrödinger's equation for the hydrogen atom at different energy levels. The brighter areas represent a higher probability of finding an electron.

Quantum mechanics gradually arose from Max Planck's solution in 1900 to the black-body radiation problem (reported 1859) and Albert Einstein's 1905 paper which offered a quantum-based theory to explain the photoelectric effect (reported 1887). Early quantum theory was profoundly reconceived in the mid-1920s.

The reconceived theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle.

Important applications of quantum theory[2] include quantum chemistry, superconducting magnets, light-emitting diodes, and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy, and explanations for many biological and physical phenomena.

10.1 History

Main article: History of quantum mechanics

Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations.[3] In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light and colours. This experiment played a major role in the general acceptance of the wave theory of light.

In 1838, Michael Faraday discovered cathode rays. These studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck.[4] Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets) precisely matched the observed patterns of black-body radiation.

In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation,[5] known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and
underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics.

Following Max Planck's solution in 1900 to the black-body radiation problem (reported 1859), Albert Einstein offered a quantum-based theory to explain the photoelectric effect (1905, reported 1887). Around 1900–1910, the atomic theory and the corpuscular theory of light[6] first came to be widely accepted as scientific fact; these latter theories can be viewed as quantum theories of matter and electromagnetic radiation, respectively.

Among the first to study quantum phenomena in nature were Arthur Compton, C. V. Raman, and Pieter Zeeman, each of whom has a quantum effect named after him. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. At the same time, Ernest Rutherford experimentally discovered the nuclear model of the atom, for which Niels Bohr developed his theory of the atomic structure, which was later confirmed by the experiments of Henry Moseley. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld.[7] This phase is known as old quantum theory.

Max Planck is considered the father of the quantum theory.

According to Planck, each energy element (E) is proportional to its frequency (ν):

E = hν

where h is Planck's constant.
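Planck's relation fixes the energy scale of a single light quantum; a quick numerical sketch (the 532 nm wavelength is an arbitrary example, constants from CODATA):

```python
# Planck's relation E = h*nu for a single photon, evaluated numerically.

h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

def photon_energy_eV(wavelength_m):
    nu = c / wavelength_m      # frequency of the light
    return h * nu / eV         # E = h*nu, converted to eV

print(round(photon_energy_eV(532e-9), 2))  # green light: 2.33 eV
```

Shorter wavelengths mean higher frequencies and therefore larger quanta, which is exactly why only sufficiently blue light ejects electrons in the photoelectric effect.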
Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[8] In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery.[9] However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. He won the 1921 Nobel Prize in Physics for this work.

The 1927 Solvay Conference in Brussels.

Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete quantum of energy that was dependent on its frequency.[10]

The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert, Wilhelm Wien, Satyendra Nath Bose, Arnold Sommerfeld, and others. The Copenhagen interpretation of Niels Bohr became widely accepted.

In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927. It was found that subatomic particles and electromagnetic
waves are neither simply particle nor wave but have certain properties of each. This originated the concept of wave–particle duality.

By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann[11] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. Its speculative modern developments include string theory and quantum gravity theories. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies.

While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors[12] and superfluids.[13]

The word quantum derives from the Latin, meaning "how great" or "how much".[14] In quantum mechanics, it refers to a discrete unit assigned to certain physical quantities, such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and subatomic systems which is today called quantum mechanics. It underlies the mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[15] Some fundamental aspects of the theory are still actively studied.[16]

Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. If the physical nature of an atom were solely described by classical mechanics, electrons would not orbit the nucleus, since orbiting electrons emit radiation (due to circular motion) and would eventually collide with the nucleus due to this loss of energy. This framework was unable to explain the stability of atoms. Instead, electrons remain in an uncertain, non-deterministic, smeared, probabilistic wave–particle orbital about the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism.[17]

Quantum mechanics was initially developed to provide a better explanation and description of the atom, especially the differences in the spectra of light emitted by different isotopes of the same chemical element, as well as subatomic particles. In short, the quantum-mechanical atomic model has succeeded spectacularly in the realm where classical mechanics and electromagnetism falter.

Broadly speaking, quantum mechanics incorporates four classes of phenomena for which classical physics cannot account:

• quantization of certain physical properties
• quantum entanglement
• principle of uncertainty
• wave–particle duality

10.2 Mathematical formulations

Main article: Mathematical formulation of quantum mechanics
See also: Quantum logic

In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac,[18] David Hilbert,[19] John von Neumann,[20] and Hermann Weyl,[21] the possible states of a quantum mechanical system are symbolized[22] as unit vectors (called state vectors). Formally, these reside in a complex separable Hilbert space (variously called the state space or the associated Hilbert space of the system) that is well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a maximally Hermitian (precisely: by a self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues.

In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function, also referred to as a state vector in a complex vector space.[23] This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, to arbitrary precision. For instance, electrons may be considered (to a certain probability) to be located somewhere within a given region of space, but with their exact positions unknown. Contours of constant probability, often referred to as "clouds", may be drawn around the nucleus of an atom to conceptualize
where the electron might be located with the most probability. Heisenberg's uncertainty principle quantifies the inability to precisely locate the particle given its conjugate momentum.[24]

According to one interpretation, as the result of a measurement, the wave function containing the probability information for a system collapses from a given initial state to a particular eigenstate. The possible results of a measurement are the eigenvalues of the operator representing the observable, which explains the choice of Hermitian operators, for which all the eigenvalues are real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute.

The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the relative state interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.[25]

Generally, quantum mechanics does not assign definite values. Instead, it makes a prediction using a probability distribution; that is, it describes the probability of obtaining the possible outcomes from measuring an observable. Often these results are skewed by many causes, such as dense probability clouds. Probability clouds are approximate (but better than the Bohr model), whereby electron location is given by a probability function, the wave function eigenvalue, such that the probability is the squared modulus of the complex amplitude.[26][27] Naturally, these probabilities will depend on the quantum state at the instant of the measurement. Hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable ("eigen" can be translated from German as meaning "inherent" or "characteristic").[28]

In the everyday world, it is natural and intuitive to think of everything (every observable) as being in an eigenstate. Everything appears to have a definite position, a definite momentum, a definite energy, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values of a particle's position and momentum (since they are conjugate pairs) or its energy and time (since they too are conjugate pairs); rather, it provides only a range of probabilities in which that particle might be given its momentum and momentum probability. Therefore, it is helpful to use different words to describe states having uncertain values and states having definite values (eigenstates). Usually, a system will not be in an eigenstate of the observable (particle) we are interested in. However, if one measures the observable, the wave function will instantaneously be an eigenstate (or "generalized" eigenstate) of that observable. This process is known as wave function collapse, a controversial and much-debated process[29] that involves expanding the system under study to include the measurement device. If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of the wave function collapsing into each of the possible eigenstates. For example, the free particle in the previous example will usually have a wave function that is a wave packet centered around some mean position x0 (neither an eigenstate of position nor of momentum). When one measures the position of the particle, it is impossible to predict with certainty the result.[25] It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x.[30]

The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution. The time evolution of wave functions is deterministic in the sense that, given a wave function at an initial time, it makes a definite prediction of what the wave function will be at any later time.[31]

During a measurement, on the other hand, the change of the initial wave function into another, later wave function is not deterministic but unpredictable (i.e., random).[32][33]

Wave functions change as time progresses. The Schrödinger equation describes how wave functions change in time, playing a role similar to Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain with time. This also has the effect of turning a position eigenstate (which can be thought of as an infinitely sharp wave packet) into a broadened wave packet that no longer represents a (definite, certain) position eigenstate.[34]

Some wave functions produce probability distributions
36 CHAPTER 10. QUANTUM MECHANICS
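The wave-packet spreading described above can be sketched numerically. The following is a minimal illustration, not from the text, assuming natural units (hbar = m = 1) and an arbitrary initial width; it propagates a Gaussian packet exactly in momentum space and compares the measured width of the probability density against the standard analytic result sigma(t) = sigma0 * sqrt(1 + (hbar t / 2 m sigma0^2)^2):

```python
import numpy as np

# Sketch (illustrative, natural units hbar = m = 1): free-particle
# spreading of a Gaussian wave packet, evolved exactly in momentum space.
hbar = m = 1.0
sigma0 = 1.0                          # initial packet width (arbitrary choice)
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]

# Initial Gaussian wave packet centered at x = 0 with zero mean momentum.
psi0 = (2 * np.pi * sigma0**2) ** -0.25 * np.exp(-x**2 / (4 * sigma0**2))

def evolve(psi, t):
    """Apply the free-particle propagator exp(-i hbar k^2 t / 2m) in k-space."""
    k = 2 * np.pi * np.fft.fftfreq(len(x), d=dx)
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

def width(psi):
    """Standard deviation of the position probability density |psi|^2."""
    p = np.abs(psi) ** 2
    p /= p.sum() * dx
    mean = (x * p).sum() * dx
    return np.sqrt(((x - mean) ** 2 * p).sum() * dx)

for t in (0.0, 2.0, 4.0):
    expected = sigma0 * np.sqrt(1 + (hbar * t / (2 * m * sigma0**2)) ** 2)
    print(f"t={t}: numeric {width(evolve(psi0, t)):.3f}, analytic {expected:.3f}")
```

The numeric width grows with time and tracks the analytic formula, which is the spreading the text describes.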
Some wave functions produce probability distributions that are constant, or independent of time, such as when, in a stationary state of constant energy, time vanishes in the absolute square of the wave function. Many systems that are treated dynamically in classical mechanics are described by such static wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a static, spherically symmetric wave function surrounding the nucleus (Fig. 1) (note, however, that only the lowest angular momentum states, labeled s, are spherically symmetric).[35]

Fig. 1: Probability densities corresponding to the wave functions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momenta (increasing across from left to right: s, p, d, ...). Brighter areas correspond to higher probability density in a position measurement. Such wave functions are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics, and are modes of oscillation as well, possessing a sharp energy and, thus, a definite frequency. The angular momentum and energy are quantized, and take only discrete values like those shown (as is the case for resonant frequencies in acoustics).

The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value. Whereas the absolute value of the probability amplitude encodes information about probabilities, its phase encodes information about the interference between quantum states. This gives rise to the wave-like behavior of quantum states.

As it turns out, analytic solutions of the Schrödinger equation are available for only a very small number of relatively simple model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom are the most important representatives. Even the helium atom, which contains just one more electron than does the hydrogen atom, has defied all attempts at a fully analytic treatment.

There exist several techniques for generating approximate solutions, however. In the important method known as perturbation theory, one uses the analytic result for a simple quantum mechanical model to generate a result for a more complicated model that is related to the simpler model by (for one example) the addition of a weak potential energy. Another method is the semi-classical equation of motion approach, which applies to systems for which quantum mechanics produces only weak (small) deviations from classical behavior. These deviations can then be computed based on the classical motion. This approach is particularly important in the field of quantum chaos.

10.3 Mathematically equivalent formulations of quantum mechanics

There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics: matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger).[36]

Especially since Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born in the development of QM was overlooked until the 1954 Nobel award. The role is noted in a 2005 biography of Born, which recounts his role in the matrix formulation of quantum mechanics, and the use of probability amplitudes. Heisenberg himself acknowledged having learned matrices from Born, as published in a 1940 festschrift honoring Max Planck.[37]

In the matrix formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).[38] An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.
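As a small concrete sketch of the matrix formulation described above (assumptions: natural units hbar = m = omega = 1, and an arbitrary truncation size N = 40), the harmonic-oscillator Hamiltonian can be built from truncated ladder-operator matrices and diagonalized to recover the discrete spectrum E_n = n + 1/2:

```python
import numpy as np

# Illustrative matrix mechanics: represent the harmonic oscillator by
# finite operator matrices (a truncation of the infinite Heisenberg
# matrices) and diagonalize the Hamiltonian.
N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)            # position observable
p = 1j * (a.conj().T - a) / np.sqrt(2)       # momentum observable

H = (x @ x + p @ p) / 2                      # Hamiltonian H = (x^2 + p^2)/2
energies = np.sort(np.linalg.eigvalsh(H))
print(energies[:5])                          # lowest levels: n + 1/2
```

The lowest eigenvalues come out as 0.5, 1.5, 2.5, ..., the quantized energy levels; only the top of the truncated spectrum is distorted by the finite matrix size.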
10.4 Interactions with other scientific theories

The rules of quantum mechanics are fundamental. They assert that the state space of a system is a Hilbert space and that observables of that system are Hermitian operators acting on that space, although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical mechanics when a system moves to higher energies or, equivalently, larger quantum numbers; i.e., whereas a single particle exhibits a degree of randomness, in systems incorporating millions of particles averaging takes over and, at the high energy limit, the statistical probability of random behaviour approaches zero. In other words, classical mechanics is simply a quantum mechanics of large systems. This high energy limit is known as the classical or correspondence limit. One can even start from an established classical model of a particular system, then attempt to guess the underlying quantum model that would give rise to the classical model in the correspondence limit.

When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.

Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical e²/(4πε₀r) Coulomb potential. This semi-classical approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.

Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory) by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. These three men shared the Nobel Prize in Physics in 1979 for this work.[39]

It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable, and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity (the most accurate theory of gravity currently known) and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity.

Classical mechanics has also been extended into the complex domain, with complex classical mechanics exhibiting behaviors similar to quantum mechanics.[40]

10.4.1 Quantum mechanics and classical physics

Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[41] According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles).[42] The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[43] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.

Quantum coherence is an essential difference between classical and quantum theories, as illustrated by the Einstein-Podolsky-Rosen (EPR) paradox, an attack on a certain philosophical interpretation of quantum mechanics by an appeal to local realism.[44] Quantum interference involves adding together probability amplitudes, whereas for classical waves it is the intensities that are added. For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena characteristic of quantum systems.[45] Quantum coherence is not typically evident at macroscopic scales, though an exception to this rule may occur at extremely low temperatures (i.e., approaching absolute zero) at which quantum behavior may manifest itself macroscopically.[46]
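The contrast drawn above between adding amplitudes and adding intensities can be made concrete with a toy two-path calculation (the amplitudes and phases are arbitrary illustrative numbers, not from the text):

```python
import numpy as np

# Two equal-weight paths with a varying relative phase. Quantum mechanics
# adds the complex amplitudes before squaring; a classical intensity
# picture adds the squared magnitudes directly and shows no interference.
phases = np.linspace(0, 2 * np.pi, 5)
amp1 = np.full_like(phases, 1 / np.sqrt(2), dtype=complex)   # path 1
amp2 = np.exp(1j * phases) / np.sqrt(2)                      # path 2

quantum = np.abs(amp1 + amp2) ** 2                 # 1 + cos(phase): fringes
classical = np.abs(amp1) ** 2 + np.abs(amp2) ** 2  # constant 1: no fringes
print(quantum.round(3))     # varies between 0 and 2
print(classical.round(3))   # always 1
```

The quantum result oscillates with the relative phase (constructive and destructive interference), while the summed intensities stay flat.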
This is in accordance with the following observations:

Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[47]

While the seemingly exotic behavior of matter posited by quantum mechanics and relativity theory becomes more apparent when dealing with particles of extremely small size or velocities approaching the speed of light, the laws of classical, often considered "Newtonian", physics remain accurate in predicting the behavior of the vast majority of large objects (on the order of the size of large molecules or bigger) at velocities much smaller than the velocity of light.[48]

10.4.2 Copenhagen interpretation of quantum versus classical kinematics

A big difference between classical and quantum mechanics is that they use very different kinematic descriptions.[49]

In Niels Bohr's mature view, quantum mechanical phenomena are required to be experiments, with complete descriptions of all the devices for the system: preparative, intermediary, and finally measuring. The descriptions are in macroscopic terms, expressed in ordinary language, supplemented with the concepts of classical mechanics.[50][51][52][53] The initial condition and the final condition of the system are respectively described by values in a configuration space, for example a position space, or some equivalent space such as a momentum space. Quantum mechanics does not admit a completely precise description, in terms of both position and momentum, of an initial condition or state (in the classical sense of the word) that would support a precisely deterministic and causal prediction of a final condition.[54][55] In this sense, advocated by Bohr in his mature writings, a quantum phenomenon is a process, a passage from initial to final condition, not an instantaneous state in the classical sense of that word.[56][57] Thus there are two kinds of processes in quantum mechanics: stationary and transitional. For a stationary process, the initial and final condition are the same. For a transition, they are different. Obviously, by definition, if only the initial condition is given, the process is not determined.[54] Given its initial condition, prediction of its final condition is possible, causally but only probabilistically, because the Schrödinger equation is deterministic for wave function evolution, but the wave function describes the system only probabilistically.[58][59]

For many experiments, it is possible to think of the initial and final conditions of the system as being a particle. In some cases it appears that there are potentially several spatially distinct pathways or trajectories by which a particle might pass from initial to final condition. It is an important feature of the quantum kinematic description that it does not permit a unique definite statement of which of those pathways is actually followed. Only the initial and final conditions are definite, and, as stated in the foregoing paragraph, they are defined only as precisely as allowed by the configuration space description or its equivalent. In every case for which a quantum kinematic description is needed, there is always a compelling reason for this restriction of kinematic precision. An example of such a reason is that for a particle to be experimentally found in a definite position, it must be held motionless; for it to be experimentally found to have a definite momentum, it must have free motion; these two requirements are logically incompatible.[60][61]

Classical kinematics does not primarily demand experimental description of its phenomena. It allows completely precise description of an instantaneous state by a value in phase space, the Cartesian product of configuration and momentum spaces. This description simply assumes or imagines a state as a physically existing entity without concern about its experimental measurability. Such a description of an initial condition, together with Newton's laws of motion, allows a precise deterministic and causal prediction of a final condition, with a definite trajectory of passage. Hamiltonian dynamics can be used for this. Classical kinematics also allows the description of a process analogous to the initial and final condition description used by quantum mechanics. Lagrangian mechanics applies to this.[62] For processes that need account to be taken of actions of a small number of Planck constants, classical kinematics is not adequate; quantum mechanics is needed.

10.4.3 General relativity and quantum mechanics

Even with the defining postulates of both Einstein's theory of general relativity and quantum theory being indisputably supported by rigorous and repeated empirical evidence, and while they do not directly contradict each other theoretically (at least with regard to their primary claims), they have proven extremely difficult to incorporate into one consistent, cohesive model.[63]

Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major
goal of 20th and 21st century physics. Many prominent physicists, including Stephen Hawking, have labored for many years in the attempt to discover a theory underlying everything. This TOE would combine not only the different models of subatomic physics, but also derive the four fundamental forces of nature (the strong force, electromagnetism, the weak force, and gravity) from a single force or phenomenon. While Stephen Hawking was initially a believer in the Theory of Everything, after considering Gödel's incompleteness theorem, he concluded that one is not obtainable, and stated so publicly in his lecture "Gödel and the End of Physics" (2002).[64]

10.4.4 Attempts at a unified field theory

Main article: Grand unified theory

The quest to unify the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory in competition with general relativity,[65][66] has been successfully merged with the weak nuclear force into the electroweak force, and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 10^14 GeV the three aforementioned forces are fused into a single unified field.[67] Beyond this "grand unification", it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10^19 GeV. However, while special relativity is parsimoniously incorporated into quantum electrodynamics, general relativity, currently the best theory describing the gravitation force, has not been fully incorporated into quantum theory. One of those searching for a coherent TOE is Edward Witten, a theoretical physicist who formulated M-theory, which is an attempt at describing the supersymmetry-based string theory. M-theory posits that our apparent 4-dimensional spacetime is, in reality, an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 of the spatial dimensions are, at lower energies, completely compactified (or infinitely curved) and not readily amenable to measurement or probing.

Another popular theory is loop quantum gravity (LQG), a theory first proposed by Carlo Rovelli that describes the quantum properties of gravity. It is also a theory of quantum space and quantum time, because in general relativity the geometry of spacetime is a manifestation of gravity. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. The main output of the theory is a physical picture of space where space is granular. The granularity is a direct consequence of the quantization. It has the same nature as the granularity of the photons in the quantum theory of electromagnetism or the discrete levels of the energy of the atoms. But here it is space itself which is discrete. More precisely, space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 1.616×10^-35 m. According to theory, there is no meaning to length shorter than this (cf. Planck scale energy). Therefore, LQG predicts that not just matter, but also space itself, has an atomic structure.

10.5 Philosophical implications

Main article: Interpretations of quantum mechanics

Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated by society and many leading scientists. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[68] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[69]

The Copenhagen interpretation, due largely to Niels Bohr and Werner Heisenberg, remains most widely accepted amongst physicists, some 75 years after its enunciation. According to this interpretation, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but instead must be considered a final renunciation of the classical idea of causality. It is also believed therein that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the conjugate nature of evidence obtained under different experimental situations.
Albert Einstein, himself one of the founders of quantum theory, did not accept some of the more philosophical or metaphysical interpretations of quantum mechanics, such as rejection of determinism and of causality. He is famously quoted as saying, in response to this aspect, "God does not play with dice".[70] He rejected the concept that the state of a physical system depends on the experimental arrangement for its measurement. He held that a state of nature occurs in its own right, regardless of whether or how it might be observed. In that view, he is supported by the currently accepted definition of a quantum state, which remains invariant under arbitrary choice of configuration space for its representation, that is to say, manner of observation. He also held that underlying quantum mechanics there should be a theory that thoroughly and directly expresses the rule against action at a distance; in other words, he insisted on the principle of locality. He considered, but rejected on theoretical grounds, a particular proposal for hidden variables to obviate the indeterminism or acausality of quantum mechanical measurement. He considered that quantum mechanics was a currently valid but not a permanently definitive theory for quantum phenomena. He thought its future replacement would require profound conceptual advances, and would not come quickly or easily. The Bohr-Einstein debates provide a vibrant critique of the Copenhagen interpretation from an epistemological point of view. In arguing for his views, he produced a series of objections, the most famous of which has become known as the Einstein-Podolsky-Rosen paradox.

John Bell showed that this EPR paradox led to experimentally testable differences between quantum mechanics and theories that rely on added hidden variables. Experiments have been performed confirming the accuracy of quantum mechanics, thereby demonstrating that quantum mechanics cannot be improved upon by addition of hidden variables.[71] Alain Aspect's initial experiments in 1982, and many subsequent experiments since, have definitively verified quantum entanglement.

Entanglement, as demonstrated in Bell-type experiments, does not, however, violate causality, since no transfer of information happens. Quantum entanglement forms the basis of quantum cryptography, which is proposed for use in high-security commercial applications in banking and government.

The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[72] This is not accomplished by introducing some "new axiom" to quantum mechanics, but on the contrary, by removing the axiom of the collapse of the wave packet. All of the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical (not just formally mathematical, as in other interpretations) quantum superposition. Such a superposition of consistent state combinations of different systems is called an entangled state. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can only observe the universe (i.e., the consistent state contribution to the aforementioned superposition) that we, as observers, inhabit. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, these "parallel universes" will never be accessible to us. The inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away at the speed of light towards the other end of the universe. In order to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was originally measured. Not only is this completely impractical, but even if one could theoretically do this, it would have to destroy any evidence that the original measurement took place (including the physicist's memory). In light of these Bell tests, Cramer formulated his transactional interpretation in 1986.[73] Relational quantum mechanics appeared in the late 1990s as the modern derivative of the Copenhagen interpretation.

10.6 Applications

Quantum mechanics has had enormous success in explaining many of the features of our universe.[74] Quantum mechanics is often the only tool available that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism).

Quantum mechanics is also critically important for understanding how individual atoms combine covalently to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Relativistic quantum mechanics can, in principle, mathematically describe most of chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and the magnitudes of the energies involved.[75] Furthermore, most of the calculations performed in modern computational chemistry rely on quantum mechanics.

In many aspects modern technology operates at a scale where quantum effects are significant.

10.6.1 Electronics

Many modern electronic devices are designed using quantum mechanics. Examples include the laser, the transistor (and thus the microchip), the electron microscope, and magnetic resonance imaging (MRI). The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronics systems, computer and telecommunication devices. Another application is the light-emitting diode, which is a high-efficiency source of light.

Many electronic devices operate under the effect of quantum tunneling. It even exists in the simple light switch; the switch would not work if electrons could not quantum tunnel through the layer of oxidation on the metal contact surfaces. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells. Some negative differential resistance devices also utilize the quan-
tum tunneling effect, such as the resonant tunneling diode. Unlike classical diodes, its current is carried by resonant tunneling through two potential barriers (see the figure caption below). Its negative resistance behavior can only be understood with quantum mechanics: as the confined state moves close to the Fermi level, the tunnel current increases. As it moves away, the current decreases. Quantum mechanics is vital to understanding and designing such electronic devices.

Figure: A working mechanism of a resonant tunneling diode device, based on the phenomenon of quantum tunneling through potential barriers. (Left: band diagram; Center: transmission coefficient; Right: current-voltage characteristics.) As shown in the band diagram (left), although there are two barriers, electrons still tunnel through via the confined states between the two barriers (center), conducting current.

10.6.2 Cryptography

Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information.

10.6.3 Quantum computing

A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Instead of using classical bits, quantum computers use qubits, which can be in superpositions of states. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances.

10.6.4 Macroscale quantum effects

While quantum mechanics primarily applies to the smaller atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale. Superfluidity, the frictionless flow of a liquid at temperatures near absolute zero, is one well-known example. So is the closely related phenomenon of superconductivity, the frictionless flow of an electron gas in a conducting material (an electric current) at sufficiently low temperatures. The fractional quantum Hall effect is a topologically ordered state which corresponds to patterns of long-range quantum entanglement.[76] States with different topological orders (or different patterns of long-range entanglement) cannot change into each other without a phase transition.

10.6.5 Quantum theory

Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black-body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[77] Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this fundamental process of plants and many other organisms.[78] Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. Since classical formulas are much simpler and easier to compute than quantum formulas, classical approximations are used and preferred when the system is large enough to render the effects of quantum mechanics insignificant.

10.7 Examples

10.7.1 Free particle

For example, consider a free particle. In quantum mechanics, there is wave-particle duality, so the properties of the particle can be described as the properties of a wave. Therefore, its quantum state can be represented as a wave of arbitrary shape and extending over space as a wave function. The position and momentum of the particle are observables. The uncertainty principle states that both the position and the momentum cannot simultaneously be measured with complete precision. However, one can measure the position (alone) of a moving free particle, creating an eigenstate of position with a wave function that is very large (a Dirac delta) at a particular position x, and zero everywhere else. If one performs a position measurement on such a wave function, the resultant x will be obtained with 100% probability (i.e., with full certainty, or complete precision). This is called an eigenstate of position, or, stated in mathematical terms, a generalized position eigenstate (eigendistribution). If the particle is in an eigenstate of position, then its momentum is completely unknown.
42 CHAPTER 10. QUANTUM MECHANICS

ticle is in an eigenstate of momentum, then its position 10.7.3 Rectangular potential barrier
is completely unknown.[79] In an eigenstate of momen-
tum having a plane wave form, it can be shown that the Main article: Rectangular potential barrier
wavelength is equal to h/p, where h is Plancks constant
and p is the momentum of the eigenstate.[80] This is a model for the quantum tunneling eect which
plays an important role in the performance of modern
10.7.2 Step potential technologies such as ash memory and scanning tunnel-
ing microscopy. Quantum tunneling is central to physical
Main article: Solution of Schrdinger equation for a step phenomena involved in superlattices.
potential
The potential in this case is given by: 10.7.4 Particle in a box

Scattering at a nite potential step of height V0 , shown in green.


The amplitudes and direction of left- and right-moving waves
are indicated. Yellow is the incident wave, blue are reected and
transmitted waves, red does not occur. E > V0 for this gure.
1-dimensional potential energy box (or innite potential well)

{ Main article: Particle in a box


0, x < 0,
V (x) =
V0 , x 0. The particle in a one-dimensional potential energy box is
the most mathematically simple example where restraints
The solutions are superpositions of left- and right-moving lead to the quantization of energy levels. The box is de-
waves: ned as having zero potential energy everywhere inside a
( ) certain region, and innite potential energy everywhere
1 (x) = 1k A eik1 x + A eik1 x x< outside that region. For the one-dimensional case in the
1
0 ( 2 (x) = x direction, the time-independent Schrdinger equation
)
1 B e ik2 x
+ B e ik2 x
x>0 may be written[81]
k 2

where the wave vectors are related to the energy via 2 d2


= E.
2m dx2
With the dierential operator dened by
k1 = 2mE/2

k2 = 2m(E V0 )/2 d
px = i
with coecients A and B determined from the boundary dx
conditions and by imposing a continuous derivative on the the previous equation is evocative of the classic kinetic
solution. energy analogue,
Each term of the solution can be interpreted as an in-
cident, reected, or transmitted component of the wave,
1 2
allowing the calculation of transmission and reection co- px = E,
ecients. Notably, in contrast to classical mechanics, in- 2m
cident particles with energies greater than the potential with state in this case having energy E coincident with
step are partially reected. the kinetic energy of the particle.
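As a quick numerical sanity check of this kinetic-energy form (a sketch in units ℏ = m = 1; the helper name is ad hoc), a standing wave sin(kx) reproduces E = k²/2 when the second derivative is taken by central differences:

```python
import math

def local_energy(k, x, h=1e-4):
    # (-1/2) psi'' / psi for psi(x) = sin(k x), using a central difference;
    # the exact eigenvalue is E = k^2 / 2 in units hbar = m = 1
    psi = lambda y: math.sin(k * y)
    d2 = (psi(x + h) - 2.0 * psi(x) + psi(x - h)) / (h * h)
    return -0.5 * d2 / psi(x)

# For k = 2 the exact energy is 2.0; the finite-difference value agrees
# up to the O(h^2) truncation error
assert abs(local_energy(2.0, 0.4) - 2.0) < 1e-5
```

The same check applies at any point where the wave function is nonzero, which is exactly the property the boundary conditions of the box exploit below.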
The general solutions of the Schrödinger equation for the particle in a box are

\psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},

or, from Euler's formula,

\psi(x) = C \sin kx + D \cos kx.

The infinite potential walls of the box determine the values of C, D, and k at x = 0 and x = L, where \psi must be zero. Thus, at x = 0,

\psi(0) = 0 = C \sin 0 + D \cos 0 = D

and D = 0. At x = L,

\psi(L) = 0 = C \sin kL,

in which C cannot be zero as this would conflict with the Born interpretation. Therefore, since \sin(kL) = 0, kL must be an integer multiple of \pi,

k = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots.

The quantization of energy levels follows from this constraint on k, since

E = \frac{\hbar^2 \pi^2 n^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.

10.7.5 Finite potential well

Main article: Finite potential well

A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth.

The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem, as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions, as it is nonzero in regions outside the well.

10.7.6 Harmonic oscillator

Main article: Quantum harmonic oscillator

Some trajectories of a harmonic oscillator (i.e. a ball attached to a spring) in classical mechanics (A-B) and quantum mechanics (C-H). In quantum mechanics, the position of the ball is represented by a wave (called the wave function), with the real part shown in blue and the imaginary part shown in red. Some of the trajectories (such as C, D, E, and F) are standing waves (or "stationary states"). Each standing-wave frequency is proportional to a possible energy level of the oscillator. This energy quantization does not occur in classical physics, where the oscillator can have any energy.

As in the classical case, the potential for the quantum harmonic oscillator is given by

V(x) = \frac{1}{2} m \omega^2 x^2.

This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by

\psi_n(x) = \frac{1}{\sqrt{2^n \, n!}} \left( \frac{m\omega}{\pi\hbar} \right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}} H_n\!\left( \sqrt{\frac{m\omega}{\hbar}} \, x \right), \qquad n = 0, 1, 2, \ldots,

where H_n are the Hermite polynomials

H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} \left( e^{-x^2} \right),

and the corresponding energy levels are

E_n = \hbar \omega \left( n + \frac{1}{2} \right).

This is another example illustrating the quantization of energy for bound states.
10.8 See also

Angular momentum diagrams (quantum mechanics)
EPR paradox
Fractional quantum mechanics
List of quantum-mechanical systems with analytical solutions
Macroscopic quantum phenomena
Phase space formulation
Regularization (physics)
Spherical basis

10.9 Notes

[1] Feynman, Richard; Leighton, Robert; Sands, Matthew (1964). The Feynman Lectures on Physics, Vol. 3. California Institute of Technology. p. 1.1. ISBN 0201500647.

[2] Matson, John. "What Is Quantum Mechanics Good for?". Scientific American. Retrieved 18 May 2016.

[3] Max Born & Emil Wolf, Principles of Optics, 1999, Cambridge University Press

[4] Mehra, J.; Rechenberg, H. (1982). The historical development of quantum theory. New York: Springer-Verlag. ISBN 0387906428.

[5] Kragh, Helge (2002). Quantum Generations: A History of Physics in the Twentieth Century. Princeton University Press. p. 58. ISBN 0-691-09552-3. Extract of page 58

[6] Ben-Menahem, Ari (2009). Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1. Springer. p. 3678. ISBN 3540688315. Extract of page 3678

[7] E. Arunan (2010). "Peter Debye" (PDF). Resonance (journal). Indian Academy of Sciences. 15 (12).

[8] Kuhn, T. S. (1978). Black-body theory and the quantum discontinuity 1894-1912. Oxford: Clarendon Press. ISBN 0195023838.

[9] Kragh, Helge (1 December 2000), Max Planck: the reluctant revolutionary, PhysicsWorld.com

[10] Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" [On a heuristic point of view concerning the production and transformation of light]. Annalen der Physik. 17 (6): 132-148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607. Reprinted in The collected papers of Albert Einstein, John Stachel, editor, Princeton University Press, 1989, Vol. 2, pp. 149-166, in German; see also "Einstein's early work on the quantum hypothesis", ibid. pp. 134-148.

[11] van Hove, Leon (1958). "Von Neumann's contributions to quantum mechanics" (PDF). Bulletin of the American Mathematical Society. 64 (3): Part 2: 95-99. doi:10.1090/s0002-9904-1958-10206-2.

[12] Feynman, Richard. "The Feynman Lectures on Physics III 21-4". California Institute of Technology. Retrieved 2015-11-24. "...it was long believed that the wave function of the Schrödinger equation would never have a macroscopic representation analogous to the macroscopic representation of the amplitude for photons. On the other hand, it is now realized that the phenomena of superconductivity presents us with just this situation."

[13] Richard Packard (2006) "Berkeley Experiments on Superfluid Macroscopic Quantum Effects". Archived November 25, 2015, at the Wayback Machine. Retrieved 2015-11-24.

[14] "Quantum - Definition and More from the Free Merriam-Webster Dictionary". Merriam-webster.com. Retrieved 2012-08-18.

[15] Thall, Edwin. "Thall's History of Quantum Mechanics". Florida Community College at Jacksonville. Archived from the original on October 7, 2009. Retrieved May 23, 2009.

[16] "ysfine.com". ysfine.com. Retrieved 11 September 2015.

[17] "QUANTUM MECHANICS". geocities.com. 2009-10-26. Archived from the original on 2009-10-26. Retrieved 2016-06-13.

[18] P.A.M. Dirac, The Principles of Quantum Mechanics, Clarendon Press, Oxford, 1930.

[19] D. Hilbert, Lectures on Quantum Theory, 1915-1927

[20] J. von Neumann, Mathematische Grundlagen der Quantenmechanik, Springer, Berlin, 1932 (English translation: Mathematical Foundations of Quantum Mechanics, Princeton University Press, 1955).

[21] H. Weyl, The Theory of Groups and Quantum Mechanics, 1931 (original title: Gruppentheorie und Quantenmechanik).

[22] Dirac, P.A.M. (1958). The Principles of Quantum Mechanics, 4th edition, Oxford University Press, Oxford UK, p. ix: "For this reason I have chosen the symbolic method, introducing the representatives later merely as an aid to practical calculation."

[23] Greiner, Walter; Müller, Berndt (1994). Quantum Mechanics Symmetries, Second edition. Springer-Verlag. p. 52. ISBN 3-540-58080-8., Chapter 1, p. 52

[24] "Heisenberg - Quantum Mechanics, 1925-1927: The Uncertainty Relations". Aip.org. Retrieved 2012-08-18.

[25] Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics, Second edition. Jones and Bartlett Publishers, Inc. p. 215. ISBN 0-7637-2470-X., Chapter 8, p. 215
[26] "[Abstract] Visualization of Uncertain Particle Movement". Actapress.com. Retrieved 2012-08-18.

[27] Hirshleifer, Jack (2001). The Dark Side of the Force: Economic Foundations of Conflict Theory. Cambridge University Press. p. 265. ISBN 0-521-80412-4.

[28] "dict.cc dictionary :: eigen :: German-English translation". dict.cc. Retrieved 11 September 2015.

[29] "Topics: Wave-Function Collapse". Phy.olemiss.edu. 2012-07-27. Retrieved 2012-08-18.

[30] "Collapse of the wave-function". Farside.ph.utexas.edu. Retrieved 2012-08-18.

[31] "Determinism and Naive Realism : philosophy". Reddit.com. 2009-06-01. Retrieved 2012-08-18.

[32] Michael Trott. "Time-Evolution of a Wavepacket in a Square Well". Wolfram Demonstrations Project. Demonstrations.wolfram.com. Retrieved 2010-10-15.

[33] Michael Trott. "Time Evolution of a Wavepacket In a Square Well". Demonstrations.wolfram.com. Retrieved 2010-10-15.

[34] Mathews, Piravonu Mathews; Venkatesan, K. (1976). A Textbook of Quantum Mechanics. Tata McGraw-Hill. p. 36. ISBN 0-07-096510-2., Chapter 2, p. 36

[35] "Wave Functions and the Schrödinger Equation" (PDF). Retrieved 2010-10-15.

[36] Rechenberg, Helmut (1987). "Erwin Schrödinger and the creation of wave mechanics" (PDF). Acta Physica Polonica B. 19 (8): 683-695. Retrieved 13 June 2016.

[37] Nancy Thorndike Greenspan, The End of the Certain World: The Life and Science of Max Born (Basic Books, 2005), pp. 124-8 and 285-6.

[38] http://ocw.usu.edu/physics/classical-mechanics/pdf_lectures/06.pdf

[39] "The Nobel Prize in Physics 1979". Nobel Foundation. Retrieved 2010-02-16.

[40] Carl M. Bender; Daniel W. Hook; Karta Kooner (2009-12-31). "Complex Elliptic Pendulum". arXiv:1001.0131 [hep-th].

[41] See, for example, Precision tests of QED. The relativistic refinement of quantum mechanics known as quantum electrodynamics (QED) has been shown to agree with experiment to within 1 part in 10^8 for some atomic properties.

[42] Tipler, Paul; Llewellyn, Ralph (2008). Modern Physics (5 ed.). W. H. Freeman and Company. pp. 160-161. ISBN 978-0-7167-7550-8.

[43] "Quantum mechanics course iwhatisquantummechanics". Scribd.com. 2008-09-14. Retrieved 2012-08-18.

[44] A. Einstein, B. Podolsky, and N. Rosen, "Can quantum-mechanical description of physical reality be considered complete?" Phys. Rev. 47 777 (1935).

[45] N. P. Landsman (June 13, 2005). "Between classical and quantum" (PDF). Retrieved 2012-08-19. Handbook of the Philosophy of Science Vol. 2: Philosophy of Physics (eds. John Earman & Jeremy Butterfield).

[46] (see macroscopic quantum phenomena, Bose-Einstein condensate, and Quantum machine)

[47] "Atomic Properties". Academic.brooklyn.cuny.edu. Retrieved 2012-08-18.

[48] http://assets.cambridge.org/97805218/29526/excerpt/9780521829526_excerpt.pdf

[49] Born, M., Heisenberg, W., Jordan, P. (1926). Z. Phys. 35: 557-615. Translated as 'On quantum mechanics II', pp. 321-385 in Van der Waerden, B.L. (1967), Sources of Quantum Mechanics, North-Holland, Amsterdam. "The basic difference between the theory proposed here and that used hitherto ... lies in the characteristic kinematics ...", p. 385.

[50] Dirac, P.A.M. (1930/1958). The Principles of Quantum Mechanics, fourth edition, Oxford University Press, Oxford UK, p. 5: "A question about what will happen to a particular photon under certain conditions is not really very precise. To make it precise one must imagine some experiment performed having a bearing on the question, and enquire what will be the result of the experiment. Only questions about the results of experiments have a real significance and it is only such questions that theoretical physics has to consider."

[51] Bohr, N. (1939). The Causality Problem in Atomic Physics, in New Theories in Physics, Conference organized in collaboration with the International Union of Physics and the Polish Intellectual Co-operation Committee, Warsaw, May 30th - June 3rd 1938, International Institute of Intellectual Co-operation, Paris, 1939, pp. 11-30, reprinted in Niels Bohr, Collected Works, volume 7 (1933-1958) edited by J. Kalckar, Elsevier, Amsterdam, ISBN 0-444-89892-1, pp. 303-322. "The essential lesson of the analysis of measurements in quantum theory is thus the emphasis on the necessity, in the account of the phenomena, of taking the whole experimental arrangement into consideration, in complete conformity with the fact that all unambiguous interpretation of the quantum mechanical formalism involves the fixation of the external conditions, defining the initial state of the atomic system and the character of the possible predictions as regards subsequent observable properties of that system. Any measurement in quantum theory can in fact only refer either to a fixation of the initial state or to the test of such predictions, and it is first the combination of both kinds which constitutes a well-defined phenomenon."

[52] Bohr, N. (1948). On the notions of complementarity and causality, Dialectica 2: 312-319. "As a more appropriate way of expression, one may advocate limitation of the use of the word phenomenon to refer to observations obtained under specified circumstances, including an account of the whole experiment."

[53] Ludwig, G. (1987). An Axiomatic Basis for Quantum Mechanics, volume 2, Quantum Mechanics and Macrosystems, translated by K. Just, Springer, Berlin, ISBN 978-3-642-71899-1, Chapter XIII, Special Structures in Preparation and Registration Devices, §1, Measurement chains, p. 132.

[54] Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Z. Phys. 43: 172-198. Translation as 'The actual content of quantum theoretical kinematics and mechanics' here. "But in the rigorous formulation of the law of causality, 'If we know the present precisely, we can calculate the future', it is not the conclusion that is faulty, but the premise."

[55] Green, H.S. (1965). Matrix Mechanics, with a foreword by Max Born, P. Noordhoff Ltd, Groningen. "It is not possible, therefore, to provide 'initial conditions' for the prediction of the behaviour of atomic systems, in the way contemplated by classical physics. This is accepted by quantum theory, not merely as an experimental difficulty, but as a fundamental law of nature", p. 32.

[56] Rosenfeld, L. (1957). Misunderstandings about the foundations of quantum theory, pp. 41-45 in Observation and Interpretation, edited by S. Körner, Butterworths, London. "A phenomenon is therefore a process (endowed with the characteristic quantal wholeness) involving a definite type of interaction between the system and the apparatus."

[57] Dirac, P.A.M. (1973). Development of the physicist's conception of nature, pp. 1-55 in The Physicist's Conception of Nature, edited by J. Mehra, D. Reidel, Dordrecht, ISBN 90-277-0345-0, p. 5: "That led Heisenberg to his really masterful step forward, resulting in the new quantum mechanics. His idea was to build up a theory entirely in terms of quantities referring to two states."

[58] Born, M. (1927). Physical aspects of quantum mechanics, Nature 119: 354-357. "These probabilities are thus dynamically determined. But what the system actually does is not determined ..."

[59] Messiah, A. (1961). Quantum Mechanics, volume 1, translated by G.M. Temmer from the French Mécanique Quantique, North-Holland, Amsterdam, p. 157.

[60] Bohr, N. (1928). "The Quantum postulate and the recent development of atomic theory", Nature 121: 580-590.

[61] Heisenberg, W. (1930). The Physical Principles of the Quantum Theory, translated by C. Eckart and F.C. Hoyt, University of Chicago Press.

[62] Goldstein, H. (1950). Classical Mechanics, Addison-Wesley, ISBN 0-201-02510-8.

[63] "There is as yet no logically consistent and complete relativistic quantum field theory.", p. 4. V. B. Berestetskii, E. M. Lifshitz, L. P. Pitaevskii (1971). J. B. Sykes, J. S. Bell (translators). Relativistic Quantum Theory 4, part I. Course of Theoretical Physics (Landau and Lifshitz). ISBN 0-08-016025-5

[64] "Stephen Hawking; Gödel and the end of physics". cam.ac.uk. Retrieved 11 September 2015.

[65] "The Nature of Space and Time". google.com. Retrieved 11 September 2015.

[66] Tatsumi Aoyama; Masashi Hayakawa; Toichiro Kinoshita; Makiko Nio (2012). "Tenth-Order QED Contribution to the Electron g-2 and an Improved Value of the Fine Structure Constant". Physical Review Letters. 109 (11): 111807. Bibcode:2012PhRvL.109k1807A. arXiv:1205.5368v2. doi:10.1103/PhysRevLett.109.111807.

[67] Parker, B. (1993). Overcoming some of the problems. pp. 259-279.

[68] The Character of Physical Law (1965) Ch. 6; also quoted in The New Quantum Universe (2003), by Tony Hey and Patrick Walters

[69] Weinberg, S. "Collapse of the State Vector", Phys. Rev. A 85, 062116 (2012).

[70] Harrison, Edward (16 March 2000). Cosmology: The Science of the Universe. Cambridge University Press. p. 239. ISBN 978-0-521-66148-5.

[71] "Action at a Distance in Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. 2007-01-26. Retrieved 2012-08-18.

[72] "Everett's Relative-State Formulation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. Retrieved 2012-08-18.

[73] "The Transactional Interpretation of Quantum Mechanics" by John Cramer. Reviews of Modern Physics 58, 647-688, July (1986)

[74] See, for example, the Feynman Lectures on Physics for some of the technological applications which use quantum mechanics, e.g., transistors (vol III, pp. 14-11 ff), integrated circuits, which are follow-on technology in solid-state physics (vol II, pp. 8-6), and lasers (vol III, pp. 9-13).

[75] Pauling, Linus; Wilson, Edgar Bright (1985-03-01). Introduction to Quantum Mechanics with Applications to Chemistry. ISBN 9780486648712. Retrieved 2012-08-18.

[76] Chen, Xie; Gu, Zheng-Cheng; Wen, Xiao-Gang (2010). "Local unitary transformation, long-range quantum entanglement, wave function renormalization, and topological order". Phys. Rev. B. 82: 155138. Bibcode:2010PhRvB..82o5138C. arXiv:1004.3835. doi:10.1103/physrevb.82.155138.

[77] Anderson, Mark (2009-01-13). "Is Quantum Mechanics Controlling Your Thoughts? | Subatomic Particles". DISCOVER Magazine. Retrieved 2012-08-18.

[78] "Quantum mechanics boosts photosynthesis". physicsworld.com. Retrieved 2010-10-23.

[79] Davies, P. C. W.; Betts, David S. (1984). Quantum Mechanics, Second edition. Chapman and Hall. p. 79. ISBN 0-7487-4446-0., Chapter 6, p. 79

[80] Baofu, Peter (2007-12-31). The Future of Complexity: Conceiving a Better Way to Understand Order and Chaos. ISBN 9789812708991. Retrieved 2012-08-18.

[81] Derivation of particle in a box, chemistry.tidalswan.com
10.10 References

The following titles, all by working physicists, attempt to communicate quantum theory to lay people, using a minimum of technical apparatus.

Chester, Marvin (1987). Primer of Quantum Mechanics. John Wiley. ISBN 0-486-42878-8

Cox, Brian; Forshaw, Jeff (2011). The Quantum Universe: Everything That Can Happen Does Happen. Allen Lane. ISBN 1-84614-432-9.

Richard Feynman, 1985. QED: The Strange Theory of Light and Matter, Princeton University Press. ISBN 0-691-08388-6. Four elementary lectures on quantum electrodynamics and quantum field theory, yet containing many insights for the expert.

Ghirardi, GianCarlo, 2004. Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra-ket notation can be passed over on a first reading.

N. David Mermin, 1990, "Spooky actions at a distance: mysteries of the QT" in his Boojums all the way through. Cambridge University Press: 110-76.

Victor Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo NY: Prometheus Books. Chpts. 5-8. Includes cosmological and philosophical considerations.

More technical:

Bryce DeWitt, R. Neill Graham, eds., 1973. The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press. ISBN 0-691-08131-X

Dirac, P. A. M. (1930). The Principles of Quantum Mechanics. ISBN 0-19-852011-5. The beginning chapters make up a very clear and comprehensible introduction.

Hugh Everett, 1957, "Relative State Formulation of Quantum Mechanics", Reviews of Modern Physics 29: 454-62.

Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (1965). The Feynman Lectures on Physics. 1-3. Addison-Wesley. ISBN 0-7382-0008-5.

Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 0-13-111892-7. OCLC 40251748. A standard undergraduate text.

Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw Hill.

Hagen Kleinert, 2004. Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. Singapore: World Scientific. Draft of 4th edition.

Gunther Ludwig, 1968. Wave Mechanics. London: Pergamon Press. ISBN 0-08-203204-1

George Mackey (2004). The mathematical foundations of quantum mechanics. Dover Publications. ISBN 0-486-43517-2.

Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G. M. Temmer. North Holland, John Wiley & Sons. Cf. chpt. IV, section III.

Omnès, Roland (1999). Understanding Quantum Mechanics. Princeton University Press. ISBN 0-691-00435-8. OCLC 39849482.

Scerri, Eric R., 2006. The Periodic Table: Its Story and Its Significance. Oxford University Press. Considers the extent to which chemistry and the periodic system have been reduced to quantum mechanics. ISBN 0-19-530573-6

Transnational College of Lex (1996). What is Quantum Mechanics? A Physics Adventure. Language Research Foundation, Boston. ISBN 0-9643504-1-6. OCLC 34661512.

von Neumann, John (1955). Mathematical Foundations of Quantum Mechanics. Princeton University Press. ISBN 0-691-02893-1.

Hermann Weyl, 1950. The Theory of Groups and Quantum Mechanics, Dover Publications.

D. Greenberger, K. Hentschel, F. Weinert, eds., 2009. Compendium of quantum physics, Concepts, experiments, history and philosophy, Springer-Verlag, Berlin, Heidelberg.

10.11 Further reading

Bernstein, Jeremy (2009). Quantum Leaps. Cambridge, Massachusetts: Belknap Press of Harvard University Press. ISBN 978-0-674-03541-6.

Bohm, David (1989). Quantum Theory. Dover Publications. ISBN 0-486-65969-0.

Eisberg, Robert; Resnick, Robert (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). Wiley. ISBN 0-471-87373-X.

Liboff, Richard L. (2002). Introductory Quantum Mechanics. Addison-Wesley. ISBN 0-8053-8714-5.
Merzbacher, Eugen (1998). Quantum Mechanics. Wiley, John & Sons, Inc. ISBN 0-471-88702-1.

Sakurai, J. J. (1994). Modern Quantum Mechanics. Addison Wesley. ISBN 0-201-53929-2.

Shankar, R. (1994). Principles of Quantum Mechanics. Springer. ISBN 0-306-44790-8.

Stone, A. Douglas (2013). Einstein and the Quantum. Princeton University Press. ISBN 978-0-691-13968-5.

Martinus J. G. Veltman, 2003. Facts and Mysteries in Elementary Particle Physics.

Shushi, Tomer (2014). "The Influence of Particle Interactions on the Existence of Quantum Particles Properties" (PDF). Haifa, Israel: Journal of Physical Science and Application.

Zukav, Gary (1979, 2001). The Dancing Wu Li Masters: An overview of the new physics (Perennial Classics Edition). HarperCollins.

10.12 External links

3D animations, applications and research for basic quantum effects (animations also available in commons.wikimedia.org (Université Paris Sud))
Quantum Cook Book by R. Shankar, Open Yale PHYS 201 material (4pp)
The Modern Revolution in Physics - an online textbook.
J. O'Connor and E. F. Robertson: A history of quantum mechanics.
Introduction to Quantum Theory at Quantiki.
Quantum Physics Made Relatively Simple: three video lectures by Hans Bethe
H is for h-bar.
Quantum Mechanics Books Collection: Collection of free books

Course material

Quantum Physics Database - Fundamentals and Historical Background of Quantum Theory.
Doron Cohen: Lecture notes in Quantum Mechanics (comprehensive, with advanced topics).
MIT OpenCourseWare: Chemistry.
MIT OpenCourseWare: Physics. See 8.04
Stanford Continuing Education PHY 25: Quantum Mechanics by Leonard Susskind, see course description Fall 2007
5 Examples in Quantum Mechanics
Imperial College Quantum Mechanics Course.
Spark Notes - Quantum Physics.
Quantum Physics Online: interactive introduction to quantum mechanics (RS applets).
Experiments to the foundations of quantum physics with single photons.
AQME: Advancing Quantum Mechanics for Engineers, by T. Barzso, D. Vasileska and G. Klimeck; online learning resource with simulation tools on nanohub
Quantum Mechanics by Martin Plenio
Quantum Mechanics by Richard Fitzpatrick
Online course on Quantum Transport

FAQs

Many-worlds or relative-state interpretation.
Measurement in Quantum mechanics.

Media

PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Open Yale Course
Lectures on Quantum Mechanics by Leonard Susskind
Everything you wanted to know about the quantum world: archive of articles from New Scientist.
Quantum Physics Research from Science Daily
Overbye, Dennis (December 27, 2005). "Quantum Trickery: Testing Einstein's Strangest Theory". The New York Times. Retrieved April 12, 2010.
Audio: Astronomy Cast Quantum Mechanics, June 2009. Fraser Cain interviews Pamela L. Gay.

Philosophy

Ismael, Jenann. "Quantum Mechanics". Stanford Encyclopedia of Philosophy.
Krips, Henry. "Measurement in Quantum Theory". Stanford Encyclopedia of Philosophy.


Chapter 11

Markov chain

In probability theory and related fields, a Markov process (or Markoff process), named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property[1][2] (sometimes characterized as "memorylessness"). Loosely speaking, a process satisfies the Markov property if one can make predictions for the future of the process based solely on its present state just as well as one could knowing the process's full history; i.e., conditional on the present state of the system, its future and past states are independent.

A Markov chain is a type of Markov process that has either discrete state space or discrete index set (often representing time), but the precise definition of a Markov chain varies.[3] For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time),[4][5][6][7] but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).[8]

Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906, but earlier uses of Markov processes already existed.[9][10][11] Random walks on the integers and the Gambler's ruin problem are examples of Markov processes[12][13] and were studied hundreds of years earlier.[14][15] Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process,[16] which are considered the most important and central stochastic processes in the theory of stochastic processes,[17][18][19] and were discovered repeatedly and independently, both before and after 1906, in various settings.[20][21] These two processes are Markov processes in continuous time, while random walks on the integers and the Gambler's ruin problem are examples of Markov processes in discrete time.[12][13]

Markov chains have many applications as statistical models of real-world processes,[22][23][24] such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, exchange rates of currencies, storage systems such as dams, and population growths of certain animal species.[25] The algorithm known as PageRank, which was originally proposed for the internet search engine Google, is based on a Markov process.[26][27] Furthermore, Markov processes are the basis for general stochastic simulation methods known as Gibbs sampling and Markov chain Monte Carlo, which are used for simulating random objects with specific probability distributions, and have found extensive application in Bayesian statistics.[25][28][29]

The adjective Markovian is used to describe something that is related to a Markov process.[30]

A diagram representing a two-state Markov process, with the states labelled E and A. Each number represents the probability of the Markov process changing from one state to another state, with the direction indicated by the arrow. For example, if the Markov process is in state A, then the probability it changes to state E is 0.4, while the probability it remains in state A is 0.6.

11.1 Introduction

A Markov chain is a stochastic process with the Markov property. The term "Markov chain" refers to the sequence of random variables such a process moves through, with the Markov property defining serial dependence only between adjacent periods (as in a "chain"). It can thus be used for describing systems that follow a
49
50 CHAPTER 11. MARKOV CHAIN
chain of linked events, where what happens next depends only on the current state of the system.

[Photo: Russian mathematician Andrey Markov.]

The system's state space and time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time vs. continuous time:

Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, i.e. a discrete-time Markov chain (DTMC),[31] but a few authors use the term "Markov process" to refer to a continuous-time Markov chain (CTMC) without explicit mention.[32][33][34] In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term.

While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space.[35] However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis.

Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (see Variations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.

The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.

A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.

Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.

A famous Markov chain is the so-called "drunkard's walk", a random walk on the number line where, at each step, the position may change by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6.

Another example is the dietary habits of a creature who eats only grapes, cheese, or lettuce, and whose dietary habits conform to the following rules:

It eats exactly once a day.

If it ate cheese today, tomorrow it will eat lettuce or grapes with equal probability.

If it ate grapes today, tomorrow it will eat grapes with probability 1/10, cheese with probability 4/10 and lettuce with probability 5/10.

If it ate lettuce today, tomorrow it will eat grapes
with probability 4/10 or cheese with probability 6/10. It will not eat lettuce again tomorrow.

This creature's eating habits can be modeled with a Markov chain since its choice tomorrow depends solely on what it ate today, not what it ate yesterday or any other time in the past. One statistical property that could be calculated is the expected percentage, over a long period, of the days on which the creature will eat grapes.

A series of independent events (for example, a series of coin flips) satisfies the formal definition of a Markov chain. However, the theory is usually applied only when the probability distribution of the next step depends non-trivially on the current state.

11.2 History

Andrey Markov studied Markov chains in the early 20th century. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov, who claimed independence was necessary for the weak law of large numbers to hold.[36] In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption,[37][38][39] which had been commonly regarded as a requirement for such mathematical laws to hold.[39] Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.[37]

In 1912 Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov.[37][38] After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé.[40] Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.[37][41]

Andrei Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes.[42][43] Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement.[42][44] He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes.[42][45] Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.[46] The differential equations are now called the Kolmogorov equations[47] or the Kolmogorov–Chapman equations.[48] Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.[43]

11.3 Examples

Main article: Examples of Markov chains

11.3.1 Gambling

See also: random walk and Markov chain

Suppose that you start with $10, and you wager $1 on an unending, fair coin toss indefinitely, or until you lose all of your money. If Xn represents the number of dollars you have after n tosses, with X0 = 10, then the sequence {Xn : n ∈ N} is a Markov process. If I know that you have $12 now, then it would be expected that, with even odds, you will either have $11 or $13 after the next toss. This guess is not improved by the added knowledge that you started with $10, then went up to $11, down to $10, up to $11, and then to $12.

The process described here is a Markov chain on a countable state space that follows a random walk.

11.3.2 A birth-death process

See also: birth-death process and Poisson point process

If one pops one hundred kernels of popcorn, each kernel popping at an independent exponentially-distributed time, then this would be a continuous-time Markov process. If Xt denotes the number of kernels which have popped up to time t, the problem can be defined as finding the number of kernels that will pop in some later time. The only thing one needs to know is the number of kernels that have popped prior to the time t. It is not necessary to know when they popped, so knowing Xt for previous times t is not relevant.

The process described here is an approximation of a Poisson point process; Poisson processes are also Markov processes.
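The popcorn process above can be sketched in a few lines of Python. This is an illustrative simulation, not from the text: the popping rate of 1.0 per unit time, the fixed seed, and the helper name simulate_popcorn are all assumptions.

```python
import random

def simulate_popcorn(kernels=100, t=1.0, rate=1.0, seed=42):
    """Simulate the popcorn process: each kernel pops at an independent
    exponentially distributed time; X_t counts kernels popped by time t."""
    random.seed(seed)
    pop_times = [random.expovariate(rate) for _ in range(kernels)]
    return sum(1 for pt in pop_times if pt <= t)

x1 = simulate_popcorn(t=1.0)
print(x1)  # number of kernels popped by time 1
```

Because the exponential holding times are memoryless, the count at a later time depends only on the current count, which is exactly the Markov property the example illustrates.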
11.3.3 A non-Markov example

Suppose that you have a coin purse containing five quarters (each worth 25¢), five nickels (each worth 5¢) and five dimes (each worth 10¢), and one-by-one, you randomly draw coins from the purse and set them on a table. If Xn represents the total value of the coins set on the table after n draws, with X0 = 0, then the sequence {Xn : n ∈ N} is not a Markov process.

To see why this is the case, suppose that in your first six draws, you draw all five nickels, and then a quarter. So X6 = $0.50. If we know not just X6, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel, so we can determine that X7 ≥ $0.60 with probability 1. But if we do not know the earlier values, then based only on the value X6 we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about X7 are impacted by our knowledge of values prior to X6.

11.4 Markov property

Main article: Markov property

11.4.1 The general case

Let (Ω, F, P) be a probability space with a filtration (Ft, t ∈ T), for some (totally ordered) index set T, and let (S, S) be a measurable space. An S-valued stochastic process X = (Xt, t ∈ T) adapted to the filtration is said to possess the Markov property with respect to the {Ft} if, for each A ∈ S and each s, t ∈ T with s < t,

P(Xt ∈ A | Fs) = P(Xt ∈ A | Xs).[49]

A Markov process is a stochastic process which satisfies the Markov property with respect to its natural filtration.

11.4.2 For discrete-time Markov chains

In the case where S is a discrete set with the discrete sigma algebra and T = N, this can be reformulated as follows:

P(Xn = xn | Xn−1 = xn−1, Xn−2 = xn−2, ..., X0 = x0) = P(Xn = xn | Xn−1 = xn−1).

11.5 Formal definition

11.5.1 Discrete-time Markov chain

A discrete-time Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:

Pr(Xn+1 = x | X1 = x1, X2 = x2, ..., Xn = xn) = Pr(Xn+1 = x | Xn = xn), if both conditional probabilities are well defined, i.e. if Pr(X1 = x1, ..., Xn = xn) > 0.

The possible values of Xi form a countable set S called the state space of the chain.

Markov chains are often described by a sequence of directed graphs, where the edges of graph n are labeled by the probabilities of going from one state at time n to the other states at time n+1, Pr(Xn+1 = x | Xn = xn). The same information is represented by the transition matrix from time n to time n+1. However, Markov chains are frequently assumed to be time-homogeneous (see variations below), in which case the graph and matrix are independent of n and are thus not presented as sequences.

These descriptions highlight the structure of the Markov chain that is independent of the initial distribution Pr(X1 = x1). When time-homogeneous, the chain can be interpreted as a state machine assigning a probability of hopping from each vertex or state to an adjacent one. The probability Pr(Xn = x | X1 = x1) of the machine's state can be analyzed as the statistical behavior of the machine with an element x1 of the state space as input, or as the behavior of the machine with the initial distribution Pr(X1 = y) = [x1 = y] of states as input, where [P] is the Iverson bracket.

The fact that some sequences of states might have zero probability of occurring corresponds to a graph with multiple connected components, where we omit edges that would carry a zero transition probability. For example, if a has a nonzero probability of going to b, but a and x lie in different connected components of the graph, then Pr(Xn+1 = b | Xn = a) is defined, while Pr(Xn+1 = b | X1 = x, ..., Xn = a) is not.

Variations

Time-homogeneous Markov chains (or stationary Markov chains) are processes where

Pr(Xn+1 = x | Xn = y) = Pr(Xn = x | Xn−1 = y)

for all n. The probability of the transition is independent of n.
A Markov chain of order m (or a Markov chain with memory m), where m is finite, is a process satisfying

Pr(Xn = xn | Xn−1 = xn−1, Xn−2 = xn−2, ..., X1 = x1) = Pr(Xn = xn | Xn−1 = xn−1, Xn−2 = xn−2, ..., Xn−m = xn−m) for n > m.

In other words, the future state depends on the past m states. It is possible to construct a chain (Yn) from (Xn) which has the 'classical' Markov property by taking as state space the ordered m-tuples of X values, i.e. Yn = (Xn, Xn−1, ..., Xn−m+1).

Example

Main article: Examples of Markov chains

A state diagram for a simple example is shown in the figure on the right, using a directed graph to picture the state transitions. The states represent whether a hypothetical stock market is exhibiting a bull market, bear market, or stagnant market trend during a given week. According to the figure, a bull week is followed by another bull week 90% of the time, a bear week 7.5% of the time, and a stagnant week the other 2.5% of the time. Labelling the state space {1 = bull, 2 = bear, 3 = stagnant}, the transition matrix for this example is

P = [0.9   0.075  0.025]
    [0.15  0.8    0.05 ]
    [0.25  0.25   0.5  ]

The distribution over states can be written as a stochastic row vector x with the relation x(n+1) = x(n) P. So if at time n the system is in state x(n), then three time periods later, at time n + 3 the distribution is

x(n+3) = (x(n+2)) P = ((x(n+1)) P) P = x(n+1) P^2 = ((x(n)) P) P^2 = x(n) P^3.

In particular, if at time n the system is in state 2 (bear), then at time n + 3 the distribution is

x(n+3) = [0 1 0] P^3
       = [0 1 0] [0.7745  0.17875  0.04675]
                 [0.3575  0.56825  0.07425]
                 [0.4675  0.37125  0.16125]
       = [0.3575  0.56825  0.07425].

Using the transition matrix it is possible to calculate, for example, the long-term fraction of weeks during which the market is stagnant, or the average number of weeks it will take to go from a stagnant to a bull market. Using the transition probabilities, the steady-state probabilities indicate that 62.5% of weeks will be in a bull market, 31.25% of weeks will be in a bear market and 6.25% of weeks will be stagnant, since:

lim N→∞ P^N = [0.625  0.3125  0.0625]
              [0.625  0.3125  0.0625]
              [0.625  0.3125  0.0625]

A thorough development and many examples can be found in the on-line monograph Meyn & Tweedie 2005.[50]

A finite state machine can be used as a representation of a Markov chain. Assuming a sequence of independent and identically distributed input signals (for example, symbols from a binary alphabet chosen by coin tosses), if the machine is in state y at time n, then the probability that it moves to state x at time n + 1 depends only on the current state.

11.5.2 Continuous-time Markov chain

A continuous-time Markov chain (Xt)t≥0 is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For i ≠ j, the elements qij are non-negative and describe the rate of the process transitions from state i to state j. The elements qii are chosen such that each row of the transition rate matrix sums to zero.

There are three equivalent definitions of the process.[51]
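The matrix arithmetic in the bull/bear/stagnant example above can be checked numerically. The following sketch uses plain Python; the helper functions mat_mul and mat_pow are ours, introduced only for illustration.

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_pow(A, k):
    """k-th power (k >= 1) of a square matrix by repeated multiplication."""
    result = A
    for _ in range(k - 1):
        result = mat_mul(result, A)
    return result

# Transition matrix for the state space {1 = bull, 2 = bear, 3 = stagnant}.
P = [[0.9,  0.075, 0.025],
     [0.15, 0.8,   0.05],
     [0.25, 0.25,  0.5]]

# Distribution three weeks after a bear week: x(n+3) = x(n) P^3.
x3 = mat_mul([[0, 1, 0]], mat_pow(P, 3))[0]
print(x3)  # ≈ [0.3575, 0.56825, 0.07425]

# Large powers approach the rank-one limit whose identical rows are the
# steady-state distribution (0.625, 0.3125, 0.0625).
limit = mat_pow(P, 200)
print(limit[0])
```

Reading off row 2 of P^3 reproduces the bear-week distribution quoted in the text, and the high power reproduces the 62.5% / 31.25% / 6.25% steady-state figures.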
Infinitesimal definition

[Figure: The continuous-time Markov chain is characterized by the transition rates, the derivatives with respect to time of the transition probabilities between states i and j.]

Let X(t) be the random variable describing the state of the process at time t, and assume that the process is in a state i at time t. Then X(t + h) is independent of previous values (X(s) : s ≤ t), and as h → 0, uniformly in t, for all j,

Pr(X(t + h) = j | X(t) = i) = δij + qij h + o(h)

using little-o notation. The qij can be seen as measuring how quickly the transition from i to j happens.

Jump chain/holding time definition

Define a discrete-time Markov chain Yn to describe the nth jump of the process and variables S1, S2, S3, ... to describe holding times in each of the states, where Si follows the exponential distribution with rate parameter −qYiYi.

Transition probability definition

For any value n = 0, 1, 2, 3, ... and times indexed up to this value of n: t0, t1, t2, ..., and all states recorded at these times i0, i1, i2, i3, ..., it holds that

Pr(Xtn+1 = in+1 | Xt0 = i0, Xt1 = i1, ..., Xtn = in) = pin in+1(tn+1 − tn)

where pij is the solution of the forward equation (a first-order differential equation)

P′(t) = P(t) Q

with initial condition P(0) the identity matrix.

11.6 Transient evolution

The probability of going from state i to state j in n time steps is

pij^(n) = Pr(Xn = j | X0 = i)

and the single-step transition is

pij = Pr(X1 = j | X0 = i).

For a time-homogeneous Markov chain:

pij^(n) = Pr(Xk+n = j | Xk = i)

and

pij = Pr(Xk+1 = j | Xk = i).

The n-step transition probabilities satisfy the Chapman–Kolmogorov equation, that for any k such that 0 < k < n,

pij^(n) = Σ_{r∈S} pir^(k) prj^(n−k)

where S is the state space of the Markov chain.

The marginal distribution Pr(Xn = x) is the distribution over states at time n. The initial distribution is Pr(X0 = x). The evolution of the process through one time step is described by

Pr(Xn = j) = Σ_{r∈S} prj Pr(Xn−1 = r) = Σ_{r∈S} prj^(n) Pr(X0 = r).

Note: The superscript (n) is an index and not an exponent.

11.7 Properties

11.7.1 Reducibility

A Markov chain is said to be irreducible if it is possible to get to any state from any state. The following explains this definition more formally.

A state j is said to be accessible from a state i (written i → j) if a system started in state i has a non-zero probability of transitioning into state j at some point. Formally, state j is accessible from state i if there exists an integer nij ≥ 0 such that
Pr(Xnij = j | X0 = i) = pij^(nij) > 0.

This integer is allowed to be different for each pair of states, hence the subscripts in nij. Allowing n to be zero means that every state is accessible from itself by definition. The accessibility relation is reflexive and transitive, but not necessarily symmetric.

A state i is said to communicate with state j (written i ↔ j) if both i → j and j → i. A communicating class is a maximal set of states C such that every pair of states in C communicates with each other. Communication is an equivalence relation, and communicating classes are the equivalence classes of this relation.

A communicating class is closed if the probability of leaving the class is zero, namely if i is in C but j is not, then j is not accessible from i. The set of communicating classes forms a directed, acyclic graph by inheriting the arrows from the original state space. A communicating class is closed if and only if it has no outgoing arrows in this graph.

A state i is said to be essential or final if for all j such that i → j it is also true that j → i. A state i is inessential if it is not essential.[52] A state is final if and only if its communicating class is closed.

A Markov chain is said to be irreducible if its state space is a single communicating class; in other words, if it is possible to get to any state from any state.

11.7.2 Periodicity

A state i has period k if any return to state i must occur in multiples of k time steps. Formally, the period of a state is defined as

k = gcd{n > 0 : Pr(Xn = i | X0 = i) > 0}

(where "gcd" is the greatest common divisor) provided that this set is not empty. Otherwise the period is not defined. Note that even though a state has period k, it may not be possible to reach the state in k steps. For example, suppose it is possible to return to the state in {6, 8, 10, 12, ...} time steps; k would be 2, even though 2 does not appear in this list.

If k = 1, then the state is said to be aperiodic: returns to state i can occur at irregular times. It can be demonstrated that a state i is aperiodic if and only if there exists n such that for all n′ ≥ n,

Pr(Xn′ = i | X0 = i) > 0.

Otherwise (k > 1), the state is said to be periodic with period k. A Markov chain is aperiodic if every state is aperiodic. An irreducible Markov chain only needs one aperiodic state to imply all states are aperiodic.

Every state of a bipartite graph has an even period.

11.7.3 Transience and recurrence

A state i is said to be transient if, given that we start in state i, there is a non-zero probability that we will never return to i. Formally, let the random variable Ti be the first return time to state i (the "hitting time"):

Ti = inf{n ≥ 1 : Xn = i | X0 = i}.

The number

fii^(n) = Pr(Ti = n)

is the probability that we return to state i for the first time after n steps. Therefore, state i is transient if

Pr(Ti < ∞) = Σ_{n=1}^∞ fii^(n) < 1.

State i is recurrent (or persistent) if it is not transient. Recurrent states are guaranteed (with probability 1) to have a finite hitting time. Recurrence and transience are class properties; that is, they either hold or do not hold equally for all members of a communicating class.

Mean recurrence time

Even if the hitting time is finite with probability 1, it need not have a finite expectation. The mean recurrence time at state i is the expected return time Mi:

Mi = E[Ti] = Σ_{n=1}^∞ n · fii^(n).

State i is positive recurrent (or non-null persistent) if Mi is finite; otherwise, state i is null recurrent (or null persistent).

Expected number of visits

It can be shown that a state i is recurrent if and only if the expected number of visits to this state is infinite, i.e.,

Σ_{n=0}^∞ pii^(n) = ∞.
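The gcd definition of the period in the subsection above can be sketched directly: compute p_ii^(n) for a range of n and take the gcd of the return times found. The helper name period, the cutoff max_n, and the two small example matrices are our illustrative assumptions.

```python
from math import gcd
from functools import reduce

def period(P, i, max_n=50):
    """Period of state i: gcd of all n <= max_n with p_ii^(n) > 0."""
    n_states = len(P)
    row = [1.0 if s == i else 0.0 for s in range(n_states)]  # start in state i
    return_times = []
    for n in range(1, max_n + 1):
        # One step of the chain: row becomes the distribution at time n.
        row = [sum(row[r] * P[r][s] for r in range(n_states))
               for s in range(n_states)]
        if row[i] > 0:
            return_times.append(n)
    return reduce(gcd, return_times) if return_times else None

# A two-state chain that flips state every step returns only at even times.
P = [[0.0, 1.0],
     [1.0, 0.0]]
print(period(P, 0))  # 2

# Adding a self-loop makes the state aperiodic.
Q = [[0.5, 0.5],
     [1.0, 0.0]]
print(period(Q, 0))  # 1
```

Note that checking n only up to a cutoff is a heuristic: it can overestimate the period for chains whose shorter return times first appear beyond max_n.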
Absorbing states

A state i is called absorbing if it is impossible to leave this state. Therefore, the state i is absorbing if and only if

pii = 1 and pij = 0 for i ≠ j.

If every state can reach an absorbing state, then the Markov chain is an absorbing Markov chain.

11.7.4 Ergodicity

A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic.

It can be shown that a finite-state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in at most N steps (in other words, the number of steps taken is bounded by a finite positive integer N). In the case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1.

A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic.

11.7.5 Steady-state analysis and limiting distributions

If the Markov chain is a time-homogeneous Markov chain, so that the process is described by a single, time-independent matrix pij, then the vector π is called a stationary distribution (or invariant measure) if for all j ∈ S it satisfies

0 ≤ πj ≤ 1,
Σ_{j∈S} πj = 1,
πj = Σ_{i∈S} πi pij.

An irreducible chain has a stationary distribution if and only if all of its states are positive recurrent.[53] In that case, π is unique and is related to the expected return time:

πj = C / Mj

where C is the normalizing constant. Further, if the positive recurrent chain is both irreducible and aperiodic, it is said to have a limiting distribution; for any i and j,

lim_{n→∞} pij^(n) = C / Mj.

Note that there is no assumption on the starting distribution; the chain converges to the stationary distribution regardless of where it begins. Such π is called the equilibrium distribution of the chain.

If a chain has more than one closed communicating class, its stationary distributions will not be unique (consider any closed communicating class Ci in the chain; each one will have its own unique stationary distribution πi. Extending these distributions to the overall chain, setting all values to zero outside the communication class, yields that the set of invariant measures of the original chain is the set of all convex combinations of the πi's). However, if a state j is aperiodic, then

lim_{n→∞} pjj^(n) = C / Mj

and for any other state i, letting fij be the probability that the chain ever visits state j if it starts at i,

lim_{n→∞} pij^(n) = C fij / Mj.

If a state i is periodic with period k > 1 then the limit

lim_{n→∞} pii^(n)

does not exist, although the limit

lim_{n→∞} pii^(kn+r)

does exist for every integer r.

Steady-state analysis and the time-inhomogeneous Markov chain

A Markov chain need not necessarily be time-homogeneous to have an equilibrium distribution. If there is a probability distribution over states π such that

πj = Σ_{i∈S} πi Pr(Xn+1 = j | Xn = i)

for every state j and every time n, then π is an equilibrium distribution of the Markov chain. Such can occur
in Markov chain Monte Carlo (MCMC) methods in situations where a number of different transition matrices are used, because each is efficient for a particular kind of mixing, but each matrix respects a shared equilibrium distribution.

11.8 Finite state space

If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to

pij = Pr(Xn+1 = j | Xn = i).

Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix.

11.8.1 Stationary distribution relation to eigenvectors and simplices

A stationary distribution π is a (row) vector, whose entries are non-negative and sum to 1, that is unchanged by the operation of transition matrix P on it and so is defined by

π P = π.

By comparing this definition with that of an eigenvector we see that the two concepts are related and that

π = e / Σ_i ei

is a normalized (Σ_i πi = 1) multiple of a left eigenvector e of the transition matrix PT with an eigenvalue of 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution.

The values of a stationary distribution πi are associated with the state space of P and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as Σ_i 1 · πi = 1, we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex.

11.8.2 Time-homogeneous Markov chain with a finite state space

If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, P^k.

If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. Additionally, in this case P^k converges to a rank-one matrix in which each row is the stationary distribution π, that is,

lim_{k→∞} P^k = 1 π

where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, lim_{k→∞} P^k is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.

For some stochastic matrices P, the limit lim_{k→∞} P^k does not exist while the stationary distribution does, as shown by this example:

P = [0 1]    P^(2k) = I    P^(2k+1) = P
    [1 0]

[1/2 1/2] [0 1] = [1/2 1/2]
          [1 0]

Note that this example illustrates a periodic Markov chain.

Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define Q = lim_{k→∞} P^k.

It is always true that

QP = Q.

Subtracting Q from both sides and factoring then yields

Q(P − In) = 0n,n,

where In is the identity matrix of size n, and 0n,n is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each of the rows in P is 1, there are n+1 equations for determining n unknowns, so it is computationally easier if, on the one hand, one selects one row in Q and substitutes each of its elements by one, and, on the other, one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of the transformed former matrix to find Q.
Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − In)]^−1 exists then

Q = f(0n,n) [f(P − In)]^−1.

Explanation: The original matrix equation is equivalent to a system of n×n linear equations in n×n variables. And there are n more linear equations from the fact that Q is a right stochastic matrix whose rows each sum to 1. So it needs any n×n independent linear equations of the (n×n + n) equations to solve for the n×n variables. In this example, the n equations from "Q multiplied by the right-most column of (P − In)" have been replaced by the n stochastic ones.

One thing to notice is that if P has an element Pi,i on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers P^k. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P.

11.8.3 Convergence speed to the stationary distribution

As stated earlier, from the equation π = π P (if it exists), the stationary (or steady-state) distribution π is a left eigenvector of the row stochastic matrix P. Then, assuming that P is diagonalizable, or equivalently that P has n linearly independent eigenvectors, the speed of convergence is elaborated as follows. (For non-diagonalizable, i.e. defective matrices, one may start with the Jordan normal form of P and proceed with a bit more involved set of arguments in a similar way.[54])

Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector of P, and let Σ be the diagonal matrix of left eigenvalues of P, i.e. Σ = diag(λ1, λ2, λ3, ..., λn). Then by eigendecomposition

P = U Σ U^−1.

Let the eigenvalues be enumerated such that 1 = |λ1| > |λ2| ≥ |λ3| ≥ ... ≥ |λn|. Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no other π which solves the stationary distribution equation above). Let ui be the i-th column of the matrix U, i.e. ui is the left eigenvector of P corresponding to λi. Also let x be a length-n row vector that represents a valid probability distribution; since the eigenvectors ui span Rn, we can write

x^T = Σ_{i=1}^n ai ui

for some set of ai. If we start multiplying P with x from the left and continue this operation with the results, in the end we get the stationary distribution π. In other words, π = u1 · xPPP...P = xP^k as k goes to infinity. That means

π(k) = x (U Σ U^−1)(U Σ U^−1) ... (U Σ U^−1)
     = x U Σ^k U^−1

since U U^−1 = I, the identity matrix, and a power of a diagonal matrix is also a diagonal matrix where each entry is taken to that power. Then

π(k) = (a1 u1^T + a2 u2^T + ... + an un^T) U Σ^k U^−1
     = a1 λ1^k u1 + a2 λ2^k u2 + ... + an λn^k un

since the eigenvectors are orthonormal. Then[55]

π(k) = λ1^k { a1 u1 + a2 (λ2/λ1)^k u2 + a3 (λ3/λ1)^k u3 + ... + an (λn/λ1)^k un }.

Since π = u1, π(k) approaches π as k goes to infinity with a speed in the order of λ2/λ1, i.e. exponentially. This follows because |λ2| ≥ |λ3| ≥ ... ≥ |λn|, hence λ2/λ1 is the dominant term. Random noise in the state distribution π can also speed up this convergence to the stationary distribution.[56]

11.9 Reversible Markov chain

A Markov chain is said to be reversible if there is a probability distribution π over its states such that

πi Pr(Xn+1 = j | Xn = i) = πj Pr(Xn+1 = i | Xn = j)

for all times n and all states i and j. This condition is known as the detailed balance condition (some books call it the local balance equation).

Considering a fixed arbitrary time n and using the shorthand

pij = Pr(Xn+1 = j | Xn = i),

the detailed balance equation can be written more compactly as
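The detailed balance condition defined above can be checked numerically. The sketch below uses an invented three-state birth-death chain; such chains always satisfy detailed balance with respect to their stationary distribution:

```python
import numpy as np

def satisfies_detailed_balance(P, pi, tol=1e-10):
    """Check pi_i * P[i, j] == pi_j * P[j, i] for all pairs of states."""
    flow = pi[:, None] * P   # flow[i, j] = pi_i * Pr(next = j | now = i)
    return np.allclose(flow, flow.T, atol=tol)

# An invented birth-death chain on three states.
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

# Its stationary distribution: the left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

print(satisfies_detailed_balance(P, pi))   # True for this chain
```

A chain with a deterministic cycle (1 → 2 → 3 → 1), by contrast, fails the check: each "payment" goes one way around the loop and nothing comes back.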

Considering a fixed arbitrary time n and using the shorthand

p_ij = Pr(X_{n+1} = j | X_n = i),

the detailed balance equation can be written more compactly as

π_i p_ij = π_j p_ji.

The single time-step from n to n+1 can be thought of as each person i having π_i dollars initially and paying each person j a fraction p_ij of it. The detailed balance condition states that upon each payment, the other person pays exactly the same amount of money back.[57] Clearly the total amount of money π each person has remains the same after the time-step, since every dollar spent is balanced by a corresponding dollar received. This can be shown more formally by the equality

Σ_i π_i p_ij = Σ_i π_j p_ji = π_j Σ_i p_ji = π_j,

which essentially states that the total amount of money person j receives (including from himself) during the time-step equals the amount of money he pays others, which equals all the money he initially had because it was assumed that all money is spent (i.e. p_ji sums to 1 over i). The assumption is a technical one, because the money not really used is simply thought of as being paid from person j to himself (i.e. p_jj is not necessarily zero).

As n was arbitrary, this reasoning holds for any n, and therefore for reversible Markov chains π is always a steady-state distribution of Pr(X_{n+1} = j | X_n = i) for every n.

If the Markov chain begins in the steady-state distribution, i.e., if Pr(X_0 = i) = π_i, then Pr(X_n = i) = π_i for all n and the detailed balance equation can be written as

Pr(X_n = i, X_{n+1} = j) = Pr(X_{n+1} = i, X_n = j).

The left- and right-hand sides of this last equation are identical except for a reversing of the time indices n and n + 1.

Kolmogorov's criterion gives a necessary and sufficient condition for a Markov chain to be reversible directly from the transition matrix probabilities. The criterion requires that the products of probabilities around every closed loop are the same in both directions around the loop.

Reversible Markov chains are common in Markov chain Monte Carlo (MCMC) approaches because the detailed balance equation for a desired distribution π necessarily implies that the Markov chain has been constructed so that π is a steady-state distribution. Even with time-inhomogeneous Markov chains, where multiple transition matrices are used, if each such transition matrix exhibits detailed balance with the desired π distribution, this necessarily implies that π is a steady-state distribution of the Markov chain.

11.9.1 Closest reversible Markov chain

For any time-homogeneous Markov chain given by a transition matrix P ∈ R^{n×n}, any norm ||·|| on R^{n×n} which is induced by a scalar product, and any probability vector π, there exists a unique transition matrix P* which is reversible according to π and which is closest to P according to the norm ||·||. The matrix P* can be computed by solving a quadratic-convex optimization problem.[58]

For example, consider the following Markov chain:

[Figure: Simple Markov chain.]

This Markov chain is not reversible. According to the Frobenius norm, the closest reversible Markov chain according to π = (1/3, 1/3, 1/3) can be computed as

[Figure: the resulting transition matrix (not reproduced).]

If we choose the probability vector randomly as π = (1/4, 1/4, 1/2), then the closest reversible Markov chain according to the Frobenius norm is approximately given by

[Figure: the resulting transition matrix (not reproduced).]
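Returning to the convergence-speed analysis of section 11.8.3, the geometric rate |λ2/λ1| can be observed numerically. A sketch, using an invented three-state chain whose eigenvalues happen to be 1, 0.3 and 0.2:

```python
import numpy as np

# An invented row-stochastic matrix with eigenvalues 1, 0.3 and 0.2.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Reference stationary distribution: left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# Repeatedly multiplying a starting distribution by P converges to pi ...
x = np.array([1.0, 0.0, 0.0])
errors = []
for _ in range(15):
    x = x @ P
    errors.append(np.linalg.norm(x - pi))

# ... and the error shrinks by a factor of roughly |lambda_2| per step.
lam2 = sorted(np.abs(w))[-2]
print(errors[-1] / errors[-2], lam2)
```

After a handful of steps the per-step error ratio settles near λ2 = 0.3, the second-largest eigenvalue modulus, exactly as the eigendecomposition argument predicts.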

11.10 Bernoulli scheme

A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is even independent of the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process.

11.11 General state space

For an overview of Markov chains on a general state space, see the article Markov chains on a measurable state space.

11.11.1 Harris chains

Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. The main idea is to see if there is a point in the state space that the chain hits with probability one. Generally, this is not true for a continuous state space; however, we can define sets A and B along with a positive number ε and a probability measure ρ, such that

If τ_A = inf{n ≥ 0 : X_n ∈ A}, then P_z(τ_A < ∞) > 0 for all z.
If x ∈ A and C ⊆ B, then p(x, C) ≥ ε ρ(C).

Then we could collapse the sets into an auxiliary point α, and a recurrent Harris chain can be modified to contain α. Lastly, the collection of Harris chains is a comfortable level of generality, which is broad enough to contain a large number of interesting examples, yet restrictive enough to allow for a rich theory.

The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space.

11.11.2 Locally interacting Markov chains

Considering a collection of Markov chains whose evolution takes into account the state of other Markov chains is related to the notion of locally interacting Markov chains. This corresponds to the situation when the state space has a (Cartesian) product form. See interacting particle system and stochastic cellular automata (probabilistic cellular automata); see for instance Interaction of Markov Processes[59] or.[60]

11.12 Markovian representations

In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the 'current' and 'future' states. For example, let X be a non-Markovian process. Then define a process Y, such that each state of Y represents a time-interval of states of X. Mathematically, this takes the form:

Y(t) = { X(s) : s ∈ [a(t), b(t)] }.

If Y has the Markov property, then it is a Markovian representation of X.

An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one.[61]

11.13 Transient behaviour

Write P(t) for the matrix with entries p_ij = P(X_t = j | X_0 = i). Then the matrix P(t) satisfies the forward equation, a first-order differential equation

P′(t) = P(t)Q,

where the prime denotes differentiation with respect to t. The solution to this equation is given by a matrix exponential,

P(t) = e^{tQ}.

In a simple case such as a CTMC on the state space {1, 2}, the general Q matrix for such a process is the following 2×2 matrix with α, β > 0:

Q = ( −α   α
       β  −β ).

The above relation for the forward matrix can be solved explicitly in this case to give

P(t) = ( β/(α+β) + α/(α+β) e^{−(α+β)t}    α/(α+β) − α/(α+β) e^{−(α+β)t}
         β/(α+β) − β/(α+β) e^{−(α+β)t}    α/(α+β) + β/(α+β) e^{−(α+β)t} ).

However, direct solutions are complicated to compute for larger matrices. The fact that Q is the generator for a semigroup of matrices,

P(t + s) = e^{(t+s)Q} = e^{tQ} e^{sQ} = P(t)P(s),

is used.
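The two-state solution above can be checked numerically by computing the matrix exponential through an eigendecomposition (the rates α = 0.6 and β = 0.4 are made up for illustration):

```python
import numpy as np

alpha, beta = 0.6, 0.4          # made-up transition rates, both > 0
Q = np.array([[-alpha, alpha],
              [beta, -beta]])
t = 2.0

# P(t) = e^{tQ}, computed via the eigendecomposition Q = V D V^{-1},
# so that e^{tQ} = V e^{tD} V^{-1}.
w, V = np.linalg.eig(Q)
P_t = (V * np.exp(w * t)) @ np.linalg.inv(V)

# The closed-form solution for the two-state chain, as given above.
s = alpha + beta
d = np.exp(-s * t)
P_explicit = np.array([[beta + alpha * d, alpha - alpha * d],
                       [beta - beta * d, alpha + beta * d]]) / s

print(np.allclose(P_t, P_explicit))   # the two computations agree
```

Each row of P(t) sums to 1 for every t, as it must for a matrix of transition probabilities; scipy's `scipy.linalg.expm` computes the same matrix exponential directly.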

11.14 Stationary distribution

The stationary distribution for an irreducible recurrent CTMC is the probability distribution to which the process converges for large values of t. Observe that for the two-state process considered earlier with P(t) given by

P(t) = ( β/(α+β) + α/(α+β) e^{−(α+β)t}    α/(α+β) − α/(α+β) e^{−(α+β)t}
         β/(α+β) − β/(α+β) e^{−(α+β)t}    α/(α+β) + β/(α+β) e^{−(α+β)t} ),

as t → ∞ the distribution tends to

P_∞ = ( β/(α+β)   α/(α+β)
        β/(α+β)   α/(α+β) ).

Observe that each row has the same distribution, as this does not depend on the starting state. The row vector π may be found by solving[62]

πQ = 0,

with the additional constraint that

Σ_{i∈S} π_i = 1.

11.14.1 Example 1

[Figure: Directed graph representation of a continuous-time Markov chain describing the state of financial markets (note: numbers are made-up).]

The image to the right describes a continuous-time Markov chain with state-space {Bull market, Bear market, Stagnant market} and transition rate matrix

Q = ( −0.025   0.02    0.005
       0.3    −0.5     0.2
       0.02    0.4    −0.42 ).

The stationary distribution of this chain can be found by solving πQ = 0, subject to the constraint that the elements must sum to 1, to obtain

π = ( 0.885  0.071  0.044 ).

11.14.2 Example 2

[Figure: Transition graph with transition probabilities, exemplary for the states 1, 5, 6 and 8. There is a bidirectional secret passage between states 2 and 8.]

The image to the right describes a discrete-time Markov chain with state-space {1,2,3,4,5,6,7,8,9}. The player controls Pac-Man through a maze, eating pac-dots. Meanwhile, he is being hunted by ghosts. For convenience, the maze shall be a small 3x3-grid and the monsters move randomly in horizontal and vertical directions. A secret passageway between states 2 and 8 can be used in both directions. Entries with probability zero are removed in the following transition matrix:

Q = (  ·    1/2   ·    1/2   ·    ·    ·    ·    ·
      1/4   ·    1/4   ·    1/4   ·    ·    1/4   ·
       ·    1/2   ·    ·     ·   1/2   ·    ·    ·
      1/3   ·    ·     ·    1/3   ·   1/3   ·    ·
       ·    1/4   ·    1/4   ·   1/4   ·    1/4   ·
       ·    ·    1/3   ·    1/3   ·    ·    ·   1/3
       ·    ·    ·    1/2    ·    ·    ·    1/2   ·
       ·    1/4   ·    ·    1/4   ·   1/4   ·    1/4
       ·    ·    ·     ·     ·   1/2   ·    1/2   · ).

This Markov chain is irreducible, because the ghosts can fly from every state to every state in a finite amount of time. Due to the secret passageway, the Markov chain is also aperiodic, because the monsters can move from any state to any state both in an even and in an uneven number of state transitions. Therefore, a unique stationary distribution exists and can be found by solving μQ = μ subject to the constraint that the elements must sum to 1.
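This stationary distribution can be verified numerically. The sketch below re-enters the nine-state transition matrix above as an adjacency structure (states 1-9 are mapped to indices 0-8):

```python
import numpy as np

# Transition matrix of the 3x3-grid chain above (states 1..9 as indices 0..8),
# including the secret passage between states 2 and 8.
neighbors = {
    0: [1, 3],        1: [0, 2, 4, 7],  2: [1, 5],
    3: [0, 4, 6],     4: [1, 3, 5, 7],  5: [2, 4, 8],
    6: [3, 7],        7: [1, 4, 6, 8],  8: [5, 7],
}
P = np.zeros((9, 9))
for i, js in neighbors.items():
    P[i, js] = 1.0 / len(js)

# Solve mu P = mu with sum(mu) = 1: replace one redundant balance equation
# by the normalization constraint.
A = np.vstack([(P.T - np.eye(9))[:-1], np.ones(9)])
b = np.zeros(9)
b[-1] = 1.0
mu = np.linalg.solve(A, b)

print(np.round(100 * mu, 1))   # approximately [7.7 15.4 7.7 11.5 15.4 11.5 7.7 15.4 7.7] percent
```

Since this chain is a random walk on an undirected graph, the stationary probability of each state is simply its degree divided by the total degree (26), which is what the solve returns.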

The solution of this linear equation subject to the constraint is

μ = (7.7, 15.4, 7.7, 11.5, 15.4, 11.5, 7.7, 15.4, 7.7) %.

The central state and the border states 2 and 8 of the adjacent secret passageway are visited most, and the corner states are visited least.

11.15 Hitting times

Main article: phase-type distribution

The hitting time is the time, starting in a given set of states, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase type distribution. The simplest such distribution is that of a single exponentially distributed transition.

11.15.1 Expected hitting times

For a subset of states A ⊆ S, the vector k^A of hitting times (where element k_i^A represents the expected value, starting in state i, of the time until the chain enters one of the states in the set A) is the minimal non-negative solution to[62]

k_i^A = 0                        for i ∈ A,
−Σ_{j∈S} q_ij k_j^A = 1          for i ∉ A.

11.16 Time reversal

For a CTMC X_t, the time-reversed process is defined to be X̂_t = X_{T−t}. By Kelly's lemma this process has the same stationary distribution as the forward process.

A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.

11.17 Embedded Markov chain

One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by s_ij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by

s_ij = q_ij / Σ_{k≠i} q_ik    if i ≠ j,
s_ij = 0                      otherwise.

From this, S may be written as

S = I − (diag(Q))^−1 Q,

where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero.

To find the stationary probability distribution vector, we must next find φ such that

φS = φ,

with φ being a row vector, such that all elements in φ are greater than 0 and ||φ||₁ = 1. From this, π may be found as

π = φ (diag(Q))^−1 / (φ (diag(Q))^−1 𝟙),

where 𝟙 is a column vector of ones. Note that S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.

Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton: the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.

11.18 Applications

Research has reported the application and usefulness of Markov chains in a wide range of topics such as physics, chemistry, medicine, music, game theory and sports.

11.18.1 Physics

Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description.
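The embedded-chain construction of section 11.17 can be sketched numerically, reusing the made-up financial-market generator from Example 1 (section 11.14.1):

```python
import numpy as np

# Generator of the three-state financial-market chain from section 11.14.1.
Q = np.array([[-0.025, 0.02, 0.005],
              [0.3, -0.5, 0.2],
              [0.02, 0.4, -0.42]])

# Embedded (jump) chain: S = I - diag(Q)^{-1} Q, i.e. s_ij = q_ij / -q_ii
# off the diagonal and s_ii = 0.
D_inv = np.linalg.inv(np.diag(np.diag(Q)))
S = np.eye(3) - D_inv @ Q

# phi is the stationary distribution of S: phi S = phi, with ||phi||_1 = 1.
w, v = np.linalg.eig(S.T)
phi = np.real(v[:, np.argmax(np.real(w))])
phi = phi / phi.sum()

# Recover the CTMC stationary distribution: pi = phi (diag Q)^{-1}, normalized
# (the division by the sum also fixes the sign, since diag(Q) is negative).
pi = phi @ D_inv
pi = pi / pi.sum()

print(np.round(pi, 3))   # recovers (0.885, 0.071, 0.044) from section 11.14.1
```

The jump chain S has zeros on its diagonal and row sums of 1, and weighting its stationary vector φ by the mean holding times 1/|q_ii| recovers the CTMC stationary distribution obtained earlier by solving πQ = 0.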

11.18.2 Chemistry

[Figure: Michaelis-Menten kinetics. The enzyme (E) binds a substrate (S) and produces a product (P). Each reaction is a state transition in a Markov chain.]

Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability a given molecule is in that state.

The classical model of enzyme activity, Michaelis-Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.

An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products.[63] As a molecule is grown, a fragment is selected from the nascent molecule as the current state. It is not aware of its past (i.e., it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds.

Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (e.g., whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains.

Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains.[64]

11.18.3 Testing

Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets (samples) as a replacement for exhaustive testing. MCSTs also have uses in temporal state-based networks; Chilukuri et al.'s paper entitled "Temporal Uncertainty Reasoning Networks for Evidence Fusion with Applications to Object Detection and Tracking" (ScienceDirect) gives a background and case study for applying MCSTs to a wider range of applications.

11.18.4 Speech recognition

Hidden Markov models are the basis for most modern automatic speech recognition systems.

11.18.5 Information and computer science

Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper "A Mathematical Theory of Communication", which in a single step created the field of information theory, opens by introducing the concept of entropy through Markov modeling of the English language. Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning.

Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection[65]).

The LZMA lossless data compression algorithm combines Markov chains with Lempel-Ziv compression to achieve very high compression ratios.

11.18.6 Queueing theory

Main article: Queueing theory

Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917.[66] This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).[67]

Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i − 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue.

11.18.7 Internet applications

The PageRank of a webpage as used by Google is defined by a Markov chain.[68] It is the probability to be at page i in the stationary distribution on the following Markov chain on all (known) webpages.

If N is the number of known webpages, and a page i has k_i links to it, then it has transition probability α/k_i + (1 − α)/N for all pages that are linked to, and (1 − α)/N for all pages that are not linked to. The parameter α is taken to be about 0.85.[69]

Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.

11.18.8 Statistics

Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.

11.18.9 Economics and finance

Markov chains are used in finance and economics to model a variety of different phenomena, including asset prices and market crashes. The first financial model to use a Markov chain was from Prasad et al. in 1974.[70] Another was the regime-switching model of James D. Hamilton (1989), in which a Markov chain is used to model switches between periods of high and low GDP growth (or alternatively, economic expansions and recessions).[71] A more recent example is the Markov Switching Multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models.[72][73] It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns.

Dynamic macroeconomics heavily uses Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting.[74]

Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings.[75]

11.18.10 Social sciences

Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from authoritarian to democratic regime.[76]

11.18.11 Mathematical biology

Markov chains also have many applications in biological modelling, particularly population processes, which are useful in modelling processes that are (at least) analogous to biological populations. The Leslie matrix is one such example, used to describe the population dynamics of many species, though some of its entries are not probabilities (they may be greater than 1). Another example is the modeling of cell shape in dividing sheets of epithelial cells.[77] Yet another example is the state of ion channels in cell membranes.

Markov chains are also used in simulations of brain function, such as the simulation of the mammalian neocortex.[78]

11.18.12 Genetics

Markov chains have been used in population genetics in order to describe the change in gene frequencies in small populations affected by genetic drift, for example in the diffusion equation method described by Motoo Kimura.[79]

11.18.13 Games

Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).

11.18.14 Music

Markov chains are employed in algorithmic music composition, particularly in software such as CSound, Max and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric.[80]

A second-order Markov chain can be introduced by considering the current state and also the previous state, as indicated in the second table. Higher, nth-order chains tend to group particular notes together, while 'breaking off' into other patterns and sequences occasionally.

These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system.[81]

Markov chains can be used structurally, as in Xenakis's Analogique A and B.[82] Markov chains are also used in systems which use a Markov model to react interactively to music input.[83]

Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.[84]

11.18.15 Baseball

Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players as well as a team.[85] He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. AstroTurf.[86]

11.18.16 Markov text generators

Markov processes can also be used to generate superficially real-looking text given a sample document: they are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison,[87] Mark V Shaney[88][89]). These processes are also used by spammers to inject real-looking hidden paragraphs into unsolicited email and post comments in an attempt to get these messages past spam filters.

(Historical note: In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption,[90][91][92] which had been commonly regarded as a requirement for such mathematical laws to hold.[92] Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.[90] In 1912 Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov.[90][91] After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé.[93] Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.[90][94] Andrei Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes.[95][96] Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement.[95][97] He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes.[95][98] Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman-Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.[99] The differential equations are now called the Kolmogorov equations[100] or the Kolmogorov-Chapman equations.[101] Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.[96])

11.18.17 Bioinformatics

In the bioinformatics field, Markov chains can be used to simulate DNA sequences.[102]

11.19 See also

Hidden Markov model
Markov blanket
Markov chain geostatistics
Markov chain mixing time
Markov chain Monte Carlo
Markov decision process
Markov information source
Markov network
Quantum Markov chain
Semi-Markov process
Telescoping Markov chain

Variable-order Markov model
Brownian motion
Dynamics of Markovian particles
Examples of Markov chains
Interacting particle system
Stochastic cellular automaton
Markov decision process
Markov model
Random walk
Semi-Markov process
Markov chain approximation method

11.20 Notes

[1] Richard Serfozo (24 January 2009). Basics of Applied Stochastic Processes. Springer Science & Business Media. p. 2. ISBN 978-3-540-89332-5.
[2] Y.A. Rozanov (6 December 2012). Markov Random Fields. Springer Science & Business Media. p. 58. ISBN 978-1-4613-8190-7.
[3] Søren Asmussen (15 May 2003). Applied Probability and Queues. Springer Science & Business Media. p. 7. ISBN 978-0-387-00211-8.
[4] Emanuel Parzen (17 June 2015). Stochastic Processes. Courier Dover Publications. p. 188. ISBN 978-0-486-79688-8.
[5] Samuel Karlin; Howard E. Taylor (2 December 2012). A First Course in Stochastic Processes. Academic Press. pp. 29 and 30. ISBN 978-0-08-057041-9.
[6] John Lamperti (1977). Stochastic processes: a survey of the mathematical theory. Springer-Verlag. pp. 106-121. ISBN 978-3-540-90275-1.
[7] Sheldon M. Ross (1996). Stochastic processes. Wiley. pp. 174 and 231. ISBN 978-0-471-12062-9.
[8] Søren Asmussen (15 May 2003). Applied Probability and Queues. Springer Science & Business Media. p. 7. ISBN 978-0-387-00211-8.
[9] Charles Miller Grinstead; James Laurie Snell (1997). Introduction to Probability. American Mathematical Soc. pp. 464-466. ISBN 978-0-8218-0749-1.
[10] Pierre Bremaud (9 March 2013). Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer Science & Business Media. p. ix. ISBN 978-1-4757-3124-8.
[11] Hayes, Brian (2013). "First links in the Markov chain". American Scientist. 101 (2): 92-96.
[12] Ionut Florescu (7 November 2014). Probability and Stochastic Processes. John Wiley & Sons. pp. 373 and 374. ISBN 978-1-118-59320-2.
[13] Samuel Karlin; Howard E. Taylor (2 December 2012). A First Course in Stochastic Processes. Academic Press. p. 49. ISBN 978-0-08-057041-9.
[14] Weiss, George H. (2006). "Random Walks": 1. doi:10.1002/0471667196.ess2180.pub2.
[15] Michael F. Shlesinger (1985). The Wonderful world of stochastics: a tribute to Elliott W. Montroll. North-Holland. pp. 8-10. ISBN 978-0-444-86937-1.
[16] Sheldon M. Ross (1996). Stochastic processes. Wiley. pp. 235 and 358. ISBN 978-0-471-12062-9.
[17] Emanuel Parzen (17 June 2015). Stochastic Processes. Courier Dover Publications. pp. 7 and 8. ISBN 978-0-486-79688-8.
[18] Joseph L. Doob (1990). Stochastic processes. Wiley. pp. 46 and 47.
[19] Donald L. Snyder; Michael I. Miller (6 December 2012). Random Point Processes in Time and Space. Springer Science & Business Media. p. 32. ISBN 978-1-4612-3166-0.
[20] Jarrow, Robert; Protter, Philip (2004). "A short history of stochastic integration and mathematical finance: the early years, 1880-1970": 75-91. ISSN 0749-2170. doi:10.1214/lnms/1196285381.
[21] Guttorp, Peter; Thorarinsdottir, Thordis L. (2012). "What Happened to Discrete Chaos, the Quenouille Process, and the Sharp Markov Property? Some History of Stochastic Point Processes". International Statistical Review. 80 (2): 253-268. ISSN 0306-7734. doi:10.1111/j.1751-5823.2012.00181.x.
[22] Samuel Karlin; Howard E. Taylor (2 December 2012). A First Course in Stochastic Processes. Academic Press. p. 47. ISBN 978-0-08-057041-9.
[23] Bruce Hajek (12 March 2015). Random Processes for Engineers. Cambridge University Press. ISBN 978-1-316-24124-0.
[24] G. Latouche; V. Ramaswami (1 January 1999). Introduction to Matrix Analytic Methods in Stochastic Modeling. SIAM. p. 4. ISBN 978-0-89871-425-8.
[25] Sean Meyn; Richard L. Tweedie (2 April 2009). Markov Chains and Stochastic Stability. Cambridge University Press. p. 3. ISBN 978-0-521-73182-9.
[26] Gupta, Brij; Agrawal, Dharma P.; Yamaguchi, Shingo (16 May 2016). Handbook of Research on Modern Cryptographic Solutions for Computer and Cyber Security. IGI Global. p. 448. ISBN 978-1-5225-0106-0.
[27] Langville, Amy N.; Meyer, Carl D. (2006). "A Reordering for the PageRank Problem". SIAM Journal on Scientific Computing. 27 (6): 2112-2113. ISSN 1064-8275. doi:10.1137/040607551.
11.20. NOTES 67

[28] Reuven Y. Rubinstein; Dirk P. Kroese (20 September 2011). Simulation and the Monte Carlo Method. John Wiley & Sons. p. 225. ISBN 978-1-118-21052-9.

[29] Dani Gamerman; Hedibert F. Lopes (10 May 2006). Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, Second Edition. CRC Press. ISBN 978-1-58488-587-0.

[30] "Markovian". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005. (Subscription or UK public library membership required.)

[31] Everitt, B. S. (2002). The Cambridge Dictionary of Statistics. CUP. ISBN 0-521-81099-X.

[32] Parzen, E. (1962). Stochastic Processes, Holden-Day. ISBN 0-8162-6664-6. (Table 6.1)

[33] Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms, OUP. ISBN 0-19-920613-9. (entry for "Markov chain")

[34] Dodge, Y. The Oxford Dictionary of Statistical Terms, OUP. ISBN 0-19-920613-9.

[35] Meyn, S. Sean P., and Richard L. Tweedie. (2009). Markov chains and stochastic stability. Cambridge University Press. (Preface, p. iii)

[36] Seneta, E. (1996). "Markov and the Birth of Chain Dependence Theory". International Statistical Review / Revue Internationale de Statistique. 64 (3): 255–257. ISSN 0306-7734. doi:10.2307/1403785.

[37] Charles Miller Grinstead; James Laurie Snell (1997). Introduction to Probability. American Mathematical Soc. pp. 464–466. ISBN 978-0-8218-0749-1.

[38] Pierre Bremaud (9 March 2013). Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer Science & Business Media. p. ix. ISBN 978-1-4757-3124-8.

[39] Hayes, Brian (2013). "First links in the Markov chain". American Scientist. 101 (2): 92–96.

[40] Seneta, E. (1998). "I. J. Bienaymé [1796–1878]: Criticality, Inequality, and Internationalization". International Statistical Review / Revue Internationale de Statistique. 66 (3): 291–292. ISSN 0306-7734. doi:10.2307/1403518.

[41] Bru, B.; Hertz, S. (2001). "Maurice Fréchet": 331–334. doi:10.1007/978-1-4613-0179-0_71.

[42] Kendall, D. G.; Batchelor, G. K.; Bingham, N. H.; Hayman, W. K.; Hyland, J. M. E.; Lorentz, G. G.; Moffatt, H. K.; Parry, W.; Razborov, A. A.; Robinson, C. A.; Whittle, P. (1990). "Andrei Nikolaevich Kolmogorov (1903–1987)". Bulletin of the London Mathematical Society. 22 (1): 33. ISSN 0024-6093. doi:10.1112/blms/22.1.31.

[43] Cramér, Harald (1976). "Half a Century with Probability Theory: Some Personal Recollections". The Annals of Probability. 4 (4): 509–546. ISSN 0091-1798. doi:10.1214/aop/1176996025.

[44] Marc Barbut; Bernard Locker; Laurent Mazliak (23 August 2016). Paul Lévy and Maurice Fréchet: 50 Years of Correspondence in 107 Letters. Springer London. p. 5. ISBN 978-1-4471-7262-8.

[45] Valeriy Skorokhod (5 December 2005). Basic Principles and Applications of Probability Theory. Springer Science & Business Media. p. 146. ISBN 978-3-540-26312-8.

[46] Bernstein, Jeremy (2005). "Bachelier". American Journal of Physics. 73 (5): 395–398. ISSN 0002-9505. doi:10.1119/1.1848117.

[47] William J. Anderson (6 December 2012). Continuous-Time Markov Chains: An Applications-Oriented Approach. Springer Science & Business Media. p. vii. ISBN 978-1-4612-3038-0.

[48] Kendall, D. G.; Batchelor, G. K.; Bingham, N. H.; Hayman, W. K.; Hyland, J. M. E.; Lorentz, G. G.; Moffatt, H. K.; Parry, W.; Razborov, A. A.; Robinson, C. A.; Whittle, P. (1990). "Andrei Nikolaevich Kolmogorov (1903–1987)". Bulletin of the London Mathematical Society. 22 (1): 57. ISSN 0024-6093. doi:10.1112/blms/22.1.31.

[49] Durrett, Rick (2010). Probability: Theory and Examples (Fourth ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-76539-8.

[50] S. P. Meyn and R. L. Tweedie, 2005. Markov Chains and Stochastic Stability.

[51] Norris, J. R. (1997). "Continuous-time Markov chains I". Markov Chains. p. 60. ISBN 9780511810633. doi:10.1017/CBO9780511810633.004.

[52] Asher Levin, David (2009). Markov chains and mixing times. p. 16. ISBN 978-0-8218-4739-8. Retrieved 2016-03-04.

[53] Serfozo, Richard (2009), Basics of Applied Stochastic Processes, Probability and Its Applications, Berlin: Springer-Verlag: 35, ISBN 978-3-540-89331-8, MR 2484222, doi:10.1007/978-3-540-89332-5.

[54] Florian Schmitt and Franz Rothlauf, "On the Mean of the Second Largest Eigenvalue on the Convergence Rate of Genetic Algorithms", Working Paper 1/2001, Working Papers in Information Systems, 2001. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.6191

[55] Gene H. Golub, Charles F. Van Loan, Matrix computations, Third Edition, The Johns Hopkins University Press, Baltimore and London, 1996.

[56] Franzke, Brandon; Kosko, Bart (1 October 2011). "Noise can speed convergence in Markov chains". Physical Review E. 84 (4). doi:10.1103/PhysRevE.84.041112.

[57] Richard Durrett (19 May 2012). Essentials of Stochastic Processes. Springer Science & Business Media. p. 37. ISBN 978-1-4614-3615-7.

[58] A. Nielsen and M. Weber, "Computing the nearest reversible Markov chain". Numerical Linear Algebra with Applications, 22(3): 483–499, 2015.

[59] Spitzer, Frank (1970). "Interaction of Markov Processes". Advances in Mathematics. 5 (2): 246–290. doi:10.1016/0001-8708(70)90034-4.

[60] R. L. Dobrushin; V. I. Kriukov; A. L. Toom (1978). Stochastic Cellular Systems: Ergodicity, Memory, Morphogenesis. ISBN 9780719022067. Retrieved 2016-03-04.

[61] Doblinger, G., 1998. "Smoothing of Noise AR Signals Using an Adaptive Kalman Filter". In EUSIPCO 98. pp. 781–784. Available at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.251.3078 [Accessed January 15, 2015].

[62] Norris, J. R. (1997). "Continuous-time Markov chains II". Markov Chains. p. 108. ISBN 9780511810633. doi:10.1017/CBO9780511810633.005.

[63] Kutchukian, Peter; Lou, David; Shakhnovich, Eugene (2009). "FOG: Fragment Optimized Growth Algorithm for the de Novo Generation of Molecules occupying Druglike Chemical". Journal of Chemical Information and Modeling. 49 (7): 1630–1642. PMID 19527020. doi:10.1021/ci9000458.

[64] Kopp, V. S.; Kaganer, V. M.; Schwarzkopf, J.; Waidick, F.; Remmele, T.; Kwasniewski, A.; Schmidbauer, M. (2011). "X-ray diffraction from nonperiodic layered structures with correlations: Analytical calculation and experiment on mixed Aurivillius films". Acta Crystallographica Section A. 68: 148–155. doi:10.1107/S0108767311044874.

[65] Pratas, D; Silva, R; Pinho, A; Ferreira, P (May 18, 2015). "An alignment-free method to find and visualise rearrangements between pairs of DNA sequences". Scientific Reports (Group Nature). 5 (10203): 10203. PMID 25984837. doi:10.1038/srep10203.

[66] O'Connor, John J.; Robertson, Edmund F., "Markov chain", MacTutor History of Mathematics archive, University of St Andrews.

[67] S. P. Meyn, 2007. Control Techniques for Complex Networks, Cambridge University Press, 2007.

[68] U.S. Patent 6,285,999

[69] Page, Lawrence and Brin, Sergey and Motwani, Rajeev and Winograd, Terry (1999). The PageRank Citation Ranking: Bringing Order to the Web (Technical report). Retrieved 2016-03-04.

[70] Prasad, NR; RC Ender; ST Reilly; G Nesgos (1974). "Allocation of resources on a minimized cost basis". 1974 IEEE Conference on Decision and Control including the 13th Symposium on Adaptive Processes. 13: 402–3. doi:10.1109/CDC.1974.270470.

[71] Hamilton, James (1989). "A new approach to the economic analysis of nonstationary time series and the business cycle". Econometrica. Econometrica, Vol. 57, No. 2. 57 (2): 357–84. JSTOR 1912559. doi:10.2307/1912559.

[72] Calvet, Laurent E.; Fisher, Adlai J. (2001). "Forecasting Multifractal Volatility". Journal of Econometrics. 105 (1): 27–58. doi:10.1016/S0304-4076(01)00069-0.

[73] Calvet, Laurent; Adlai Fisher (2004). "How to Forecast long-run volatility: regime-switching and the estimation of multifractal processes". Journal of Financial Econometrics. 2: 49–83. doi:10.1093/jjfinec/nbh003.

[74] Brennan, Michael; Xiab, Yihong. "Stock Price Volatility and the Equity Premium" (PDF). Department of Finance, the Anderson School of Management, UCLA.

[75] "A Markov Chain Example in Credit Risk Modelling". Columbia University lectures.

[76] Acemoglu, Daron; Georgy Egorov; Konstantin Sonin (2011). "Political model of social evolution". Proceedings of the National Academy of Sciences. 108: 21292–21296. doi:10.1073/pnas.1019454108.

[77] Gibson, Matthew C; Patel, Ankit P.; Perrimon, Norbert (2006). "The emergence of geometric order in proliferating metazoan epithelia". Nature. 442 (7106): 1038–1041. PMID 16900102. doi:10.1038/nature05014.

[78] George, Dileep; Hawkins, Jeff (2009). Friston, Karl J., ed. "Towards a Mathematical Theory of Cortical Micro-circuits". PLoS Comput Biol. 5 (10): e1000532. PMC 2749218. PMID 19816557. doi:10.1371/journal.pcbi.1000532.

[79] Watterson, G. (1996). "Motoo Kimura's Use of Diffusion Theory in Population Genetics". Theoretical Population Biology. 49 (2): 154–188. doi:10.1006/tpbi.1996.0010. PMID 8813021.

[80] K McAlpine; E Miranda; S Hoggar (1999). "Making Music with Algorithms: A Case-Study System". Computer Music Journal. 23 (2): 19–30. doi:10.1162/014892699559733.

[81] Curtis Roads (ed.) (1996). The Computer Music Tutorial. MIT Press. ISBN 0-262-18158-4.

[82] Xenakis, Iannis; Kanach, Sharon (1992). Formalized Music: Mathematics and Thought in Composition, Pendragon Press. ISBN 1576470792.

[83] Continuator. Archived July 13, 2012, at the Wayback Machine.

[84] Pachet, F.; Roy, P.; Barbieri, G. (2011). "Finite-Length Markov Processes with Constraints", Proceedings of the 22nd International Joint Conference on Artificial Intelligence, IJCAI, pages 635–642, Barcelona, Spain, July 2011.

[85] Pankin, Mark D. "MARKOV CHAIN MODELS: THEORETICAL BACKGROUND". Retrieved 2007-11-26.

[86] Pankin, Mark D. "BASEBALL AS A MARKOV CHAIN". Retrieved 2009-04-24.

[87] "Poet's Corner – Fieralingue". Archived December 6, 2010, at the Wayback Machine.

[88] Kenner, Hugh; O'Rourke, Joseph (November 1984). "A Travesty Generator for Micros". BYTE. 9 (12): 129–131, 449–469.

[89] Hartman, Charles (1996). Virtual Muse: Experiments in Computer Poetry. Hanover, NH: Wesleyan University Press. ISBN 0-8195-2239-2.

11.21 History

Andrey Markov studied Markov chains in the early 20th century. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov, who claimed independence was necessary for the weak law of large numbers to hold. (Seneta, E. (1996). "Markov and the Birth of Chain Dependence Theory". International Statistical Review / Revue Internationale de Statistique. 64 (3): 255–257. ISSN 0306-7734. doi:10.2307/1403785.)

[90] Charles Miller Grinstead; James Laurie Snell (1997). Introduction to Probability. American Mathematical Soc. pp. 464–466. ISBN 978-0-8218-0749-1.

[91] Pierre Bremaud (9 March 2013). Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer Science & Business Media. p. ix. ISBN 978-1-4757-3124-8.

[92] Hayes, Brian (2013). "First links in the Markov chain". American Scientist. 101 (2): 92–96.

[93] Seneta, E. (1998). "I. J. Bienaymé [1796–1878]: Criticality, Inequality, and Internationalization". International Statistical Review / Revue Internationale de Statistique. 66 (3): 291–292. ISSN 0306-7734. doi:10.2307/1403518.

[94] Bru, B.; Hertz, S. (2001). "Maurice Fréchet": 331–334. doi:10.1007/978-1-4613-0179-0_71.

[95] Kendall, D. G.; Batchelor, G. K.; Bingham, N. H.; Hayman, W. K.; Hyland, J. M. E.; Lorentz, G. G.; Moffatt, H. K.; Parry, W.; Razborov, A. A.; Robinson, C. A.; Whittle, P. (1990). "Andrei Nikolaevich Kolmogorov (1903–1987)". Bulletin of the London Mathematical Society. 22 (1): 33. ISSN 0024-6093. doi:10.1112/blms/22.1.31.

[96] Cramér, Harald (1976). "Half a Century with Probability Theory: Some Personal Recollections". The Annals of Probability. 4 (4): 509–546. ISSN 0091-1798. doi:10.1214/aop/1176996025.

[97] Marc Barbut; Bernard Locker; Laurent Mazliak (23 August 2016). Paul Lévy and Maurice Fréchet: 50 Years of Correspondence in 107 Letters. Springer London. p. 5. ISBN 978-1-4471-7262-8.

[98] Valeriy Skorokhod (5 December 2005). Basic Principles and Applications of Probability Theory. Springer Science & Business Media. p. 146. ISBN 978-3-540-26312-8.

[99] Bernstein, Jeremy (2005). "Bachelier". American Journal of Physics. 73 (5): 395–398. ISSN 0002-9505. doi:10.1119/1.1848117.

[100] William J. Anderson (6 December 2012). Continuous-Time Markov Chains: An Applications-Oriented Approach. Springer Science & Business Media. p. vii. ISBN 978-1-4612-3038-0.

[101] Kendall, D. G.; Batchelor, G. K.; Bingham, N. H.; Hayman, W. K.; Hyland, J. M. E.; Lorentz, G. G.; Moffatt, H. K.; Parry, W.; Razborov, A. A.; Robinson, C. A.; Whittle, P. (1990). "Andrei Nikolaevich Kolmogorov (1903–1987)". Bulletin of the London Mathematical Society. 22 (1): 57. ISSN 0024-6093. doi:10.1112/blms/22.1.31.

[102] Pratas, Diogo; Bastos, Carlos; Pinho, Armando; Neves, Antonio; Matos, Luis (June 2011). "DNA synthetic sequences generation using multiple competing Markov models". Statistical Signal Processing Workshop (SSP), 2011 IEEE. 9 (12). pp. 133–136. doi:10.1109/SSP.2011.5967639.

11.22 References

A. A. Markov. "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga". Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 2-ya seriya, tom 15, pp. 135–156, 1906.

A. A. Markov. "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". Reprinted in Appendix B of: R. Howard. Dynamic Probabilistic Systems, volume 1: Markov Chains. John Wiley and Sons, 1971.

Classical Text in Translation: A. A. Markov, "An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains", trans. David Link. Science in Context 19.4 (2006): 591–600. Online: http://journals.cambridge.org/production/action/cjoGetFulltext?fulltextid=637500

Leo Breiman. Probability. Original edition published by Addison-Wesley, 1968; reprinted by Society for Industrial and Applied Mathematics, 1992. ISBN 0-89871-296-3. (See Chapter 7.)

J. L. Doob. Stochastic Processes. New York: John Wiley and Sons, 1953. ISBN 0-471-52369-0.

S. P. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability. London: Springer-Verlag, 1993. ISBN 0-387-19832-6. Online: https://netfiles.uiuc.edu/meyn/www/spm_files/book.html. Second edition to appear, Cambridge University Press, 2009.

S. P. Meyn. Control Techniques for Complex Networks. Cambridge University Press, 2007. ISBN 978-0-521-88441-9. Appendix contains abridged Meyn & Tweedie. Online: https://netfiles.uiuc.edu/meyn/www/spm_files/CTCN/CTCN.html

Booth, Taylor L. (1967). Sequential Machines and Automata Theory (1st ed.). New York: John Wiley and Sons, Inc. Library of Congress Card Catalog Number 67-25924. Extensive, wide-ranging book meant for specialists, written for both theoretical computer scientists as well as electrical engineers. With detailed explanations of state minimization techniques, FSMs, Turing machines, Markov processes, and undecidability. Excellent treatment of Markov

processes pp. 449. Discusses Z-transforms, D transforms in their context.
Kemeny, John G.; Hazleton Mirkil; J. Laurie Snell; Gerald L. Thompson (1959). Finite Mathematical Structures (1st ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. Library of Congress Card Catalog Number 59-12841. Classical text. cf Chapter 6 Finite Markov Chains pp. 384.

E. Nummelin. General irreducible Markov chains and non-negative operators. Cambridge University Press, 1984, 2004. ISBN 0-521-60494-X.

Seneta, E. Non-negative matrices and Markov chains. 2nd rev. ed., 1981, XVI, 288 p., Softcover, Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973.) ISBN 978-0-387-29765-1.

Kishor S. Trivedi, Probability and Statistics with Reliability, Queueing, and Computer Science Applications, John Wiley & Sons, Inc. New York, 2002. ISBN 0-471-33341-7.

K. S. Trivedi and R. A. Sahner, "SHARPE at the age of twenty-two", vol. 36, no. 4, pp. 52–57, ACM SIGMETRICS Performance Evaluation Review, 2009.

R. A. Sahner, K. S. Trivedi and A. Puliafito, Performance and reliability analysis of computer systems: an example-based approach using the SHARPE software package, Kluwer Academic Publishers, 1996. ISBN 0-7923-9650-2.

G. Bolch, S. Greiner, H. de Meer and K. S. Trivedi, Queueing Networks and Markov Chains, John Wiley, 2nd edition, 2006. ISBN 978-0-7923-9650-5.

11.23 External links

Introduction to Markov Chains on YouTube

Hazewinkel, Michiel, ed. (2001), "Markov chain", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

Techniques to Understand Computer Simulations: Markov Chain Analysis

Markov Chains chapter in American Mathematical Society's introductory probability book (pdf)

A beautiful visual explanation of Markov Chains

Chapter 5: Markov Chain Models

Making Sense and Nonsense of Markov Chains


Chapter 12

Density matrix

See also: Quantum statistical mechanics
Not to be confused with dense matrix.

A density matrix is a matrix that describes a quantum system in a mixed state, a statistical ensemble of several quantum states. This should be contrasted with a single state vector that describes a quantum system in a pure state. The density matrix is the quantum-mechanical analogue to a phase-space probability measure (probability distribution of position and momentum) in classical statistical mechanics.

Mixed states arise in situations where the experimenter does not know which particular states are being manipulated. Examples include a system in thermal equilibrium (or additionally chemical equilibrium) or a system with an uncertain or randomly varying preparation history (so one does not know which pure state the system is in). Also, if a quantum system has two or more subsystems that are entangled, then each subsystem must be treated as a mixed state even if the complete system is in a pure state.[1] The density matrix is also a crucial tool in quantum decoherence theory.

The density matrix is a representation of a linear operator called the density operator. The density matrix is obtained from the density operator by choice of basis in the underlying space. In practice, the terms density matrix and density operator are often used interchangeably. Both matrix and operator are self-adjoint (or Hermitian), positive semi-definite, of trace one, and may be infinite-dimensional.[2]

12.1 History

The formalism of density operators and matrices was introduced by John von Neumann in 1927 and independently, but less systematically, by Lev Landau[4] and Felix Bloch[5] in 1927 and 1946 respectively.

12.2 Pure and mixed states

In quantum mechanics, the state of a quantum system is represented by a state vector (or ket) |ψ⟩. A quantum system with a state vector |ψ⟩ is called a pure state. However, it is also possible for a system to be in a statistical ensemble of different state vectors: For example, there may be a 50% probability that the state vector is |ψ_1⟩ and a 50% chance that the state vector is |ψ_2⟩. This system would be in a mixed state. The density matrix is especially useful for mixed states, because any state, pure or mixed, can be characterized by a single density matrix.

A mixed state is different from a quantum superposition. The probabilities in a mixed state are classical probabilities (as in the probabilities one learns in classical probability theory / statistics), unlike the quantum probabilities in a quantum superposition. In fact, a quantum superposition of pure states is another pure state, for example |ψ⟩ = (|ψ_1⟩ + |ψ_2⟩)/√2. In this case, the coefficients 1/√2 are not probabilities, but rather probability amplitudes.

12.2.1 Example: Light polarization

An example of pure and mixed states is light polarization. Photons can have two helicities, corresponding to two orthogonal quantum states, |R⟩ (right circular polarization) and |L⟩ (left circular polarization). A photon can also be in a superposition state, such as (|R⟩ + |L⟩)/√2 (vertical polarization) or (|R⟩ − |L⟩)/√2 (horizontal polarization). More generally, it can be in any state α|R⟩ + β|L⟩ (with |α|² + |β|² = 1), corresponding to linear, circular, or elliptical polarization. If we pass (|R⟩ + |L⟩)/√2 polarized light through a circular polarizer which allows either only |R⟩ polarized light, or only |L⟩ polarized light, intensity would be reduced by half in both cases. This may make it seem like half of the photons are in state |R⟩ and the other half in state |L⟩. But this is not correct: Both |R⟩ and |L⟩ photons are partly absorbed by a vertical linear polarizer, but the (|R⟩ + |L⟩)/√2 light will pass through that polarizer with no absorption whatsoever.[3]

However, unpolarized light (such as the light from an incandescent light bulb) is different from any state like

α|R⟩ + β|L⟩ (linear, circular, or elliptical polarization). Unlike linearly or elliptically polarized light, it passes through a polarizer with 50% intensity loss whatever the orientation of the polarizer; and unlike circularly polarized light, it cannot be made linearly polarized with any wave plate, because randomly oriented polarization will emerge from a wave plate with random orientation. Indeed, unpolarized light cannot be described as any state of the form α|R⟩ + β|L⟩ in a definite sense. However, unpolarized light can be described with ensemble averages, e.g. that each photon is either |R⟩ with 50% probability or |L⟩ with 50% probability. The same behavior would occur if each photon was either vertically polarized with 50% probability or horizontally polarized with 50% probability.

Therefore, unpolarized light cannot be described by any pure state, but can be described as a statistical ensemble of pure states in at least two ways (the ensemble of half left and half right circularly polarized, or the ensemble of half vertically and half horizontally linearly polarized). These two ensembles are completely indistinguishable experimentally, and therefore they are considered the same mixed state. One of the advantages of the density matrix is that there is just one density matrix for each mixed state, whereas there are many statistical ensembles of pure states for each mixed state. Nevertheless, the density matrix contains all the information necessary to calculate any measurable property of the mixed state.

(Figure: The incandescent light bulb (1) emits completely random polarized photons (2) with mixed state density matrix [0.5, 0; 0, 0.5]. After passing through vertical plane polarizer (3), the remaining photons are all vertically polarized (4) and have pure state density matrix [1, 0; 0, 0].)

Where do mixed states come from? To answer that, consider how to generate unpolarized light. One way is to use a system in thermal equilibrium, a statistical mixture of enormous numbers of microstates, each with a certain probability (the Boltzmann factor), switching rapidly from one to the next due to thermal fluctuations. Thermal randomness explains why an incandescent light bulb, for example, emits unpolarized light. A second way to generate unpolarized light is to introduce uncertainty in the preparation of the system, for example, passing it through a birefringent crystal with a rough surface, so that slightly different parts of the beam acquire different polarizations. A third way to generate unpolarized light uses an EPR setup: A radioactive decay can emit two photons traveling in opposite directions, in the quantum state (|R, L⟩ + |L, R⟩)/√2. The two photons together are in a pure state, but if you only look at one of the photons and ignore the other, the photon behaves just like unpolarized light.

More generally, mixed states commonly arise from a statistical mixture of the starting state (such as in thermal equilibrium), from uncertainty in the preparation procedure (such as slightly different paths that a photon can travel), or from looking at a subsystem entangled with something else.

12.2.2 Mathematical description

The state vector |ψ⟩ of a pure state completely determines the statistical behavior of a measurement. For concreteness, take an observable quantity, and let A be the associated observable operator that has a representation on the Hilbert space H of the quantum system. For any real-valued, analytical function F defined on the real numbers,[6] suppose that F(A) is the result of applying F to the outcome of a measurement. The expectation value of F(A) is

⟨ψ|F(A)|ψ⟩.

Now consider a mixed state prepared by statistically combining two different pure states |ψ⟩ and |φ⟩, with the associated probabilities p and 1 − p, respectively. The associated probabilities mean that the preparation process for the quantum system ends in the state |ψ⟩ with probability p and in the state |φ⟩ with probability 1 − p.

It is not hard to show that the statistical properties of the observable for the system prepared in such a mixed state are completely determined. However, there is no state vector |ξ⟩ which determines this statistical behaviour in the sense that the expectation value of F(A) is

⟨ξ|F(A)|ξ⟩.

Nevertheless, there is a unique operator ρ such that the expectation value of F(A) can be written as

tr[ρ F(A)],

where the operator ρ is the density operator of the mixed system. A simple calculation shows that the operator ρ for the above discussion is given by

ρ = p |ψ⟩⟨ψ| + (1 − p) |φ⟩⟨φ|.

For the above example of unpolarized light, the density operator is

ρ = (1/2) |R⟩⟨R| + (1/2) |L⟩⟨L|.

12.3 Definition

For a finite-dimensional function space, the most general density operator is of the form

ρ = Σ_j p_j |ψ_j⟩⟨ψ_j|,

where the coefficients p_j are non-negative and add up to one. This represents a statistical mixture of pure states. If the given system is closed, then one can think of a mixed state as representing a single system with an uncertain preparation history, as explicitly detailed above; or we can regard the mixed state as representing an ensemble of systems, i.e. a large number of copies of the system in question, where p_j is the proportion of the ensemble being in the state |ψ_j⟩. An ensemble is described by a pure state if every copy of the system in that ensemble is in the same state, i.e. it is a pure ensemble. If the system is not closed, however, then it is simply not correct to claim that it has some definite but unknown state vector, as the density operator may record physical entanglements to other systems.

Consider a quantum ensemble of size N with occupancy numbers n_1, n_2, ..., n_k corresponding to the orthonormal states |1⟩, ..., |k⟩, respectively, where n_1 + ... + n_k = N, and, thus, the coefficients p_j = n_j/N. For a pure ensemble, where all N particles are in state |i⟩, we have n_j = 0 for all j ≠ i, from which we recover the corresponding density operator ρ = |i⟩⟨i|. However, the density operator of a mixed state does not capture all the information about the ingredients that went into the mixture; in particular, the coefficients p_j and the kets |ψ_j⟩ are not recoverable from the operator ρ without additional information. This non-uniqueness implies that different ensembles or mixtures may correspond to the same density operator. Such equivalent ensembles or mixtures cannot be distinguished by measurement of observables alone. This equivalence can be characterized precisely. Two ensembles ψ, ψ′ define the same density operator if and only if there is a matrix U with

U†U = I,

i.e., U is unitary, and such that

√(p_i) |ψ_i⟩ = Σ_j u_ij √(p′_j) |ψ′_j⟩.

This is simply a restatement of the following fact from linear algebra: for two square matrices M and N, MM* = NN* if and only if M = NU for some unitary U. (See square root of a matrix for more details.) Thus there is a unitary freedom in the ket mixture or ensemble that gives the same density operator. However, if the kets making up the mixture are restricted to be orthonormal, then the original probabilities p_j are recoverable as the eigenvalues of the density matrix.

In operator language, a density operator is a positive semidefinite, Hermitian operator of trace 1 acting on the state space.[7] A density operator describes a pure state if it is a rank-one projection. Equivalently, a density operator ρ describes a pure state if and only if

ρ = ρ²,

i.e. the state is idempotent. This is true regardless of whether H is finite-dimensional or not.

Geometrically, when the state is not expressible as a convex combination of other states, it is a pure state.[8] The family of mixed states is a convex set, and a state is pure if it is an extremal point of that set.

It follows from the spectral theorem for compact self-adjoint operators that every mixed state is a countable convex combination of pure states. This representation is not unique. Furthermore, a theorem of Andrew Gleason states that certain functions defined on the family of projections and taking values in [0,1] (which can be regarded as quantum analogues of probability measures) are determined by unique mixed states. See quantum logic for more details.

12.4 Measurement

Let A be an observable of the system, and suppose the ensemble is in a mixed state such that each of the pure states |ψ_j⟩ occurs with probability p_j. Then the corresponding density operator is:

ρ = Σ_j p_j |ψ_j⟩⟨ψ_j|.
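The claims above, that distinct ensembles can define the same density operator and that ρ = ρ² singles out pure states, can be spot-checked numerically. The following NumPy sketch is illustrative rather than taken from the text; the basis choice |R⟩ = (1, 0), |L⟩ = (0, 1) is an assumption. It builds the two unpolarized-light ensembles from section 12.2.1 and confirms they give the same (mixed) density matrix:

```python
import numpy as np

# Illustrative basis choice: |R> = (1, 0), |L> = (0, 1).
R = np.array([1, 0], dtype=complex)
L = np.array([0, 1], dtype=complex)
v = (R + L) / np.sqrt(2)   # vertical linear polarization
h = (R - L) / np.sqrt(2)   # horizontal linear polarization

def proj(psi):
    """Rank-one projector |psi><psi|."""
    return np.outer(psi, psi.conj())

# Two different 50/50 ensembles (circular vs. linear)...
rho_circular = 0.5 * proj(R) + 0.5 * proj(L)
rho_linear = 0.5 * proj(v) + 0.5 * proj(h)

# ...define the same density operator (the unitary freedom of 12.3):
print(np.allclose(rho_circular, rho_linear))             # True

# Purity criterion rho == rho^2: pure states pass, mixtures fail.
print(np.allclose(proj(v), proj(v) @ proj(v)))           # True  (pure)
print(np.allclose(rho_linear, rho_linear @ rho_linear))  # False (mixed)

# For orthonormal kets, the mixture probabilities are the eigenvalues.
print(np.linalg.eigvalsh(rho_linear))                    # [0.5 0.5]
```

Any other 50/50 pair of orthogonal polarizations would give the same matrix diag(0.5, 0.5), which is why the ensembles are experimentally indistinguishable.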

The expectation value of the measurement can be cal- decrease with a projective measurement, however gen-
culated by extending from the case of pure states (see Measurement in quantum mechanics):

\[
\langle A \rangle = \sum_j p_j \langle \psi_j | A | \psi_j \rangle
= \sum_j p_j \operatorname{tr}\!\left( |\psi_j\rangle\langle\psi_j| \, A \right)
= \operatorname{tr}\!\Big( \sum_j p_j |\psi_j\rangle\langle\psi_j| \, A \Big)
= \operatorname{tr}(\rho A),
\]

where tr denotes trace. Moreover, if A has spectral resolution

\[
A = \sum_i a_i |a_i\rangle\langle a_i| = \sum_i a_i P_i,
\]

where \(P_i = |a_i\rangle\langle a_i|\), the corresponding density operator after the measurement is given by:

\[
\rho' = \sum_i P_i \rho P_i .
\]

Note that the above density operator describes the full ensemble after measurement. The sub-ensemble for which the measurement result was the particular value \(a_i\) is described by the different density operator

\[
\rho_i' = \frac{P_i \rho P_i}{\operatorname{tr}[\rho P_i]} .
\]

This is true assuming that \(|a_i\rangle\) is the only eigenket (up to phase) with eigenvalue \(a_i\); more generally, \(P_i\) in this expression would be replaced by the projection operator onto the eigenspace corresponding to eigenvalue \(a_i\).

12.5 Entropy

The von Neumann entropy S of a mixture can be expressed in terms of the eigenvalues of \(\rho\) or in terms of the trace and logarithm of the density operator \(\rho\). Since \(\rho\) is a positive semi-definite operator, it has a spectral decomposition \(\rho = \sum_i \lambda_i |\lambda_i\rangle\langle\lambda_i|\), where the \(|\lambda_i\rangle\) are orthonormal vectors, \(\lambda_i \ge 0\) and \(\sum_i \lambda_i = 1\). Then the entropy of a quantum system with density matrix \(\rho\) is

\[
S = -\sum_i \lambda_i \ln \lambda_i = -\operatorname{tr}(\rho \ln \rho).
\]

Also it can be shown that

\[
S\Big(\sum_i p_i \rho_i\Big) = H(p_i) + \sum_i p_i S(\rho_i)
\]

when the \(\rho_i\) have orthogonal support, where H(p) is the Shannon entropy. This entropy can increase but never decrease with a projective measurement; generalised measurements, however, can decrease entropy.[9][10] The entropy of a pure state is zero, while that of a proper mixture is always greater than zero. Therefore, a pure state may be converted into a mixture by a measurement, but a proper mixture can never be converted into a pure state. Thus the act of measurement induces a fundamental irreversible change on the density matrix; this is analogous to the collapse of the state vector, or wavefunction collapse. Perhaps counterintuitively, the measurement actually decreases information by erasing quantum interference in the composite system; cf. quantum entanglement, einselection, and quantum decoherence.

(A subsystem of a larger system can be turned from a mixed to a pure state, but only by increasing the von Neumann entropy elsewhere in the system. This is analogous to how the entropy of an object can be lowered by putting it in a refrigerator: the air outside the refrigerator's heat exchanger warms up, gaining even more entropy than was lost by the object in the refrigerator. See the second law of thermodynamics, and Entropy in thermodynamics and information theory.)

12.6 The von Neumann equation for time evolution

See also: Liouville's theorem (Hamiltonian) § Quantum Liouville equation

Just as the Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as the Liouville–von Neumann equation) describes how a density operator evolves in time; in fact, the two equations are equivalent, in the sense that either can be derived from the other. The von Neumann equation dictates that[11][12]

\[
i\hbar \frac{\partial \rho}{\partial t} = [H, \rho],
\]

where the brackets denote a commutator.

Note that this equation only holds when the density operator is taken to be in the Schrödinger picture, even though at first glance it seems to emulate the Heisenberg equation of motion in the Heisenberg picture, which carries a crucial sign difference:

\[
i\hbar \frac{dA^{(H)}}{dt} = -\left[H, A^{(H)}\right],
\]

where \(A^{(H)}(t)\) is some Heisenberg-picture operator; but in this picture the density matrix is not time-dependent, and the relative sign ensures that the time derivative of the expected value \(\langle A \rangle\) comes out the same as in the Schrödinger picture.
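The expectation-value and entropy formulas in this section can be checked numerically. A minimal NumPy sketch, assuming an illustrative qubit ensemble (the states, weights, and observable are hypothetical, chosen only for demonstration):

```python
import numpy as np

# Ensemble: |0> with probability 0.7, |+> with probability 0.3 (illustrative choice)
psi0 = np.array([1.0, 0.0], dtype=complex)
psi_plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
probs, kets = [0.7, 0.3], [psi0, psi_plus]

# rho = sum_j p_j |psi_j><psi_j|
rho = sum(p * np.outer(k, k.conj()) for p, k in zip(probs, kets))

# Observable: Pauli-Z
A = np.array([[1, 0], [0, -1]], dtype=complex)

# <A> = tr(rho A) must equal the ensemble average sum_j p_j <psi_j|A|psi_j>
lhs = np.trace(rho @ A).real
rhs = sum(p * (k.conj() @ A @ k).real for p, k in zip(probs, kets))
assert np.isclose(lhs, rhs)

# Von Neumann entropy S = -tr(rho ln rho), computed from the eigenvalues of rho
evals = np.linalg.eigvalsh(rho)
S = -sum(lam * np.log(lam) for lam in evals if lam > 1e-12)
print(S > 0)  # a proper mixture has strictly positive entropy
```

Changing the weights or the observable changes the numbers, but not the identities being checked.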
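The von Neumann equation itself can be verified numerically by evolving ρ with the unitary e^{−iHt/ℏ} and differencing in time. A small sketch in natural units (ℏ = 1), with an arbitrary illustrative Hamiltonian:

```python
import numpy as np

hbar = 1.0  # natural units
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)  # illustrative Hamiltonian

# Initial mixed state (diagonal, hence not stationary under this H)
rho0 = np.diag([0.8, 0.2]).astype(complex)

def U(t):
    # e^{-iHt/hbar} via the eigendecomposition of the Hermitian H
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

def rho(t):
    return U(t) @ rho0 @ U(t).conj().T

# Central difference d(rho)/dt at t = 0 versus the commutator [H, rho]/(i*hbar)
dt = 1e-6
lhs = (rho(dt) - rho(-dt)) / (2 * dt)
rhs = (H @ rho0 - rho0 @ H) / (1j * hbar)
print(np.allclose(lhs, rhs, atol=1e-6))  # True
```

Note that tr ρ(t) = 1 for all t, since the evolution is unitary.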
Taking the density operator to be in the Schrödinger picture makes sense, since it is composed of 'Schrödinger' kets and bras evolved in time, as per the Schrödinger picture. If the Hamiltonian is time-independent, this differential equation can be easily solved to yield

\[
\rho(t) = e^{-iHt/\hbar} \, \rho(0) \, e^{iHt/\hbar}.
\]

For a more general Hamiltonian, if G(t) is the wavefunction propagator over some interval, then the time evolution of the density matrix over that same interval is given by

\[
\rho(t) = G(t)\, \rho(0)\, G(t)^{\dagger}.
\]

12.7 Quantum Liouville, Moyal's equation

The density matrix operator may also be realized in phase space. Under the Wigner map, the density matrix transforms into the equivalent Wigner function,

\[
W(x,p) \;\stackrel{\mathrm{def}}{=}\; \frac{1}{\pi\hbar} \int_{-\infty}^{\infty} \rho(x+y,\, x-y)\, e^{-2ipy/\hbar}\, dy .
\]

The equation for the time evolution of the Wigner function is then the Wigner transform of the above von Neumann equation,

\[
\frac{\partial W(q,p,t)}{\partial t} = -\{\{ W(q,p,t),\, H(q,p) \}\},
\]

where H(q,p) is the Hamiltonian, and {{·, ·}} is the Moyal bracket, the transform of the quantum commutator.

The evolution equation for the Wigner function is then analogous to that of its classical limit, the Liouville equation of classical physics. In the limit of vanishing Planck's constant \(\hbar\), W(q,p,t) reduces to the classical Liouville probability density function in phase space.

The classical Liouville equation can be solved using the method of characteristics for partial differential equations, the characteristic equations being Hamilton's equations. The Moyal equation in quantum mechanics similarly admits formal solutions in terms of quantum characteristics, predicated on the ∗-product of phase space, although, in actual practice, solution-seeking follows different methods.

12.8 Composite systems

The joint density matrix of a composite system of two systems A and B is described by \(\rho_{AB}\). Then the subsystems are described by their reduced density operator,

\[
\rho_A = \operatorname{tr}_B \rho_{AB},
\]

where \(\operatorname{tr}_B\) is called the partial trace over system B. If A and B are two distinct and independent systems, then \(\rho_{AB} = \rho_A \otimes \rho_B\), which is a product state.

12.9 C*-algebraic formulation of states

It is now generally accepted that the description of quantum mechanics in which all self-adjoint operators represent observables is untenable.[13][14] For this reason, observables are identified with elements of an abstract C*-algebra A (that is, one without a distinguished representation as an algebra of operators) and states are positive linear functionals on A. However, by using the GNS construction, we can recover Hilbert spaces which realize A as a subalgebra of operators.

Geometrically, a pure state on a C*-algebra A is a state which is an extreme point of the set of all states on A. By properties of the GNS construction these states correspond to irreducible representations of A.

The states of the C*-algebra of compact operators K(H) correspond exactly to the density operators, and therefore the pure states of K(H) are exactly the pure states in the sense of quantum mechanics.

The C*-algebraic formulation can be seen to include both classical and quantum systems. When the system is classical, the algebra of observables becomes an abelian C*-algebra. In that case the states become probability measures, as noted in the introduction.

12.10 See also

• Quantum statistical mechanics
• Atomic electron transition
• Born rule
• Density functional theory
• Gleason's theorem
• Green–Kubo relations
• Green's function (many-body theory)
• Lindblad equation
• Quantum state
• POVM (generalized measurement)
• Purification of quantum state
• Wave function
• Wigner quasi-probability distribution

12.11 Notes and references

[1] Hall, B. C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer, p. 419, ISBN 978-1-4614-7115-8, doi:10.1007/978-1-4614-7116-5

[2] Fano, Ugo (1957), "Description of States in Quantum Mechanics by Density Matrix and Operator Techniques", Reviews of Modern Physics, 29: 74–93, Bibcode:1957RvMP...29...74F, doi:10.1103/RevModPhys.29.74

[3] von Neumann, John (1927), "Wahrscheinlichkeitstheoretischer Aufbau der Quantenmechanik", Göttinger Nachrichten, 1: 245–272

[4] Schlüter, Michael and Lu Jeu Sham (1982), "Density functional theory", Physics Today, 35 (2): 36, Bibcode:1982PhT....35b..36S, doi:10.1063/1.2914933

[5] Ugo Fano (June 1995), "Density matrices as polarization vectors", Rendiconti Lincei, 6 (2): 123–130, doi:10.1007/BF03001661

[6] Technically, F must be a Borel function

[7] Hall, B. C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer, p. 423, ISBN 978-1-4614-7115-8, doi:10.1007/978-1-4614-7116-5

[8] Hall, B. C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, 267, Springer, p. 439, ISBN 978-1-4614-7115-8, doi:10.1007/978-1-4614-7116-5

[9] Nielsen, Michael; Chuang, Isaac (2000), Quantum Computation and Quantum Information, Cambridge University Press, ISBN 978-0-521-63503-5. Chapter 11: "Entropy and information", Theorem 11.9, "Projective measurements cannot decrease entropy"

[10] Everett, Hugh (1973), "The Theory of the Universal Wavefunction (1956), Appendix I: Monotone decrease of information for stochastic processes", The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press, pp. 128–129, ISBN 978-0-691-08131-1

[11] Breuer, Heinz; Petruccione, Francesco (2002), The Theory of Open Quantum Systems, p. 110, ISBN 978-0-19-852063-4

[12] Schwabl, Franz (2002), Statistical Mechanics, p. 16, ISBN 978-3-540-43163-3

[13] See appendix, Mackey, George Whitelaw (1963), Mathematical Foundations of Quantum Mechanics, Dover Books on Mathematics, New York: Dover Publications, ISBN 978-0-486-43517-6

[14] Emch, Gerard G. (1972), Algebraic Methods in Statistical Mechanics and Quantum Field Theory, Wiley-Interscience, ISBN 978-0-471-23900-0
Chapter 13

Matrix (mathematics)
For other uses, see Matrix. "Matrix theory" redirects here. For the physics topic, see Matrix string theory.

In mathematics, a matrix (plural matrices) is a rectangular array[1] of numbers, symbols, or expressions, arranged in rows and columns.[2][3] For example, the dimensions of the matrix below are 2 × 3 (read "two by three"), because there are two rows and three columns:

\[
\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}.
\]

[Figure: An m-by-n matrix, with entry ai,j in row i and column j. The m rows are horizontal and the n columns are vertical. Each element of a matrix is often denoted by a variable with two subscripts. For example, a2,1 represents the element at the second row and first column of a matrix A.]

The individual items in an m × n matrix A, often denoted by ai,j, where max i = m and max j = n, are called its elements or entries.[4] Provided that they have the same size (each matrix has the same number of rows and the same number of columns as the other), two matrices can be added or subtracted element by element (see Conformable matrix). The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second (i.e., the inner dimensions are the same, n for Am,n × Bn,p). Any matrix can be multiplied element-wise by a scalar from its associated field.

A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example, the rotation of vectors in three-dimensional space is a linear transformation, which can be represented by a rotation matrix R: if v is a column vector (a matrix with only one column) describing the position of a point in space, the product Rv is a column vector describing the position of that point after a rotation. The product of two transformation matrices is a matrix that represents the composition of two linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant. For example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable (along with other information) from the matrix's eigenvalues and eigenvectors.

Applications of matrices are found in most scientific fields. In every branch of physics, including classical mechanics, optics, electromagnetism, quantum mechanics, and quantum electrodynamics, they are used to study physical phenomena, such as the motion of rigid bodies. In computer graphics, they are used to manipulate 3D models and project them onto a 2-dimensional screen. In probability theory and statistics, stochastic matrices are used to describe sets of probabilities; for instance, they are used within the PageRank algorithm that ranks the pages in a Google search.[5] Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions. Matrices are used in economics to describe systems of economic relationships.

A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations, a subject that is centuries old and is today an expanding area of research. Matrix decomposition methods simplify computations, both theoretically and practically. Algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other computations. Infinite matrices occur in planetary theory and in atomic theory. A simple example of an infinite matrix is the matrix representing the derivative operator,
which acts on the Taylor series of a function.

13.1 Definition

A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined.[6] Most commonly, a matrix over a field F is a rectangular array of scalars each of which is a member of F.[7][8] Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below. For instance, this is a real matrix:

\[
A = \begin{bmatrix} -1.3 & 0.6 \\ 20.4 & 5.5 \\ 9.7 & -6.2 \end{bmatrix}.
\]

The numbers, symbols or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively.

13.1.1 Size

The size of a matrix is defined by the number of rows and columns that it contains. A matrix with m rows and n columns is called an m × n matrix or m-by-n matrix, while m and n are called its dimensions. For example, the matrix A above is a 3 × 2 matrix.

Matrices which have a single row are called row vectors, and those which have a single column are called column vectors. A matrix which has the same number of rows and columns is called a square matrix. A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.

13.2 Notation

Matrices are commonly written in box brackets or parentheses:

\[
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
= \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{pmatrix}
= (a_{ij}) \in \mathbb{R}^{m \times n}.
\]

The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are usually symbolized using upper-case letters (such as A in the examples above), while the corresponding lower-case letters, with two subscript indices (for example, a11, or a1,1), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style.

The entry in the i-th row and j-th column of a matrix A is sometimes referred to as the i,j, (i,j), or (i,j)th entry of the matrix, and most commonly denoted as ai,j or aij. Alternative notations for that entry are A[i,j] or Ai,j. For example, the (1,3) entry of the following matrix A is 5 (also denoted a13, a1,3, A[1,3] or A1,3):

\[
A = \begin{bmatrix}
4 & -7 & 5 & 0 \\
-2 & 0 & 11 & 8 \\
19 & 1 & -3 & 12
\end{bmatrix}.
\]

Sometimes, the entries of a matrix can be defined by a formula such as ai,j = f(i, j). For example, each of the entries of the following matrix A is determined by aij = i − j:

\[
A = \begin{bmatrix}
0 & -1 & -2 & -3 \\
1 & 0 & -1 & -2 \\
2 & 1 & 0 & -1
\end{bmatrix}.
\]

In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For example, the matrix above is defined as A = [i−j], or A = ((i−j)). If matrix size is m × n, the above-mentioned formula f(i, j) is valid for any i = 1, ..., m and any j = 1, ..., n. This can be either specified separately, or using m × n as a subscript. For instance, the matrix A above is 3 × 4 and can be defined as A = [i − j] (i = 1, 2, 3; j = 1, ..., 4), or A = [i − j]3×4.

Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an m-by-n matrix. Some programming languages start the numbering of array indexes at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1.[9] This article follows the more common convention in mathematical writing where enumeration starts from 1.

An asterisk is occasionally used to refer to whole rows or columns in a matrix. For example, ai,∗ refers to the ith row of A, and a∗,j refers to the jth column of A. The set of all m-by-n matrices is denoted 𝕄(m, n).

13.3 Basic operations

There are a number of basic operations that can be applied to modify matrices, called matrix addition, scalar multiplication, transposition, matrix multiplication, row operations, and submatrix.[11]
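The formula-defined matrix A = [i − j] above translates directly into code; a short NumPy sketch (note the zero-based indexing discussed in the text):

```python
import numpy as np

# Build the 3-by-4 matrix with entries a_ij = i - j (1-based, as in the text).
# NumPy indexes from zero, so shift both indices by one.
A = np.fromfunction(lambda i, j: (i + 1) - (j + 1), (3, 4))

print(A)
# [[ 0. -1. -2. -3.]
#  [ 1.  0. -1. -2.]
#  [ 2.  1.  0. -1.]]

# The (1,3) entry in 1-based notation is A[0, 2] in zero-based indexing.
print(A[0, 2])  # -2.0
```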
13.3.1 Addition, scalar multiplication and transposition

Main articles: Matrix addition, Scalar multiplication, and Transpose

Familiar properties of numbers extend to these operations of matrices: for example, addition is commutative, that is, the matrix sum does not depend on the order of the summands: A + B = B + A.[12] The transpose is compatible with addition and scalar multiplication, as expressed by (cA)T = c(AT) and (A + B)T = AT + BT. Finally, (AT)T = A.

13.3.2 Matrix multiplication

Main article: Matrix multiplication

[Figure: Schematic depiction of the matrix product AB of two matrices A and B.]

Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B:

\[
[AB]_{i,j} = A_{i,1}B_{1,j} + A_{i,2}B_{2,j} + \cdots + A_{i,n}B_{n,j} = \sum_{r=1}^{n} A_{i,r}B_{r,j},
\]

where 1 ≤ i ≤ m and 1 ≤ j ≤ p.[13] For example, the underlined entry 2340 in the product is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340:

\[
\begin{bmatrix} 2 & 3 & 4 \\ 1 & 0 & 0 \end{bmatrix}
\begin{bmatrix} 0 & 1000 \\ 1 & 100 \\ 0 & 10 \end{bmatrix}
=
\begin{bmatrix} 3 & \underline{2340} \\ 0 & 1000 \end{bmatrix}.
\]

Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A + B)C = AC + BC as well as C(A + B) = CA + CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined.[14] The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and m ≠ k. Even if both products are defined, they need not be equal; that is, generally

AB ≠ BA,

that is, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is:

\[
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}
=
\begin{bmatrix} 0 & 1 \\ 0 & 3 \end{bmatrix},
\]

whereas

\[
\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
=
\begin{bmatrix} 3 & 4 \\ 0 & 0 \end{bmatrix}.
\]

Besides the ordinary matrix multiplication just described, there exist other less frequently used operations on matrices that can be considered forms of multiplication, such as the Hadamard product and the Kronecker product.[15] They arise in solving matrix equations such as the Sylvester equation.

13.3.3 Row operations

Main article: Row operations

There are three types of row operations:

1. row addition, that is, adding a row to another;
2. row multiplication, that is, multiplying all entries of a row by a non-zero constant;
3. row switching, that is, interchanging two rows of a matrix.

These operations are used in a number of ways, including solving linear equations and finding matrix inverses.

13.3.4 Submatrix

A submatrix of a matrix is obtained by deleting any collection of rows and/or columns.[16][17][18] For example,
from the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2:

\[
A = \begin{bmatrix}
1 & 2 & 3 & 4 \\
5 & 6 & 7 & 8 \\
9 & 10 & 11 & 12
\end{bmatrix}
\;\rightarrow\;
\begin{bmatrix}
1 & 3 & 4 \\
5 & 7 & 8
\end{bmatrix}.
\]

The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.[18][19]

A principal submatrix is a square submatrix obtained by removing certain rows and columns. The definition varies from author to author. According to some authors, a principal submatrix is a submatrix in which the set of row indices that remain is the same as the set of column indices that remain.[20][21] Other authors define a principal submatrix to be one in which the first k rows and columns, for some number k, are the ones that remain;[22] this type of submatrix has also been called a leading principal submatrix.[23]

13.4 Linear equations

Main articles: Linear equation and System of linear equations

Matrices can be used to compactly write and work with multiple linear equations, that is, systems of linear equations. For example, if A is an m-by-n matrix, x designates a column vector (that is, n×1-matrix) of n variables x1, x2, ..., xn, and b is an m×1-column vector, then the matrix equation

\[
A\mathbf{x} = \mathbf{b}
\]

is equivalent to the system of linear equations

\[
\begin{aligned}
A_{1,1}x_1 + A_{1,2}x_2 + \cdots + A_{1,n}x_n &= b_1 \\
&\;\;\vdots \\
A_{m,1}x_1 + A_{m,2}x_2 + \cdots + A_{m,n}x_n &= b_m .
\end{aligned}
\]
[24]

Using matrices, this can be solved more compactly than would be possible by writing out all the equations separately. If n = m and the equations are independent, this can be done by writing

\[
\mathbf{x} = A^{-1} \mathbf{b},
\]

where A−1 is the inverse matrix of A. If A has no inverse, solutions, if any, can be found using its generalized inverse.

13.5 Linear transformations

Main articles: Linear transformation and Transformation matrix

[Figure: The vectors represented by a 2-by-2 matrix correspond to the sides of a unit square transformed into a parallelogram; the vertices are (0,0), (a,b), (a+c, b+d), and (c,d), and the parallelogram's area is ad − bc.]

Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps. A real m-by-n matrix A gives rise to a linear transformation Rn → Rm mapping each vector x in Rn to the (matrix) product Ax, which is a vector in Rm. Conversely, each linear transformation f: Rn → Rm arises from a unique m-by-n matrix A: explicitly, the (i, j)-entry of A is the ith coordinate of f(ej), where ej = (0,...,0,1,0,...,0) is the unit vector with 1 in the jth position and 0 elsewhere. The matrix A is said to represent the linear map f, and A is called the transformation matrix of f.

For example, the 2×2 matrix

\[
A = \begin{bmatrix} a & c \\ b & d \end{bmatrix}
\]

can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The parallelogram pictured at the right is obtained by multiplying A with each of the column vectors \(\begin{bmatrix}0\\0\end{bmatrix}\), \(\begin{bmatrix}1\\0\end{bmatrix}\), \(\begin{bmatrix}1\\1\end{bmatrix}\), and \(\begin{bmatrix}0\\1\end{bmatrix}\) in turn. These vectors define the vertices of the unit square.

The following table shows a number of 2-by-2 matrices with the associated linear maps of R2. The blue original is mapped to the green grid and shapes. The origin (0,0) is marked with a black point.
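The unit-square picture above can be reproduced numerically; a brief NumPy sketch with illustrative values for a, b, c, and d:

```python
import numpy as np

a, b, c, d = 2.0, 1.0, 0.5, 1.5   # illustrative entries
A = np.array([[a, c],
              [b, d]])

# Corners of the unit square as columns: (0,0), (1,0), (1,1), (0,1)
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)

# Applying A maps the square onto the parallelogram described in the text,
# with columns (0,0), (a,b), (a+c,b+d), (c,d)
parallelogram = A @ square
print(parallelogram)

# The parallelogram's signed area equals det(A) = ad - bc
assert np.isclose(np.linalg.det(A), a * d - b * c)
```

Swapping in other entries for a, b, c, d deforms the parallelogram but preserves the determinant-as-area relation.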
Under the 1-to-1 correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps:[25] if a k-by-m matrix B represents another linear map g : Rm → Rk, then the composition g ∘ f is represented by BA, since

(g ∘ f)(x) = g(f(x)) = g(Ax) = B(Ax) = (BA)x.

The last equality follows from the above-mentioned associativity of matrix multiplication.

The rank of a matrix A is the maximum number of linearly independent row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors.[26] Equivalently, it is the dimension of the image of the linear map represented by A.[27] The rank–nullity theorem states that the dimension of the kernel of a matrix plus the rank equals the number of columns of the matrix.[28]

13.6 Square matrix

Main article: Square matrix

A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries aii form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix.

13.6.1 Main types

Diagonal and triangular matrix

If all entries of A below the main diagonal are zero, A is called an upper triangular matrix. Similarly, if all entries of A above the main diagonal are zero, A is called a lower triangular matrix. If all entries outside the main diagonal are zero, A is called a diagonal matrix.

Identity matrix

Main article: Identity matrix

The identity matrix In of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, for example,

\[
I_1 = \begin{bmatrix} 1 \end{bmatrix},\quad
I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\quad \ldots,\quad
I_n = \begin{bmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{bmatrix}.
\]

It is a square matrix of order n, and also a special kind of diagonal matrix. It is called an identity matrix because multiplication with it leaves a matrix unchanged:

AIn = ImA = A for any m-by-n matrix A.

A nonzero scalar multiple of an identity matrix is called a scalar matrix. If the matrix entries come from a field, the scalar matrices form a group, under matrix multiplication, that is isomorphic to the multiplicative group of nonzero elements of the field.

Symmetric or skew-symmetric matrix

A square matrix A that is equal to its transpose, that is, A = AT, is a symmetric matrix. If instead A is equal to the negative of its transpose, that is, A = −AT, then A is a skew-symmetric matrix. In complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A∗ = A, where the star or asterisk denotes the conjugate transpose of the matrix, that is, the transpose of the complex conjugate of A.

By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.[29] This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns; see below.

Invertible matrix and its inverse

A square matrix A is called invertible or non-singular if there exists a matrix B such that

AB = BA = In,[30][31]

where In is the n×n identity matrix with 1s on the main diagonal and 0s elsewhere. If B exists, it is unique and is called the inverse matrix of A, denoted A−1.

Definite matrix

A symmetric n×n-matrix A is called positive-definite (respectively negative-definite; indefinite) if for all nonzero vectors x ∈ Rn the associated quadratic form given by

Q(x) = xTAx
takes only positive values (respectively only negative values; both some negative and some positive values).[32] If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.

A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, that is, the matrix is positive-semidefinite and it is invertible.[33] The table at the right shows two possibilities for 2-by-2 matrices.

Allowing as input two different vectors instead yields the bilinear form associated to A:

BA(x, y) = xTAy.[34]

Orthogonal matrix

Main article: Orthogonal matrix

An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (that is, orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse:

AT = A−1,

which entails

ATA = AAT = In,

where In is the identity matrix of size n.

An orthogonal matrix A is necessarily invertible (with inverse A−1 = AT), unitary (A−1 = A∗), and normal (A∗A = AA∗). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure rotation, while every orthogonal matrix with determinant −1 is either a pure reflection, or a composition of reflection and rotation.

The complex analogue of an orthogonal matrix is a unitary matrix.

13.6.2 Main operations

Trace

The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors:

tr(AB) = tr(BA).

This is immediate from the definition of matrix multiplication:

\[
\operatorname{tr}(AB) = \sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij} B_{ji} = \operatorname{tr}(BA).
\]

Also, the trace of a matrix is equal to that of its transpose, that is,

tr(A) = tr(AT).

Determinant

Main article: Determinant

[Figure: A linear transformation on R2 given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors to a clockwise one.]

The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R2) or volume (in R3) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.

The determinant of 2-by-2 matrices is given by

\[
\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.
\]

The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalises these two formulae to all dimensions.[35]

The determinant of a product of square matrices equals the product of their determinants:

det(AB) = det(A) det(B).[36]

Adding a multiple of any row to another row, or a multiple of any column to another column, does not change
the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.[37] Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, that is, determinants of smaller matrices.[38] This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), which can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.[39]

Eigenvalues and eigenvectors

Main article: Eigenvalues and eigenvectors

A number λ and a non-zero vector v satisfying

Av = λv

are called an eigenvalue and an eigenvector of A, respectively.[40][41] The number λ is an eigenvalue of an n×n-matrix A if and only if A − λIn is not invertible, which is equivalent to

det(A − λI) = 0.[42]

The polynomial pA in an indeterminate X, given by evaluation of the determinant det(XIn − A), is called the characteristic polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation pA(λ) = 0 has at most n different solutions, that is, eigenvalues of the matrix.[43] They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, pA(A) = 0, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.

13.7 Computational aspects

Matrix calculations can often be performed with different techniques. Many problems can be solved by both direct algorithms and iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a sequence of vectors xn converging to an eigenvector when n tends to infinity.[44]

To be able to choose the more appropriate algorithm for each specific problem, it is important to determine both the effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical linear algebra.[45] As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability.

Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations, such as additions and multiplications of scalars, are necessary to perform some algorithm, for example, multiplication of matrices. For example, calculating the matrix product of two n-by-n matrices using the definition given above needs n3 multiplications, since for any of the n2 entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only n2.807 multiplications.[46] A refined approach also incorporates specific features of the computing devices.

In many practical situations additional information about the matrices involved is known. An important case are sparse matrices, that is, matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.[47]

An algorithm is, roughly speaking, numerically stable if little deviations in the input values do not lead to big deviations in the result. For example, calculating the inverse of a matrix via Laplace's formula (Adj(A) denotes the adjugate matrix of A),

A−1 = Adj(A) / det(A),

may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.[48]

Although most computer languages are not designed with commands or libraries for matrices, as early as the 1970s some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices. Some computer languages, such as APL, were designed to manipulate matrices, and various mathematical programs can be used to aid computing with matrices.[49]

13.8 Decomposition

Main articles: Matrix decomposition, Matrix diagonalization, Gaussian elimination, and Montante's method

There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix decomposition or matrix factorization techniques. The interest of all these techniques is that they preserve certain properties of the matrices in question, such as determinant, rank or inverse, so that these quantities can be
calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out for some types of matrices.

The LU decomposition factors matrices as a product of a lower triangular matrix (L) and an upper triangular matrix (U).[50] Once this decomposition is calculated, linear systems can be solved more efficiently by a simple technique called forward and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. Gaussian elimination is a similar algorithm; it transforms any matrix to row echelon form.[51] Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV∗, where U and V are unitary matrices and D is a diagonal matrix.

An example of a matrix in Jordan normal form. The grey blocks are called Jordan blocks.

The eigendecomposition or diagonalization expresses A as a product VDV⁻¹, where D is a diagonal matrix and V is a suitable invertible matrix.[52] If A can be written in this form, it is called diagonalizable. More generally, and applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say matrices whose only nonzero entries are the eigenvalues λ1 to λρ of A, placed on the main diagonal and possibly entries equal to one directly above the main diagonal, as shown in the figure.[53] Given the eigendecomposition, the nth power of A (that is, n-fold iterated matrix multiplication) can be calculated via

Aⁿ = (VDV⁻¹)ⁿ = VDV⁻¹ · VDV⁻¹ · … · VDV⁻¹ = VDⁿV⁻¹

and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for A instead. This can be used to compute the matrix exponential e^A, a need frequently arising in solving linear differential equations, matrix logarithms and square roots of matrices.[54] To avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed.[55]

13.9 Abstract algebraic aspects and generalizations

Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension are tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realised as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers.[56] Matrices, subject to certain requirements, tend to form groups known as matrix groups. Similarly, under certain conditions matrices form rings known as matrix rings. Though the product of matrices is not in general commutative, certain matrices form fields known as matrix fields.

13.9.1 Matrices with more general entries

This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any field, that is, a set where addition, subtraction, multiplication and division operations are defined and well-behaved, may be used instead of R or C, for example rational numbers or finite fields. For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial they may exist only in a larger field than that of the entries of the matrix; for instance they may be complex in the case of a matrix with real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (for example, to view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to possess a full set of eigenvalues. Alternatively one can consider only matrices with entries in an algebraically closed field, such as C, from the outset.

More generally, abstract algebra makes great use of matrices with entries in a ring R.[57] Rings are a more general notion than fields in that a division operation need not exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set M(n, R) of all square n-by-n matrices over R is a ring called matrix ring, isomorphic to the endomorphism ring of the left R-module Rⁿ.[58] If the ring R is commutative, that is, its
multiplication is commutative, then M(n, R) is a unitary noncommutative (unless n = 1) associative algebra over R. The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalising the situation over a field F, where every nonzero element is invertible.[59] Matrices over superrings are called supermatrices.[60]

Matrices do not always have all their entries in the same ring, or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ordinary ring; but their sizes must fulfil certain compatibility conditions.

13.9.2 Relationship to linear maps

Linear maps Rⁿ → Rᵐ are equivalent to m-by-n matrices, as described above. More generally, any linear map f: V → W between finite-dimensional vector spaces can be described by a matrix A = (aij), after choosing bases v1, ..., vn of V and w1, ..., wm of W (so n is the dimension of V and m is the dimension of W), which is such that

f(vj) = ∑i=1…m ai,j wi  for j = 1, ..., n.

In other words, column j of A expresses the image of vj in terms of the basis vectors wi of W; thus this relation uniquely determines the entries of the matrix A. Note that the matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices.[61] Many of the above concrete notions can be reinterpreted in this light, for example, the transpose matrix Aᵀ describes the transpose of the linear map given by A, with respect to the dual bases.[62]

These properties can be restated in a more natural way: the category of all matrices with entries in a field k with multiplication as composition is equivalent to the category of finite-dimensional vector spaces and linear maps over this field.

More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules Rᵐ and Rⁿ for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of Rⁿ.

13.9.3 Matrix groups

Main article: Matrix group

A group is a mathematical structure consisting of a set of objects together with a binary operation, that is, an operation combining any two objects to a third, subject to certain requirements.[63] A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group.[64][65] Since in a group every element has to be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups.

Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (that is, a smaller group contained in) their general linear group, called a special linear group.[66] Orthogonal matrices, determined by the condition

MᵀM = I,

form the orthogonal group.[67] Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with determinant 1 form a subgroup called the special orthogonal group.

Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group.[68] General groups can be studied using matrix groups, which are comparatively well understood, by means of representation theory.[69]

13.9.4 Infinite matrices

It is also possible to consider matrices with infinitely many rows and/or columns,[70] even if, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication and transposition can still be defined without problem; however, matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general.

If R is any ring with unity, then the ring of endomorphisms of M = ⊕i∈I R as a right R-module is isomorphic to the ring of column-finite matrices CFM_I(R) whose entries are indexed by I × I and whose columns each contain only finitely many nonzero entries. The endomorphisms of M considered as a left R-module result in an analogous object, the row-finite matrices RFM_I(R) whose rows each only have finitely many nonzero entries.

If infinite matrices are used to describe linear maps, then only those matrices can be used all of whose columns have but a finite number of nonzero entries, for the following reason. For a matrix A to describe a linear map f: V → W, bases for both spaces must have been chosen; recall that by definition this means that every vector in
the space can be written uniquely as a (finite) linear combination of basis vectors, so that, written as a (column) vector v of coefficients, only finitely many entries vi are nonzero. Now the columns of A describe the images by f of individual basis vectors of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of A, however: in the product Av there are only finitely many nonzero coefficients of v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover, this amounts to forming a linear combination of the columns of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries, because each of those columns does. One also sees that products of two matrices of the given type are well defined (provided as usual that the column-index and row-index sets match), are again of the same type, and correspond to the composition of linear maps.

If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously, the matrices whose row sums are absolutely convergent series also form a ring.

In that vein, infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that have to be imposed. However, the explicit point of view of matrices tends to obfuscate the matter,[71] and the abstract and more powerful tools of functional analysis can be used instead.

13.9.5 Empty matrices

An empty matrix is a matrix in which the number of rows or columns (or both) is zero.[72][73] Empty matrices help dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as part of the characterization of determinants.

13.10 Applications

There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the players choose.[74] Text mining and automated thesaurus compilation make use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.[75]

Complex numbers can be represented by particular real 2-by-2 matrices via

a + ib ↔ [ a  −b ; b  a ],

under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions[76] and Clifford algebras in general.

Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break.[77] Computer graphics uses matrices both to represent objects and to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation.[78] Matrices over a polynomial ring are important in the study of control theory.

Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.

13.10.1 Graph theory

The adjacency matrix of a finite graph is a basic notion of graph theory.[79] It records which vertices of the graph are connected by an edge. Matrices containing just two different values (1 and 0 meaning, for example, "yes" and "no", respectively) are called logical matrices. The distance (or cost) matrix contains information about distances of the edges.[80] These concepts can be applied to websites connected by hyperlinks or cities connected by roads etc., in which case (unless the connection network is extremely dense) the matrices tend to be sparse, that is, contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
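The adjacency-matrix idea can be sketched in a few lines of code. This is an illustrative example, not from the article: the graph is made up, and the check relies on the standard fact that entry (i, j) of the k-th power of an adjacency matrix counts the walks of length k from vertex i to vertex j.

```python
# Representing a small undirected graph by its adjacency matrix and
# counting walks via matrix powers: (A^k)[i][j] is the number of walks
# of length k from vertex i to vertex j.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical undirected graph on vertices 0, 1, 2
# with edges 0-1 and 1-2.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]

# Adjacency matrices of undirected graphs are symmetric.
assert all(A[i][j] == A[j][i] for i in range(3) for j in range(3))

A2 = mat_mul(A, A)  # walks of length 2
print(A2[0][2])     # one walk of length 2 from 0 to 2 (via vertex 1)
```

For the sparse network matrices mentioned above, the same representation would in practice be replaced by a sparse data structure that stores only the nonzero entries.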
An undirected graph with adjacency matrix [1 1 0; 1 0 1; 0 1 0].

13.10.2 Analysis and geometry

The Hessian matrix of a differentiable function f: Rⁿ → R consists of the second derivatives of f with respect to the several coordinate directions, that is,[81]

H(f) = [ ∂²f / ∂xi ∂xj ].

It encodes information about the local growth behaviour of the function: given a critical point x = (x1, ..., xn), that is, a point where the first partial derivatives ∂f/∂xi of f vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).[82]

At the saddle point (x = 0, y = 0) (red) of the function f(x, y) = x² − y², the Hessian matrix [2 0; 0 −2] is indefinite.

Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f: Rⁿ → Rᵐ. If f1, ..., fm denote the components of f, then the Jacobi matrix is defined as[83]

J(f) = [ ∂fi / ∂xj ]  with 1 ≤ i ≤ m, 1 ≤ j ≤ n.

If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.[84]

Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has decisive influence on the set of possible solutions of the equation in question.[85]

The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.[86]

13.10.3 Probability theory and statistics

Two different Markov chains. The chart depicts the number of particles (of a total of 1000) in state 2. Both limiting values can be determined from the transition matrices, which are given by [.7 0; .3 1] (red) and [.7 .2; .3 .8] (black).

Stochastic matrices are square matrices whose rows are probability vectors, that is, whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states.[87] A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain like absorbing states, that is, states that any
particle attains eventually, can be read off the eigenvectors of the transition matrices.[88]

Statistics also makes use of matrices in many different forms.[89] Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables.[90] Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x1, y1), (x2, y2), ..., (xN, yN) by a linear function

yi ≈ axi + b,  i = 1, ..., N,

which can be formulated in terms of matrices, related to the singular value decomposition of matrices.[91]

Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as the matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.[92][93]

13.10.4 Symmetries and transformations in physics

Further information: Symmetry in physics

Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.[94] For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to, the basic quark states that define particles with specific and distinct masses.[95]

13.10.5 Linear combinations of quantum states

The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states.[96] This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the mixed state of a quantum system as a linear combination of elementary, pure eigenstates.[97]

Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.[98]

13.10.6 Normal modes

A general application of matrices in physics is to the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms.[99] They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.[100]

13.10.7 Geometrical optics

Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix called a ray transfer matrix: the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the component matrices.[101]
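The ray transfer matrices just described can be sketched in code. This is a minimal illustration, not from the article: it assumes the common paraxial convention of a (height, angle) ray vector, a free-space translation matrix [1 d; 0 1] and a thin-lens refraction matrix [1 0; −1/f 1]; the numerical values are made up.

```python
# Ray transfer ("ABCD") matrices in paraxial optics, acting on a
# two-component ray vector (height above the axis, angle to the axis).

def mat_vec(M, v):
    """Apply a 2x2 matrix to a 2-component ray vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def translation(d):
    """Free propagation over a distance d."""
    return [[1.0, d], [0.0, 1.0]]

def lens(f):
    """Thin lens of focal length f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

# A ray parallel to the axis at height 2.0 passes through a thin lens of
# focal length 0.5 and then travels one focal length: it crosses the
# axis at the focal point, as expected.
ray = [2.0, 0.0]                      # (height, angle)
ray = mat_vec(lens(0.5), ray)         # refraction at the lens
ray = mat_vec(translation(0.5), ray)  # propagation to the focal plane
print(ray[0])                         # height 0.0 at the focal plane
```

A whole optical system is then described by the product of the component matrices, applied in the order the ray traverses the elements, mirroring the description above.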
13.10.8 Electronics

Traditional mesh analysis and nodal analysis in electronics lead to a system of linear equations that can be described with a matrix.

The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector with the component's input voltage v1 and input current i1 as its elements, and let B be a 2-dimensional vector with the component's output voltage v2 and output current i2 as its elements. Then the behaviour of the electronic component can be described by B = H · A, where H is a 2 x 2 matrix containing one impedance element (h12), one admittance element (h21) and two dimensionless elements (h11 and h22). Calculating a circuit now reduces to multiplying matrices.

13.11 History

Matrices have a long history of application in solving linear equations, but they were known as arrays until the 1800s. The Chinese text The Nine Chapters on the Mathematical Art, written in the 10th–2nd century BCE, is the first example of the use of array methods to solve simultaneous equations,[102] including the concept of determinants. In 1545 the Italian mathematician Girolamo Cardano brought the method to Europe when he published Ars Magna.[103] The Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683.[104] The Dutch mathematician Jan de Witt represented transformations using arrays in his 1659 book Elements of Curves.[105] Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information or solutions and experimented with over 50 different systems of arrays.[103] Cramer presented his rule in 1750.

The term "matrix" (Latin for "womb", derived from mater, mother[106]) was coined by James Joseph Sylvester in 1850,[107] who understood a matrix as an object giving rise to a number of determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester explains:

I have in previous papers defined a "Matrix" as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent.[108]

Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the coefficients being investigated as had previously been done. Instead he defined operations such as addition, subtraction, multiplication, and division as transformations of those matrices and showed the associative and distributive properties held true. Cayley investigated and demonstrated the non-commutative property of matrix multiplication as well as the commutative property of matrix addition.[103] Early matrix theory had limited the use of arrays almost exclusively to determinants, and Arthur Cayley's abstract matrix operations were revolutionary. He was instrumental in proposing a matrix concept independent of equation systems. In 1858 Cayley published his A memoir on the theory of matrices,[109][110] in which he proposed and demonstrated the Cayley–Hamilton theorem.[103]

An English mathematician named Cullis was the first to use modern bracket notation for matrices in 1913, and he simultaneously demonstrated the first significant use of the notation A = [ai,j] to represent a matrix, where ai,j refers to the ith row and the jth column.[103]

The study of determinants sprang from several sources.[111] Number-theoretical problems led Gauss to relate coefficients of quadratic forms, that is, expressions such as x² + xy − 2y², and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as definition of the determinant of a matrix A = [ai,j] the following: replace the powers ajᵏ by ajk in the polynomial

a1 a2 ⋯ an ∏i<j (aj − ai),

where ∏ denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric matrices are real.[112] Jacobi studied "functional determinants", later called Jacobi determinants by Sylvester, which can be used to describe geometric transformations at a local (or infinitesimal) level, see above; Kronecker's Vorlesungen über die Theorie der Determinanten[113] and Weierstrass' Zur Determinantentheorie,[114] both published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established.

Many theorems were first established for small matrices only, for example the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century the Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Jordan. In the early 20th century, matrices attained a central role in linear algebra,[115] partially due to their use in the classification of the hypercomplex number systems of the previous century.

The inception of matrix mechanics by Heisenberg, Born
and Jordan led to studying matrices with infinitely many rows and columns.[116] Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions.

13.11.1 Other historical usages of the word "matrix" in mathematics

The word has been used in unusual ways by at least two authors of historical importance.

Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (1910–1913) use the word "matrix" in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the bottom (0 order) the function is identical to its extension:

Let us give the name of matrix to any function, of however many variables, which does not involve any apparent variables. Then any possible function other than a matrix is derived from a matrix by means of generalization, that is, by considering the proposition which asserts that the function in question is true with all possible values or with some value of one of the arguments, the other argument or arguments remaining undetermined.[117]

For example, a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, for example, y, by considering the function for all possible values of "individuals" ai substituted in place of variable x. And then the resulting collection of functions of the single variable y, that is, ∀ai: Φ(ai, y), can be reduced to a "matrix" of values by considering the function for all possible values of "individuals" bi substituted in place of variable y:

∀bj ∀ai: Φ(ai, bj).

Alfred Tarski in his 1946 Introduction to Logic used the word "matrix" synonymously with the notion of truth table as used in mathematical logic.[118]

13.12 See also

Algebraic multiplicity
Geometric multiplicity
Gram–Schmidt process
List of matrices
Matrix calculus
Matrix function
Periodic matrix set
Tensor

13.13 Notes

[1] Equivalently, table.
[2] Anton (1987, p. 23)
[3] Beauregard & Fraleigh (1973, p. 56)
[4] Young, Cynthia. Precalculus. Laurie Rosatone. p. 727.
[5] K. Bryan and T. Leise. The $25,000,000,000 eigenvector: The linear algebra behind Google. SIAM Review, 48(3):569–581, 2006.
[6] Lang 2002
[7] Fraleigh (1976, p. 209)
[8] Nering (1970, p. 37)
[9] Oualline 2003, Ch. 5
[10] "How to organize, add and multiply matrices - Bill Shillito". TED ED. Retrieved April 6, 2013.
[11] Brown 1991, Definition I.2.1 (addition), Definition I.2.4 (scalar multiplication), and Definition I.2.33 (transpose)
[12] Brown 1991, Theorem I.2.6
[13] Brown 1991, Definition I.2.20
[14] Brown 1991, Theorem I.2.24
[15] Horn & Johnson 1985, Ch. 4 and 5
[16] Bronson (1970, p. 16)
[17] Kreyszig (1972, p. 220)
[18] Protter & Morrey (1970, p. 869)
[19] Kreyszig (1972, pp. 241, 244)
[20] Schneider, Hans; Barker, George Phillip (2012), Matrices and Linear Algebra, Dover Books on Mathematics, Courier Dover Corporation, p. 251, ISBN 9780486139302.
[21] Perlis, Sam (1991), Theory of Matrices, Dover books on advanced mathematics, Courier Dover Corporation, p. 103, ISBN 9780486668109.
[22] Anton, Howard (2010), Elementary Linear Algebra (10th ed.), John Wiley & Sons, p. 414, ISBN 9780470458211.
[23] Horn, Roger A.; Johnson, Charles R. (2012), Matrix Analysis (2nd ed.), Cambridge University Press, p. 17, ISBN 9780521839402.
[24] Brown 1991, I.2.21 and 22
[25] Greub 1975, Section III.2
[26] Brown 1991, Definition II.3.3
[27] Greub 1975, Section III.1
[28] Brown 1991, Theorem II.3.22
[29] Horn & Johnson 1985, Theorem 2.5.6
[30] Brown 1991, Definition I.2.28
[31] Brown 1991, Definition I.5.13
[32] Horn & Johnson 1985, Chapter 7
[33] Horn & Johnson 1985, Theorem 7.2.1
[34] Horn & Johnson 1985, Example 4.0.6, p. 169
[35] Brown 1991, Definition III.2.1
[36] Brown 1991, Theorem III.2.12
[37] Brown 1991, Corollary III.2.16
[38] Mirsky 1990, Theorem 1.4.1
[39] Brown 1991, Theorem III.3.18
[40] "Eigen" means "own" in German and in Dutch.
[41] Brown 1991, Definition III.4.1
[42] Brown 1991, Definition III.4.9
[43] Brown 1991, Corollary III.4.10
[44] Householder 1975, Ch. 7
[45] Bau III & Trefethen 1997
[46] Golub & Van Loan 1996, Algorithm 1.3.1
[47] Golub & Van Loan 1996, Chapters 9 and 10, esp. section 10.2
[48] Golub & Van Loan 1996, Chapter 2.3
[49] For example, Mathematica, see Wolfram 2003, Ch. 3.7
[50] Press, Flannery & Teukolsky 1992
[51] Stoer & Bulirsch 2002, Section 4.1
[52] Horn & Johnson 1985, Theorem 2.5.4
[53] Horn & Johnson 1985, Ch. 3.1, 3.2
[54] Arnold & Cooke 1992, Sections 14.5, 7, 8
[55] Bronson 1989, Ch. 15
[56] Coburn 1955, Ch. V
[57] Lang 2002, Chapter XIII
[58] Lang 2002, XVII.1, p. 643
[59] Lang 2002, Proposition XIII.4.16
[60] Reichl 2004, Section L.2
[61] Greub 1975, Section III.3
[62] Greub 1975, Section III.3.13
[63] See any standard reference in group.
[64] Additionally, the group is required to be closed in the general linear group.
[65] Baker 2003, Def. 1.30
[66] Baker 2003, Theorem 1.2
[67] Artin 1991, Chapter 4.5
[68] Rowen 2008, Example 19.2, p. 198
[69] See any reference in representation theory or group representation.
[70] See the item "Matrix" in Itô, ed. 1987
[71] "Not much of matrix theory carries over to infinite-dimensional spaces, and what does is not so useful, but it sometimes helps." Halmos 1982, p. 23, Chapter 5
[72] "Empty Matrix: A matrix is empty if either its row or column dimension is zero", Glossary, O-Matrix v6 User Guide
[73] "A matrix having at least one dimension equal to zero is called an empty matrix", MATLAB Data Structures
[74] Fudenberg & Tirole 1983, Section 1.1.1
[75] Manning 1999, Section 15.3.4
[76] Ward 1997, Ch. 2.8
[77] Stinson 2005, Ch. 1.1.5 and 1.2.4
[78] Association for Computing Machinery 1979, Ch. 7
[79] Godsil & Royle 2004, Ch. 8.1
[80] Punnen 2002
[81] Lang 1987a, Ch. XVI.6
[82] Nocedal 2006, Ch. 16
[83] Lang 1987a, Ch. XVI.1
[84] Lang 1987a, Ch. XVI.5. For a more advanced, and more general statement see Lang 1969, Ch. VI.2
[85] Gilbarg & Trudinger 2001
[86] Šolín 2005, Ch. 2.5. See also stiffness method.
[87] Latouche & Ramaswami 1999
[88] Mehata & Srinivasan 1978, Ch. 2.8
[89] Healy, Michael (1986), Matrices for Statistics, Oxford University Press, ISBN 978-0-19-850702-4
[90] Krzanowski 1988, Ch. 2.2., p. 60
[91] Krzanowski 1988, Ch. 4.1
[92] Conrey 2007
[93] Zabrodin, Brezin & Kazakov et al. 2006
[94] Itzykson & Zuber 1980, Ch. 2
[95] see Burgess & Moore 2007, section 1.6.3. (SU(3)), section 2.4.3.2. (Kobayashi–Maskawa matrix)
[96] Schiff 1968, Ch. 6
[97] Bohm 2001, sections II.4 and II.8
[98] Weinberg 1995, Ch. 3
[99] Wherrett 1987, part II
[100] Riley, Hobson & Bence 1997, 7.17
[101] Guenther 1990, Ch. 5
[102] Shen, Crossley & Lun 1999 cited by Bretscher 2005, p. 1
[103] Discrete Mathematics 4th Ed. Dossey, Otto, Spense, Vanden Eynden, Published by Addison Wesley, October 10, 2001, ISBN 978-0321079121, pp. 564–565
[104] Needham, Joseph; Wang Ling (1959). Science and Civilisation in China. III. Cambridge: Cambridge University Press. p. 117. ISBN 9780521058018.
[105] Discrete Mathematics 4th Ed. Dossey, Otto, Spense, Vanden Eynden, Published by Addison Wesley, October 10, 2001, ISBN 978-0321079121, p. 564
[106] Merriam-Webster dictionary, Merriam-Webster, retrieved April 20, 2009
[107] Although many sources state that J. J. Sylvester coined the mathematical term "matrix" in 1848, Sylvester published nothing in 1848. (For proof that Sylvester published nothing in 1848, see: J. J. Sylvester with H. F. Baker, ed., The Collected Mathematical Papers of James Joseph Sylvester (Cambridge, England: Cambridge University Press, 1904), vol. 1.) His earliest use of the term "matrix" occurs in 1850 in: J. J. Sylvester (1850) "Additions to the articles in the September number of this journal, 'On a new class of theorems,' and on Pascal's theorem," The London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, 37: 363–370. From page 369: "For this purpose we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This will not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants …"
[108] The Collected Mathematical Papers of James Joseph Sylvester: 1837–1853, Paper 37, p. 247
[109] Phil. Trans. 1858, vol. 148, pp. 17–37; Math. Papers II 475–496
[110] Dieudonné, ed. 1978, Vol. 1, Ch. III, p. 96
[111] Knobloch 1994
[112] Hawkins 1975
[113] Kronecker 1897
[114] Weierstrass 1915, pp. 271–286
[115] Bôcher 2004
[116] Mehra & Rechenberg 1987
[117] Whitehead, Alfred North; and Russell, Bertrand (1913) Principia Mathematica to *56, Cambridge at the University Press, Cambridge UK (republished 1962) cf page 162.
[118] Tarski, Alfred; (1946) Introduction to Logic and the Methodology of Deductive Sciences, Dover Publications, Inc, New York NY, ISBN 0-486-28462-X.

13.14 References

Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0
Arnold, Vladimir I.; Cooke, Roger (1992), Ordinary differential equations, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-54813-3
Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1
Association for Computing Machinery (1979), Computer Graphics, Tata McGraw-Hill, ISBN 978-0-07-059376-3
Baker, Andrew J. (2003), Matrix Groups: An Introduction to Lie Group Theory, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-85233-470-3
Bau III, David; Trefethen, Lloyd N. (1997), Numerical linear algebra, Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-361-9
Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X
Bretscher, Otto (2005), Linear Algebra with Applications (3rd ed.), Prentice Hall
Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490
Bronson, Richard (1989), Schaum's outline of theory and problems of matrix operations, New York: McGraw-Hill, ISBN 978-0-07-007978-6
Brown, William C. (1991), Matrices and vector spaces, New York, NY: Marcel Dekker, ISBN 978-0-8247-8419-5
Coburn, Nathaniel (1955), Vector and tensor analysis, New York, NY: Macmillan, OCLC 1029828
Conrey, J. Brian (2007), Ranks of elliptic curves and random matrix theory, Cambridge University Press, ISBN 978-0-521-69964-8
Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1
Fudenberg, Drew; Tirole, Jean (1983), Game Theory, MIT Press
Gilbarg, David; Trudinger, Neil S. (2001), Elliptic partial differential equations of second order (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-41160-4
Godsil, Chris; Royle, Gordon (2004), Algebraic Graph Theory, Graduate Texts in Mathematics, 207, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95220-8
Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Johns Hopkins, ISBN 978-0-8018-5414-9
Greub, Werner Hildbert (1975), Linear algebra, Graduate Texts in Mathematics, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90110-7
Halmos, Paul Richard (1982), A Hilbert space problem book, Graduate Texts in Mathematics, 19 (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90685-0, MR 675952
Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6
Householder, Alston S. (1975), The theory of matrices in numerical analysis, New York, NY: Dover Publications, MR 0378371
Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8
Krzanowski, Wojtek J. (1988), Principles of multivariate analysis, Oxford Statistical Science Series, 3, The Clarendon Press Oxford University Press, ISBN 978-0-19-852211-9, MR 969370
Itô, Kiyosi, ed. (1987), Encyclopedic dictionary of mathematics. Vol. I–IV (2nd ed.), MIT Press, ISBN 978-0-262-09026-1, MR 901762
Lang, Serge (1969), Analysis II, Addison-Wesley
Lang, Serge (1987a), Calculus of several variables (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96405-8
Lang, Serge (1987b), Linear algebra, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96412-6
Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556
Latouche, Guy; Ramaswami, Vaidyanathan (1999), Introduction to matrix analytic methods in stochastic modeling (1st ed.), Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-425-8
Manning, Christopher D.; Schütze, Hinrich (1999), Foundations of statistical natural language processing, MIT Press, ISBN 978-0-262-13360-9
Mehata, K. M.; Srinivasan, S. K. (1978), Stochastic processes, New York, NY: McGraw-Hill, ISBN 978-0-07-096612-3
Mirsky, Leonid (1990), An Introduction to Linear Algebra, Courier Dover Publications, ISBN 978-0-486-66434-7
Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76-91646
Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, p. 449, ISBN 978-0-387-30303-1
Oualline, Steve (2003), Practical C++ programming, O'Reilly, ISBN 978-0-596-00419-4
Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992), "LU Decomposition and Its Applications", Numerical Recipes in FORTRAN: The Art of Scientific Computing (PDF) (2nd ed.), Cambridge University Press, pp. 34–42
Protter, Murray H.; Morrey, Jr., Charles B. (1970), College Calculus with Analytic Geometry (2nd ed.), Reading: Addison-Wesley, LCCN 76087042
Punnen, Abraham P.; Gutin, Gregory (2002), The traveling salesman problem and its variations, Boston, MA: Kluwer Academic Publishers, ISBN 978-1-4020-0664-7
Reichl, Linda E. (2004), The transition to chaos: conservative classical systems and quantum manifestations, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-98788-0
Rowen, Louis Halle (2008), Graduate Algebra: noncommutative view, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4153-2
Šolín, Pavel (2005), Partial Differential Equations and the Finite Element Method, Wiley-Interscience, ISBN 978-0-471-76409-0
Stinson, Douglas R. (2005), Cryptography, Discrete Mathematics and its Applications, Chapman & Hall/CRC, ISBN 978-1-58488-508-5
Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95452-3
Ward, J. P. (1997), Quaternions and Cayley numbers, Mathematics and its Applications, 403, Dordrecht, NL: Kluwer Academic Publishers Group, ISBN 978-0-7923-4513-8, MR 1458894
Wolfram, Stephen (2003), The Mathematica Book (5th ed.), Champaign, IL: Wolfram Media, ISBN 978-1-57955-022-6

13.14.1 Physics references

Bohm, Arno (2001), Quantum Mechanics: Foundations and Applications, Springer, ISBN 0-387-95330-2
Burgess, Cliff; Moore, Guy (2007), The Standard Model. A Primer, Cambridge University Press, ISBN 0-521-86036-9
Guenther, Robert D. (1990), Modern Optics, John Wiley, ISBN 0-471-60538-7
Itzykson, Claude; Zuber, Jean-Bernard (1980), Quantum Field Theory, McGraw-Hill, ISBN 0-07-032071-3
Riley, Kenneth F.; Hobson, Michael P.; Bence, Stephen J. (1997), Mathematical methods for physics and engineering, Cambridge University Press, ISBN 0-521-55506-X
Schiff, Leonard I. (1968), Quantum Mechanics (3rd ed.), McGraw-Hill
Weinberg, Steven (1995), The Quantum Theory of Fields. Volume I: Foundations, Cambridge University Press, ISBN 0-521-55001-7
Wherrett, Brian S. (1987), Group Theory for Atoms, Molecules and Solids, Prentice-Hall International, ISBN 0-13-365461-3
Zabrodin, Anton; Brezin, Édouard; Kazakov, Vladimir; Serban, Didina; Wiegmann, Paul (2006), Applications of Random Matrices in Physics (NATO Science Series II: Mathematics, Physics and Chemistry), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-4020-4530-1

13.14.2 Historical references

Cayley, A., "A memoir on the theory of matrices", Phil. Trans. 148 (1858) 17–37; Math. Papers II 475–496
Bôcher, Maxime (2004), Introduction to higher algebra, New York, NY: Dover Publications, ISBN 978-0-486-49570-5, reprint of the 1907 original edition
Cayley, Arthur (1889), The collected mathematical papers of Arthur Cayley, I (1841–1853), Cambridge University Press, pp. 123–126
Dieudonné, Jean, ed. (1978), Abrégé d'histoire des mathématiques 1700–1900, Paris, FR: Hermann
Hawkins, Thomas (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica, 2: 1–29, ISSN 0315-0860, MR 0469635, doi:10.1016/0315-0860(75)90032-4
Knobloch, Eberhard (1994), "From Gauss to Weierstrass: determinant theory and its historical evaluations", The intersection of history and mathematics, Science Networks Historical Studies, 15, Basel, Boston, Berlin: Birkhäuser, pp. 51–66, MR 1308079
Kronecker, Leopold (1897), Hensel, Kurt, ed., Leopold Kronecker's Werke, Teubner
Mehra, Jagdish; Rechenberg, Helmut (1987), The Historical Development of Quantum Theory (1st ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96284-9
Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999), Nine Chapters of the Mathematical Art, Companion and Commentary (2nd ed.), Oxford University Press, ISBN 978-0-19-853936-0
Weierstrass, Karl (1915), Collected works, 3

13.15 External links

Encyclopedic articles

- Hazewinkel, Michiel, ed. (2001), "Matrix", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

History

- MacTutor: Matrices and determinants
- Matrices and Linear Algebra on the Earliest Uses Pages
- Earliest Uses of Symbols for Matrices and Vectors
Online books

- Kaw, Autar K., Introduction to Matrix Algebra, ISBN 978-0-615-25126-4
- The Matrix Cookbook (PDF), retrieved 24 March 2014
- Brookes, Mike (2005), The Matrix Reference Manual, London: Imperial College, retrieved 10 Dec 2008

Online matrix calculators

- SimplyMath (Matrix Calculator)
- Free C++ Library
- Matrix Calculator (DotNumerics)
- Xiao, Gang, Matrix calculator, retrieved 10 Dec 2008
- Online matrix calculator, retrieved 10 Dec 2008
- Online matrix calculator (ZK framework), retrieved 26 Nov 2009
- Oehlert, Gary W.; Bingham, Christopher, MacAnova, University of Minnesota, School of Statistics, retrieved 10 Dec 2008, a freeware package for matrix algebra and statistics
- Online matrix calculator, retrieved 14 Dec 2009
- Operation with matrices in R (determinant, track, inverse, adjoint, transpose)
- Matrix operations widget in Wolfram|Alpha


Chapter 14

Eigenvalues and eigenvectors

"Characteristic root" redirects here. For other uses, see Characteristic root (disambiguation).

In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation

T(v) = λv,

where λ is a scalar in the field F, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector v.

If the vector space V is finite-dimensional, then the linear transformation T can be represented as a square matrix A, and the vector v by a column vector, rendering the above mapping as a matrix multiplication on the left hand side and a scaling of the column vector on the right hand side in the equation

Av = λv.

There is a correspondence between n by n square matrices and linear transformations from an n-dimensional vector space to itself. For this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations.[1][2]

Geometrically an eigenvector, corresponding to a real nonzero eigenvalue, points in a direction that is stretched by the transformation and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed.[3]

14.1 Overview

Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for "proper, inherent"; "own, individual, special"; "specific, peculiar", or "characteristic".[4] Originally utilized to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.

In essence, an eigenvector v of a linear transformation T is a non-zero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation

T(v) = λv,

referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex.

[Figure: In this shear mapping the red arrow changes direction but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping because it doesn't change direction, and since its length is unchanged, its eigenvalue is 1.]

The Mona Lisa example pictured at right provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right and points in the bottom half are moved to the left proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left and made longer or shorter by the transformation. Notice that points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one because the mapping does not change their length, either.

Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d/dx, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as

(d/dx) e^(λx) = λ e^(λx).

Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices that are also referred to as eigenvectors. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation above for a linear transformation can be rewritten as the matrix multiplication

Av = λv,

where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it.

Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them:

- The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.[5][6]
- The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T.[7][8]
- If the set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis.

14.2 History

Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.

In the 18th century Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.[9] Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.[10] In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[11] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.[12][13]

Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.[14] Sturm developed Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues.[11] This was extended by Hermite in 1855 to what are now called Hermitian matrices.[12] Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[11] and Clebsch found the corresponding result for skew-symmetric matrices.[12] Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.[11]

In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.[15] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[16]

At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[17] He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904,[18] though he may have been following a related usage by Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.[19]

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis[20] and Vera Kublanovskaya[21] in 1961.[22]
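The power method is simple enough to sketch in a few lines. The following Python sketch is an illustration added here, not part of the original article; the example matrix, starting vector, and iteration count are arbitrary choices for demonstration:

```python
def power_method(a, v, steps=100):
    """Approximate the dominant eigenvalue/eigenvector of matrix a.

    Repeatedly apply a to v and rescale; the iterate converges to the
    eigenvector of the largest-magnitude eigenvalue, assuming that
    eigenvalue is unique.
    """
    n = len(a)
    for _ in range(steps):
        # w = A v
        w = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
        # Rescale by the largest component to avoid overflow.
        scale = max(abs(x) for x in w)
        v = [x / scale for x in w]
    # Rayleigh quotient (v . Av) / (v . v) estimates the eigenvalue.
    av = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(av[i] * v[i] for i in range(n)) / sum(x * x for x in v)
    return lam, v

lam, vec = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
print(round(lam, 6))  # 3.0, the dominant eigenvalue of this matrix
```

On the symmetric matrix [[2, 1], [1, 2]] the iterate is driven toward the direction (1, 1) with eigenvalue 3; the QR algorithm is preferred in practice because it computes all eigenvalues at once rather than only the dominant one.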
14.3 Eigenvalues and eigenvectors of matrices

See also: Euclidean vector and Matrix (mathematics)

Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.[23][24] Furthermore, linear transformations can be represented using matrices,[1][2] which is especially common in numerical and computational applications.[25]

[Figure: Matrix A acts by stretching the vector x, not changing its direction, so x is an eigenvector of A.]

Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors

x = (1, −3, 4) and y = (−20, 60, −80).

These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that

x = λy.

In this case λ = −1/20.

Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A,

Av = w,

or

⎡ A11  A12  …  A1n ⎤ ⎡ v1 ⎤   ⎡ w1 ⎤
⎢ A21  A22  …  A2n ⎥ ⎢ v2 ⎥ = ⎢ w2 ⎥
⎢  ⋮    ⋮   ⋱   ⋮  ⎥ ⎢ ⋮  ⎥   ⎢ ⋮  ⎥
⎣ An1  An2  …  Ann ⎦ ⎣ vn ⎦   ⎣ wn ⎦

where, for each row,

wi = Ai1 v1 + Ai2 v2 + ⋯ + Ain vn = Σ_{j=1}^{n} Aij vj.

If it occurs that v and w are scalar multiples, that is if

Av = w = λv,    (1)

then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A.

Equation (1) can be stated equivalently as

(A − λI)v = 0,    (2)

where I is the n by n identity matrix.

14.3.1 Eigenvalues and the characteristic polynomial

Main article: Characteristic polynomial

Equation (2) has a non-zero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation

det(A − λI) = 0.    (3)

Using Leibniz' rule for the determinant, the left hand side of Equation (3) is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)^n λ^n. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A.

The fundamental theorem of algebra implies that the characteristic polynomial of an n by n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms,

det(A − λI) = (λ1 − λ)(λ2 − λ)⋯(λn − λ),    (4)

where each λi may be real but in general is a complex number. The numbers λ1, λ2, …, λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A.

As a brief example, which is described in more detail in the examples section later, consider the matrix

M = ⎡ 2  1 ⎤
    ⎣ 1  2 ⎦.
Taking the determinant of (M − λI), the characteristic polynomial of M is

|M − λI| = det ⎡ 2−λ   1  ⎤ = 3 − 4λ + λ².
               ⎣  1   2−λ ⎦

Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of M. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation Mv = λv. In this example, the eigenvectors are any non-zero scalar multiples of

v(λ=1) = ⎡  1 ⎤ ,   v(λ=3) = ⎡ 1 ⎤ .
         ⎣ −1 ⎦              ⎣ 1 ⎦

If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have non-zero imaginary parts. The entries of the corresponding eigenvectors therefore may also have non-zero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues are complex algebraic numbers.

The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.

14.3.2 Algebraic multiplicity

Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)^k divides evenly that polynomial.[8][26][27]

Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas Equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can instead be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,

|A − λI| = (λ1 − λ)^{μA(λ1)} (λ2 − λ)^{μA(λ2)} ⋯ (λd − λ)^{μA(λd)}.

If d = n then the right hand side is the product of n linear terms and this is the same as Equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as

1 ≤ μA(λi) ≤ n,

μA = Σ_{i=1}^{d} μA(λi) = n.

If μA(λi) = 1, then λi is said to be a simple eigenvalue.[27] If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue.

14.3.3 Eigenspaces, geometric multiplicity, and the eigenbasis for matrices

Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy Equation (2),

E = {v : (A − λI)v = 0}.

On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any non-zero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ.[7][8] In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of ℂⁿ.

Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ.

The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γA(λ). Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as
γA(λ) = n − rank(A − λI).

Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n.

1 ≤ γA(λ) ≤ μA(λ) ≤ n

The condition that γA(λ) ≤ μA(λ) can be proven by considering a particular eigenvalue λ of A and diagonalizing the first γA(λ) columns of A with respect to λ's eigenvectors, described in a later section. The resulting similar matrix B is block upper triangular, with its top left block being the diagonal matrix λI of size γA(λ). As a result, the characteristic polynomial of B will have a factor of (ξ − λ)^{γA(λ)}. The other factors of the characteristic polynomial of B are not known, so the algebraic multiplicity of λ as an eigenvalue of B is no less than the geometric multiplicity of λ as an eigenvalue of A. The last element of the proof is the property that similar matrices have the same characteristic polynomial.

Suppose A has d ≤ n distinct eigenvalues λ1, λ2, …, λd, where the geometric multiplicity of λi is γA(λi). The total geometric multiplicity of A is

γA = Σ_{i=1}^{d} γA(λi).

For an n by n matrix A with eigenvalues λ1, …, λn (listed with algebraic multiplicity):

- The trace of A, defined as the sum of its diagonal elements, is also the sum of all eigenvalues: tr(A) = Σ_{i=1}^{n} Aii = Σ_{i=1}^{n} λi = λ1 + λ2 + ⋯ + λn.[28][29][30]
- The determinant of A is the product of all its eigenvalues: det(A) = Π_{i=1}^{n} λi = λ1 λ2 ⋯ λn.[28][31][32]
- The eigenvalues of the kth power of A, i.e. the eigenvalues of A^k, for any positive integer k, are λ1^k, λ2^k, …, λn^k.
- The matrix A is invertible if and only if every eigenvalue is nonzero.
- If A is invertible, then the eigenvalues of A^(−1) are 1/λ1, 1/λ2, …, 1/λn and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity.
- If A is equal to its conjugate transpose A*, or equivalently if A is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix.
- If A is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
A = A (i ),
i=1 If A is unitary, every eigenvalue has absolute value
d A n, |i| = 1.
is the dimension of the union of all the eigenspaces of
A's eigenvalues, or equivalently the maximum number of 14.3.5 Left and right eigenvectors
linearly independent eigenvectors of A. If A = n, then
See also: left and right (algebra)
The union of the eigenspaces of all of A's eigenval-
ues is the entire vector space n
Many disciplines traditionally represent vectors as matri-
A basis of n can be formed from n linearly inde- ces with a single column rather than as matrices with a
pendent eigenvectors of A; such a basis is called an single row. For that reason, the word eigenvector in the
eigenbasis context of matrices almost always refers to a right eigen-
vector, namely a column vector that right multiples the n
Any vector in can be written as a linear combi-
n
by n matrix A in the dening equation, Equation (1),
nation of eigenvectors of A

14.3.4 Additional properties of eigenval- Av = v.


ues
The eigenvalue and eigenvector problem can also be de-
Let A be an arbitrary n by n matrix of complex numbers ned for row vectors that left multiply matrix A. In this
with eigenvalues 1 , 2 , ..., n. Each eigenvalue appears formulation, the dening equation is
A(i) times in this list, where A(i) is the eigenvalues
algebraic multiplicity. The following are properties of
this matrix and its eigenvalues: uA = u,
where κ is a scalar and u is a 1 by n matrix. Any row vector u satisfying this equation is called a left eigenvector of A and κ is its associated eigenvalue. Taking the conjugate transpose of this equation,

A*u* = κ*u*.

Comparing this equation to Equation (1), the left eigenvectors of A are the conjugate transpose of the right eigenvectors of A*. The eigenvalues of the left eigenvectors are the solution of the characteristic polynomial |A* − κ*I| = 0. Because the identity matrix is Hermitian and |M*| = |M|* for a square matrix M, the eigenvalues of the left eigenvectors of A are the complex conjugates of the eigenvalues of the right eigenvectors of A. Recall that if A is a real matrix, all of its complex eigenvalues appear in complex conjugate pairs. Therefore, the eigenvalues of the left and right eigenvectors of a real matrix are the same. Similarly, if A is a real matrix, all of its complex eigenvectors also appear in complex conjugate pairs. Therefore, the left eigenvectors simplify to the transpose of the right eigenvectors of Aᵀ if A is real.

14.3.6 Diagonalization and the eigendecomposition

Main article: Eigendecomposition of a matrix

Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v₁, v₂, ..., v_n with associated eigenvalues λ₁, λ₂, ..., λ_n. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,

Q = [ v₁  v₂  ...  v_n ].

Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue,

AQ = [ λ₁v₁  λ₂v₂  ...  λ_nv_n ].

With this in mind, define a diagonal matrix Λ where each diagonal element Λ_ii is the eigenvalue associated with the ith column of Q. Then

AQ = QΛ.

Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q⁻¹,

A = QΛQ⁻¹,

or by instead left multiplying both sides by Q⁻¹,

Q⁻¹AQ = Λ.

A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.

Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P⁻¹AP is some diagonal matrix D. Left multiplying both sides by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.

A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.

14.3.7 Variational characterization

Main article: Min-max theorem

In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of H is the maximum value of the quadratic form xᵀHx/xᵀx. A value of x that realizes that maximum is an eigenvector.

14.3.8 Matrix examples

Two-dimensional matrix example

Consider the matrix

A = [ 2  1 ]
    [ 1  2 ].
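The eigendecomposition described in this section can be checked numerically. The following sketch (plain Python, no external libraries; an illustration rather than a production method) uses this 2 by 2 example matrix together with the eigenpairs λ = 1, v = [1, −1]ᵀ and λ = 3, v = [1, 1]ᵀ derived for it in the worked example, and verifies that QΛQ⁻¹ reproduces A:

```python
# Verify A = Q Λ Q⁻¹ for the example matrix A = [[2, 1], [1, 2]].
# Eigenpairs: λ = 1 with v = [1, -1], and λ = 3 with v = [1, 1].

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inverse_2x2(M):
    """Invert a 2x2 matrix via the adjugate formula."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

A = [[2, 1], [1, 2]]
Q = [[1, 1],      # columns of Q are the eigenvectors [1, -1] and [1, 1]
     [-1, 1]]
L = [[1, 0],      # Λ carries the associated eigenvalues 1 and 3
     [0, 3]]

reconstructed = matmul(matmul(Q, L), inverse_2x2(Q))
assert all(abs(reconstructed[i][j] - A[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

The same check works for any diagonalizable matrix once its eigenvectors are known, since the derivation above only used AQ = QΛ and the invertibility of Q.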
[Figure: The transformation matrix A = [2 1; 1 2] preserves the direction of vectors parallel to v_λ=1 = [1 −1]ᵀ (in purple) and v_λ=3 = [1 1]ᵀ (in blue). The vectors in red are not parallel to either eigenvector, so their directions are changed by the transformation. See also: An extended version, showing all four quadrants.]

The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy Equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues.

Taking the determinant to find the characteristic polynomial of A,

|A − λI| = | 2−λ   1  |
           |  1   2−λ | = 3 − 4λ + λ².

Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of A.

For λ = 1, Equation (2) becomes,

(A − I)v_λ=1 = [ 1  1 ] [ v₁ ] = [ 0 ]
               [ 1  1 ] [ v₂ ]   [ 0 ].

Any non-zero vector with v₁ = −v₂ solves this equation. Therefore,

v_λ=1 = [  1 ]
        [ −1 ]

is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector.

For λ = 3, Equation (2) becomes

(A − 3I)v_λ=3 = [ −1   1 ] [ v₁ ] = [ 0 ]
                [  1  −1 ] [ v₂ ]   [ 0 ].

Any non-zero vector with v₁ = v₂ solves this equation. Therefore,

v_λ=3 = [ 1 ]
        [ 1 ]

is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector.

Thus, the vectors v_λ=1 and v_λ=3 are eigenvectors of A associated with the eigenvalues λ = 1 and λ = 3, respectively.

Three-dimensional matrix example

Consider the matrix

A = [ 2  0  0 ]
    [ 0  3  4 ]
    [ 0  4  9 ].

The characteristic polynomial of A is

|A − λI| = | 2−λ   0    0  |
           |  0   3−λ   4  |
           |  0    4   9−λ |
         = (2 − λ)[(3 − λ)(9 − λ) − 16] = −λ³ + 14λ² − 35λ + 22.

The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors [1 0 0]ᵀ, [0 2 −1]ᵀ, and [0 1 2]ᵀ, or any non-zero multiple thereof.

Three-dimensional matrix example with complex eigenvalues

Consider the cyclic permutation matrix

A = [ 0  1  0 ]
    [ 0  0  1 ]
    [ 1  0  0 ].

This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ³, whose roots are

λ₁ = 1
λ₂ = −1/2 + i√3/2
λ₃ = λ₂* = −1/2 − i√3/2

where i = √−1 is the imaginary unit.

For the real eigenvalue λ₁ = 1, any vector with three equal non-zero entries is an eigenvector. For example,

A [ 5 ] = [ 5 ] = 1 · [ 5 ]
  [ 5 ]   [ 5 ]       [ 5 ]
  [ 5 ]   [ 5 ]       [ 5 ].

For the complex conjugate pair of imaginary eigenvalues, note that

λ₂λ₃ = 1,  λ₂² = λ₃,  λ₃² = λ₂.

Then

A [ 1  ] = [ λ₂ ] = λ₂ [ 1  ]
  [ λ₂ ]   [ λ₃ ]      [ λ₂ ]
  [ λ₃ ]   [ 1  ]      [ λ₃ ],

and

A [ 1  ] = [ λ₃ ] = λ₃ [ 1  ]
  [ λ₃ ]   [ λ₂ ]      [ λ₃ ]
  [ λ₂ ]   [ 1  ]      [ λ₂ ].

Therefore, the other two eigenvectors of A are complex and are v_λ₂ = [1 λ₂ λ₃]ᵀ and v_λ₃ = [1 λ₃ λ₂]ᵀ with eigenvalues λ₂ and λ₃, respectively. Note that the two complex eigenvectors also appear in a complex conjugate pair,

v_λ₂ = v_λ₃*.

Diagonal matrix example

Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix

A = [ 1  0  0 ]
    [ 0  2  0 ]
    [ 0  0  3 ].

The characteristic polynomial of A is

|A − λI| = (1 − λ)(2 − λ)(3 − λ),

which has the roots λ₁ = 1, λ₂ = 2, and λ₃ = 3. These roots are the diagonal elements as well as the eigenvalues of A.

Each diagonal element corresponds to an eigenvector whose only non-zero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,

v₁ = [ 1 ],  v₂ = [ 0 ],  v₃ = [ 0 ]
     [ 0 ]        [ 1 ]        [ 0 ]
     [ 0 ]        [ 0 ]        [ 1 ],

respectively, as well as scalar multiples of these vectors.

Triangular matrix example

A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.

Consider the lower triangular matrix,

A = [ 1  0  0 ]
    [ 1  2  0 ]
    [ 2  3  3 ].

The characteristic polynomial of A is

|A − λI| = (1 − λ)(2 − λ)(3 − λ),

which has the roots λ₁ = 1, λ₂ = 2, and λ₃ = 3. These roots are the diagonal elements as well as the eigenvalues of A.

These eigenvalues correspond to the eigenvectors,

v₁ = [  1  ],  v₂ = [  0 ],  v₃ = [ 0 ]
     [ −1  ]        [  1 ]        [ 0 ]
     [ 1/2 ]        [ −3 ]        [ 1 ],

respectively, as well as scalar multiples of these vectors.

Matrix with repeated eigenvalues example

As in the previous example, the lower triangular matrix

A = [ 2  0  0  0 ]
    [ 1  2  0  0 ]
    [ 0  1  3  0 ]
    [ 0  0  1  3 ],

has a characteristic polynomial that is the product of its diagonal elements,
|A − λI| = | 2−λ   0    0    0  |
           |  1   2−λ   0    0  |
           |  0    1   3−λ   0  |
           |  0    0    1   3−λ |
         = (2 − λ)²(3 − λ)².

The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of each distinct eigenvalue is μ_A = 4 = n, the order of the characteristic polynomial and the dimension of A.

On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector [0 1 −1 1]ᵀ and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector [0 0 0 1]ᵀ. The total geometric multiplicity γ_A is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section.

14.4 Eigenvalues and eigenfunctions of differential operators

Main article: Eigenfunction

The definitions of eigenvalue and eigenvectors of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation

Df(t) = λf(t).

The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.

14.4.1 Derivative operator example

Consider the derivative operator d/dt with eigenvalue equation

(d/dt) f(t) = λf(t).

This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function

f(t) = f(0)e^λt,

is the eigenfunction of the derivative operator. Note that in this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant.

The main eigenfunction article gives other examples.

14.5 General definition

The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V,

T : V → V.

We say that a non-zero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that

T(v) = λv.    (5)

This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. Note that T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.[33]

14.5.1 Eigenspaces, geometric multiplicity, and the eigenbasis

Given an eigenvalue λ, consider the set

E = {v : T(v) = λv},

which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.

By definition of a linear transformation,

T(x + y) = T(x) + T(y),
T(αx) = αT(x),

for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then

T(u + v) = λ(u + v),
T(αv) = λ(αv).
So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.[8][34][35] If that subspace has dimension 1, it is sometimes called an eigenline.[36]

The geometric multiplicity γ_T(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue.[8][27] By the definition of eigenvalues and eigenvectors, γ_T(λ) ≥ 1 because every eigenvalue has at least one eigenvector.

The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.[37]

Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.

14.5.2 Zero vector as an eigenvector

While the definition of an eigenvector used in this article excludes the zero vector, it is possible to define eigenvalues and eigenvectors such that the zero vector is an eigenvector.[38]

Consider again the eigenvalue equation, Equation (5). Define an eigenvalue to be any scalar λ ∈ K such that there exists a non-zero vector v ∈ V satisfying Equation (5). It is important that this version of the definition of an eigenvalue specify that the vector be non-zero, otherwise by this definition the zero vector would allow any scalar in K to be an eigenvalue. Define an eigenvector v associated with the eigenvalue λ to be any vector that, given λ, satisfies Equation (5). Given the eigenvalue, the zero vector is among the vectors that satisfy Equation (5), so the zero vector is included among the eigenvectors by this alternate definition.

14.5.3 Spectral theory

Main article: Spectral theory

If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)⁻¹ does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue.

For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.

14.5.4 Associative algebras and representation theory

Main article: Weight (representation theory)

One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation: an associative algebra acting on a module. The study of such actions is the field of representation theory.

The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.

14.6 Dynamic equations

The simplest difference equations have the form

x_t = a₁x_{t−1} + a₂x_{t−2} + ⋯ + a_k x_{t−k}.

The solution of this equation for x in terms of t is found by using its characteristic equation

λ^k − a₁λ^{k−1} − a₂λ^{k−2} − ⋯ − a_{k−1}λ − a_k = 0,

which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k − 1 equations x_{t−1} = x_{t−1}, ..., x_{t−k+1} = x_{t−k+1}, giving a k-dimensional system of the first order in the stacked variable vector [x_t, ..., x_{t−k+1}] in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots λ₁, ..., λ_k, for use in the solution equation

x_t = c₁λ₁^t + ⋯ + c_kλ_k^t.

A similar procedure is used for solving a differential equation of the form

d^k x/dt^k + a_{k−1} d^{k−1}x/dt^{k−1} + ⋯ + a₁ dx/dt + a₀x = 0.
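The difference-equation recipe can be checked on a concrete second-order case. The sketch below (plain Python; the coefficients a₁ = a₂ = 1 and the starting values x₀ = 2, x₁ = 1 are arbitrary choices for illustration) solves x_t = x_{t−1} + x_{t−2}: it takes the characteristic roots of λ² − λ − 1 = 0, fits c₁ and c₂ to the initial conditions, and compares the closed form x_t = c₁λ₁^t + c₂λ₂^t with direct iteration of the recurrence:

```python
import math

# Difference equation x_t = x_{t-1} + x_{t-2} with x_0 = 2, x_1 = 1.
# Characteristic equation: λ² − λ − 1 = 0, with two real roots.
l1 = (1 + math.sqrt(5)) / 2
l2 = (1 - math.sqrt(5)) / 2

# Fit c1 + c2 = x_0 and c1*l1 + c2*l2 = x_1 to the initial conditions.
x0, x1 = 2.0, 1.0
c1 = (x1 - l2 * x0) / (l1 - l2)
c2 = x0 - c1

def closed_form(t):
    """Solution x_t = c1*λ1^t + c2*λ2^t built from the characteristic roots."""
    return c1 * l1**t + c2 * l2**t

# Direct iteration of the recurrence for comparison.
xs = [x0, x1]
for t in range(2, 20):
    xs.append(xs[-1] + xs[-2])

assert all(abs(closed_form(t) - xs[t]) < 1e-6 for t in range(20))
```

With these particular starting values the iterates are the Lucas numbers (2, 1, 3, 4, 7, 11, ...), and the fitted coefficients come out as c₁ = c₂ = 1.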

14.7 Calculation [ ][ ] [ ]
4 1 x x
=6
Main article: Eigenvalue algorithm 6 3 y y
This matrix equation is equivalent to two linear equations
{ {
4x + y = 6x 2x + y = 0
14.7.1 Eigenvalues that is
6x + 3y = 6y +6x 3y = 0

The eigenvalues of a matrix A can be determined by nd- Both equations reduce to the single linear equation y =
ing the roots of the characteristic polynomial. Explicit 2x . Therefore, any vector of the form [a, 2a] , for any
algebraic formulas for the roots of a polynomial exist only non-zero real number a , is an eigenvector of A with
if the degree n is 4 or less. According to the AbelRuni eigenvalue = 6 .
theorem there is no general, explicit and exact algebraic The matrix A above has another eigenvalue = 1 . A
formula for the roots of a polynomial with degree 5 or similar calculation shows that the corresponding eigen-
more. vectors are the non-zero solutions of 3x + y = 0 , that

It turns out that any polynomial with degree n is the char- is, any vector of the form [b, 3b] , for any non-zero real
acteristic polynomial of some companion matrix of order number b .
n . Therefore, for matrices of order 5 or more, the eigen- Some numeric methods that compute the eigenvalues of a
values and eigenvectors cannot be obtained by an explicit matrix also determine a set of corresponding eigenvectors
algebraic formula, and must therefore be computed by as a by-product of the computation.
approximate numerical methods.
In theory, the coecients of the characteristic polyno-
mial can be computed exactly, since they are sums of 14.8 Applications
products of matrix elements; and there are algorithms that
can nd all the roots of a polynomial of arbitrary degree
14.8.1 Eigenvalues of geometric transfor-
to any required accuracy.[39] However, this approach is
not viable in practice because the coecients would be mations
contaminated by unavoidable round-o errors, and the
roots of a polynomial can be an extremely sensitive func- The following table presents some example transforma-
tion of the coecients (as exemplied by Wilkinsons tions in the plane along with their 22 matrices, eigen-
polynomial).[39] values, and eigenvectors.

Ecient, accurate methods to compute eigenvalues and Note that the characteristic equation for a rotation is a
eigenvectors of arbitrary matrices were not known until quadratic equation with discriminant D = 4(sin )2 ,
the advent of the QR algorithm in 1961. [39] Combin- which is a negative number whenever is not an inte-
ing the Householder transformation with the LU decom- ger multiple of 180. Therefore, except for these spe-
position results in an algorithm with better convergence cial cases, the two eigenvalues are complex numbers,
than the QR algorithm. For large Hermitian sparse matri- cos i sin ; and all eigenvectors have non-real entries.
ces, the Lanczos algorithm is one example of an ecient Indeed, except for those special cases, a rotation changes
iterative method to compute eigenvalues and eigenvec- the direction of every nonzero vector in the plane.
tors, among several other possibilities.[39] A linear transformation that takes a square to a rectan-
gle of the same area (a squeeze mapping) has reciprocal
eigenvalues.
14.7.2 Eigenvectors

Once the (exact) value of an eigenvalue is known, the 14.8.2 Schrdinger equation
corresponding eigenvectors can be found by nding non-
zero solutions of the eigenvalue equation, that becomes a An example of an eigenvalue equation where the transfor-
system of linear equations with known coecients. For mation T is represented in terms of a dierential operator
example, once it is known that 6 is an eigenvalue of the is the time-independent Schrdinger equation in quantum
matrix mechanics:

[ ] HE = EE
4 1
A= where H , the Hamiltonian, is a second-order dierential
6 3
operator and E , the wavefunction, is one of its eigen-
we can nd its eigenvectors by solving the equation Av = functions corresponding to the eigenvalue E , interpreted
6v , that is as its energy.
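The iterative numeric methods mentioned in §14.7 can be illustrated with power iteration, one of the simplest eigenvalue algorithms (shown here as a plain-Python sketch, not one of the production methods named above). For the matrix A = [[4, 1], [6, 3]] from §14.7.2, repeatedly applying A to a starting vector and renormalizing converges to the dominant eigenvalue λ = 6 and an eigenvector proportional to [1, 2]ᵀ:

```python
# Power iteration on A = [[4, 1], [6, 3]] from the example in §14.7.2.
A = [[4.0, 1.0], [6.0, 3.0]]
v = [1.0, 0.0]                      # arbitrary non-zero starting vector

for _ in range(100):
    w = [A[0][0]*v[0] + A[0][1]*v[1],
         A[1][0]*v[0] + A[1][1]*v[1]]
    norm = max(abs(w[0]), abs(w[1]))
    v = [w[0] / norm, w[1] / norm]  # renormalize to avoid overflow

# Estimate the eigenvalue by comparing Av with v componentwise.
Av = [A[0][0]*v[0] + A[0][1]*v[1],
      A[1][0]*v[0] + A[1][1]*v[1]]
eigenvalue = Av[1] / v[1]

assert abs(eigenvalue - 6.0) < 1e-9     # dominant eigenvalue
assert abs(v[0] / v[1] - 0.5) < 1e-9    # eigenvector of the form [a, 2a]
```

Convergence is geometric in the ratio of the second-largest to the largest eigenvalue magnitude, here 1/6 per step, which is why a hundred iterations are far more than enough.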
[Figure: The wavefunctions associated with the bound states of an electron in a hydrogen atom can be seen as the eigenvectors of the hydrogen atom Hamiltonian as well as of the angular momentum operator. They are associated with eigenvalues interpreted as their energies (increasing downward: n = 1, 2, 3, ...) and angular momentum (increasing across: s, p, d, ...). The illustration shows the square of the absolute value of the wavefunctions. Brighter areas correspond to higher probability density for a position measurement. The center of each figure is the atomic nucleus, a proton.]

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψ_E within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψ_E and H can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form.

The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by |Ψ_E⟩. In this notation, the Schrödinger equation is:

H|Ψ_E⟩ = E|Ψ_E⟩

where |Ψ_E⟩ is an eigenstate of H and E represents the eigenvalue. H is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above H|Ψ_E⟩ is understood to be the vector obtained by application of the transformation H to |Ψ_E⟩.

14.8.3 Molecular orbitals

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations.

14.8.4 Geology and glaciology

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast fabric's constituents' orientation and dip can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically such as in a Tri-Plot (Sneed and Folk) diagram,[40][41] or as a Stereonet on a Wulff Net.[42]

The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered v₁, v₂, v₃ by their eigenvalues E₁ ≥ E₂ ≥ E₃;[43] v₁ then is the primary orientation/dip of clast, v₂ is the secondary and v₃ is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E₁, E₂, and E₃ are dictated by the nature of the sediment's fabric. If E₁ = E₂ = E₃, the fabric is said to be isotropic. If E₁ = E₂ > E₃, the fabric is said to be planar. If E₁ > E₂ > E₃, the fabric is said to be linear.[44]

14.8.5 Principal component analysis

Main article: Principal component analysis
See also: Positive semidefinite matrix and Factor analysis

The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal components analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its
sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthonormal eigen-basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.

Principal component analysis is used to study large data sets, such as those encountered in bioinformatics, data mining, chemical research, psychology, and in marketing. PCA is popular especially in psychology, in the field of psychometrics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.

[Figure: PCA of the multivariate Gaussian distribution centered at (1, 3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) covariance matrix scaled by the square root of the corresponding eigenvalue. Just as in the one-dimensional case, the square root is taken because the standard deviation is more readily visualized than the variance.]

14.8.6 Vibration analysis

Main article: Vibration

[Figure: Mode shape of a tuning fork at eigenfrequency 440.09 Hz]

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by

mẍ + kx = 0

or

mẍ = −kx

that is, acceleration is proportional to position (i.e., we expect x to be sinusoidal in time).

In n dimensions, m becomes a mass matrix and k a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem

kx = ω²mx

where ω² is the eigenvalue and ω is the (imaginary) angular frequency. Note that the principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k alone. Furthermore, damped vibration, governed by

mẍ + cẋ + kx = 0

leads to a so-called quadratic eigenvalue problem,

(ω²m + ωc + k)x = 0.

This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system.
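A minimal numerical check of the undamped case (plain Python; the values m = 2.0 and k = 8.0 are arbitrary illustrative choices): a sinusoid at the natural frequency ω = √(k/m) satisfies mẍ + kx = 0, and in one dimension the generalized eigenvalue relation kx = ω²mx reduces to k = ω²m:

```python
import math

# Scalar mass-spring system m·x'' + k·x = 0 with illustrative m and k.
m, k = 2.0, 8.0
omega = math.sqrt(k / m)            # natural frequency, here 2.0

def x(t):
    """Trial sinusoidal solution at the natural frequency."""
    return math.cos(omega * t)

# Check m·x'' + k·x ≈ 0, approximating x'' by a central finite difference.
h = 1e-5
for t in [0.0, 0.3, 1.7]:
    xdd = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    assert abs(m * xdd + k * x(t)) < 1e-4

# One-dimensional generalized eigenvalue relation k·x = ω²·m·x.
assert abs(k - omega**2 * m) < 1e-12
```

The same structure carries over to n degrees of freedom, where m and k become matrices and the scalar identity k = ω²m becomes the generalized eigenvalue problem above.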
The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems.

14.8.7 Eigenfaces

[Figure: Eigenfaces as examples of eigenvectors]

Main article: Eigenface

In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel.[45] The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to eigen vision systems determining hand gestures has also been made. Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a par-

14.8.8 Tensor of moment of inertia

In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.

14.8.9 Stress tensor

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.

14.8.10 Graphs

In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either T − A (sometimes called the combinatorial Laplacian) or I − T^(−1/2) A T^(−1/2) (sometimes called the normalized Laplacian), where T is a diagonal matrix with T_ii equal to the degree of vertex v_i, and in T^(−1/2), the ith diagonal entry is 1/√deg(v_i). The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest or kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.

14.8.11 Basic reproduction number

Main article: Basic reproduction number

The basic reproduction number (R₀) is a fundamental
ticular utterance, such as a word in a language. Based number in the study of how infectious diseases spread.
on a linear combination of such eigenvoices, a new voice If one infectious person is put into a population of com-
pronunciation of the word can be constructed. These con- pletely susceptible people, then R0 is the average number
cepts have been found useful in automatic speech recog- of people that one typical infectious person will infect.
nition systems for speaker adaptation. The generation time of an infection is the time, tG , from
one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time tG has passed. R0 is then the largest eigenvalue of the next generation matrix.[46][47]

14.9 See also

Antieigenvalue theory

Eigenplane

Eigenvalue algorithm

Introduction to eigenstates

Jordan normal form

List of numerical analysis software

Nonlinear eigenproblem

Quadratic eigenvalue problem

Singular value

14.10 Notes

[1] Herstein (1964, pp. 228–229)

[2] Nering (1970, p. 38)

[3] Burden & Faires (1993, p. 401)

[4] Betteridge (1965)

[5] Press (2007, p. 536)

[6] Wolfram Research, Inc. (2010) Eigenvector. Accessed on 2016-04-01.

[7] Anton (1987, pp. 305, 307)

[8] Nering (1970, p. 107)

[9] Note:

In 1751, Leonhard Euler proved that any body has a principal axis of rotation: Leonhard Euler (presented: October 1751; published: 1760) "Du mouvement d'un corps solide quelconque lorsqu'il tourne autour d'un axe mobile" (On the movement of any solid body while it rotates around a moving axis), Histoire de l'Académie royale des sciences et des belles lettres de Berlin, pp. 176–227. On p. 212, Euler proves that any body contains a principal axis of rotation: "Théorem. 44. De quelque figure que soit le corps, on y peut toujours assigner un tel axe, qui passe par son centre de gravité, autour duquel le corps peut tourner librement & d'un mouvement uniforme." (Theorem 44. Whatever be the shape of the body, one can always assign to it such an axis, which passes through its center of gravity, around which it can rotate freely and with a uniform motion.)

In 1755, Johann Andreas Segner proved that any body has three principal axes of rotation: Johann Andreas Segner, Specimen theoriae turbinum [Essay on the theory of tops (i.e., rotating bodies)] (Halle (Halae), Germany: Gebauer, 1755). On p. XXVIIII (i.e., 29), Segner derives a third-degree equation in t, which proves that a body has three principal axes of rotation. He then states (on the same page): "Non autem repugnat tres esse eiusmodi positiones plani HM, quia in aequatione cubica radices tres esse possunt, et tres tangentis t valores." (However, it is not inconsistent [that there] be three such positions of the plane HM, because in cubic equations, [there] can be three roots, and three values of the tangent t.)

The relevant passage of Segner's work was discussed briefly by Arthur Cayley. See: A. Cayley (1862) "Report on the progress of the solution of certain special problems of dynamics", Report of the Thirty-second Meeting of the British Association for the Advancement of Science; held at Cambridge in October 1862, 32: 184–252; see especially pages 225–226.

[10] See Hawkins 1975, §2

[11] See Hawkins 1975, §3

[12] See Kline 1972, pp. 807–808

[13] Augustin Cauchy (1839) "Mémoire sur l'intégration des équations linéaires" (Memoir on the integration of linear equations), Comptes rendus, 8: 827–830, 845–865, 889–907, 931–937. From p. 827: "On sait d'ailleurs qu'en suivant la méthode de Lagrange, on obtient pour valeur générale de la variable principale une fonction dans laquelle entrent avec la variable principale les racines d'une certaine équation que j'appellerai l'équation caractéristique, le degré de cette équation étant précisément l'ordre de l'équation différentielle qu'il s'agit d'intégrer." (One knows, moreover, that by following Lagrange's method, one obtains for the general value of the principal variable a function in which there appear, together with the principal variable, the roots of a certain equation that I will call the characteristic equation, the degree of this equation being precisely the order of the differential equation that must be integrated.)

[14] See Kline 1972, p. 673

[15] See Kline 1972, pp. 715–716

[16] See Kline 1972, pp. 706–707

[17] See Kline 1972, p. 1063

[18] See:

David Hilbert (1904) "Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen. (Erste Mitteilung)" (Fundamentals of a general theory of linear integral equations. (First report)), Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse (News of the Philosophical Society at Göttingen, mathematical-physical
section), pp. 49–91. From page 51: "Insbesondere in dieser ersten Mitteilung gelange ich zu Formeln, die die Entwickelung einer willkürlichen Funktion nach gewissen ausgezeichneten Funktionen, die ich Eigenfunktionen nenne, liefern: …" (In particular, in this first report I arrive at formulas that provide the [series] development of an arbitrary function in terms of some distinctive functions, which I call eigenfunctions: …) Later on the same page: "Dieser Erfolg ist wesentlich durch den Umstand bedingt, daß ich nicht, wie es bisher geschah, in erster Linie auf den Beweis für die Existenz der Eigenwerte ausgehe, …" (This success is mainly attributable to the fact that I do not, as it has happened until now, first of all aim at a proof of the existence of eigenvalues, …)

For the origin and evolution of the terms eigenvalue, characteristic value, etc., see: Earliest Known Uses of Some of the Words of Mathematics (E)

[19] See Aldrich 2006

[20] Francis, J. G. F. (1961), "The QR Transformation, I (part 1)", The Computer Journal, 4 (3): 265–271, doi:10.1093/comjnl/4.3.265 and Francis, J. G. F. (1962), "The QR Transformation, II (part 2)", The Computer Journal, 4 (4): 332–345, doi:10.1093/comjnl/4.4.332

[21] Kublanovskaya, Vera N. (1961), "On some algorithms for the solution of the complete eigenvalue problem", USSR Computational Mathematics and Mathematical Physics, 3: 637–657. Also published (in Russian) as "[On certain algorithms for the solution of the complete eigenvalue problem]", Journal of Computational Mathematics and Mathematical Physics, 1 (4): 555–570, 1961

[22] See Golub & van Loan 1996, §7.3; Meyer 2000, §7.3

[23] Cornell University Department of Mathematics (2016) Lower-Level Courses for Freshmen and Sophomores. Accessed on 2016-03-27.

[24] University of Michigan Mathematics (2016) Math Course Catalogue. Accessed on 2016-03-27.

[25] Press (2007, pp. 38)

[26] Fraleigh (1976, p. 358)

[27] Golub & Van Loan (1996, p. 316)

[28] Beauregard & Fraleigh (1973, p. 307)

[29] Herstein (1964, p. 272)

[30] Nering (1970, pp. 115–116)

[31] Herstein (1964, p. 290)

[32] Nering (1970, p. 116)

[33] See Korn & Korn 2000, Section 14.3.5a; Friedberg, Insel & Spence 1989, p. 217

[34] Shilov 1977, p. 109

[35] Lemma for the eigenspace

[36] Schaum's Easy Outline of Linear Algebra, p. 111

[37] For a proof of this lemma, see Roman 2008, Theorem 8.2 on p. 186; Shilov 1977, p. 109; Hefferon 2001, p. 364; Beezer 2006, Theorem EDELI on p. 469; and Lemma for linear independence of eigenvectors

[38] Axler, Sheldon, Ch. 5, Linear Algebra Done Right (2nd ed.), p. 77

[39] Trefethen, Lloyd N.; Bau, David (1997), Numerical Linear Algebra, SIAM

[40] Graham, D.; Midgley, N. (2000), "Graphical representation of particle shape using triangular diagrams: an Excel spreadsheet method", Earth Surface Processes and Landforms, 25 (13): 1473–1477, Bibcode:2000ESPL...25.1473G, doi:10.1002/1096-9837(200012)25:13<1473::AID-ESP158>3.0.CO;2-C

[41] Sneed, E. D.; Folk, R. L. (1958), "Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis", Journal of Geology, 66 (2): 114–150, Bibcode:1958JG.....66..114S, doi:10.1086/626490

[42] Knox-Robinson, C.; Gardoll, Stephen J. (1998), "GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system", Computers & Geosciences, 24 (3): 243, Bibcode:1998CG.....24..243K, doi:10.1016/S0098-3004(97)00122-2

[43] Stereo32 software

[44] Benn, D.; Evans, D. (2004), A Practical Guide to the Study of Glacial Sediments, London: Arnold, pp. 103–107

[45] Xirouhakis, A.; Votsis, G.; Delopoulus, A. (2004), Estimation of 3D motion and structure of human faces (PDF), National Technical University of Athens

[46] Diekmann O, Heesterbeek JA, Metz JA (1990), "On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations", Journal of Mathematical Biology, 28 (4): 365–382, PMID 2117040, doi:10.1007/BF00178324

[47] Odo Diekmann; J. A. P. Heesterbeek (2000), Mathematical epidemiology of infectious diseases, Wiley series in mathematical and computational biology, West Sussex, England: John Wiley & Sons

14.11 References

Akivis, Max A.; Goldberg, Vladislav V. (1969), Tensor calculus, Russian, Science Publishers, Moscow
Aldrich, John (2006), "Eigenvalue, eigenfunction, eigenvector, and related terms", in Jeff Miller (Editor), Earliest Known Uses of Some of the Words of Mathematics, retrieved 2006-08-22

Alexandrov, Pavel S. (1968), Lecture notes in analytical geometry, Russian, Science Publishers, Moscow

Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0

Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X

Beezer, Robert A. (2006), A first course in linear algebra, Free online book under GNU licence, University of Puget Sound

Betteridge, Harold T. (1965), The New Cassell's German Dictionary, New York: Funk & Wagnall, LCCN 58-7924

Bowen, Ray M.; Wang, Chao-Cheng (1980), Linear and multilinear algebra, Plenum Press, New York, ISBN 0-306-37508-7

Brown, Maureen (October 2004), Illuminating Patterns of Perception: An Overview of Q Methodology

Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3

Carter, Tamara A.; Tapia, Richard A.; Papaconstantinou, Anne, Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students, Rice University, Online Edition, retrieved 2008-02-19

Cohen-Tannoudji, Claude (1977), "Chapter II. The mathematical tools of quantum mechanics", Quantum mechanics, John Wiley & Sons, ISBN 0-471-16432-1

Curtis, Charles W. (1999), Linear Algebra: An Introductory Approach (4th ed.), Springer, ISBN 0-387-90992-3

Demmel, James W. (1997), Applied numerical linear algebra, SIAM, ISBN 0-89871-389-7

Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1

Fraleigh, John B.; Beauregard, Raymond A. (1995), Linear algebra (3rd ed.), Addison-Wesley Publishing Company, ISBN 0-201-83999-7

Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (1989), Linear algebra (2nd ed.), Englewood Cliffs, New Jersey 07632: Prentice Hall, ISBN 0-13-537102-3

Gelfand, I. M. (1971), Lecture notes in linear algebra, Russian, Science Publishers, Moscow

Gohberg, Israel; Lancaster, Peter; Rodman, Leiba (2005), Indefinite linear algebra and applications, Basel-Boston-Berlin: Birkhäuser Verlag, ISBN 3-7643-7349-0

Golub, Gene F.; van der Vorst, Henk A. (2000), "Eigenvalue computation in the 20th century", Journal of Computational and Applied Mathematics, 123: 35–65, Bibcode:2000JCoAM.123...35G, doi:10.1016/S0377-0427(00)00413-1

Golub, Gene H.; Van Loan, Charles F. (1996), Matrix computations (3rd ed.), Johns Hopkins University Press, Baltimore, Maryland, ISBN 978-0-8018-5414-9

Greub, Werner H. (1975), Linear Algebra (4th ed.), Springer-Verlag, New York, ISBN 0-387-90110-8

Halmos, Paul R. (1987), Finite-dimensional vector spaces (8th ed.), New York: Springer-Verlag, ISBN 0-387-90093-4

Hawkins, T. (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica, 2: 1–29, doi:10.1016/0315-0860(75)90032-4

Hefferon, Jim (2001), Linear Algebra, Online book, St Michael's College, Colchester, Vermont, USA

Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016

Horn, Roger A.; Johnson, Charles F. (1985), Matrix analysis, Cambridge University Press, ISBN 0-521-30586-1

Kline, Morris (1972), Mathematical thought from ancient to modern times, Oxford University Press, ISBN 0-19-501496-0

Korn, Granino A.; Korn, Theresa M. (2000), Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review, New York: McGraw-Hill (2nd Revised ed.), Dover Publications, Bibcode:1968mhse.book.....K, ISBN 0-486-41147-8

Kuttler, Kenneth (2007), An introduction to linear algebra (PDF), Online e-book in PDF format, Brigham Young University

Lancaster, P. (1973), Matrix theory, Russian, Moscow, Russia: Science Publishers

Larson, Ron; Edwards, Bruce H. (2003), Elementary linear algebra (5th ed.), Houghton Mifflin Company, ISBN 0-618-33567-6
Lipschutz, Seymour (1991), Schaum's outline of theory and problems of linear algebra, Schaum's outline series (2nd ed.), New York: McGraw-Hill Companies, ISBN 0-07-038007-4

Meyer, Carl D. (2000), Matrix analysis and applied linear algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, ISBN 978-0-89871-454-8

Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76091646

(in Russian) Pigolkina, T. S.; Shulman, V. S. (1977). "Eigenvalue". In Vinogradov, I. M. Mathematical Encyclopedia. 5. Moscow: Soviet Encyclopedia.

Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), Numerical Recipes: The Art of Scientific Computing (3rd ed.), ISBN 9780521880688

Roman, Steven (2008), Advanced linear algebra (3rd ed.), New York: Springer Science + Business Media, LLC, ISBN 978-0-387-72828-5

Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, Bibcode:2004math......5323S, ISBN 5-7477-0099-5, arXiv:math/0405323

Shilov, Georgi E. (1977), Linear algebra, Translated and edited by Richard A. Silverman, New York: Dover Publications, ISBN 0-486-63518-X

Shores, Thomas S. (2007), Applied linear algebra and matrix analysis, Springer Science+Business Media, LLC, ISBN 0-387-33194-8

Strang, Gilbert (1993), Introduction to linear algebra, Wellesley-Cambridge Press, Wellesley, Massachusetts, ISBN 0-9614088-5-5

Strang, Gilbert (2006), Linear algebra and its applications, Thomson, Brooks/Cole, Belmont, California, ISBN 0-03-010567-6

14.12 External links

"What are Eigen Values?", non-technical introduction from PhysLink.com's "Ask the Experts"

Eigen Values and Eigen Vectors Numerical Examples, tutorial and interactive program from Revoledu

"Introduction to Eigen Vectors and Eigen Values", lecture from Khan Academy

Hill, Roger (2009). "Eigenvalues". Sixty Symbols. Brady Haran for the University of Nottingham.

"A Beginner's Guide to Eigenvectors". Deeplearning4j. 2015.

14.12.1 Theory

Hazewinkel, Michiel, ed. (2001), "Eigen value", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

Hazewinkel, Michiel, ed. (2001), "Eigen vector", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

"Eigenvalue (of a matrix)". PlanetMath.

Eigenvector, Wolfram MathWorld

Eigen Vector Examination working applet

Same Eigen Vector Examination as above in a Flash demo with sound

Computation of Eigenvalues

Numerical solution of eigenvalue problems, edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst

Eigenvalues and Eigenvectors on the Ask Dr. Math forums

14.12.2 Demonstration applets

Java applet about eigenvectors in the real plane

Wolfram Language functionality for Eigenvalues, Eigenvectors and Eigensystems
Chapter 15

Positive-definite matrix

Not to be confused with Positive matrix and Totally positive matrix.

In linear algebra, a symmetric n × n real matrix M is said to be positive definite if the scalar zᵀMz is positive for every non-zero column vector z of n real numbers. Here zᵀ denotes the transpose of z.[1]

More generally, an n × n Hermitian matrix M is said to be positive definite if the scalar z*Mz is real and positive for all non-zero column vectors z of n complex numbers. Here z* denotes the conjugate transpose of z.

The negative definite, positive semi-definite, and negative semi-definite matrices are defined in the same way, except that in the last two cases 0's are allowed, i.e. the expression zᵀMz or z*Mz is required to be always negative, non-negative, and non-positive, respectively.

Positive definite matrices are closely related to positive-definite symmetric bilinear forms (or sesquilinear forms in the complex case), and to inner products of vector spaces.[2]

Some authors use more general definitions of "positive definite" that include some non-symmetric real matrices, or non-Hermitian complex ones.

15.1 Examples

The identity matrix I = [1 0; 0 1] is positive definite (and as such also positive semi-definite). Seen as a real matrix, it is symmetric, and, for any non-zero column vector z with real entries a and b, one has

zᵀIz = [a b] [1 0; 0 1] [a; b] = a² + b².

Seen as a complex matrix, for any non-zero column vector z with complex entries a and b, one has

z*Iz = [ā b̄] [1 0; 0 1] [a; b] = āa + b̄b = |a|² + |b|².

Either way, the result is positive since z is not the zero vector (that is, at least one of a and b is not zero).

The real symmetric matrix

M = [2 −1 0; −1 2 −1; 0 −1 2]

is positive definite since for any non-zero column vector z with entries a, b and c, we have

zᵀMz = (zᵀM)z = [(2a − b) (−a + 2b − c) (−b + 2c)] [a; b; c] = 2a² − 2ab + 2b² − 2bc + 2c² = a² + (a − b)² + (b − c)² + c².

This result is a sum of squares, and therefore non-negative; and is zero only if a = b = c = 0, that is, when z is zero.

For any real invertible matrix A, the product AᵀA is a positive definite matrix. A simple proof is that for any non-zero vector z, zᵀAᵀAz = ‖Az‖² > 0, since the invertibility of A means that Az ≠ 0.

These examples show that a matrix in which some elements are negative may still be positive definite, and conversely a matrix whose entries are all positive may not be positive definite.

15.2 Connections

A general purely quadratic real function f(z) on n real variables z1, ..., zn can always be written as zᵀMz where z is the column vector with those variables, and M is a symmetric real matrix. Therefore, the matrix being positive definite means that f has a unique minimum (zero) when z is zero, and is strictly positive for any other z.
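As a quick numerical check (an illustrative sketch, not part of the original text), the quadratic form zᵀMz for the 3 × 3 matrix M of the Examples section can be evaluated directly:

```python
def quad_form(M, z):
    """Evaluate the quadratic form z^T M z, with M and z as plain lists."""
    n = len(z)
    return sum(z[i] * M[i][j] * z[j] for i in range(n) for j in range(n))

# Tridiagonal matrix from the Examples section.
M = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]

# z^T M z = a^2 + (a - b)^2 + (b - c)^2 + c^2, hence positive whenever z != 0.
print(quad_form(M, [1, 2, 3]))  # 12 = 1 + 1 + 1 + 9
```

Evaluating the form at a few vectors reproduces the sum-of-squares identity derived above, which is what guarantees positivity for every non-zero z.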
More generally, a twice-differentiable real function f on n real variables has a local minimum at arguments z1, ..., zn if its gradient is zero and its Hessian (the matrix of all second derivatives) is positive semi-definite at that point. Similar statements can be made for negative definite and semi-definite matrices.

In statistics, the covariance matrix of a multivariate probability distribution is always positive semi-definite; and it is positive definite unless one variable is an exact linear combination of the others. Conversely, every positive semi-definite matrix is the covariance matrix of some multivariate distribution.

15.3 Characterizations

Let M be an n × n Hermitian matrix. The following properties are equivalent to M being positive definite:

1. All its eigenvalues are positive. Let P⁻¹DP be an eigendecomposition of M, where P is a unitary complex matrix whose rows comprise an orthonormal basis of eigenvectors of M, and D is a real diagonal matrix whose main diagonal contains the corresponding eigenvalues. The matrix M may be regarded as a diagonal matrix D that has been re-expressed in coordinates of the basis P. In particular, the one-to-one change of variable y = Pz shows that z*Mz is real and positive for any complex vector z if and only if y*Dy is real and positive for any y; in other words, if D is positive definite. For a diagonal matrix, this is true only if each element of the main diagonal (that is, every eigenvalue of M) is positive. Since the spectral theorem guarantees all eigenvalues of a Hermitian matrix to be real, the positivity of eigenvalues can be checked using Descartes' rule of alternating signs when the characteristic polynomial of a real, symmetric matrix M is available.

2. The associated sesquilinear form is an inner product. The sesquilinear form defined by M is the function ⟨·,·⟩ from Cⁿ × Cⁿ to C such that ⟨x, y⟩ := y*Mx for all x and y in Cⁿ, where y* is the conjugate transpose of y. For any complex matrix M, this form is linear in each argument separately. Therefore, the form is an inner product on Cⁿ if and only if ⟨z, z⟩ is real and positive for all nonzero z; that is, if and only if M is positive definite. (In fact, every inner product on Cⁿ arises in this fashion from a Hermitian positive definite matrix.)

3. It is the Gram matrix of linearly independent vectors. Let x1, ..., xn be a list of n linearly independent vectors of some complex vector space with an inner product ⟨·,·⟩. It can be verified that the Gram matrix M of those vectors, defined by Mij = ⟨xi, xj⟩, is always positive definite. Conversely, if M is positive definite, it has an eigendecomposition P⁻¹DP where P is unitary, D diagonal, and all diagonal elements Dii = λi of D are real and positive. Let E be the real diagonal matrix with entries Eii = √λi, so that E² = D; then P⁻¹DP = P*DP = P*EEP = (EP)*EP. Now let x1, ..., xn be the columns of EP. These vectors are linearly independent, and by the above M is their Gram matrix, under the standard inner product of Cⁿ, namely ⟨xi, xj⟩ = xi*xj.

4. Its leading principal minors are all positive. The kth leading principal minor of a matrix M is the determinant of its upper-left k by k sub-matrix. It turns out that a matrix is positive definite if and only if all these determinants are positive. This condition is known as Sylvester's criterion, and provides an efficient test of positive definiteness of a symmetric real matrix. Namely, the matrix is reduced to an upper triangular matrix by using elementary row operations, as in the first part of the Gaussian elimination method, taking care to preserve the sign of its determinant during the pivoting process. Since the kth leading principal minor of a triangular matrix is the product of its diagonal elements up to row k, Sylvester's criterion is equivalent to checking whether its diagonal elements are all positive. This condition can be checked each time a new row k of the triangular matrix is obtained.

5. It has a unique Cholesky decomposition. The matrix M is positive definite if and only if there exists a unique lower triangular matrix L, with real and strictly positive diagonal elements, such that M = LL*. This factorization is called the Cholesky decomposition of M.

15.4 Quadratic forms

The (purely) quadratic form associated with a real matrix M is the function Q : Rⁿ → R such that Q(x) = xᵀMx for all x. It turns out that the matrix M is positive definite if and only if it is symmetric and its quadratic form is a strictly convex function.

More generally, any quadratic function from Rⁿ to R can be written as xᵀMx + xᵀb + c where M is a symmetric n × n matrix, b is a real n-vector, and c a real constant. This quadratic function is strictly convex, and hence has a unique finite global minimum, if and only if M is positive definite. For this reason, positive definite matrices play an important role in optimization problems.
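The elimination-based test described in characterization 4 can be sketched in pure Python (an illustrative implementation of Sylvester's criterion via pivots, not code from the source):

```python
def is_positive_definite(A):
    """Decide positive definiteness of a symmetric real matrix by Gaussian
    elimination: the matrix is positive definite iff every pivot encountered
    is strictly positive (equivalent to Sylvester's criterion)."""
    n = len(A)
    U = [row[:] for row in A]    # eliminate on a copy
    for k in range(n):
        if U[k][k] <= 0:         # non-positive pivot: not positive definite
            return False
        for i in range(k + 1, n):
            factor = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= factor * U[k][j]
    return True

M = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
N = [[1.0, 2.0], [2.0, 1.0]]     # all entries positive, yet not positive definite
print(is_positive_definite(M), is_positive_definite(N))  # True False
```

No row exchanges are needed: for a symmetric matrix, the first non-positive pivot already certifies failure, exactly as the text notes that the condition "can be checked each time a new row of the triangular matrix is obtained".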
though not necessarily via a similarity transformation. This result does not extend to the case of three or more matrices. In this section we write for the real case. Extension to the complex case is immediate.

Let M be a symmetric and N a symmetric and positive-definite matrix. Write the generalized eigenvalue equation as (M − λN)x = 0, where we impose that x be normalized, i.e. xᵀNx = 1. Now we use the Cholesky decomposition to write the inverse of N as QᵀQ. Multiplying by Q and letting x = Qᵀy, we get Q(M − λN)Qᵀy = 0, which can be rewritten as (QMQᵀ)y = λy where yᵀy = 1. Manipulation now yields MX = NXΛ, where X is a matrix having as columns the generalized eigenvectors and Λ is a diagonal matrix with the generalized eigenvalues. Now premultiplication with Xᵀ gives the final result: XᵀMX = Λ and XᵀNX = I, but note that this is no longer an orthogonal diagonalization with respect to the inner product where yᵀy = 1. In fact, we diagonalized M with respect to the inner product induced by N.

Note that this result does not contradict what is said on simultaneous diagonalization in the article Diagonalizable matrix, which refers to simultaneous diagonalization by a similarity transformation. Our result here is more akin to a simultaneous diagonalization of two quadratic forms, and is useful for optimization of one form under conditions on the other. For this result see Horn & Johnson, 1985, page 218 and following.

15.6 Negative-definite, semidefinite and indefinite matrices

A Hermitian matrix is negative-definite, negative-semidefinite, or positive-semidefinite if and only if all of its eigenvalues are negative, non-positive, or non-negative, respectively.

15.6.1 Negative-definite

The n × n Hermitian matrix M is said to be negative-definite if

x*Mx < 0

for all non-zero x in Cⁿ (or, all non-zero x in Rⁿ for the real matrix), where x* is the conjugate transpose of x.

A matrix is negative definite if its k-th order leading principal minor is negative when k is odd, and positive when k is even.

15.6.2 Positive-semidefinite

M is called positive-semidefinite (or sometimes nonnegative-definite) if

x*Mx ≥ 0

for all x in Cⁿ (or, all x in Rⁿ for the real matrix).

A matrix M is positive-semidefinite if and only if it arises as the Gram matrix of some set of vectors. In contrast to the positive-definite case, these vectors need not be linearly independent.

For any matrix A, the matrix A*A is positive semidefinite, and rank(A) = rank(A*A). Conversely, any Hermitian positive semi-definite matrix M can be written as M = LL*, where L is lower triangular; this is the Cholesky decomposition. If M is not positive definite, then some of the diagonal elements of L may be zero.

A Hermitian matrix is positive semidefinite if and only if all of its principal minors are nonnegative. It is however not enough to consider the leading principal minors only, as is checked on the diagonal matrix with entries 0 and −1.

15.6.3 Negative-semidefinite

It is called negative-semidefinite if

x*Mx ≤ 0

for all x in Cⁿ (or, all x in Rⁿ for the real matrix).

15.6.4 Indefinite

A Hermitian matrix which is neither positive definite, negative definite, positive-semidefinite, nor negative-semidefinite is called indefinite. Indefinite matrices are also characterized by having both positive and negative eigenvalues.

15.7 Further properties

If M is a Hermitian positive-semidefinite matrix, one sometimes writes M ≥ 0, and if M is positive-definite one writes M > 0.[3] The notion comes from functional analysis, where positive-semidefinite matrices define positive operators.

For arbitrary square matrices M, N we write M ≥ N if M − N ≥ 0; i.e., M − N is positive semi-definite. This defines a partial ordering on the set of all square matrices. One can similarly define a strict partial ordering M > N.

1. Every positive definite matrix is invertible and its inverse is also positive definite.[4] If M ≥ N > 0 then N⁻¹ ≥ M⁻¹ > 0.[5] Moreover, by the min-max theorem, the kth largest eigenvalue of M is greater than the kth largest eigenvalue of N.
2. If M is positive definite and r > 0 is a real number, then rM is positive definite.[6] If M and N are positive definite, then the sum M + N[6] and the products MNM and NMN are also positive definite. If MN = NM, then MN is also positive definite.

3. Every principal submatrix of a positive definite matrix is positive definite.

4. If M is positive-semidefinite, then QᵀMQ is positive-semidefinite. If M is positive definite and Q has full rank, then QᵀMQ is positive definite.[7]

5. The diagonal entries mii are real and non-negative. As a consequence the trace, tr(M) ≥ 0. Furthermore,[8] since every principal submatrix (in particular, 2-by-2) is positive definite,

|mij| ≤ √(mii mjj) ≤ (mii + mjj)/2

and thus

max |mij| ≤ max |mii|.

6. A matrix M is positive semi-definite if and only if there is a positive semi-definite matrix B with B² = M. This matrix B is unique,[9] is called the square root of M, and is denoted with B = M^(1/2) (the square root B is not to be confused with the matrix L in the Cholesky factorization M = LL*, which is also sometimes called the square root of M). If M > N > 0 then M^(1/2) > N^(1/2) > 0.

7. If M is a symmetric matrix of the form mij = m(i − j), and the strict inequality

Σ_{j≠0} |m(j)| < m(0)

holds, then M is strictly positive definite.

8. Let M > 0 and N Hermitian. If MN + NM ≥ 0 (resp., MN + NM > 0) then N ≥ 0 (resp., N > 0).

9. If M > 0 is real, then there is a δ > 0 such that M > δI, where I is the identity matrix.

10. If Mk denotes the leading k by k minor, det(Mk)/det(Mk−1) is the kth pivot during LU decomposition.

11. The set of positive semidefinite symmetric matrices is convex. That is, if M and N are positive semidefinite, then for any α between 0 and 1, αM + (1 − α)N is also positive semidefinite. For any vector x:

xᵀ(αM + (1 − α)N)x = αxᵀMx + (1 − α)xᵀNx ≥ 0.

This property guarantees that semidefinite programming problems converge to a globally optimal solution.

12. If M, N ≥ 0, although MN is not necessarily positive-semidefinite, the Kronecker product M ⊗ N ≥ 0, the Hadamard product M ∘ N ≥ 0 (this result is often called the Schur product theorem),[10] and the Frobenius product M : N ≥ 0 (Lancaster-Tismenetsky, The Theory of Matrices, p. 218).

13. Regarding the Hadamard product of two positive-semidefinite matrices M = (mij) ≥ 0, N ≥ 0, there are two notable inequalities:

Oppenheim's inequality: det(M ∘ N) ≥ det(N) Π_i mii.[11]

det(M ∘ N) ≥ det(M) det(N).[12]

15.8 Block matrices

A positive 2n × 2n matrix may also be defined by blocks:

M = [A B; C D]

where each block is n × n. By applying the positivity condition, it immediately follows that A and D are Hermitian, and C = B*.

We have that z*Mz ≥ 0 for all complex z, and in particular for z = (v, 0)ᵀ. Then

[v* 0] [A B; B* D] [v; 0] = v*Av ≥ 0.

A similar argument can be applied to D, and thus we conclude that both A and D must be positive definite matrices, as well.

Converse results can be proved with stronger conditions on the blocks, for instance using the Schur complement.

15.9 On the definition

15.9.1 Consistency between real and complex definitions

Since every real matrix is also a complex matrix, the definitions of "positive definite" for the two classes must agree.

For complex matrices, the most common definition says that "M is positive definite if and only if z*Mz is real and positive for all non-zero complex column vectors z". This condition implies that M is Hermitian, that is, its transpose is equal to its conjugate. To see this, consider the matrices A = (M + M*)/2 and B = (M − M*)/(2i), so that M = A + iB and z*Mz = z*Az + iz*Bz. The matrices A and
B are Hermitian, therefore z*Az and z*Bz are individually real. If z*Mz is real, then z*Bz must be zero for all z. Then B is the zero matrix and M = A, proving that M is Hermitian.

By this definition, a positive definite real matrix M is Hermitian, hence symmetric; and z^T Mz is positive for all non-zero real column vectors z. However the last condition alone is not sufficient for M to be positive definite. For example, if

    M = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix},

then for any real vector z with entries a and b we have z^T Mz = (a − b)a + (a + b)b = a^2 + b^2, which is always positive if z is not zero. However, if z is the complex vector with entries 1 and i, one gets

    z^* M z = [1, -i] M [1, i]^T = [1 + i, 1 - i][1, i]^T = 2 + 2i,

which is not real. Therefore, M is not positive definite.

On the other hand, for a symmetric real matrix M, the condition "z^T Mz > 0 for all nonzero real vectors z" does imply that M is positive definite in the complex sense.

15.9.2 Extension for non-symmetric matrices

Some authors choose to say that a complex matrix M is positive definite if Re(z*Mz) > 0 for all non-zero complex vectors z, where Re(c) denotes the real part of a complex number c.[13] This weaker definition encompasses some non-Hermitian complex matrices, including some non-symmetric real ones, such as [1 1; −1 1].

Indeed, with this definition, a real matrix is positive definite if and only if z^T Mz > 0 for all nonzero real vectors z, even if M is not symmetric.

In general, we have Re(z*Mz) > 0 for all complex nonzero vectors z if and only if the Hermitian part (M + M*)/2 of M is positive definite in the narrower sense. Similarly, we have x^T Mx > 0 for all real nonzero vectors x if and only if the symmetric part (M + M^T)/2 of M is positive definite in the narrower sense.

In summary, the distinguishing feature between the real and complex case is that a bounded positive operator on a complex Hilbert space is necessarily Hermitian, or self-adjoint. The general claim can be argued using the polarization identity. That is no longer true in the real case.

15.10 See also

Cholesky decomposition
Covariance matrix
M-matrix
Positive-definite function
Positive-definite kernel
Schur complement
Square root of a matrix
Sylvester's criterion

15.11 Notes

[1] http://onlinelibrary.wiley.com/doi/10.1002/9780470173862.app3/pdf
[2] Stewart, J. (1976). Positive definite functions and generalizations, an historical survey. Rocky Mountain J. Math, 6(3).
[3] This may be confusing, as sometimes nonnegative matrices are also denoted in this way. A common alternative notation is M ⪰ 0 and M ≻ 0 for positive semidefinite and positive definite matrices, respectively.
[4] Horn & Johnson (1985), p. 397
[5] Horn & Johnson (1985), Corollary 7.7.4(a)
[6] Horn & Johnson (1985), Observation 7.1.3
[7] Horn, Roger A.; Johnson, Charles R. (2013). "7.1 Definitions and Properties". Matrix Analysis (Second Edition). Cambridge University Press. p. 431. ISBN 978-0-521-83940-2. "Observation 7.1.8: Let A ∈ Mn be Hermitian and let C ∈ Mn,m. Suppose that A is positive semidefinite. Then C*AC is positive semidefinite, nullspace(C*AC) = nullspace(AC), and rank(C*AC) = rank(AC). Suppose that A is positive definite. Then rank(C*AC) = rank(C), and C*AC is positive definite if and only if rank(C) = m."
[8] Horn & Johnson (1985), p. 398
[9] Horn & Johnson (1985), Theorem 7.2.6 with k = 2
[10] Horn & Johnson (1985), Theorem 7.5.3
[11] Horn & Johnson (1985), Theorem 7.8.6
[12] (Styan 1973)
[13] Weisstein, Eric W. "Positive Definite Matrix". From MathWorld--A Wolfram Web Resource. Accessed on 2012-07-26
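Several of the properties listed in this chapter, positive eigenvalues, the unique positive-semidefinite square root, the distinct Cholesky "square root", and the Schur product theorem, are easy to check numerically. A minimal sketch with NumPy; the matrices below are only illustrative examples:

```python
import numpy as np

# Illustrative symmetric positive-definite matrix (hypothetical values).
M = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])

# Positive definiteness of a Hermitian matrix: all eigenvalues strictly positive.
assert np.all(np.linalg.eigvalsh(M) > 0)

# Unique positive-semidefinite square root B with B @ B = M,
# built from the eigendecomposition M = Q diag(w) Q^T.
w, Q = np.linalg.eigh(M)
B = Q @ np.diag(np.sqrt(w)) @ Q.T
assert np.allclose(B @ B, M)

# The Cholesky factor L (M = L L^T) is a different "square root".
L = np.linalg.cholesky(M)
assert np.allclose(L @ L.T, M)

# Schur product theorem: the Hadamard (elementwise) product of two
# positive-semidefinite matrices is positive-semidefinite.
N = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.5],
              [0.0, 0.5, 1.0]])
assert np.all(np.linalg.eigvalsh(M * N) >= -1e-12)
```

`np.linalg.eigvalsh` is the appropriate routine here because it assumes a symmetric (Hermitian) input and returns real eigenvalues.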
15.12 References

Horn, Roger A.; Johnson, Charles R. (1990). Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6.
Bhatia, Rajendra (2007). Positive Definite Matrices. Princeton Series in Applied Mathematics. ISBN 978-0-691-12918-1.

15.13 External links

Hazewinkel, Michiel, ed. (2001), "Positive-definite form", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
Wolfram MathWorld: Positive Definite Matrix
Chapter 16

Cambridge University Press

For the football club, see Cambridge University Press F.C.

Cambridge University Press (CUP) is the publishing business of the University of Cambridge. Granted letters patent by Henry VIII in 1534, it is the world's oldest publishing house and the second-largest university press in the world (after Oxford University Press).[1][2] It also holds letters patent as the Queen's Printer.[3]

The Press's mission is "To further the University's mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence".[4]

Cambridge University Press is a department of the University of Cambridge and is both an academic and educational publisher. With a global sales presence, publishing hubs, and offices in more than 40 countries, it publishes over 50,000 titles by authors from over 100 countries. Its publishing includes academic journals, monographs, reference works, textbooks, and English-language teaching and learning publications. Cambridge University Press is a charitable enterprise that transfers part of its annual surplus back to the university.

16.1 History

Cambridge University Press is both the oldest publishing house in the world and the oldest university press. It originated from Letters Patent granted to the University of Cambridge by Henry VIII in 1534, and has been producing books continuously since the first University Press book was printed. Cambridge is one of the two privileged presses (the other being Oxford University Press). Authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell, and Stephen Hawking.[5]

University printing began in Cambridge when the first practising University Printer, Thomas Thomas, set up a printing house on the site of what became the Senate House lawn, a few yards from where the Press's bookshop now stands. In those days, the Stationers' Company in London jealously guarded its monopoly of printing, which partly explains the delay between the date of the University's Letters Patent and the printing of the first book.

In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible, an octavo edition of the popular Geneva Bible. The London Stationers objected strenuously, claiming that they had the monopoly on Bible printing. The university's response was to point out the provision in its charter to print 'all manner of books'. Thus began the Press's tradition of publishing the Bible, a tradition that has endured for over four centuries, beginning with the Geneva Bible, and continuing with the Authorized Version, the Revised Version, the New English Bible and the Revised English Bible. The restrictions and compromises forced upon Cambridge by the dispute with the London Stationers did not really come to an end until the scholar Richard Bentley was given the power to set up a 'new-style press' in 1696. In July 1697 the Duke of Somerset made a loan of £200 to the university "towards the printing house and presse" and James Halman, Registrary of the University, lent £100 for the same purpose.[6]

It was in Bentley's time, in 1698, that a body of senior scholars ('the Curators', known from 1733 as 'the Syndics') was appointed to be responsible to the university for the Press's affairs. The Press Syndicate's publishing committee still meets regularly (eighteen times a year), and its role still includes the review and approval of the Press's planned output. John Baskerville became University Printer in the mid-eighteenth century. Baskerville's concern was the production of the finest possible books using his own type-design and printing techniques. Baskerville wrote, "The importance of the work demands all my attention; not only for my own (eternal) reputation;
but (I hope) also to convince the world, that the University in the honour done me has not entirely misplaced their favours."

[Image: The University Printing House, on the main site of the Press]

Caxton would have found nothing to surprise him if he had walked into the Press's printing house in the eighteenth century: all the type was still being set by hand; wooden presses, capable of producing only 1,000 sheets a day at best, were still in use; and books were still being individually bound by hand. A technological breakthrough was badly needed, and it came when Lord Stanhope perfected the making of stereotype plates. This involved making a mould of the whole surface of a page of type and then casting plates from that mould. The Press was the first to use this technique, and in 1805 produced the technically successful and much-reprinted Cambridge Stereotype Bible.

[Image: The letters patent of Cambridge University Press by Henry VIII allow the Press to print "all manner of books". The fine initial with the king's portrait inside it and the large first line of script are still discernible.]

By the 1850s the Press was using steam-powered machine presses, employing two to three hundred people, and occupying several buildings in the Silver Street and Mill Lane area, including the one that the Press still occupies, the Pitt Building (1833), which was built specifically for the Press and in honour of William Pitt the Younger.

Under the stewardship of C. J. Clay, who was University Printer from 1854 to 1882, the Press increased the size and scale of its academic and educational publishing operation. An important factor in this increase was the inauguration of its list of schoolbooks (including what came to be known as the 'Pitt Press Series'). During Clay's administration, the Press also undertook a sizable co-publishing venture with Oxford: the Revised Version of the Bible, which was begun in 1870 and completed in 1885. It was in this period as well that the Syndics of the Press turned down what later became the Oxford English Dictionary, a proposal for which was brought to Cambridge by the lexicographer James Murray before he turned to Oxford.

The appointment of R. T. Wright as Secretary of the Press Syndicate in 1892 marked the beginning of the Press's development as a modern publishing business with a clearly defined editorial policy and administrative structure. It was Wright (with two great historians, Lord Acton and F. W. Maitland) who devised the plan for one of the most distinctive Cambridge contributions to publishing: the Cambridge Histories.

The Cambridge Modern History was published between 1902 and 1912. Nine years later the Press issued the first volumes of the freshly edited complete works of Shakespeare, a project of nearly equal scope that was not finished until 1966. The Press's list in science and mathematics began to thrive, with men of the stature of Albert Einstein and Ernest Rutherford subsequently becoming Press authors. The Press's impressive contribution to journal publishing began in 1893, and today it publishes over 300 journals.

In 1992 the Press opened its own bookshop at 1 Trinity Street, in the centre of Cambridge. Books have been sold continuously on this site since at least 1581, perhaps even as early as 1505, making it the oldest known bookshop site in Britain.[7]

In 2012 the Press decided to end the tradition of printing after 428 years and now uses third parties to provide all of its print publications.

16.2 Governance

The Press has, since 1698, been governed by the Press 'Syndics' (originally known as the 'Curators'),[8] made up of 18 senior academics from the University of Cambridge who represent a wide variety of subjects.[9] The Syndicate has several sub-committees: an Operating Board, an Academic Publishing Committee, an ELT & Education Publishing Committee, and an Audit Committee. The Operating Board oversees the Press's financial, strategic and operational affairs, while the two Publishing Committees provide quality assurance and formal approval of the publishing strategy.[10] The Chair of the Syndicate is currently Sir David Bell. The operational responsibility of the Press is delegated by the Syndics to the Press's Chief Executive, Peter Phillips, and the Press Board.
[Image: The Pitt Building in Cambridge, which used to be the headquarters of Cambridge University Press, and now serves as a conference centre for the Press.]

16.3 Structure

Cambridge University Press is a global organisation with three market-facing publishing groups. These are:

16.3.1 Academic publishing

This group publishes academic books and journals across science, technology, medicine, humanities, and social sciences.[11] The group also publishes bibles, and the Press is one of only two publishers entitled to publish the Book of Common Prayer and the King James Version of the Bible in England.[12]

16.3.2 Cambridge English Language Teaching (ELT)

The Cambridge English group publishes English language teaching courses and resources for all ages around the world.[11] The group works closely with Cambridge English Language Assessment to provide solutions that improve language proficiency, aligned to the Common European Framework of Reference for Languages, or CEFR.

16.3.3 Education

The Education group delivers educational products and solutions for primary, secondary and international schools, and Education Ministries worldwide.

16.4 Electronic and digital developments

Owing to the changes taking place in the way that books and content are bought and accessed, Cambridge believes that digital products, services and solutions could account for two-thirds of its sales by 2020.[13]

Since 2010, Cambridge has provided electronic book content through the website Cambridge Books Online.[14] For many years, all of Cambridge's journals have been published in both hard copy format and online.

Other recent ventures include Race to Learn, curriculum software that uses Formula One to encourage group working in primary school children,[15] published through Cambridge-Hitachi, a joint venture between Cambridge University Press and Hitachi Software Engineering that produces software for teaching on interactive whiteboards in schools.

16.5 Controversies

16.5.1 Alms for Jihad

Main article: Alms for Jihad

In 2007, controversy arose over CUP's decision to destroy all remaining copies of its 2006 book, Alms for Jihad: Charity and Terrorism in the Islamic World, by Burr and Collins, as part of the settlement of a lawsuit brought by Saudi billionaire Khalid bin Mahfouz.[16] Within hours, Alms for Jihad became one of the 100 most
sought after titles on Amazon.com and eBay in the United States. CUP sent a letter to libraries asking them to remove copies from circulation. CUP subsequently sent out copies of an errata sheet for the book.

The American Library Association issued a recommendation to libraries still holding Alms for Jihad: "Given the intense interest in the book, and the desire of readers to learn about the controversy first hand, we recommend that U.S. libraries keep the book available for their users." The publisher's decision did not have the support of the book's authors and was criticised by some who claimed it was incompatible with freedom of speech and with freedom of the press and that it indicated that English libel laws were excessively strict.[17][18] In a New York Times Book Review (7 October 2007), United States Congressman Frank R. Wolf described Cambridge's settlement as "basically a book burning".[19] CUP pointed out that, at that time, it had already sold most of its copies of the book.

Cambridge defended its actions, saying it had acted responsibly and that it is a global publisher with a duty to observe the laws of many different countries.[20]

16.5.2 Cambridge University Press, et al. v. Becker et al.

Main article: Cambridge University Press v. Becker

In this ongoing case, begun in 2008, CUP et al. accused Georgia State University of infringement of copyright.

16.6 Community work

[Image: 2008 conference booth]

The Press has been recognised on several occasions for its commitment to community involvement and social responsibility, and it has stated that public engagement is an important part of the Press's role, by undertaking educational projects and fundraising.[21]

The Press partnered with Bookshare in 2010 to make their books accessible to people with qualified print disabilities. Under the terms of the digital rights licence agreement, the Press delivers academic and scholarly books from all of its regional publishing centres around the world to Bookshare for conversion into accessible formats. People with qualified print disabilities around the world can download the books for a nominal Bookshare membership fee and read them using a computer or other assistive technology, with voice generated by text-to-speech technology, as well as options for digital Braille.[22]

16.7 Open access

CUP is one of thirteen publishers to participate in the Knowledge Unlatched pilot, a global library consortium approach to funding open access books.[23] CUP is a member of the Open Access Scholarly Publishers Association.

16.8 See also

List of Cambridge University Press journals

16.9 Notes

[1] "Oldest printing and publishing house". Guinnessworldrecords.com. 2002-01-22. Retrieved 2012-03-28.
[2] Black, Michael (1984). Cambridge University Press, 1583–1984. pp. 328–9. ISBN 978-0-521-66497-4.
[3] "The Queen's Printer's Patent". Cambridge University Press. Retrieved 20 March 2016.
[4] "Cambridge University Press at a Glance". Cambridge University Press.
[5] Black, Michael (2000). Cambridge University Press, 1584–1984. Cambridge University Press. ISBN 978-0-521-66497-4.
[6] The Cambridge University Press 1696–1712 (C. U. P., 1966), p. 78
[7] "History of the Bookshop". Cambridge University Press Bookshop. 2009. Retrieved 4 May 2011.
[8] McKitterick, David (1998). A History of Cambridge University Press, Volume 2: Scholarship and Commerce, 1698–1872. Cambridge University Press. p. 61. ISBN 978-0-521-30802-1.
[9] "Statutes J: The University Press" (PDF). University of Cambridge. 2010. Retrieved 4 May 2011.
[10] "The Press Syndicate". Cambridge University Press.
[11] Black, Michael (2000). A Short History of Cambridge University Press. Cambridge University Press. pp. 65–66. ISBN 978-0-521-77572-4.
[12] "The Queen's Printer's Patent". Cambridge University Press Website. Retrieved 15 October 2012.
[13] Neill, Graeme (1 November 2010). "CUP looks to digital". The Bookseller. Retrieved 4 May 2011.
[14] Neilan, Catherine (7 December 2009). "CUP launches online books platform". The Bookseller. Retrieved 4 May 2011.
[15] "BETT award winners 2010". The Guardian. 14 January 2010. Retrieved 4 May 2011.
[16] Steyn, Mark (6 August 2007). "One Way Multiculturalism". The New York Sun. Ronald Weintraub. Retrieved 4 May 2011.
[17] Richardson, Anna (3 August 2007). "Bonus Books criticises CUP". Thebookseller.com. Retrieved 4 May 2011.
[18] Jaschick, Scott (16 August 2007). "A University Press stands up and wins". Insidehighered.com. Retrieved 4 May 2011.
[19] Danadio, Rachel (7 October 2007). "Libel Without Borders". The New York Times. Retrieved 4 May 2011.
[20] Taylor, Kevin (9 August 2007). "Why CUP acted responsibly". The Bookseller. Retrieved 4 May 2011.
[21] "Annual Report and Accounts for the year that ended 30 April 2009" (PDF). Cambridge University Press. 2009. p. 30. Retrieved 4 May 2011.
[22] "CUP grants worldwide digital rights to Bookshare". Research Information. 24 May 2010. Retrieved 4 May 2011.
[23] "Good for publishers". knowledgeunlatched.org.

16.10 References

Anonymous; The Student's Guide to the University of Cambridge. Third Edition, Revised and Partly Re-written; Deighton Bell, 1874 (reissued by Cambridge University Press, 2009; ISBN 978-1-108-00491-6)
Anonymous; War Record of the Cambridge University Press 1914–1919; Cambridge University Press, 1920 (reissued by Cambridge University Press, 2009; ISBN 978-1-108-00294-3)
A History of Cambridge University Press, Volume 1: Printing and the Book Trade in Cambridge, 1534–1698; McKitterick, David; 1992; ISBN 978-0-521-30801-4
A History of Cambridge University Press, Volume 2: Scholarship and Commerce, 1698–1872; McKitterick, David; 1998; ISBN 978-0-521-30802-1
A History of Cambridge University Press, Volume 3: New Worlds for Learning, 1873–1972; McKitterick, David; 1998; ISBN 978-0-521-30803-8
A Short History of Cambridge University Press; Black, Michael; 2000; ISBN 978-0-521-77572-4
Cambridge University Press 1584–1984; Black, Michael, Foreword by Gordon Johnson; 2000; ISBN 978-0-521-66497-4, Hardback ISBN 978-0-521-26473-0

16.11 External links

A Brief History of Cambridge University Press
Cambridge University Press
Cambridge University Press Bibles
Cambridge Journals Online

Coordinates: 52°11′18″N 0°07′55″E / 52.1882°N 0.1320°E
Chapter 17

Hamiltonian (quantum mechanics)

In quantum mechanics, the Hamiltonian is the operator corresponding to the total energy of the system in most of the cases. It is usually denoted by H, also Ȟ or Ĥ. Its spectrum is the set of possible outcomes when one measures the total energy of a system. Because of its close relation to the time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.

The Hamiltonian is named after William Rowan Hamilton, who also created a revolutionary reformulation of Newtonian mechanics, now called Hamiltonian mechanics, that is important in quantum physics.

17.1 Introduction

Main article: Operator (physics) § Operators in quantum mechanics

The Hamiltonian is the sum of the kinetic energies of all the particles, plus the potential energy of the particles associated with the system. For different situations or numbers of particles, the Hamiltonian is different since it includes the sum of kinetic energies of the particles, and the potential energy function corresponding to the situation.

17.2 The Schrödinger Hamiltonian

17.2.1 One particle

By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form

    \hat{H} = \hat{T} + \hat{V}

where

    \hat{V} = V = V(\mathbf{r}, t)

is the potential energy operator and

    \hat{T} = \frac{\hat{\mathbf{p}} \cdot \hat{\mathbf{p}}}{2m} = \frac{\hat{p}^2}{2m} = -\frac{\hbar^2}{2m} \nabla^2

is the kinetic energy operator in which m is the mass of the particle, the dot denotes the dot product of vectors, and

    \hat{\mathbf{p}} = -i\hbar\nabla

is the momentum operator wherein ∇ is the del operator. The dot product of ∇ with itself is the Laplacian ∇². In three dimensions using Cartesian coordinates the Laplace operator is

    \nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}

Although this is not the technical definition of the Hamiltonian in classical mechanics, it is the form it most commonly takes. Combining these together yields the familiar form used in the Schrödinger equation:

    \hat{H} = \hat{T} + \hat{V} = \frac{\hat{\mathbf{p}} \cdot \hat{\mathbf{p}}}{2m} + V(\mathbf{r}, t) = -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{r}, t)

which allows one to apply the Hamiltonian to systems described by a wave function Ψ(r, t). This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics.

One can also make substitutions to certain variables to fit specific cases, such as some involving electromagnetic fields.

17.2.2 Many particles

The formalism can be extended to N particles:
    \hat{H} = \sum_{n=1}^{N} \hat{T}_n + \hat{V}

where

    \hat{V} = V(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t)

is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration), and

    \hat{T}_n = \frac{\hat{\mathbf{p}}_n \cdot \hat{\mathbf{p}}_n}{2m_n}

is the kinetic energy operator of particle n, ∇ₙ is the gradient for particle n, and ∇ₙ² is the Laplacian for particle n using the coordinates:

    \nabla_n^2 = \frac{\partial^2}{\partial x_n^2} + \frac{\partial^2}{\partial y_n^2} + \frac{\partial^2}{\partial z_n^2}

Combining these yields the Schrödinger Hamiltonian for the N-particle case:

    \hat{H} = \sum_{n=1}^{N} \hat{T}_n + \hat{V} = \sum_{n=1}^{N} \frac{\hat{\mathbf{p}}_n \cdot \hat{\mathbf{p}}_n}{2m_n} + V(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t) = -\frac{\hbar^2}{2} \sum_{n=1}^{N} \frac{1}{m_n} \nabla_n^2 + V(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t)

However, complications can arise in the many-body problem. Since the potential energy depends on the spatial arrangement of the particles, the kinetic energy will also depend on the spatial configuration to conserve energy. The motion due to any one particle will vary due to the motion of all the other particles in the system. For this reason cross terms for kinetic energy may appear in the Hamiltonian; a mix of the gradients for two particles:

    -\frac{\hbar^2}{2M} \nabla_i \cdot \nabla_j

where M denotes the mass of the collection of particles resulting in this extra kinetic energy. Terms of this form are known as mass polarization terms, and appear in the Hamiltonian of many-electron atoms (see below).

For N interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function V is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle.

For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energy for each particle,[1] that is

    V = \sum_{i=1}^{N} V(\mathbf{r}_i, t) = V(\mathbf{r}_1, t) + V(\mathbf{r}_2, t) + \cdots + V(\mathbf{r}_N, t)

The general form of the Hamiltonian in this case is:

    \hat{H} = -\frac{\hbar^2}{2} \sum_{i=1}^{N} \frac{1}{m_i} \nabla_i^2 + \sum_{i=1}^{N} V_i = \sum_{i=1}^{N} \left( -\frac{\hbar^2}{2m_i} \nabla_i^2 + V_i \right) = \sum_{i=1}^{N} \hat{H}_i

where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle. This is an idealized situation; in practice the particles are usually always influenced by some potential, and there are many-body interactions. One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they interact with each other by Coulomb interaction (electrostatic force), as shown below.

17.3 Schrödinger equation

Main article: Schrödinger equation

The Hamiltonian generates the time evolution of quantum states. If |ψ(t)⟩ is the state of the system at time t, then

    \hat{H} |\psi(t)\rangle = i\hbar \frac{\partial}{\partial t} |\psi(t)\rangle.

This equation is the Schrödinger equation. It takes the same form as the Hamilton–Jacobi equation, which is one of the reasons H is also called the Hamiltonian. Given the state at some initial time (t = 0), we can solve it to obtain the state at any subsequent time. In particular, if H is independent of time, then

    |\psi(t)\rangle = e^{-i\hat{H}t/\hbar} |\psi(0)\rangle.
The exponential operator on the right hand side of the Schrödinger equation is usually defined by the corresponding power series in H. One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient.

By the *-homomorphism property of the functional calculus, the operator

    U = e^{-i\hat{H}t/\hbar}

is a unitary operator. It is the time evolution operator, or propagator, of a closed quantum system. If the Hamiltonian is time-independent, {U(t)} form a one parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.

17.4 Dirac formalism

However, in the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way:

The eigenkets (eigenvectors) of H, denoted |a⟩, provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted {E_a}, solving the equation:

    \hat{H} |a\rangle = E_a |a\rangle.

Since H is a Hermitian operator, the energy is always a real number.

From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.

17.5 Expressions for the Hamiltonian

Following are expressions for the Hamiltonian in a number of situations.[2] Typical ways to classify the expressions are the number of particles, number of dimensions, and the nature of the potential energy function, importantly space and time dependence. Masses are denoted by m, and charges by q.

17.5.1 General forms for one particle

17.5.2 Free particle

The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest. For one dimension:

    \hat{H} = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2}

and in three dimensions:

    \hat{H} = -\frac{\hbar^2}{2m} \nabla^2

17.5.3 Constant-potential well

For a particle in a region of constant potential V = V₀ (no dependence on space or time), in one dimension, the Hamiltonian is:

    \hat{H} = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V_0

in three dimensions

    \hat{H} = -\frac{\hbar^2}{2m} \nabla^2 + V_0

This applies to the elementary "particle in a box" problem, and step potentials.

17.5.4 Simple harmonic oscillator

For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to:

    V = \frac{k}{2} x^2 = \frac{m\omega^2}{2} x^2

where the angular frequency ω, effective spring constant k, and mass m of the oscillator satisfy:

    \omega^2 = \frac{k}{m}

so the Hamiltonian is:

    \hat{H} = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + \frac{m\omega^2}{2} x^2
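The one-dimensional oscillator Hamiltonian can also be diagonalized numerically by discretizing the second derivative on a grid; the lowest eigenvalues should then approach the known spectrum Eₙ = (n + 1/2)ħω. A minimal finite-difference sketch in NumPy, in units with ħ = m = ω = 1 (the grid size and box length below are arbitrary choices):

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + (1/2) x^2 (units hbar = m = omega = 1)
# using the three-point finite-difference approximation of d^2/dx^2.
n, box = 1000, 20.0                 # grid points and box length (arbitrary)
x = np.linspace(-box / 2, box / 2, n)
dx = x[1] - x[0]

# Kinetic term: tridiagonal matrix representing -(1/2) d^2/dx^2
H = (np.diag(np.full(n, 1.0 / dx**2))
     + np.diag(np.full(n - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(n - 1, -0.5 / dx**2), -1))

# Potential term: (1/2) x^2 on the diagonal
H += np.diag(0.5 * x**2)

E = np.linalg.eigvalsh(H)[:4]
# Lowest eigenvalues approximate (n + 1/2) = 0.5, 1.5, 2.5, 3.5
assert np.allclose(E, [0.5, 1.5, 2.5, 3.5], atol=1e-3)
```

Refining the grid (larger n for the same box) shrinks the O(dx²) discretization error of the three-point stencil.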
128 CHAPTER 17. HAMILTONIAN (QUANTUM MECHANICS)

For three dimensions, this becomes

$$H = -\frac{\hbar^2}{2m}\nabla^2 + \frac{m\omega^2}{2}r^2$$

where the three-dimensional position vector $\mathbf{r}$ using cartesian coordinates is (x, y, z), and its magnitude is

$$r^2 = \mathbf{r}\cdot\mathbf{r} = |\mathbf{r}|^2 = x^2 + y^2 + z^2$$

Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction:

$$H = -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right) + \frac{m\omega^2}{2}\left(x^2 + y^2 + z^2\right)$$
$$= \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + \frac{m\omega^2}{2}x^2\right) + \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} + \frac{m\omega^2}{2}y^2\right) + \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2} + \frac{m\omega^2}{2}z^2\right)$$

17.5.5 Rigid rotor

For a rigid rotor, i.e. a system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is:

$$H = -\frac{\hbar^2}{2I_{xx}}J_x^2 - \frac{\hbar^2}{2I_{yy}}J_y^2 - \frac{\hbar^2}{2I_{zz}}J_z^2$$

where $I_{xx}$, $I_{yy}$, and $I_{zz}$ are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and $J_x$, $J_y$ and $J_z$ are the total angular momentum operators (components), about the x, y, and z axes respectively.

17.5.6 Electrostatic or Coulomb potential

The Coulomb potential energy for two point charges $q_1$ and $q_2$ (i.e. charged particles, since particles have no spatial extent), in three dimensions, is (in SI units, rather than Gaussian units which are frequently used in electromagnetism):

$$V = \frac{q_1 q_2}{4\pi\varepsilon_0 |\mathbf{r}|}$$

However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself). For N charges, the potential energy of charge $q_j$ due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges):[3]

$$V_j = \frac{1}{2} q_j \phi(\mathbf{r}_j) = \frac{1}{8\pi\varepsilon_0} \sum_{i\neq j} \frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}$$

where $\phi(\mathbf{r}_j)$ is the electrostatic potential at $\mathbf{r}_j$ due to all charges other than $q_j$; the factor of 1/2 avoids double-counting each pair when summing over j. The total potential energy of the system is then the sum over j:

$$V = \frac{1}{8\pi\varepsilon_0} \sum_{j=1}^{N} \sum_{i\neq j} \frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}$$

so the Hamiltonian is:

$$H = -\frac{\hbar^2}{2}\sum_{j=1}^{N}\frac{1}{m_j}\nabla_j^2 + \frac{1}{8\pi\varepsilon_0}\sum_{j=1}^{N}\sum_{i\neq j}\frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|} = \sum_{j=1}^{N}\left(-\frac{\hbar^2}{2m_j}\nabla_j^2 + \frac{1}{8\pi\varepsilon_0}\sum_{i\neq j}\frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}\right)$$

17.5.7 Electric dipole in an electric field

For an electric dipole moment $\mathbf{d}$ constituting charges of magnitude q, in a uniform, electrostatic field (time-independent) $\mathbf{E}$, positioned in one place, the potential is:

$$V = -\hat{\mathbf{d}}\cdot\mathbf{E}$$

where the dipole moment itself is the operator $\hat{\mathbf{d}} = q\hat{\mathbf{r}}$. Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:

$$H = -\hat{\mathbf{d}}\cdot\mathbf{E} = -q\mathbf{E}\cdot\hat{\mathbf{r}}$$

17.5.8 Magnetic dipole in a magnetic field

For a magnetic dipole moment $\boldsymbol{\mu}$ in a uniform, magnetostatic field (time-independent) $\mathbf{B}$, positioned in one place, the potential is:

$$V = -\boldsymbol{\mu}\cdot\mathbf{B}$$

Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:
$$H = -\boldsymbol{\mu}\cdot\mathbf{B}$$

For a spin-1/2 particle, the corresponding spin magnetic moment is:[4]

$$\boldsymbol{\mu}_S = \frac{g_s e}{2m}\mathbf{S}$$

where $g_s$ is the spin gyromagnetic ratio (a.k.a. "spin g-factor"), e is the electron charge, and $\mathbf{S}$ is the spin operator vector, whose components are the Pauli matrices, hence

$$H = -\frac{g_s e}{2m}\mathbf{S}\cdot\mathbf{B}$$

17.5.9 Charged particle in an electromagnetic field

For a charged particle q in an electromagnetic field, described by the scalar potential $\phi$ and vector potential $\mathbf{A}$, there are two parts to the Hamiltonian to substitute for.[1] The momentum operator must be replaced by the kinetic momentum operator, which includes a contribution from the $\mathbf{A}$ field:

$$\hat{\boldsymbol{\Pi}} = \hat{\mathbf{P}} - q\mathbf{A}$$

where $\hat{\mathbf{P}}$ is the canonical momentum operator given as the usual momentum operator:

$$\hat{\mathbf{P}} = -i\hbar\nabla$$

so the corresponding kinetic energy operator is:

$$\hat{T} = \frac{\hat{\boldsymbol{\Pi}}\cdot\hat{\boldsymbol{\Pi}}}{2m} = \frac{1}{2m}\left(\hat{\mathbf{P}} - q\mathbf{A}\right)^2$$

and the potential energy, which is due to the $\phi$ field:

$$V = q\phi$$

Casting all of these into the Hamiltonian gives:

$$H = \frac{1}{2m}\left(-i\hbar\nabla - q\mathbf{A}\right)^2 + q\phi$$

17.6 Energy eigenket degeneracy, symmetry, and conservation laws

In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the x direction is a different state from one propagating in the y direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate.

It turns out that degeneracy occurs whenever a nontrivial unitary operator U commutes with the Hamiltonian. To see this, suppose that $|a\rangle$ is an energy eigenket. Then $U|a\rangle$ is an energy eigenket with the same eigenvalue, since

$$UH|a\rangle = UE_a|a\rangle = E_a(U|a\rangle) = H(U|a\rangle).$$

Since U is nontrivial, at least one pair of $|a\rangle$ and $U|a\rangle$ must represent distinct states. Therefore, H has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.

The existence of a symmetry operator implies the existence of a conserved observable. Let G be the Hermitian generator of U:

$$U = I - i\epsilon G + O(\epsilon^2)$$

It is straightforward to show that if U commutes with H, then so does G:

$$[H, G] = 0$$

Therefore,

$$\frac{\partial}{\partial t}\langle\psi(t)|G|\psi(t)\rangle = \frac{1}{i\hbar}\langle\psi(t)|[G, H]|\psi(t)\rangle = 0.$$

In obtaining this result, we have used the Schrödinger equation, as well as its dual,

$$\langle\psi(t)|H = -i\hbar\frac{\partial}{\partial t}\langle\psi(t)|.$$

Thus, the expected value of the observable G is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum.

17.7 Hamilton's equations

Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose
we have a set of basis states $\{|n\rangle\}$, which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e.,

$$\langle n'|n\rangle = \delta_{nn'}.$$

Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time.

The instantaneous state of the system at time t, $|\psi(t)\rangle$, can be expanded in terms of these basis states:

$$|\psi(t)\rangle = \sum_n a_n(t)|n\rangle$$

where

$$a_n(t) = \langle n|\psi(t)\rangle.$$

The coefficients $a_n(t)$ are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole.

The expectation value of the Hamiltonian of this state, which is also the mean energy, is

$$\langle H(t)\rangle \overset{\mathrm{def}}{=} \langle\psi(t)|H|\psi(t)\rangle = \sum_{nn'} a_{n'}^* a_n \langle n'|H|n\rangle$$

where the last step was obtained by expanding $|\psi(t)\rangle$ in terms of the basis states.

Each $a_n(t)$ actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use $a_n(t)$ and its complex conjugate $a_n^*(t)$. With this choice of independent variables, we can calculate the partial derivative

$$\frac{\partial\langle H\rangle}{\partial a_n^*} = \sum_{n'} a_{n'}\langle n|H|n'\rangle = \langle n|H|\psi\rangle$$

By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to

$$\frac{\partial\langle H\rangle}{\partial a_n^*} = i\hbar\frac{\partial a_n}{\partial t}$$

Similarly, one can show that

$$\frac{\partial\langle H\rangle}{\partial a_n} = -i\hbar\frac{\partial a_n^*}{\partial t}$$

If we define conjugate momentum variables $\pi_n$ by

$$\pi_n(t) = i\hbar\, a_n^*(t)$$

then the above equations become

$$\frac{\partial\langle H\rangle}{\partial \pi_n} = \frac{\partial a_n}{\partial t}, \qquad \frac{\partial\langle H\rangle}{\partial a_n} = -\frac{\partial \pi_n}{\partial t}$$

which is precisely the form of Hamilton's equations, with the $a_n$s as the generalized coordinates, the $\pi_n$s as the conjugate momenta, and $\langle H\rangle$ taking the place of the classical Hamiltonian.

17.8 See also

Hamiltonian mechanics
Operator (physics)
Bra–ket notation
Quantum state
Linear algebra
Conservation of energy
Potential theory
Many-body problem
Electrostatics
Electric field
Magnetic field
Lieb–Thirring inequality

17.9 References

[1] Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd Edition), R. Resnick, R. Eisberg, John Wiley & Sons, 1985, ISBN 978-0-471-87373-0
[2] Quanta: A Handbook of Concepts, P.W. Atkins, Oxford University Press, 1974, ISBN 0-19-855493-1
[3] Electromagnetism (2nd Edition), I.S. Grant, W.R. Phillips, Manchester Physics Series, 2008, ISBN 0-471-92712-0
[4] Physics of Atoms and Molecules, B.H. Bransden, C.J. Joachain, Longman, 1983, ISBN 0-582-44401-2
Chapter 18

Quantum state
In quantum physics, quantum state refers to the state of an isolated quantum system. A quantum state provides a probability distribution for the value of each observable, i.e. for the outcome of each possible measurement on the system. Knowledge of the quantum state together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior.

A mixture of quantum states is again a quantum state. Quantum states that cannot be written as a mixture of other states are called pure quantum states; all other states are called mixed quantum states.

Mathematically, a pure quantum state can be represented by a ray in a Hilbert space over the complex numbers.[1] The ray is a set of nonzero vectors differing by just a complex scalar factor; any of them can be chosen as a state vector to represent the ray and thus the state. A unit vector is usually picked, but its phase factor can be chosen freely anyway. Nevertheless, such factors are important when state vectors are added together to form a superposition.

Hilbert space is a generalization of the ordinary Euclidean space[2]:93–96 and it contains all possible pure quantum states of the given system. If this Hilbert space, by choice of representation (essentially a choice of basis corresponding to a complete set of observables), is exhibited as a function space, a Hilbert space in its own right, then the representatives are called wave functions.

For example, when dealing with the energy spectrum of the electron in a hydrogen atom, the relevant state vectors are identified by the principal quantum number n, the angular momentum quantum number l, the magnetic quantum number m, and the spin z-component $s_z$. A more complicated case is given (in bra–ket notation) by the spin part of a state vector

$$|\psi\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle\right),$$

which involves superposition of joint spin states for two particles with spin 1/2.

A mixed quantum state corresponds to a probabilistic mixture of pure states; however, different distributions of pure states can generate equivalent (i.e., physically indistinguishable) mixed states. Mixed states are described by so-called density matrices. A pure state can also be recast as a density matrix; in this way, pure states can be represented as a subset of the more general mixed states.

For example, if the spin of an electron is measured in any direction, e.g. with a Stern–Gerlach experiment, there are two possible results: up or down. The Hilbert space for the electron's spin is therefore two-dimensional. A pure state here is represented by a two-dimensional complex vector $(\alpha, \beta)$, with a length of one; that is, with

$$|\alpha|^2 + |\beta|^2 = 1,$$

where $|\alpha|$ and $|\beta|$ are the absolute values of $\alpha$ and $\beta$. A mixed state, in this case, is a $2\times 2$ matrix that is Hermitian, positive-definite, and has trace 1.

Before a particular measurement is performed on a quantum system, the theory usually gives only a probability distribution for the outcome, and the form that this distribution takes is completely determined by the quantum state and the observable describing the measurement. These probability distributions arise for both mixed states and pure states: it is impossible in quantum mechanics (unlike classical mechanics) to prepare a state in which all properties of the system are fixed and certain. This is exemplified by the uncertainty principle, and reflects a core difference between classical and quantum physics. Even in quantum theory, however, for every observable there are some states that have an exact and determined value for that observable.[2]:45[3]

18.1 Conceptual description

18.1.1 Pure states

In the mathematical formulation of quantum mechanics, pure quantum states correspond to vectors in a Hilbert space, while each observable quantity (such as the energy or momentum of a particle) is associated with a mathematical operator. The operator serves as a linear function which acts on the states of the system. The eigenvalues of
[Figure: Probability densities for the electron of a hydrogen atom in different quantum states.]

the operator correspond to the possible values of the observable, i.e. it is possible to observe a particle with a momentum of 1 kg·m/s if and only if one of the eigenvalues of the momentum operator is 1 kg·m/s. The corresponding eigenvector (which physicists call an eigenstate) with eigenvalue 1 kg·m/s would be a quantum state with a definite, well-defined value of momentum of 1 kg·m/s, with no quantum uncertainty. If its momentum were measured, the result is guaranteed to be 1 kg·m/s.

On the other hand, a system in a linear combination of multiple different eigenstates does in general have quantum uncertainty for the given observable. We can represent this linear combination of eigenstates as:

$$|\Psi(t)\rangle = \sum_n C_n(t)|\Phi_n\rangle$$

The coefficient which corresponds to a particular state in the linear combination is a complex number, thus allowing interference effects between states. The coefficients are time dependent. How a quantum system changes in time is governed by the time evolution operator. The symbols $|$ and $\rangle$[note 1] surrounding the $\Psi$ are part of bra–ket notation.

Statistical mixtures of states are different from a linear combination. A statistical mixture of states is a statistical ensemble of independent systems. Statistical mixtures represent the degree of knowledge whilst the uncertainty within quantum mechanics is fundamental. Mathematically, a statistical mixture is not a combination using complex coefficients, but rather a combination using real-valued, positive probabilities of different states $\Phi_n$. A number $P_n$ represents the probability of a randomly selected system being in the state $\Phi_n$. Unlike the linear combination case each system is in a definite eigenstate.[4][5]

The expectation value $\langle A\rangle$ of an observable A is a statistical mean of measured values of the observable. It is this mean, and the distribution of probabilities, that is predicted by physical theories.

There is no state which is simultaneously an eigenstate for all observables. For example, we cannot prepare a state such that both the position measurement Q(t) and the momentum measurement P(t) (at the same time t) are known exactly; at least one of them will have a range of possible values.[note 2] This is the content of the Heisenberg uncertainty relation.

Moreover, in contrast to classical mechanics, it is unavoidable that performing a measurement on the system generally changes its state.[6][7][8] More precisely: After measuring an observable A, the system will be in an eigenstate of A; thus the state has changed, unless the system was already in that eigenstate. This expresses a kind of logical consistency: If we measure A twice in the same run of the experiment, the measurements being directly consecutive in time,[note 3] then they will produce the same results. This has some strange consequences, however, as follows.

Consider two incompatible observables, A and B, where A corresponds to a measurement earlier in time than B.[note 4] Suppose that the system is in an eigenstate of B at the experiment's beginning. If we measure only B, all runs of the experiment will yield the same result. If we measure first A and then B in the same run of the experiment, the system will transfer to an eigenstate of A after the first measurement, and we will generally notice that the results of B are statistical. Thus: Quantum mechanical measurements influence one another, and the order in which they are performed is important.

Another feature of quantum states becomes relevant if we consider a physical system that consists of multiple subsystems; for example, an experiment with two particles rather than one. Quantum physics allows for certain states, called entangled states, that show certain statistical correlations between measurements on the two particles which cannot be explained by classical theory. For details, see entanglement. These entangled states lead to experimentally testable properties (Bell's theorem) that allow us to distinguish between quantum theory and alternative classical (non-quantum) models.

18.1.2 Schrödinger picture vs. Heisenberg picture

One can take the observables to be dependent on time, while the state was fixed once at the beginning of the experiment. This approach is called the Heisenberg picture. (This approach was taken in the later part of the discussion above, with time-varying observables P(t), Q(t).) One can, equivalently, treat the observables as fixed, while the state of the system depends on time; that is known as the Schrödinger picture. (This approach was
taken in the earlier part of the discussion above, with a time-varying state $|\Psi(t)\rangle = \sum_n C_n(t)|\Phi_n\rangle$.) Conceptually (and mathematically), the two approaches are equivalent; choosing one of them is a matter of convention.

Both viewpoints are used in quantum theory. While non-relativistic quantum mechanics is usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, for quantum field theory. Compare with Dirac picture.[10]:65

18.2 Formalism in quantum physics

See also: Mathematical formulation of quantum mechanics

18.2.1 Pure states as rays in a Hilbert space

Quantum physics is most commonly formulated in terms of linear algebra, as follows. Any given system is identified with some finite- or infinite-dimensional Hilbert space. The pure states correspond to vectors of norm 1. Thus the set of all pure states corresponds to the unit sphere in the Hilbert space.

Multiplying a pure state by a scalar is physically inconsequential (as long as the state is considered by itself). If one vector is obtained from the other by multiplying by a scalar of unit magnitude, the two vectors are said to correspond to the same ray in Hilbert space[11] and also to the same point in the projective Hilbert space.

18.2.2 Bra–ket notation

Main article: Bra–ket notation

Calculations in quantum mechanics make frequent use of linear operators, scalar products, dual spaces and Hermitian conjugation. In order to make such calculations flow smoothly, and to make it unnecessary (in some contexts) to fully understand the underlying linear algebra, Paul Dirac invented a notation to describe quantum states, known as bra–ket notation. Although the details of this are beyond the scope of this article, some consequences of this are:

The expression used to denote a state vector (which corresponds to a pure quantum state) takes the form $|\psi\rangle$ (where the "$\psi$" can be replaced by any other symbols, letters, numbers, or even words). This can be contrasted with the usual mathematical notation, where vectors are usually bold, lower-case letters, or letters with arrows on top.

Dirac defined two kinds of vector, bra and ket, dual to each other.[12]

Each ket $|\psi\rangle$ is uniquely associated with a so-called bra, denoted $\langle\psi|$, which corresponds to the same physical quantum state. Technically, the bra is the adjoint of the ket. It is an element of the dual space, and related to the ket by the Riesz representation theorem. In a finite-dimensional space with a chosen basis, writing $|\psi\rangle$ as a column vector, $\langle\psi|$ is a row vector; to obtain it just take the transpose and entry-wise complex conjugate of $|\psi\rangle$.

Scalar products[13][14] (also called brackets) are written so as to look like a bra and ket next to each other: $\langle\psi_1|\psi_2\rangle$. (The phrase "bra-ket" is supposed to resemble "bracket".)

18.2.3 Spin

Main article: Mathematical formulation of quantum mechanics § Spin

The angular momentum has the same dimension ($\mathrm{M}\,\mathrm{L}^2\,\mathrm{T}^{-1}$) as the Planck constant and, at quantum scale, behaves as a discrete degree of freedom of a quantum system. Most particles possess a kind of intrinsic angular momentum that does not appear at all in classical mechanics and arises from Dirac's relativistic generalization of the theory. Mathematically it is described with spinors. In non-relativistic quantum mechanics the group representations of the Lie group SU(2) are used to describe this additional freedom. For a given particle, the choice of representation (and hence the range of possible values of the spin observable) is specified by a non-negative number S that, in units of Planck's reduced constant $\hbar$, is either an integer (0, 1, 2, ...) or a half-integer (1/2, 3/2, 5/2, ...). For a massive particle with spin S, its spin quantum number m always assumes one of the 2S + 1 possible values in the set

$$\{-S, -S+1, \ldots, +S-1, +S\}$$

As a consequence, the quantum state of a particle with spin is described by a vector-valued wave function with values in $\mathbb{C}^{2S+1}$. Equivalently, it is represented by a complex-valued function of four variables: one discrete quantum number variable (for the spin) is added to the usual three continuous variables (for the position in space).
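The counting rule above is easy to make concrete. The short sketch below is an illustration added here (the function name is mine, not from the text): it enumerates the 2S + 1 allowed values of the spin quantum number m for a given S, using exact rational arithmetic so that half-integers are handled without rounding.

```python
from fractions import Fraction

def spin_projections(S):
    """The 2S + 1 allowed spin quantum numbers {-S, -S + 1, ..., +S - 1, +S}
    for a particle of spin S, where S is a non-negative integer or half-integer."""
    S = Fraction(S)
    if S < 0 or (2 * S).denominator != 1:
        raise ValueError("S must be a non-negative integer or half-integer")
    return [-S + k for k in range(int(2 * S) + 1)]

# A spin-1/2 particle (e.g. an electron) has two projections, so its spin
# state lives in C^2; spin 1 gives three components (C^3), and so on.
print(spin_projections(Fraction(1, 2)))   # two values: -1/2 and +1/2
print(len(spin_projections(1)))           # 3 = 2S + 1 components
```

The length of the returned list is exactly the dimension $2S + 1$ of the spin part of the state space.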

18.2.4 Many-body states and particle


statistics
| = 1,
Further information: Particle statistics and for orthonormal basis this translates to

The quantum state of a system of N particles, each poten-


tially with spin, is described by a complex-valued function 2
|ci | = 1.
with four variables per particle, e.g. i

Expansions of this sort play an important role in mea-


surement in quantum mechanics. In particular, if the |ki
|(r1 , m1 ; . . . ; rN , mN ).
are eigenstates (with eigenvalues ki) of an observable, and
Here, the spin variables m assume values from the set that observable is measured on the normalized state | ,
then the probability that the result of the measurement is
ki is |ci|2 . (The normalization condition above mandates
that the total sum of probabilities is equal to one.)
{S , S + 1, . . . + S 1, +S }
A particularly important example is the position basis,
where S is the spin of th particle. S = 0 for a particle which is the basis consisting of eigenstates |r with eigen-
that does not exhibit spin. values r of the observable which corresponds to measur-
[lower-alpha 5]
The treatment of identical particles is very dierent for ing position. If these eigenstates are nondegen-
bosons (particles with integer spin) versus fermions (par- erate (for example, if the system is a single, spinless parti-
ticles with half-integer spin). The above N-particle func- cle), then any ket | is associated with a complex-valued
tion must either be symmetrized (in the bosonic case) function of three-dimensional space
or anti-symmetrized (in the fermionic case) with respect
to the particle numbers. If not all N particles are iden- (r) r|. [lower-alpha 6] i.e. r|r =
tical, but some of them are, then the function must be (r r) Dirac delta function), which means
(anti)symmetrized separately over the variables corre- that r|r = .
sponding to each group of identical variables, according
to its statistics (bosonic or fermionic). This function is called the wave function correspond-
Electrons are fermions with S = 1/2, photons (quanta of ing to | . Similarly to the discrete case above, the
light) are bosons with S = 1 (although in the vacuum they probability density of the particle being found at position
are massless and can't be described with Schrdingerian r is |(r)| and the normalized states have
2

mechanics).
When symmetrization or anti-symmetrization is unneces- 3
sary, N-particle spaces of states can be obtained simply d r|(r)|2 = 1
by tensor products of one-particle spaces, to which we
will return later. In terms of the continuous set of position basis |r , the
state | is:

18.2.5 Basis states of one-particle systems


As with any Hilbert space, if a basis is chosen for the | = d r (r)|r
3

Hilbert space of a system, then any ket can be expanded


as a linear combination of those basis elements. Symbol-
ically, given basis kets |ki , any ket | can be written 18.2.6 Superposition of pure states
Main article: Quantum superposition

| = ci |ki
i One aspect of quantum states, mentioned above, is that
superpositions of them can be formed. If | and | are
where ci are complex numbers. In physical terms, this
two kets corresponding to quantum states, the ket
is described by saying that | has been expressed as a
quantum superposition of the states |ki . If the basis kets
are chosen to be orthonormal (as is often the case), then
c = k | . c | + c |
i i

One property worth noting is that the normalized states is a dierent quantum state (possibly not normalized).
| are characterized by Note that which quantum state it is depends on both the
18.3. INTERPRETATION 135

amplitudes and phases (arguments) of $c_\alpha$ and $c_\beta$. In other words, for example, even though $|\psi\rangle$ and $e^{i\theta}|\psi\rangle$ (for real $\theta$) correspond to the same physical quantum state, they are not interchangeable, since for example $|\alpha\rangle + |\beta\rangle$ and $|\alpha\rangle + e^{i\theta}|\beta\rangle$ do not (in general) correspond to the same physical state. However, $|\alpha\rangle + |\beta\rangle$ and $e^{i\theta}(|\alpha\rangle + |\beta\rangle)$ do correspond to the same physical state. This is sometimes described by saying that "global" phase factors are unphysical, but "relative" phase factors are physical and important.

One example of a quantum interference phenomenon that arises from superposition is the double-slit experiment. The photon state is a superposition of two different states, one of which corresponds to the photon having passed through the left slit, and the other corresponding to passage through the right slit. The relative phase of those two states has a value which depends on the distance from each of the two slits. Depending on what that phase is, the interference is constructive at some locations and destructive in others, creating the interference pattern. By the analogy with coherence in other wave phenomena, a superposed state can be referred to as a coherent superposition.

Another example of the importance of relative phase in quantum superposition is Rabi oscillations, where the relative phase of two states varies in time due to the Schrödinger equation. The resulting superposition ends up oscillating back and forth between two different states.

18.2.7 Mixed states

See also: Density matrix

A pure quantum state is a state which can be described by a single ket vector, as described above. A mixed quantum state is a statistical ensemble of pure states (see quantum statistical mechanics). Mixed states inevitably arise from pure states when, for a composite quantum system $H_1 \otimes H_2$ with an entangled state on it, the part $H_2$ is inaccessible to the observer. The state of the part $H_1$ is expressed then as the partial trace over $H_2$.

A mixed state cannot be described as a ket vector. Instead, it is described by its associated density matrix (or density operator), usually denoted $\rho$. Note that density matrices can describe both mixed and pure states, treating them on the same footing. Moreover, a mixed quantum state on a given quantum system described by a Hilbert space $H$ can always be represented as the partial trace of a pure quantum state (called a purification) on a larger bipartite system $H \otimes K$ for a sufficiently large Hilbert space $K$.

The density matrix describing a mixed state is defined to be an operator of the form

$$\rho = \sum_s p_s |\psi_s\rangle\langle\psi_s|$$

where $p_s$ is the fraction of the ensemble in each pure state $|\psi_s\rangle$. The density matrix can be thought of as a way of using the one-particle formalism to describe the behavior of many similar particles by giving a probability distribution (or ensemble) of states that these particles can be found in.

A simple criterion for checking whether a density matrix is describing a pure or mixed state is that the trace of $\rho^2$ is equal to 1 if the state is pure, and less than 1 if the state is mixed.[note 7][16] Another, equivalent, criterion is that the von Neumann entropy is 0 for a pure state, and strictly positive for a mixed state.

The rules for measurement in quantum mechanics are particularly simple to state in terms of density matrices. For example, the ensemble average (expectation value) of a measurement corresponding to an observable A is given by

$$\langle A\rangle = \sum_s p_s \langle\psi_s|A|\psi_s\rangle = \sum_s \sum_i p_s a_i |\langle\alpha_i|\psi_s\rangle|^2 = \operatorname{tr}(\rho A)$$

where $|\alpha_i\rangle$ and $a_i$ are eigenkets and eigenvalues, respectively, for the operator A, and "tr" denotes trace. It is important to note that two types of averaging are occurring, one being a weighted quantum superposition over the basis kets $|\psi_s\rangle$ of the pure states, and the other being a statistical (said incoherent) average with the probabilities $p_s$ of those states.

According to Wigner,[17] the concept of mixture was put forward by Landau.[18][19]:38–41

18.3 Interpretation

Main article: Interpretations of quantum mechanics

Although theoretically, for a given quantum system, a state vector provides the full information about its evolution, it is not easy to understand what information about the real world it carries. Due to the uncertainty principle, a state, even if it has the value of one observable exactly defined (i.e. the observable has this state as an eigenstate), cannot exactly define values of all observables.

For state vectors (pure states), probability amplitudes offer a probabilistic interpretation. It can be generalized for all states (including mixed), for instance, as expectation values mentioned above.
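The density-matrix formulas above can be made concrete for the two-dimensional spin-1/2 Hilbert space discussed earlier. The sketch below is an illustration added here; the particular states, probabilities, and variable names are my own assumptions, not from the text. It contrasts a coherent superposition (a pure state) with a 50/50 statistical ensemble (a mixed state), and checks both the purity criterion $\operatorname{tr}(\rho^2)$ and the ensemble average $\langle A\rangle = \operatorname{tr}(\rho A)$.

```python
import numpy as np

# Spin-1/2 basis states (hbar = 1 throughout, an assumption for simplicity).
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

def dm(psi):
    """Density matrix |psi><psi| of a normalized pure state."""
    return np.outer(psi, psi.conj())

pure = dm((up + down) / np.sqrt(2))        # coherent superposition (pure)
mixed = 0.5 * dm(up) + 0.5 * dm(down)      # 50/50 statistical ensemble (mixed)

Sz = 0.5 * np.diag([1.0, -1.0])            # observable S_z

for rho in (pure, mixed):
    print(np.trace(rho).real,              # tr(rho) = 1 for both
          np.trace(rho @ rho).real,        # tr(rho^2): 1 if pure, < 1 if mixed
          np.trace(rho @ Sz).real)         # ensemble average <S_z> = tr(rho S_z)
```

Both states give the same $\langle S_z\rangle$, yet $\operatorname{tr}(\rho^2)$ distinguishes them: 1 for the superposition, 1/2 for the ensemble, illustrating why the mixture cannot be written as any single ket.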
18.4 Mathematical generalizations

States can be formulated in terms of observables, rather than as vectors in a vector space. These are positive normalized linear functionals on a C*-algebra, or sometimes other classes of algebras of observables. See State on a C*-algebra and Gelfand–Naimark–Segal construction for more details.

18.5 See also

Atomic electron transition
Bloch sphere
Ground state
Introduction to quantum mechanics
No-cloning theorem
Orthonormal basis
PBR theorem
Quantum harmonic oscillator
Qubit
State vector reduction, for historical reasons called a wave function collapse
Stationary state
W state

18.6 Notes

[1] Sometimes written "|ψ>"; see angle brackets.
[2] To avoid misunderstandings: Here we mean that Q(t) and P(t) are measured in the same state, but not in the same run of the experiment.
[3] i.e. separated by a zero delay. One can think of it as stopping the time, then making the two measurements one after the other, then resuming the time. Thus, the measurements occurred at the same time, but it is still possible to tell which was first.
[4] For concreteness' sake, suppose that A = Q(t₁) and B = P(t₂) in the above example, with t₂ > t₁ > 0.
[5] Note that a state $|\psi\rangle$ is a superposition of different basis states $|\mathbf{r}\rangle$, so $|\psi\rangle$ and $|\mathbf{r}\rangle$ are elements of the same Hilbert space. A particle in state $|\mathbf{r}\rangle$ is located precisely at position $\mathbf{r} = (x, y, z)$, while a particle in state $|\psi\rangle$ can be found at different positions with corresponding probabilities.
[6] In the continuous case, the basis kets $|\mathbf{r}\rangle$ are not unit kets (unlike the state $|\psi\rangle$): they are normalized according to $\int d^3\mathbf{r}'\, \langle\mathbf{r}|\mathbf{r}'\rangle = 1$,[15] i.e. $\langle\mathbf{r}|\mathbf{r}'\rangle = \delta(\mathbf{r} - \mathbf{r}')$ (the Dirac delta function), which means that $\langle\mathbf{r}|\mathbf{r}\rangle = \infty$.
[7] Note that this criterion works when the density matrix is normalized so that the trace of $\rho$ is 1, as it is for the standard definition given in this section. Occasionally a density matrix will be normalized differently, in which case the criterion is $\operatorname{Tr}(\rho^2) = (\operatorname{Tr}\rho)^2$.

18.7 References

[1] Weinberg, S. (2002), The Quantum Theory of Fields, I, Cambridge University Press, ISBN 0-521-55001-7
[2] Griffiths, David J. (2004), Introduction to Quantum Mechanics (2nd ed.), Prentice Hall, ISBN 0-13-111892-7
[3] Ballentine, L. E. (1970), "The Statistical Interpretation of Quantum Mechanics", Reviews of Modern Physics, 42: 358–381, Bibcode:1970RvMP...42..358B, doi:10.1103/RevModPhys.42.358
[4] "Statistical Mixture of States"
[5] "Archived copy". Archived from the original on January 15, 2012. Retrieved January 24, 2012.
[6] Heisenberg, W. (1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", Z. Phys. 43: 172–198. Translation as "The actual content of quantum theoretical kinematics and mechanics". Also translated as "The physical content of quantum kinematics and mechanics" at pp. 62–84 by editors John Wheeler and Wojciech Zurek, in Quantum Theory and Measurement (1983), Princeton University Press, Princeton NJ.
[7] Bohr, N. (1927/1928). "The quantum postulate and the recent development of atomic theory", Nature Supplement, April 14, 1928, 121: 580–590.
[8] Dirac (1958),[9] p. 4: "If a system is small, we cannot observe it without producing a serious disturbance."
[9] Dirac, P.A.M. (1958). The Principles of Quantum Mechanics, 4th edition, Oxford University Press, Oxford UK.
[10] Gottfried, Kurt; Yan, Tung-Mow (2003). Quantum Mechanics: Fundamentals (2nd, illustrated ed.). Springer. ISBN 9780387955766.
[11] Weinberg, Steven. The Quantum Theory of Fields, Vol. 1. Cambridge University Press, 1995, p. 50.
[12] Dirac (1958),[9] p. 20: "The bra vectors, as they have been here introduced, are quite a different kind of vector from the kets, and so far there is no connexion between them except for the existence of a scalar product of a bra and a ket."
[13] Dirac (1958),[9] p. 19: "A scalar product ⟨B|A⟩ now appears as a complete bracket expression."
[14] Gottfried (2013),[10] p. 31: "to define the scalar products as being between bras and kets."
[15] Landau (1965),[19] p. 17: "$\int \Psi_{f'}\Psi_f^*\, dq = \delta(f' - f)$" (the left side corresponds to $\langle f|f'\rangle$), "$\int \delta(f' - f)\, df' = 1$".
[16] Blum, Density Matrix Theory and Applications, page 39.
18.8. FURTHER READING 137

[17] Eugene Wigner (1962). Remarks on the mind-body


question (PDF). In I.J. Good. The Scientist Speculates.
London: Heinemann. pp. 284302. Footnote 13 on
p.180

[18] Lev Landau (1927). Das Dmpfungsproblem in


der Wellenmechanik (The Damping Problem in
Wave Mechanics)". Zeitschrift fr Physik. 45
(56): 430441. Bibcode:1927ZPhy...45..430L.
doi:10.1007/bf01343064. English translation reprinted
in: D. Ter Haar, ed. (1965). Collected papers of L.D.
Landau. Oxford: Pergamon Press. p.818

[19] Lev Landau; Evgeny Lifshitz (1965). Quantum Mechanics


Non-Relativistic Theory (PDF). Course of Theoretical
Physics. 3 (2nd ed.). London: Pergamon Press.

18.8 Further reading


The concept of quantum states, in particular the content of the section "Formalism in quantum physics" above, is covered in most standard textbooks on quantum mechanics.
For a discussion of conceptual aspects and a comparison
with classical states, see:

Isham, Chris J. (1995). Lectures on Quantum Theory: Mathematical and Structural Foundations. Imperial College Press. ISBN 978-1-86094-001-9.

For a more detailed coverage of mathematical aspects, see:

Bratteli, Ola; Robinson, Derek W. (1987). Operator Algebras and Quantum Statistical Mechanics 1 (2nd ed.). Springer. ISBN 978-3-540-17093-8. In particular, see Sec. 2.3.

For a discussion of purifications of mixed quantum states, see Chapter 2 of John Preskill's lecture notes for Physics 219 at Caltech.
Chapter 19

Hermitian matrix

In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose; that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

aij = a̅ji, or A = A̅T in matrix form.

Hermitian matrices can be understood as the complex extension of real symmetric matrices.

If the conjugate transpose of a matrix A is denoted by AH, then the Hermitian property can be written concisely as

A = AH.

Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share a property with real symmetric matrices of always having real eigenvalues.

19.1 Examples

In this section, the conjugate transpose of matrix A is denoted as AH, the transpose of matrix A is denoted as AT, and the conjugate of matrix A is denoted as A̅.

See the following example:

⎡ 2     2+i   4 ⎤
⎢ 2−i   3     i ⎥
⎣ 4     −i    1 ⎦

The diagonal elements must be real, as they must be their own complex conjugates.

Well-known families of Pauli matrices, Gell-Mann matrices and their generalizations are Hermitian. In theoretical physics such Hermitian matrices are often multiplied by imaginary coefficients,[1][2] which results in skew-Hermitian matrices (see below).

Here we offer another useful Hermitian matrix using an abstract example. If a square matrix A equals the multiplication of a matrix and its conjugate transpose, that is, A = BBH, then A is a Hermitian positive semi-definite matrix. Furthermore, if B is row full-rank, then A is positive definite.

19.2 Properties

The entries on the main diagonal (top left to bottom right) of any Hermitian matrix are necessarily real, because they have to be equal to their complex conjugates.

Because of conjugation, for complex-valued entries the off-diagonal elements cannot be symmetric (i.e., the same). Hence, a matrix that has only real entries is Hermitian if and only if it is a symmetric matrix, i.e., symmetric with respect to the main diagonal. A real symmetric matrix is simply a special case of a Hermitian matrix.

Every Hermitian matrix is a normal matrix.

The finite-dimensional spectral theorem says that any Hermitian matrix can be diagonalized by a unitary matrix, and that the resulting diagonal matrix has only real entries. This implies that all eigenvalues of a Hermitian matrix A with dimension n are real, and that A has n linearly independent eigenvectors. Moreover, a Hermitian matrix has orthogonal eigenvectors for distinct eigenvalues. Even if there are degenerate eigenvalues, it is always possible to find an orthogonal basis of Cn consisting of n eigenvectors of A.

The sum of any two Hermitian matrices is Hermitian, and the inverse of an invertible Hermitian matrix is Hermitian as well. However, the product of two Hermitian matrices A and B is Hermitian if and only if AB = BA. Thus An is Hermitian if A is Hermitian and n is an integer.

For an arbitrary complex-valued vector v, the product vHAv is real because vHAv = (vHAv)H. This is especially important in quantum physics, where Hermitian matrices are operators that measure

properties of a system, e.g. total spin, which have to be real.

The Hermitian complex n-by-n matrices do not form a vector space over the complex numbers, since the identity matrix In is Hermitian, but iIn is not. However, the complex Hermitian matrices do form a vector space over the real numbers R. In the 2n²-dimensional vector space of complex n × n matrices over R, the complex Hermitian matrices form a subspace of dimension n². If Ejk denotes the n-by-n matrix with a 1 in the j,k position and zeros elsewhere, a basis can be described as follows:

Ejj for 1 ≤ j ≤ n (n matrices)

together with the set of matrices of the form

Ejk + Ekj for 1 ≤ j < k ≤ n ((n² − n)/2 matrices)

and the matrices

i(Ejk − Ekj) for 1 ≤ j < k ≤ n ((n² − n)/2 matrices)

where i denotes the complex number √−1, known as the imaginary unit.

If n orthonormal eigenvectors u1, ..., un of a Hermitian matrix are chosen and written as the columns of the matrix U, then one eigendecomposition of A is A = UΛUH, where UUH = I = UHU, and therefore

A = Σj λj uj ujH,

where the λj are the eigenvalues on the diagonal of the diagonal matrix Λ.

19.3 Further properties

Additional facts related to Hermitian matrices include:

The sum of a square matrix and its conjugate transpose (C + CH) is Hermitian.

The difference of a square matrix and its conjugate transpose (C − CH) is skew-Hermitian (also called antihermitian). This implies that the commutator of two Hermitian matrices is skew-Hermitian.

An arbitrary square matrix C can be written as the sum of a Hermitian matrix A and a skew-Hermitian matrix B:

C = A + B with A = (1/2)(C + CH) and B = (1/2)(C − CH)

The determinant of a Hermitian matrix is real: det(A) = det(AT), and det(AH) = conj(det(A)); since A = AH, we get det(A) = conj(det(A)), so det(A) is real. (Alternatively, the determinant is the product of the matrix's eigenvalues, and as mentioned before, the eigenvalues of a Hermitian matrix are real.)

19.4 Rayleigh quotient

Main article: Rayleigh quotient

In mathematics, for a given complex Hermitian matrix M and nonzero vector x, the Rayleigh quotient[3] R(M, x) is defined as:[4][5]

R(M, x) := xHMx / xHx.

For real matrices and vectors, the condition of being Hermitian reduces to that of being symmetric, and the conjugate transpose xH to the usual transpose xT. Note that R(M, cx) = R(M, x) for any non-zero real scalar c. Recall that a Hermitian (or real symmetric) matrix has real eigenvalues. It can be shown that, for a given matrix, the Rayleigh quotient reaches its minimum value λmin (the smallest eigenvalue of M) when x is vmin (the corresponding eigenvector). Similarly, R(M, x) ≤ λmax and R(M, vmax) = λmax.

The Rayleigh quotient is used in the min-max theorem to get exact values of all eigenvalues. It is also used in eigenvalue algorithms to obtain an eigenvalue approximation from an eigenvector approximation. Specifically, this is the basis for Rayleigh quotient iteration.

The range of the Rayleigh quotient (for a matrix that is not necessarily Hermitian) is called a numerical range (or spectrum in functional analysis). When the matrix is Hermitian, the numerical range is equal to the spectral norm. Still in functional analysis, λmax is known as the spectral radius. In the context of C*-algebras or algebraic quantum mechanics, the function that to M associates the Rayleigh–Ritz quotient R(M, x) for a fixed x and M varying through the algebra would be referred to as "vector state" of the algebra.

19.5 See also


Skew-Hermitian matrix (anti-Hermitian matrix)

Haynsworth inertia additivity formula


Hermitian form

Self-adjoint operator

Unitary matrix

19.6 References
[1] Frankel, Theodore (2004). The geometry of physics: an
introduction. Cambridge University Press. p. 652. ISBN
0-521-53927-7.

[2] Physics 125 Course Notes at California Institute of Technology.

[3] Also known as the Rayleigh–Ritz ratio; named after Walther Ritz and Lord Rayleigh.

[4] Horn, R. A. and C. R. Johnson. 1985. Matrix Analysis. Cambridge University Press. pp. 176–180.

[5] Parlett, B. N. The Symmetric Eigenvalue Problem, SIAM, Classics in Applied Mathematics, 1998.

19.7 External links


Hazewinkel, Michiel, ed. (2001), "Hermitian matrix", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

Visualizing Hermitian Matrix as an Ellipse with Dr. Geo, by Chao-Kuei Hung from Shu-Te University, gives a more geometric explanation.
Hermitian Matrices. MathPages.com.
Chapter 20

Trace (linear algebra)

In linear algebra, the trace of an n-by-n square matrix A is defined to be the sum of the elements on the main diagonal (the diagonal from the upper left to the lower right) of A, i.e.,

tr(A) = a11 + a22 + ··· + ann = Σi aii

where aii denotes the entry on the i-th row and i-th column of A. The trace of a matrix is the sum of the (complex) eigenvalues, and it is invariant with respect to a change of basis. This characterization can be used to define the trace of a linear operator in general. Note that the trace is only defined for a square matrix (i.e., n × n).

The trace (often abbreviated to tr) is related to the derivative of the determinant (see Jacobi's formula).

20.1 Example

Let A be a matrix, with

A = ⎡ a b c ⎤
    ⎢ d e f ⎥
    ⎣ g h i ⎦

Then

tr(A) = a + e + i

20.2 Properties

20.2.1 Basic properties

The trace is a linear mapping. That is,

tr(A + B) = tr(A) + tr(B)
tr(cA) = c tr(A)

for all square matrices A and B, and all scalars c.

A matrix and its transpose have the same trace:

tr(A) = tr(AT)

This follows immediately from the fact that transposing a square matrix does not affect the elements along the main diagonal.

20.2.2 Trace of a product

The trace of a product can be rewritten as the sum of entry-wise products of elements:

tr(XTY) = tr(XYT) = tr(YTX) = tr(YXT) = Σi,j Xij Yij

This means that the trace of a product of matrices functions similarly to a dot product of vectors. For this reason, generalizations of vector operations to matrices (e.g. in matrix calculus and statistics) often involve a trace of matrix products. For real matrices, the trace of a product can also be written in the forms above.

The matrices in a trace of a product can be switched without changing the result: if A is an m × n matrix and B is an n × m matrix, then

tr(AB) = tr(BA).[1]

More generally, the trace is invariant under cyclic permutations, i.e.,

tr(ABCD) = tr(BCDA) = tr(CDAB) = tr(DABC)

This is known as the cyclic property.

Note that arbitrary permutations are not allowed: in general,

tr(ABC) ≠ tr(ACB)
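The identities above can be checked numerically; a minimal sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
C = rng.normal(size=(3, 3))

# Linearity and invariance under transposition.
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
assert np.isclose(np.trace(2.5 * A), 2.5 * np.trace(A))
assert np.isclose(np.trace(A), np.trace(A.T))

# Entry-wise product form: tr(X^T Y) = sum_ij X_ij Y_ij.
assert np.isclose(np.trace(A.T @ B), np.sum(A * B))

# Cyclic property: tr(ABC) = tr(BCA) = tr(CAB).
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))

# ...but a non-cyclic permutation generally changes the value.
assert not np.isclose(np.trace(A @ B @ C), np.trace(A @ C @ B))
```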

However, if products of three symmetric matrices are considered, any permutation is allowed. (Proof: tr(ABC) = tr(ATBTCT) = tr(AT(CB)T) = tr((CB)TAT) = tr((ACB)T) = tr(ACB), where the last equality holds because the traces of a matrix and its transpose are equal.) For more than three factors this is not true.

Unlike the determinant, the trace of the product is not the product of traces, that is:

tr(XY) ≠ tr(X) tr(Y)

What is true is that the trace of the Kronecker product of two matrices is the product of their traces:

tr(X ⊗ Y) = tr(X) tr(Y)

20.2.3 Other properties

The following three properties:

tr(A + B) = tr(A) + tr(B)
tr(cA) = c tr(A)
tr(AB) = tr(BA)

characterize the trace completely in the sense that follows. Let f be a linear functional on the space of square matrices satisfying f(xy) = f(yx). Then f and tr are proportional.[2]

The trace is similarity-invariant, which means that A and P−1AP have the same trace. This is because

tr(P−1AP) = tr(P−1(AP)) = tr((AP)P−1) = tr(A(PP−1)) = tr(A)

If A is symmetric and B is antisymmetric, then

tr(AB) = 0

The trace of the identity matrix is the dimension of the space; this leads to generalizations of dimension using trace. The trace of an idempotent matrix A (for which A² = A) is the rank of A. The trace of a nilpotent matrix is zero.

More generally, if f(x) = (x − λ1)^d1 ··· (x − λk)^dk is the characteristic polynomial of a matrix A, then

tr(A) = d1λ1 + ··· + dkλk

When both A and B are n-by-n, the trace of the (ring-theoretic) commutator of A and B vanishes: tr([A, B]) = 0; one can state this as "the trace is a map of Lie algebras gln → k from operators to scalars", as the commutator of scalars is trivial (it is an abelian Lie algebra). In particular, using similarity invariance, it follows that the identity matrix is never similar to the commutator of any pair of matrices.

Conversely, any square matrix with zero trace is a linear combination of the commutators of pairs of matrices.[3] Moreover, any square matrix with zero trace is unitarily equivalent to a square matrix with diagonal consisting of all zeros.

The trace of any power of a nilpotent matrix is zero. When the characteristic of the base field is zero, the converse also holds: if tr(x^k) = 0 for all k, then x is nilpotent.

The trace of a Hermitian matrix is real, because the elements on the diagonal are real.

The trace of a projection matrix is the dimension of the target space. If

PX = X(XTX)−1XT

then

tr(PX) = rank(X)

20.3 Exponential trace

Expressions like tr(exp(A)), where A is a square matrix, occur so often in some fields (e.g. multivariate statistical theory) that a shorthand notation has become common:

tre(A) := tr(exp(A))

This is sometimes referred to as the exponential trace function; it is used in the Golden–Thompson inequality.

20.4 Trace of a linear operator

Given some linear map f : V → V (where V is a finite-dimensional vector space), we can define the trace of this map by considering the trace of a matrix representation of f, that is, choosing a basis for V, describing f as a matrix relative to this basis, and taking the trace of this square matrix. The result will not depend on the basis chosen, since different bases will give rise to similar matrices, allowing for the possibility of a basis-independent definition for the trace of a linear map.

Such a definition can be given using the canonical isomorphism between the space End(V) of linear maps on V and V ⊗ V∗, where V∗ is the dual space of V. Let v be

in V and let f be in V∗. Then the trace of the indecomposable element v ⊗ f is defined to be f(v); the trace of a general element is defined by linearity. Using an explicit basis for V and the corresponding dual basis for V∗, one can show that this gives the same definition of the trace as given above.

20.4.1 Eigenvalue relationships

If A is a linear operator represented by a square n-by-n matrix with real or complex entries and if λ1, ..., λn are the eigenvalues of A (listed according to their algebraic multiplicities), then

tr(A) = Σi λi

This follows from the fact that A is always similar to its Jordan form, an upper triangular matrix having λ1, ..., λn on the main diagonal. In contrast, the determinant of A is the product of its eigenvalues; i.e.,

det(A) = Πi λi

More generally,

tr(A^k) = Σi λi^k

20.4.2 Derivatives

The trace corresponds to the derivative of the determinant: it is the Lie algebra analog of the (Lie group) map of the determinant. This is made precise in Jacobi's formula for the derivative of the determinant.

As a particular case, at the identity, the derivative of the determinant actually amounts to the trace: tr = det′I. From this (or from the connection between the trace and the eigenvalues), one can derive a connection between the trace function, the exponential map between a Lie algebra and its Lie group (or concretely, the matrix exponential function), and the determinant:

det(exp(A)) = exp(tr(A))

For example, consider the one-parameter family of linear transformations given by rotation through angle θ,

Rθ = ⎡ cos θ  −sin θ ⎤
     ⎣ sin θ   cos θ ⎦

These transformations all have determinant 1, so they preserve area. The derivative of this family at θ = 0, the identity rotation, is the antisymmetric matrix

A = ⎡ 0  −1 ⎤
    ⎣ 1   0 ⎦

which clearly has trace zero, indicating that this matrix represents an infinitesimal transformation which preserves area.

A related characterization of the trace applies to linear vector fields. Given a matrix A, define a vector field F on Rⁿ by F(x) = Ax. The components of this vector field are linear functions (given by the rows of A). Its divergence div F is a constant function, whose value is equal to tr(A). By the divergence theorem, one can interpret this in terms of flows: if F(x) represents the velocity of a fluid at location x and U is a region in Rⁿ, the net flow of the fluid out of U is given by tr(A) · vol(U), where vol(U) is the volume of U.

The trace is a linear operator, hence it commutes with the derivative:

d tr(X) = tr(dX).

20.5 Applications

The trace of a 2-by-2 complex matrix is used to classify Möbius transformations. First the matrix is normalized to make its determinant equal to one. Then, if the square of the trace is 4, the corresponding transformation is parabolic. If the square is in the interval [0, 4), it is elliptic. Finally, if the square is greater than 4, the transformation is loxodromic. See classification of Möbius transformations.

The trace is used to define characters of group representations. Two representations A, B : G → GL(V) of a group G are equivalent (up to change of basis on V) if tr A(g) = tr B(g) for all g ∈ G.

The trace also plays a central role in the distribution of quadratic forms.

20.6 Lie algebra

The trace is a map of Lie algebras tr : gln → k from the Lie algebra gln of operators on an n-dimensional space (n × n matrices) to the Lie algebra k of scalars; as k is abelian (the Lie bracket vanishes), the fact that this is a map of Lie algebras is exactly the statement that the trace of a bracket vanishes:

tr([A, B]) = 0
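The relations tr(A) = Σλi, det(exp(A)) = exp(tr(A)), and tr([A, B]) = 0 can be illustrated with the rotation example from this section; a sketch assuming NumPy (the closed form exp(A) = Rθ for this particular generator avoids needing a general matrix exponential):

```python
import numpy as np

# The infinitesimal rotation from the text, scaled: A = theta * [[0, -1], [1, 0]].
theta = 0.7
A = theta * np.array([[0.0, -1.0], [1.0, 0.0]])

# For this generator, exp(A) is exactly the rotation R_theta.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# det(exp(A)) = exp(tr(A)): the rotation has determinant 1 = exp(0).
assert np.isclose(np.linalg.det(R), np.exp(np.trace(A)))

# tr(A) equals the sum of the eigenvalues of A (here +i*theta and -i*theta).
assert np.isclose(np.trace(A), np.sum(np.linalg.eigvals(A)).real)

# The trace of any commutator vanishes: tr([X, Y]) = 0.
rng = np.random.default_rng(2)
X, Y = rng.normal(size=(2, 3, 3))
assert np.isclose(np.trace(X @ Y - Y @ X), 0.0)
```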

The kernel of this map, a matrix whose trace is zero, is often said to be traceless or tracefree, and these matrices form the simple Lie algebra sln, which is the Lie algebra of the special linear group of matrices with determinant 1. The special linear group consists of the matrices which do not change volume, while the special linear algebra is the matrices which infinitesimally do not change volume.

In fact, there is an internal direct sum decomposition gln = sln ⊕ k of operators/matrices into traceless operators/matrices and scalar operators/matrices. The projection map onto scalar operators can be expressed in terms of the trace, concretely as:

A ↦ (1/n) tr(A) I

Formally, one can compose the trace (the counit map) with the unit map k → gln of "inclusion of scalars" to obtain a map gln → gln mapping onto scalars, and multiplying by n. Dividing by n makes this a projection, yielding the formula above.

In terms of short exact sequences, one has

0 → sln → gln →tr k → 0

which is analogous to

1 → SLn → GLn →det K× → 1

for Lie groups. However, the trace splits naturally (via 1/n times scalars) so gln = sln ⊕ k, but the splitting of the determinant would be as the nth root times scalars, and this does not in general define a function, so the determinant does not split and the general linear group does not decompose: GLn ≠ SLn × K×.

20.6.1 Bilinear forms

The bilinear form

B(x, y) = tr(ad(x) ad(y)) where ad(x)y = [x, y] = xy − yx

is called the Killing form, which is used for the classification of Lie algebras.

The trace defines a bilinear form:

(x, y) ↦ tr(xy)

(x, y square matrices). The form is symmetric, non-degenerate[4] and associative in the sense that:

tr(x[y, z]) = tr([x, y]z)

For a complex simple Lie algebra (e.g., sln), every such bilinear form is proportional to each other; in particular, to the Killing form.

Two matrices x and y are said to be trace orthogonal if

tr(xy) = 0

20.7 Inner product

For an m-by-n matrix A with complex (or real) entries and ∗ being the conjugate transpose, we have

tr(A∗A) ≥ 0

with equality if and only if A = 0. The assignment

⟨A, B⟩ = tr(A∗B)

yields an inner product on the space of all complex (or real) m-by-n matrices.

The norm derived from the above inner product is called the Frobenius norm, which satisfies the submultiplicative property as a matrix norm. Indeed, it is simply the Euclidean norm if the matrix is considered as a vector of length m·n.

It follows that if A and B are real positive semi-definite matrices of the same size then

0 ≤ [tr(AB)]² ≤ tr(A²) tr(B²) ≤ [tr(A)]² [tr(B)]².[5]

20.8 Generalizations

The concept of trace of a matrix is generalized to the trace class of compact operators on Hilbert spaces, and the analog of the Frobenius norm is called the Hilbert–Schmidt norm.

If K is trace-class, then for any orthonormal basis (φn)n, the trace is given by

tr(K) = Σn ⟨φn, Kφn⟩,

and is finite and independent of the orthonormal basis.[6]

The partial trace is another generalization of the trace that is operator-valued. The trace of a linear operator Z which lives on a product space A ⊗ B is equal to the partial traces over A and B: tr(Z) = trA(trB(Z)) = trB(trA(Z)).

For more properties and a generalization of the partial trace, see the article on traced monoidal categories.

If A is a general associative algebra over a field k, then a trace on A is often defined to be any map tr : A → k which vanishes on commutators: tr([a, b]) = 0 for all a, b in A. Such a trace is not uniquely defined; it can always at least be modified by multiplication by a nonzero scalar.

A supertrace is the generalization of a trace to the setting of superalgebras.

The operation of tensor contraction generalizes the trace to arbitrary tensors.

20.9 Coordinate-free definition

We can identify the space of linear operators on a vector space V, defined over the field F, with the space V ⊗ V∗, where v ⊗ h = (w ↦ h(w)v). We also have a canonical bilinear function t : V × V∗ → F that consists of applying an element w∗ of V∗ to an element v of V to get an element of F, in symbols t(v, w∗) := w∗(v) ∈ F. This induces a linear function on the tensor product (by its universal property) t : V ⊗ V∗ → F, which, as it turns out, when that tensor product is viewed as the space of operators, is equal to the trace.

This also clarifies why tr(AB) = tr(BA) and why tr(AB) ≠ tr(A) tr(B), as composition of operators (multiplication of matrices) and trace can be interpreted as the same pairing. Viewing End(V) ≅ V ⊗ V∗, one may interpret the composition map End(V) ⊗ End(V) → End(V) as

(V ⊗ V∗) ⊗ (V ⊗ V∗) → (V ⊗ V∗)

coming from the pairing V∗ × V → F on the middle terms. Taking the trace of the product then comes from pairing on the outer terms, while taking the product in the opposite order and then taking the trace just switches which pairing is applied first. On the other hand, taking the trace of A and the trace of B corresponds to applying the pairing on the left terms and on the right terms (rather than on inner and outer), and is thus different.

In coordinates, this corresponds to indexes: multiplication is given by (AB)ik = Σj aij bjk, so tr(AB) = Σij aij bji and tr(BA) = Σij bij aji, which is the same, while tr(A) tr(B) = Σi aii · Σj bjj, which is different.

More abstractly, this corresponds to the decomposition gln = sln ⊕ k, as tr(AB) = tr(BA) (equivalently, tr([A, B]) = 0) defines the trace on sln, which has as complement the scalar matrices, and leaves one degree of freedom: any such map is determined by its value on scalars, which is one scalar parameter, and hence all such maps are multiples of the trace, a non-zero such map.

For V finite-dimensional, with basis {ei} and dual basis {ei∗}, ei ⊗ ej∗ corresponds to the ij-entry of the matrix of an operator with respect to that basis. Any operator A is therefore a sum of the form A = Σ aij ei ⊗ ej∗. With t defined as above, t(A) = Σ aij t(ei ⊗ ej∗). The latter is just the Kronecker delta, being 1 if i = j and 0 otherwise. This shows that t(A) is simply the sum of the coefficients along the diagonal. This method, moreover, makes coordinate invariance an immediate consequence of the definition.

20.9.1 Dual

Further, one may dualize this map, obtaining a map F → V ⊗ V∗ = End(V). This map is precisely the inclusion of scalars, sending 1 ∈ F to the identity matrix: "trace is dual to scalars". In the language of bialgebras, scalars are the unit, while trace is the counit.

One can then compose these, F →I End(V) →tr F, which yields multiplication by n, as the trace of the identity is the dimension of the vector space.

20.10 See also

Trace of a tensor with respect to a metric tensor

Characteristic function

Field trace

Golden–Thompson inequality

Specht's theorem

Trace class

Trace inequalities

von Neumann's trace inequality

20.11 Notes

[1] This is immediate from the definition of the matrix product:

tr(AB) = Σi (AB)ii = Σi Σj Aij Bji = Σj Σi Bji Aij = Σj (BA)jj = tr(BA)

[2] Proof: with the standard basis eij, f(eij) = 0 if and only if i ≠ j, and f(ejj) = f(e11) for all j; thus

f(A) = Σij [A]ij f(eij) = Σi [A]ii f(e11) = f(e11) tr(A)

[3] Proof: sln is a semisimple Lie algebra, and thus every element in it is a linear combination of commutators of some pairs of elements; otherwise the derived algebra would be a proper ideal.

[4] This follows from the fact that tr(A∗A) = 0 if and only if A = 0.

[5] Can be proven with the Cauchy–Schwarz inequality.

[6] G. Teschl, Mathematical Methods in Quantum Mechanics, Graduate Studies in Mathematics 157 (2014).

20.12 External links


Hazewinkel, Michiel, ed. (2001), "Trace of a square matrix", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
Chapter 21

Symmetric matrix

In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if

A = AT.

Because equal matrices have equal dimensions, only square matrices can be symmetric.

The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if the entries are written as A = (aij), then aij = aji for all indices i and j.

The following 3 × 3 matrix is symmetric:

⎡ 1  7  3 ⎤
⎢ 7  4  5 ⎥
⎣ 3  5  6 ⎦

Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.

In linear algebra, a real symmetric matrix represents a self-adjoint operator[1] over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.

21.1 Properties

The sum and difference of two symmetric matrices is again symmetric, but this is not always true for the product: given symmetric matrices A and B, AB is symmetric if and only if A and B commute, i.e., if AB = BA. So for integer n, An is symmetric if A is symmetric. If A−1 exists, it is symmetric if and only if A is symmetric.

Let Matn denote the space of n × n matrices. A symmetric n × n matrix is determined by n(n + 1)/2 scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by n(n − 1)/2 scalars (the number of entries above the main diagonal). If Symn denotes the space of n × n symmetric matrices and Skewn the space of n × n skew-symmetric matrices, then Matn = Symn + Skewn and Symn ∩ Skewn = {0}, i.e.

Matn = Symn ⊕ Skewn,

where ⊕ denotes the direct sum. Let X ∈ Matn; then

X = (1/2)(X + XT) + (1/2)(X − XT).

Notice that (1/2)(X + XT) ∈ Symn and (1/2)(X − XT) ∈ Skewn. This is true for every square matrix X with entries from any field whose characteristic is different from 2.

Any matrix congruent to a symmetric matrix is again symmetric: if X is a symmetric matrix, then so is AXAT for any matrix A. A symmetric matrix is necessarily a normal matrix.

21.1.1 Real symmetric matrices

Denote by ⟨·, ·⟩ the standard inner product on Rn. The real n-by-n matrix A is symmetric if and only if

⟨Ax, y⟩ = ⟨x, Ay⟩ for all x, y ∈ Rn.

Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator A and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry, for each tangent space to a manifold may be endowed with an inner product, giving rise to what is called


a Riemannian manifold. Another area where this formulation is used is in Hilbert spaces.

The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: for every symmetric real matrix A there exists a real orthogonal matrix Q such that D = QTAQ is a diagonal matrix. Every symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix.

If A and B are n × n real symmetric matrices that commute, then they can be simultaneously diagonalized: there exists a basis of Rn such that every element of the basis is an eigenvector for both A and B.

Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. (In fact, the eigenvalues are the entries in the diagonal matrix D (above), and therefore D is uniquely determined by A up to the order of its entries.) Essentially, the property of being symmetric for real matrices corresponds to the property of being Hermitian for complex matrices.

21.1.2 Complex symmetric matrices

A complex symmetric matrix can be 'diagonalized' using a unitary matrix: thus if A is a complex symmetric matrix, there is a unitary matrix U such that UAUT is a real diagonal matrix. This result is referred to as the Autonne–Takagi factorization. It was originally proved by Léon Autonne (1915) and Teiji Takagi (1925) and rediscovered with different proofs by several other mathematicians.[2][3] In fact, the matrix B = A̅A is Hermitian and non-negative, so there is a unitary matrix V such that V∗BV is diagonal with non-negative real entries. Thus C = VTAV is complex symmetric with C̅C real. Writing C = X + iY with X and Y real symmetric matrices, C̅C = X² + Y² + i(XY − YX). Thus XY = YX. Since X and Y commute, there is a real orthogonal matrix W such that both WXWT and WYWT are diagonal. Setting U = WVT, the matrix UAUT is complex diagonal. Post-multiplying U by another diagonal matrix, the diagonal entries can be made real and non-negative. Since their squares are the eigenvalues of A̅A, they coincide with the singular values of A. (Note, concerning the eigendecomposition of a complex symmetric matrix A, that the Jordan normal form of A may not be diagonal, and therefore A may not be diagonalized by any similarity transformation.)

21.2 Decomposition

Every real non-singular matrix can be uniquely factored as the product of an orthogonal matrix and a symmetric positive definite matrix, which is called a polar decomposition. Singular matrices can also be factored, but not uniquely.

Cholesky decomposition states that every real positive-definite symmetric matrix A is a product of a lower-triangular matrix L and its transpose, A = LLT. If the matrix is symmetric indefinite, it may still be decomposed as PAPT = LDLT, where P is a permutation matrix (arising from the need to pivot), L a lower unit triangular matrix, and D a direct sum of symmetric 1 × 1 and 2 × 2 blocks; this is called the Bunch–Kaufman decomposition.[5]

A complex symmetric matrix may not be diagonalizable by similarity; every real symmetric matrix is diagonalizable by a real orthogonal similarity.

Every complex symmetric matrix A can be diagonalized by unitary congruence

A = QΛQT

where Q is a unitary matrix. If A is real, the matrix Q is a real orthogonal matrix (the columns of which are eigenvectors of A), and Λ is real and diagonal (having the eigenvalues of A on the diagonal). To see the orthogonality, suppose x and y are eigenvectors corresponding to distinct eigenvalues λ1, λ2. Then

λ1⟨x, y⟩ = ⟨Ax, y⟩ = ⟨x, Ay⟩ = λ2⟨x, y⟩.

Since λ1 and λ2 are distinct, we have ⟨x, y⟩ = 0: the orthogonality.

21.3 Hessian

Symmetric n-by-n matrices of real functions appear as the Hessians of twice continuously differentiable functions of n real variables.

Every quadratic form q on Rn can be uniquely written in the form q(x) = xTAx with a symmetric n-by-n matrix A. Because of the above spectral theorem, one can then say that every quadratic form, up to the choice of an or-
thonormal basis of Rn , looks like

21.2 Decomposition
n
q(x1 , . . . , xn ) = i x2i
i=1
Using the Jordan normal form, one can prove that every
square real matrix can be written as a product of two real with real numbers i. This considerably simplies the
symmetric matrices, and every square complex matrix study of quadratic forms, as well as the study of the level
can be written as a product of two complex symmetric sets {x : q(x) = 1} which are generalizations of conic sec-
matrices.[4] tions.
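The spectral theorem, the resulting normal form of a quadratic form, and the Cholesky factorization are easy to check numerically. A minimal sketch in Python, assuming NumPy is available (np.linalg.eigh and np.linalg.cholesky are NumPy's routines for symmetric eigendecomposition and Cholesky factorization; the example is an illustration, not part of the article's sources):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random real symmetric matrix A = (M + M^T)/2.
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2

# Spectral theorem: D = Q^T A Q is diagonal, with Q real orthogonal.
# eigh is NumPy's eigensolver for symmetric/Hermitian matrices.
w, Q = np.linalg.eigh(A)
D = Q.T @ A @ Q
assert np.allclose(D, np.diag(w))        # D is the diagonal of eigenvalues
assert np.allclose(Q.T @ Q, np.eye(4))   # Q is orthogonal

# Quadratic form in the orthonormal eigenbasis: q(x) = sum_i w_i * y_i^2,
# where y = Q^T x are the coordinates of x in that basis.
x = rng.standard_normal(4)
y = Q.T @ x
assert np.isclose(x @ A @ x, np.sum(w * y**2))

# Cholesky factorization A_pd = L L^T for a positive-definite matrix.
A_pd = A @ A.T + 4 * np.eye(4)           # symmetric positive definite
L = np.linalg.cholesky(A_pd)
assert np.allclose(L @ L.T, A_pd)
assert np.allclose(L, np.tril(L))        # L is lower triangular
```

Here D = QᵀAQ recovers the diagonal form of the spectral theorem, and the coordinates y = Qᵀx exhibit the quadratic form xᵀAx as a weighted sum of squares, as in the normal form above.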
This is important partly because the second-order behavior of every smooth multi-variable function is described by the quadratic form belonging to the function's Hessian; this is a consequence of Taylor's theorem.

21.4 Symmetrizable matrix

An n-by-n matrix A is said to be symmetrizable if there exist an invertible diagonal matrix D and a symmetric matrix S such that A = DS. The transpose of a symmetrizable matrix is symmetrizable, since Aᵀ = (DS)ᵀ = SD = D⁻¹(DSD) and DSD is symmetric. A matrix A = (aᵢⱼ) is symmetrizable if and only if the following conditions are met:

1. aᵢⱼ = 0 implies aⱼᵢ = 0 for all 1 ≤ i ≤ j ≤ n.

2. aᵢ₁ᵢ₂ aᵢ₂ᵢ₃ ⋯ aᵢₖᵢ₁ = aᵢ₂ᵢ₁ aᵢ₃ᵢ₂ ⋯ aᵢ₁ᵢₖ for any finite sequence (i₁, i₂, …, iₖ).

21.5 See also

Other types of symmetry or pattern in square matrices have special names; see for example:

Antimetric matrix
Centrosymmetric matrix
Circulant matrix
Covariance matrix
Coxeter matrix
Hankel matrix
Hilbert matrix
Persymmetric matrix
Skew-symmetric matrix
Sylvester's law of inertia
Toeplitz matrix

See also symmetry in mathematics.

21.6 Notes

[1] Jesús Rojo García (1986). Álgebra lineal (in Spanish) (2nd ed.). Editorial AC. ISBN 84-7288-120-2.

[2] Horn & Johnson 2013, p. 278

[3] See:

Autonne, L. (1915), "Sur les matrices hypohermitiennes et sur les matrices unitaires", Ann. Univ. Lyon, 38: 1–77

Takagi, T. (1925), "On an algebraic problem related to an analytic theorem of Carathéodory and Fejér and on an allied theorem of Landau", Japan. J. Math., 1: 83–93

Siegel, Carl Ludwig (1943), "Symplectic Geometry", American Journal of Mathematics, 65: 1–86, JSTOR 2371774, doi:10.2307/2371774, Lemma 1, page 12

Hua, L.-K. (1944), "On the theory of automorphic functions of a matrix variable I: geometric basis", Amer. J. Math., 66: 470–488, doi:10.2307/2371910

Schur, I. (1945), "Ein Satz über quadratische Formen mit komplexen Koeffizienten", Amer. J. Math., 67: 472–480, doi:10.2307/2371974

Benedetti, R.; Cragnolini, P. (1984), "On simultaneous diagonalization of one Hermitian and one symmetric form", Linear Algebra Appl., 57: 215–226, doi:10.1016/0024-3795(84)90189-7

[4] Bosch, A. J. (1986). "The factorization of a square matrix into two symmetric matrices". American Mathematical Monthly. 93 (6): 462–464. JSTOR 2323471. doi:10.2307/2323471.

[5] G. H. Golub, C. F. van Loan (1996). Matrix Computations. The Johns Hopkins University Press, Baltimore, London.

21.7 References

Horn, Roger A.; Johnson, Charles R. (2013), Matrix analysis (2nd ed.), Cambridge University Press, ISBN 978-0-521-54823-6

21.8 External links

Hazewinkel, Michiel, ed. (2001), "Symmetric matrix", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

A brief introduction and proof of eigenvalue properties of the real symmetric matrix
150 CHAPTER 21. SYMMETRIC MATRIX

21.9 Text and image sources, contributors, and licenses


21.9.1 Text
Lev Landau Source: https://en.wikipedia.org/wiki/Lev_Landau?oldid=763147882 Contributors: CYD, Vicki Rosenzweig, Amillar,
XJaM, Enchanter, Kurt Jansson, Olivier, Michael Hardy, Alan Peakall, Mic, SebastianHelm, Salsa Shark, Andres, Vargenau, Ideyal, RickK,
Reddi, Maximus Rex, Noeckel, Phil Boswell, Robbot, Sanders muc, Altenmann, Blainster, Humus sapiens, Timrollpickering, Wikibot,
JackofOz, Ancheta Wis, Giftlite, Sj, Fastssion, Monedula, Everyking, Physman, Curps, Ezhiki, JillandJack, LiDaobing, Scott MacLean,
Gene s, Balcer, Lumidek, Chris Howard, D6, Sfngan, KNewman, Rich Farmbrough, Parishan, Aris Katsaris, Bender235, El C, Gnomz007,
Jumbuck, Mattpickman, Snowolf, Ksnow, Ronark, SidP, BigHairyBoris, Cmapm, SteinbDJ, Gene Nygaard, Vital303, AndriyK, XaosBits,
Emerson7, BD2412, Kbdank71, Rjwilmsi, Koavf, Lockley, Erkcan, Valip, Kasparov, Vuong Ngan Ha, PlatypeanArchcow, Wgfcrafty,
Goudzovski, Srleer, Russavia, Danielfong, YurikBot, Hillman, Brandmeister (old), RussBot, Splash, Alex Bakharev, Aeusoes1, ExRat,
LevKamensky, Daniel Mietchen, Bota47, Orlom~enwiki, Deville, Rms125a@hotmail.com, Abune, Curpsbot-unicodify, RG2, Paul Pie-
niezny, SmackBot, Unyoyega, KocjoBot~enwiki, Gjs238, GaeusOctavius, Paulcardan, DHN-bot~enwiki, Colonies Chris, Eliezg, Tamfang,
Tristes tigres, OrphanBot, MattOates, Yulia Romero, Pulu, Ser Amantio di Nicolao, John, Syrcatbot, Yms, Rglovejoy, Andrwsc, Clarity-
end, Paulmlieberman, Hermitage17, Vyznev Xnebara, Banedon, Chicheley, CMG, Cydebot, Ejconard, WillowW, MWaller, Michael C
Price, Ebyabe, Crum375, Thijs!bot, Headbomb, RobotG, Richiez, Gcm, Avaya1, Hamsterlopithecus, Enoent, Ggorelik, LarsMarius, Kope,
Fulvius~enwiki, Hgtp, Infovarius, CommonsDelinker, Johnpacklambert, Jspiegler, Silin2005, Salih, TomyDuby, Afaber012, Taras danko,
1812ahill, VolkovBot, Yawmoght, VasilievVV, Toddy1, Dougie monty, TXiKiBoT, Jimmyeatskids, Sk741~enwiki, GcSwRhIc, Dun-
can.Hull, Gilisa, CoolKid1993, Spinningspark, Jackbars, AlleborgoBot, TobiasS, SieBot, Pallab1234, Gerakibot, Phe-bot, Likebox, Blue-
claw, RSStockdale, Jsqqq777, AMbot, StewartMH, RS1900, Mallodi, DFRussia, All Hallows Wraith, Metzujan, Av0id3r, Brewcrewer,
Liilliil, Masterpiece2000, DragonBot, Gulmammad, SchreiberBike, Cardinalem, Dana boomer, Viktor O. Ledenyov, DumZiBoT, Messi-
naRagazza, Florincoter, Cmr08, D.M. from Ukraine, Addbot, Jmanteith, Victor-435, Ammarsakaji190, Lightbot, 55, Vasi, Galloping
Ghost U of I, Greyhood, Legobot, Luckas-bot, Yobot, AnomieBOT, TorontoFever, , Materialscientist, Theodorekon, Citation
bot, LouriePieterse, Xqbot, Parkyere, Davshul, Psychiatrick, Omnipaedista, Mnmngb, Plot Spoiler, Morpheus & Momus, Edgars2007,
Citation bot 1, Morent 342, RedBot, MastiBot, Jauhienij, Double sharp, TobeBot, Sapphirain, EmausBot, Louis van Appeven, Wikitanvir-
Bot, Nick Moyes, Dcirovic, , Suslindisambiguator, Zloyvolsheb, Physnick, JeanneMish, Brandmeister, Alvez3, ChuispastonBot,
RockMagnetist, Nikolay Yeriomin, Karlhufbauer, ClueBot NG, El Roih, Kikichugirl, LJosil, Clearlyfakeusername, Erkki Thuneberg, Jor-
genev, Helpful Pixie Bot, SzMithrandir, Bibcode Bot, BG19bot, MaoGo, Illyukhina, Ragnarstroberg, Brad7777, Petrukhina, Honorsteem,
Ninmacer20, The Elixir Of Life, Purdygb, ChrisGualtieri, SD5bot, Dexbot, Garuda0001, VIAFbot, Jochen Burghardt, Epicgenius, Leko-
ren, Yzavyalo, Winged Blades of Godric, AsalKadal, Monkbot, Nfgjgn, , NilubonT, TonyArzenta, Baragoz12, KasparBot,
John Crowfoot, Raiteri, Gamafunctional, Horsed-bean, Enjoyyourlife111, Prajaman, Magic links bot and Anonymous: 148
Felix Bloch Source: https://en.wikipedia.org/wiki/Felix_Bloch?oldid=766259461 Contributors: Trelvis, XJaM, Michael Hardy, Mic,
Markonen, Ahoerstemeier, Docu, The Anomebot, WhisperToMe, Maximus Rex, Fibonacci, Kokiri, Wikibot, Ancheta Wis, Giftlite,
Orangemike, Fastssion, Curps, R. end, Quadell, Ary29, D6, TheBlueWizard, Rich Farmbrough, Guanabot, ThomasK, Aris Katsaris,
Roo72, Night Gyr, Bender235, Djordjes, Brim, Lokifer, Alansohn, TheParanoidOne, A Kit, Ksnow, KingTT, Dirac1933, Encyclope-
dist, Emerson7, Askewmind, Kbdank71, Rjwilmsi, Mayumashu, Koavf, MZMcBride, The wub, Revolving Bugbear, Srleer, Valen-
tinian, Jaraalbe, GangofOne, YurikBot, RobotE, Wester, RussBot, Nmintz, Tevildo, Nae'blis, LeonardoRob0t, Garion96, RG2, Vulturell,
SmackBot, Hkhenson, Gilliam, Hmains, EncMstr, Josteinn, OrphanBot, KaiserbBot, Greenshed, Khukri, LeoNomis, Andrei Stroe, John,
Guat6, Beetstra, Luis Sanchez, Joseph Solis in Australia, RokasT~enwiki, Pathosbot, T-W, SkyWalker, HennessyC, Drinibot, Zureks,
Cydebot, MWaller, Gonzo fan2007, Thijs!bot, ChKa, Headbomb, Bunzil, AntiVandalBot, RobotG, Mary Mark Ockerbloom, Coyets,
Arx Fortis, Gcm, Hamsterlopithecus, Magioladitis, Misheu, Bulbeck, Here2xCategorizations, Balloonguy, Dirac66, E104421, Calendar-
Watcher, CommonsDelinker, Johnpacklambert, Ryan Postlethwaite, Plindenbaum, STBotD, VolkovBot, TXiKiBoT, Rei-bot, Norbu19,
CoolKid1993, Spinningspark, Resurgent insurgent, Ponyo, SieBot, Utternutter, Toddst1, JerzyTarasiuk, Kumioko (renamed), Wuhwuzdat,
RS1900, ClueBot, WriterListener, MikeVitale, ChandlerMapBot, Masterpiece2000, Cardinalem, Cpt.schoener, DJep, MessinaRagazza,
Rror, MystBot, Socersam627, MrOllie, Numbo3-bot, Lightbot, , Zorrobot, Legobot, Luckas-bot, Yobot, NLWASTI, Sulair.speccoll,
AnomieBOT, Materialscientist, Citation bot, Xqbot, Parkyere, DSisyphBot, Davshul, Mnmngb, Ironboy11, BenzolBot, Citation bot 1, Sky-
erise, TobeBot, EmausBot, Amanda.nelson12, Stephencdickson, Suslindisambiguator, ClueBot NG, Usctommytrojan, Movses-bot, Nas-
mem, Helpful Pixie Bot, Bibcode Bot, BG19bot, Ema--or, Periglio, VIAFbot, Churn and change, Darvii, Bibliophilen, RaphaelQS, Pinoc-
chio3000, Crispulop, Suspecialcollections, Monkbot, Jim Carter, Prisencolin, NilubonT, Ellipapa, KasparBot, Csumstudent, Elizasvingen,
InternetArchiveBot, , Ber31, Kobilesaux and Anonymous: 50
Quantum thermodynamics Source: https://en.wikipedia.org/wiki/Quantum_thermodynamics?oldid=721091983 Contributors:
Mirokado, Verrai, Lighthead, Timtrent, Hroulf, Josfritz, Wiae, Dodger67, Sphilbrick, MatthewVanitas, AnomieBOT, Primefac, ClueBot
NG, BG19bot, Sulfurboy, Khazar2, Mark viking, Arthur goes shopping, Joseph2302, Rkoslo, Lmikmik, Tnd1111, Andre Xuereb and
Anonymous: 5
Master equation Source: https://en.wikipedia.org/wiki/Master_equation?oldid=753385640 Contributors: Tessonec~enwiki, Michael
Hardy, Lexor, Charles Matthews, Tea2min, Giftlite, Jason Quinn, Nek, TedPavlic, Danski14, BryanD, Jheald, Oleg Alexandrov, Linas,
Zzyzx11, R.e.b., Mathbot, Bgwhite, Blutnk, SmackBot, Chodges, Complexica, Airfoyle, Mets501, Cydebot, Headbomb, Spheniscus,
Aeioun, JaGa, S3000, R'n'B, Adavidb, Camrn86, Nonstandard, Sapphic, AlleborgoBot, SHL-at-Sv, Melcombe, Agor153, Addbot, Dr.
Universe, Luckas-bot, Yobot, AnomieBOT, Citation bot, Eumolpo, Bjf624, Jonesey95, Miracle Pen, Zhngbn, EmausBot, Mithrawnuruodo,
Jasonanaggie, ZroBot, Flomenbom, Helpful Pixie Bot, Ricardohz, Bibcode Bot, Marcocapelle, Zetazeros, Lipatrick, ChrisGualtieri, Zatrp,
Makecat-bot, Kryomaxim, Altroware, Cyrej and Anonymous: 28
Markov property Source: https://en.wikipedia.org/wiki/Markov_property?oldid=768131287 Contributors: Michael Hardy, Ejrh, Rl, Bj-
cairns, Charles Matthews, Lypheklub, Benwing, Qlmatrix, Seth Ilys, Giftlite, Gene s, Histrion, Cretog8, Dallashan~enwiki, Arthena, Dan-
hash, Kazvorpal, Linas, Btyner, EnDumEn, Sodin, Bgwhite, Adoniscik, YurikBot, Wavelength, Finell, SmackBot, Eskimbot, Zinc2005,
Papa November, DHN-bot~enwiki, Radagast83, Hyperbola, Clean Copy, Loodog, Nijdam, Ickydog, Thudso, ZICO, CBM, Jackiechen01,
Skittleys, A3nm, Changodoa, Jamelan, Cysiek10, SieBot, WereSpielChequers, Yfc7272, OKBot, AlanUS, Melcombe, ClueBot, Ecov, Es-
tirabot, BodhisattvaBot, Addbot, RobertHannah89, Yobot, AnomieBOT, Ciphers, Xqbot, Constructive editor, Olexa Riznyk, Briardew,
Gaba p, Chibby0ne, Sigma0 1, Nehchal, Kdcoo, Capodistria, Helpful Pixie Bot, Moradmandali, Jorgeecardona, Brirush, 7804j, Afnazi,
Franois Robere, Parswerk, RationalBlasphemist, Odcameron and Anonymous: 55
Lindblad equation Source: https://en.wikipedia.org/wiki/Lindblad_equation?oldid=764416965 Contributors: Charles Matthews, Laussy,
Chuunen Baka, Cederal, David Schaich, Phobos~enwiki, Jrme, Linas, Zzyzx11, Bgwhite, , Jess Riedel, C h eming, Reedy,
21.9. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 151

Njerseyguy, Mets501, Hansbethe, Myasuda, Mct mht, Second Quantization, Azaghal of Belegost, Cotton2, Smite-Meister, VolkovBot,
Sam729, Shai mach, SchreiberBike, Jscg, Phys0111, WikHead, Addbot, Dr. Universe, Yobot, AnomieBOT, Citation bot, Churchill17,
Omnipaedista, Baz.77.243.99.32, Kirsim, Pavithransiyer, Akerans, , Helpful Pixie Bot, Bibcode Bot, BG19bot, Andreamari84, Luca
Innocenti, Corvus-TAU, Bender the Bot, PrimeBOT and Anonymous: 26
Amir Caldeira Source: https://en.wikipedia.org/wiki/Amir_Caldeira?oldid=749945120 Contributors: Rsabbatini, Masud1011, Tim-
rollpickering, Kinu, T. Anthony, SmackBot, Victor Lopes, Judgesurreal777, Cydebot, Batamtig, Dalliance, Alphachimpbot, Waacstats,
Phonon, Fordescort79, Aboutmovies, Jonas Mur~enwiki, BOTijo, Addbot, Lightbot, Yobot, RjwilmsiBot, BG19bot, Brad7777, Giso6150,
KasparBot, InternetArchiveBot, GreenC bot and Anonymous: 3
Anthony James Leggett Source: https://en.wikipedia.org/wiki/Anthony_James_Leggett?oldid=766625531 Contributors: Amillar, Rsab-
batini, Mic, Ams80, Maximus Rex, Kenatipo, SWAdair, Wmahan, ChicXulub, HorsePunchKid, Phe, PDH, John Foley, D6, Garri-
son, Aris Katsaris, ChristophDemmer, Srbauer, Craigy144, Ksnow, Cortonin, RyanGerbil10, Angr, Woohookitty, Emerson7, BD2412,
Wachholder0, Melesse, Rjwilmsi, The wub, Ttwaring, FlaBot, Margosbot~enwiki, Gary Cziko, CarolGray, Choess, Srleer, CJLL
Wright, YurikBot, RussBot, Alex Bakharev, Rms125a@hotmail.com, LeonardoRob0t, Garion96, Le Hibou~enwiki, Philip Stevens,
KnightRider~enwiki, SmackBot, Vald, Bluebot, Paulcardan, George Church, Tsca.bot, EdGl, Andrei Stroe, Chymicus, Ser Amantio di
Nicolao, John, Tim bates, Regan123, Ambuj.Saxena, Trafalgar007, HennessyC, CmdrObot, Drinibot, Themightyquill, Cydebot, MWaller,
Thijs!bot, TonyTheTiger, Headbomb, Batamtig, Bunzil, Salavat, RobotG, Chill doubt, NBeale, Gcm, MER-C, Here2xCategorizations,
Drdavidhill, CommonsDelinker, Johnpacklambert, SuperGirl, Paravane, Slim cop, Domminico, Paragghosh, GrahamHardy, VolkovBot,
MertonLister, Setchcr, TXiKiBoT, Jimmyeatskids, Victor vakaryuk, Duncan.Hull, Strangerer, Rcb1, AlleborgoBot, Millbanks, SieBot,
Kzirkel, Utternutter, Arjen Dijksman, Seedbot, Jobas, Someone111111, Martarius, Gaia Octavia Agrippa, Joao Xavier, Pointillist, Mas-
terpiece2000, DragonBot, Excirial, Alexbot, DeltaQuad, Cardinalem, Tjako, MessinaRagazza, D.M. from Ukraine, Addbot, Kamuichikap,
LaaknorBot, Tassedethe, Lightbot, Zorrobot, Luckas-bot, Yobot, Amirobot, KamikazeBot, AnomieBOT, Ciphers, AdjustShift, Cyan22,
Citation bot, ArthurBot, KHirsch, Carolyne M. Van Vliet, Davshul, Omnipaedista, Citation bot 1, Sopher99, Plucas58, TobeBot, Badger
M., MrX, SeoMac, Srv.poddar, RjwilmsiBot, VernoWhitney, EmausBot, WikitanvirBot, PBS-AWB, Parsonscat, JeanneMish, ClueBot
NG, HBook, Bibcode Bot, BG19bot, Zoldyick, JYBot, VIAFbot, Here is where you cut it, Tentinator, RaphaelQS, Fallen skirts, Monkbot,
EdYuan, KasparBot, JorisEnter, InternetArchiveBot, Bender the Bot and Anonymous: 46
Nitrogen-vacancy center Source: https://en.wikipedia.org/wiki/Nitrogen-vacancy_center?oldid=763309032 Contributors: Dratman,
Mike Schwartz, Gary, Gene Nygaard, WilliamKF, BD2412, Rjwilmsi, Shaddack, Oakwood, Reyk, Tentrillion, SaveTheWhales, Smack-
Bot, Chris the speller, Mgiganteus1, Sasata, Wideofthemark, Trev M, Headbomb, VolkovBot, TXiKiBoT, FKmailliW, Lightmouse, Mild
Bill Hiccup, K a r n a, Justin545, Wikimedes, Addbot, Tassedethe, AnomieBOT, JackieBot, Khcf6971, Materialscientist, Citation bot,
LilHelpa, Mononomic, J04n, Herrenberg, BenzolBot, Sbalian, Citation bot 1, Tom.Reding, Ripchip Bot, AManWithNoPlan, DokReggar,
Nscozzaro, Frietjes, Bibcode Bot, BG19bot, Dexbot, Me, Myself, and I are Here, Lagoset, Monkbot, Vishven, Brunolucatto, Zijin Shi,
Nadsokor, SarcasticHalli, MinusBot and Anonymous: 16
Quantum mechanics Source: https://en.wikipedia.org/wiki/Quantum_mechanics?oldid=769477577 Contributors: AxelBoldt, Paul Drye,
Chenyu, Derek Ross, CYD, Eloquence, Mav, The Anome, AstroNomer, Taral, Ap, Magnus~enwiki, Ed Poor, XJaM, Rgamble, Christian
List, William Avery, Roadrunner, Ellmist, Mjb, Olivier, Stevertigo, Bdesham, Michael Hardy, Tim Starling, JakeVortex, Vudujava, Owl,
Norm, Gabbe, Menchi, Ixfd64, Axlrosen, TakuyaMurata, Shanemac, Alo, Looxix~enwiki, Mdebets, Ahoerstemeier, Cyp, Stevenj, J-
Wiki, Theresa knott, Snoyes, Gyan, Nanobug, Cipapuc, Jebba, , Glenn, Kyokpae~enwiki, Nikai, Dod1, Jouster, Mxn, Charles
Matthews, Tantalate, Timwi, Stone, Jitse Niesen, Rednblu, Wik, Dtgm, Patrick0Moran, Tpbradbury, Nv8200pa, Phys, Bevo, Jecar, Fvw,
Stormie, Sokane, Optim, Bcorr, Johnleemk, Jni, Rogper~enwiki, Robbot, Ke4roh, Midom, MrJones, Jaleho, Astronautics~enwiki, Fredrik,
Chris 73, Moncrief, Goethean, Bkalafut, Lowellian, Centic, Gandalf61, StefanPernar, Academic Challenger, Rursus, Texture, Matty j,
Moink, Hadal, Papadopc, Johnstone, Fuelbottle, Lupo, HaeB, Mcdutchie, Xanzzibar, Tea2min, David Gerard, Enochlau, Ancheta Wis,
Decumanus, Giftlite, Donvinzk, DocWatson42, ScudLee, Awolf002, Barbara Shack, Harp, Fudoreaper, Lethe, Fastssion, Zigger, Mon-
edula, Wwoods, Anville, Alison, Bensaccount, Tromer, Sukael, Andris, Jason Quinn, Gracefool, Solipsist, Nathan Hamblen, Foobar,
SWAdair, Mckaysalisbury, AdamJacobMuller, Utcursch, CryptoDerk, Knutux, Yath, Amarvc, Pcarbonn, Stephan Leclercq, Antandrus, Jo-
Jan, Savant1984, Jossi, Karol Langner, CSTAR, Rdsmith4, APH, Anythingyouwant, Bumm13, Thincat, Aaron Einstein, Edsanville, Robin
klein, Muijz, Zondor, Guybrush, Grunt, Lacrimosus, Chris Howard, L-H, Ta bu shi da yu, Freakofnurture, Sfngan, Venu62, Spiy sperry,
CALR, Ultratomio, KeyStroke, Noisy, Discospinster, Caroline Thompson, Rich Farmbrough, H0riz0n, FT2, Pj.de.bruin, Hidaspal, Pjacobi,
Vsmith, Wk muriithi, Silence, Smyth, Phil179, Moogoo, WarEagleTH, Smear~enwiki, Paul August, Dmr2, Bender235, ESkog, Nabla, Dat-
aphile, Dpotter, Floorsheim, El C, Lankiveil, Kross, Laurascudder, Edward Z. Yang, Shanes, Spearhead, RoyBoy, Femto, MPS, Bobo192,
Army1987, John Vandenberg, AugustinMa, Geek84, GTubio, Clarkbhm, SpaceMonkey, Sjoerd visscher, 9SGjOSfyHJaQVsEmy9NS,
Sriram sh, Matt McIrvin, Sasquatch, BM, Firewheel, MtB, Nsaa, Storm Rider, Alansohn, Gary, ChristopherWillis, Tek022, ZiggyZig,
Keenan Pepper, La hapalo, Gpgarrettboast, Pippu d'Angelo, PAR, Batmanand, Hdeasy, Bart133, Snowolf, Wtmitchell, Tycho, Leoadec, Jon
Cates, Mikeo, Dominic, Bsadowski1, W7KyzmJt, GabrielF, DV8 2XL, Alai, Nick Mks, KTC, Dan100, Chughtai, Falcorian, Oleg Alexan-
drov, Ashujo, Ott, Feezo, OwenX, Woohookitty, Linas, Superstring, Tripodics, Shoyer, StradivariusTV, Kzollman, Kosher Fan, JeremyA,
Tylerni7, Pchov, GeorgeOrr, Mpatel, Adhalanay, Firien, Wikiklrsc, GregorB, AndriyK, SeventyThree, Wayward, Prashanthns, DL5MDA,
Palica, Pfalstad, Graham87, Magister Mathematicae, Chun-hian, FreplySpang, Baker APS, JIP, RxS, Search4Lancer, Canderson7, Sj,
Saperaud~enwiki, Rjwilmsi, Jake Wartenberg, Linuxbeak, Tangotango, Bruce1ee, Darguz Parsilvan, Mike Peel, Pasky, HappyCamper,
Ligulem, The wub, Ttwaring, Reinis, Hermione1980, Sango123, Oo64eva, St33lbird, Kevmitch, Titoxd, Das Nerd, Alejo2083, FlaBot,
Moskvax, RobertG, Urbansky~enwiki, Arnero, Latka, Nihiltres, Pathoschild, Quuxplusone, Srleer, Kri, Cpcheung, Acett, Chobot,
DVdm, Gwernol, Niz, YurikBot, Wavelength, Paulraine, Arado, Loom91, Xihr, GLaDOS, Khatharr, Firas@user, Gaius Cornelius, Chaos,
Rsrikanth05, Rodier, Wimt, Anomalocaris, Royalbroil, David R. Ingham, NawlinWiki, Grafen, NickBush24, RazorICE, Stephen e nelson,
JocK, SCZenz, Randolf Richardson, Vb, E2mb0t~enwiki, Tony1, Syrthiss, SFC9394, DeadEyeArrow, Bota47, Kkmurray, Werdna, Bmju,
Wknight94, WAS 4.250, FF2010, Donbert, Light current, Enormousdude, 21655, Zzuuzz, TheKoG, Lt-wiki-bot, Nielad, Closedmouth,
Ketsuekigata, E Wing, Brina700, Modify, Dspradau, Netrapt, Petri Krohn, Badgettrg, Peter, Willtron, Mebden, RG2, GrinBot~enwiki,
Mejor Los Indios, Sbyrnes321, CIreland, Luk, Itub, Hvitlys, SmackBot, Paulc1001, Moeron, Rex the rst, InverseHypercube, Knowl-
edgeOfSelf, Royalguard11, K-UNIT, Lagalag, Pgk, Jagged 85, Clpo13, Chairman S., Pxfbird, Grey Shadow, Delldot, Petgraveyard,
Weiguxp, David Woolley, Lithium412, Philmurray, Yamaguchi , Robbjedi, Gilliam, Slaniel, Betacommand, Skizzik, Dauto, Holy Ganga,
JSpudeman, Modusoperandi, Anachronist, Stevenwagner, DetlevSchm, MK8, Jprg1966, MalafayaBot, Marks87, Silly rabbit, Complexica,
Colonies Chris, Darth Panda, Sajendra, Warbirdadmiral, El Chupacabra, Zhinz, Can't sleep, clown will eat me, Physika~enwiki, Scott3,
Scray, ApolloCreed, Ackbeet, Le fantome de l'opera, Onorem, Surfcuba, Voyajer, Addshore, Stiangk, Paul E T, Huon, Khoikhoi, King-
don, DenisDiderot, Cybercobra, Nakon, Nick125, SnappingTurtle, Dreadstar, Richard001, Akriasas, Freemarket, Weregerbil, Kleuske,
DeFoaBuSe, DMacks, Salamurai, LeoNomis, Sadi Carnot, Pilotguy, Byelf2007, Xezlec, DJIndica, Akubra, Rory096, Bcasterline, Har-
152 CHAPTER 21. SYMMETRIC MATRIX

ryboyles, JzG, Richard L. Peterson, RTejedor, AmiDaniel, UberCryxic, Wtwilson3, Zslevi, LWF, Gobonobo, Jaganath, JorisvS, Evan
Robidoux, Mgiganteus1, Zarniwoot, Goodnightmush, Jordan M, Ex nihil, Gwendy, SirFozzie, Waggers, MarphyBlack, Caiaa, Asyn-
deton, Dan Gluck, BranStark, Iridescent, JMK, Dreftymac, Joseph Solis in Australia, UncleDouggie, Rnb, Hikui87~enwiki, Cain47,
Mbenzdabest, Nturton, Civil Engineer III, Cleric12121, Tawkerbot2, Chetvorno, Carborn1, Mustbe, SkyWalker, JForget, Frovingslosh,
Ale jrb, Peace love and feminism, Wafulz, Sir Vicious, Asmackey, Dycedarg, Lavateraguy, Van helsing, The ed17, Bad2101, Jayunder-
scorezero, CBM, BeenAroundAWhile, JohnCD, Nunquam Dormio, Harriemkali, Swwright, Wquester, N2e, Melicans, Smallpond, Mya-
suda, Gregbard, Xanas Servant, Dragons Blood, Cydebot, Wrwrwr, Beek man, Meznaric, Jack O'Lantern, Peterdjones, Meno25, Gogo
Dodo, Islander, DangApricot, NijaMunki, Pascal.Tesson, Hughgr, Benvogel, Michael C Price, Xlynx, Doug Weller, Christian75, Dumb-
BOT, FastLizard4, Waxigloo, Amit Moscovich, FrancoGG, CieloEstrellado, Thijs!bot, Epbr123, Derval Sloan, Koeplinger, Mbell, N5iln,
Headbomb, Marek69, Ujm, Second Quantization, Martin Hedegaard, Philippe, CharlotteWebb, Nick Number, MichaelMaggs, Sbandrews,
Mentisto, Austin Maxwell, Cyclonenim, AntiVandalBot, Luna Santin, Widefox, Tkirkman, Eveross, Lontax, Grafnita, Rakniz, Prolog,
Gnixon, CStar, TimVickers, Dylan Lake, Casomerville, Danger, Farosdaughter, Tim Shuba, North Shoreman, Yellowdesk, Glennwells,
Byrgenwulf, GaaraMsg, Figma, JAnDbot, Leuko, Husond, Superior IQ Genius, MER-C, CosineKitty, Matthew Fennell, Eurobas, IJMacD,
Andonic, Dcooper, Hut 8.5, 100110100, Skewwhiy, Four Dog Night, Acroterion, Magioladitis, Connormah, Mattb112885, Bongwar-
rior, VoABot II, AtticusX, Kuyabribri, JamesBWatson, SHCarter, FagChops, Bene, Rivertorch, Michele123, Zooloo, Jmartinsson, Thun-
derhead~enwiki, Couki, Catgut, Indon, ClovisPt, Dirac66, 28421u2232nfenfcenc, Joe hill, Schumi555, Adventurer, Cpl Syx, Robb37,
Quantummotion, DerHexer, Chaujie328, Khalid Mahmood, Teardrop onthere, Guitarspecs, Info D, Seba5618, Gjd001, CiA10386, Mar-
tinBot, Arjun01, Rettetast, Mike6271, Keith D, Fpaiano~enwiki, CommonsDelinker, AlexiusHoratius, Andrej.westermann, Thirdright,
Dinkytown, Shellwood, J.delanoy, DrKay, Trusilver, Kaesle, Numbo3, NightFalcon90909, Uncle Dick, Maurice Carbonaro, Kevin ayl-
ward, 5Q5, StonedChipmunk, Foober, Acalamari, Metaldev, Bot-Schafter, Katalaveno, DarkFalls, McSly, Bustamonkey2003, Ignatzmice,
Tarotcards, JayJasper, Gcad92, Detah, LucianLachance, Midnight Madness, NewEnglandYankee, Rwessel, Nin0rz4u 2nv, SJP, MKolt-
now, KCinDC, Han Solar de Harmonics, Cmichael, Juliancolton, Cometstyles, MoForce, Chao129, Elenseel, Wfaze, Samlyn.josfyn,
Martial75, GrahamHardy, CardinalDan, Sheliak, Spellcast, Signalhead, Pgb23, Cuzkatzimhut, Zakuragi, MBlue2020, Pleasantville, Lo-
kiClock, Lears Fool, Soliloquial, Philip Trueman, TXiKiBoT, Oshwah, Maximillion Pegasus, SanfordEsq, RyanB88, SCriBu, Nxavar,
Sean D Martin, Sankalpdravid, ChooseAnother, Qxz, Someguy1221, Liko81, Bsharvy, Olly150, XeniaKon, Clarince63, Seraphim, Sai-
bod, Fizzackerly, Zolot, Raymondwinn, David in DC, Handsome Pete, Geometry guy, Ilyushka88, Leavage, Krazywrath, V81, Sodicadl,
RandomXYZb, Lerdthenerd, Andy Dingley, Enigmaman, Meters, Lindsaiv, Synthebot, Antixt, Falcon8765, Enviroboy, Spinningspark,
H1nomaru senshi, The Devils Advocate, Monty845, AlleborgoBot, Nagy, The Mad Genius, Logan, PGWG, DarthBotto, Vitalikk, Bel-
sazar, Katzmik, EmxBot, Givegains, Kbrose, Mk2rhino, YohanN7, SieBot, Ivan tambuk, Nibbleboob, Graham Beards, WereSpielChe-
quers, Dawn Bard, AdevarTruth, RJaguar3, Hekoshi, Yintan, 4RM0~enwiki, Ujjwol, Bentogoa, Ferret, Jc-S0CO, JSpung, Arjen Di-
jksman, Oxymoron83, Antonio Lopez, Henry Delforn (old), Hello71, AnonGuy, Lightmouse, Radzewicz, Hobartimus, Jaquesthehunter,
Michael Courtney, Macy, Hatster301, Swegei, Curlymeatball38, Quackbumper, Coldcreation, Zenbullets, StaticGull, Heptarchy of teh
Anglo-Saxons, baby, Mygerardromance, Fishnet37222, Stentor7, Mouselb, Randy Kryn, Velvetron, ElectronicsEnthusiast, Darrellpenta,
Soporaeternus, Martarius, ClueBot, NickCT, AllPurposeScientist, Scottstensland, Yeahyeahkickball, The Thing That Should Not Be,
EMC125, Zero over zero, Infrasonik, MichaelVernonDavis, Herakles01, Drmies, Cp111, Diafanakrina, Macka92, Mrsastrochicken, Van-
dalCruncher, Agge1000, Otolemur crassicaudatus, Ridge Runner, Neverquick, Asdf1990, DragonBot, Djr32, Ondon, Excirial, HounsGut,
Welsh-girl-Lowri, Quercus basaseachicensis, Jusdafax, Krackenback, Winston365, Brews ohare, Sukaj, Viduoke, NuclearWarfare, Ice
Cold Beer, Arjayay, Terra Xin, PhySusie, Kding, Imalad, The Red, Mikaey, SchreiberBike, Vlatkovedral, Joeawfjdls453, Thingg, Russel
Mcpigmin, Aitias, Scalhotrod, Versus22, Maaparty303, SoxBot III, Apparition11, Mrvanner, Crowsnest, Vanished user uih38riiw4hjlsd,
DumZiBoT, Finalnight, CBMIBM, Javafreakin, X41, XLinkBot, Megankerr, Yokabozeez, Arthur chos, Odenluna, Matthewsasse1, Sakura
Cartelet, Ajcheema, AndreNatas, Paul bunion, WikHead, Loopism, NellieBly, Mifter, JinJian, Truthnlove, Airplaneman, Billcosbyislone-
lypart2, Mojska, Stephen Poppitt, Willieru18, Tayste, Addbot, Ryan ley, 11341134a, Willking1979, Manuel Trujillo Berges, Kadski,
TylerM37, Wareagles18, XTRENCHARD29x, 11341134b, Tcncv, Betterusername, Non-dropframe, Captain-tucker, Robertd514, Fg-
nievinski, Mjamja, Harrytipper, SunDragon34, Blethering Scot, Ronhjones, PandaSaver, WMdeMuynck, Aboctok, JoshTW, Canadian-
LinuxUser, Fluernutter, Looie496, Cst17, MrOllie, BualoBill90, Mitchellsims08, Neonorange, Chzz, AnnaFrance, Favonian, LinkFA-
Bot, Adolfman, Brufnus, Barak Sh, AgadaUrbanit, Ehrenkater, Tide rolls, Lightbot, NoEdward, Romaioi, Jan eissfeldt, Teles, Jarble,
Csdavis1, Ttasterul, Luckas-bot, Yobot, OrgasGirl, Tohd8BohaithuGh1, TaBOT-zerem, Niout, II MusLiM HyBRiD II, Kan8eDie, Nal-
limbot, Brougham96, KamikazeBot, Fearingfearitself, Positivetruthintent, IW.HG, Solo Zone, Jackthegrape, Eric-Wester, Magog the Ogre,
Armegdon, N1RK4UDSK714, Octavianvs, AnomieBOT, Captain Quirk, Jim1138, IRP, Rnpg1014, Piano non troppo, AdjustShift, Csi-
gabi, Giants27, Materialscientist, Gierens22, Supppersmart, The High Fin Sperm Whale, Citation bot, Bci2, Frankenpuppy, LilHelpa,
The Firewall, Joshualmer, Rightly, Mollymop, Xqbot, Nxtid, Sionus, Raaziq, Amareto2, Melmann, Capricorn42, Jostylr, Dbroesch,
Mark Swiggle, TripLikeIDo, Benvirg89, Sokratesinabasket, Gilo1969, Physprof, Grim23, P99am, Qwertyuio 132, Gap9551, Almabot,
Polgo, GrouchoBot, Abce2, Jagbag2, Frosted14, Toofy mcjack34, Richard.decal, Qzd800, Trurle, Omnipaedista, Mind my edits, Willi-
unWeales, Kesaloma, Charvest, The Spam-a-nata, Dale Ritter, Shipunits, FaTony, Gr33k b0i, Shadowjams, Adrignola, Dingoatscritch,
Spakwee, A. di M., Naturelles, Dougofborg, Cigarettizer, , C.c. hopper, JoshC306, Chjoaygame, GliderMaven, Dylan620 II, Bboy-
dill, Magnagr, Kroin, Tobby72, Pepper, Commander zander, Guy82914, PhysicsExplorer, Kenneth Dawson, Colinue, Steve Quinn,
N4tur4le, Pratik.mallya, Razataza, Machine Elf 1735, 06twalke, TTGL, Izodman2012, Xenfreak, Iquseruniv, HamburgerRadio, Cita-
tion bot 1, Cheryledbernard, Greg HWWOK Shaw, WQUlrich, Brettwats, Pinethicket, I dream of horses, Pink Bull, Tom.Reding, Lithium
cyanide, DanielGlazer, Serols, Deaddogwalking, FloridaSpaceCowboy, RobinK, Liarliar2009, JereyVest, Seattle Jrg, Reconsider the
static, Fredkinfollower, Superlions123, GreenReections, Roseohioresident, Tjlafave, FoxBot, Chris5858, Anonwhymus, Trappist the
monk, Buddy23Lee, 3peasants, Beta Orionis, Train2104, Hickorybark, Creativethought20, Lotje, PorkHeart, Michael9422, Lmp883,
Bowlofknowledge, Leesy1106, Doc Quintana, Reaper Eternal, Azatos, SeriousGrinz, Pokemon274, Specs112, Vera.tetrix, Earthandmoon,
MicioGeremia, Tbhotch, Jesse V., Sideways713, Dannideak, Factosis, MR87, Borki0, Taylo9487, Updatehelper, Seawolf1111, Ones-
moothlefty, Carowinds, Bento00, Beyond My Ken, Andy chase, WildBot, Deadlyops, Phyguy03, EmausBot, John of Reading, Dave-
johnsan, Orphan Wiki, Bookalign, WikitanvirBot, Mahommed alpac, Dr Aaij, Gfoley4, Roxbreak, Word2need, Beatnik8983, Alamadte,
Racerx11, Dickwet89, GoingBatty, RA0808, Minimacs Clone, NotAnonymous0, Dmblub, KHamsun, Wham Bam Rock II, Solarra, Ste-
venganzburg, Elee, Slightsmile, Tommy2010, Uploadvirus, Wikipelli, Dcirovic, Elitedarklord dragonslyer 3.14159, AsceticRose, JSquish,
AlexBG72, PBS-AWB, White Trillium, Harddk, Checkingfax, Angelsages, NickJRocks95, F, Josve05a, Stanford96, MithrandirAgain,
Imperial Monarch, 1howardsr1, Plotfeat, User10 5, Brazmyth, Raggot, Alvindclopez, Dalma112211221122, Wayne Slam, Tolly4bolly,
EricWesBrown, Mattedia, Jacksccsi, L Kensington, Qmtead, Chrisman62, Lemony123, Final00123, Maschen, Donner60, HCPotter, Sci-
entic29, Notolder, Pat walls1, ChuispastonBot, Roberts Ken, RockMagnetist, TYelliot, Llightex, DJDunsie, DASHBotAV, The beings,
Whoop whoop pull up, Isocli, ClueBot NG, KagakuKyouju, Professormeowington, CocuBot, MelbourneStar, This lousy T-shirt, Satellizer,
MC ShAdYzOnE, Baseball Watcher, Sabri Al-Sa, Arespectablecitizen, Jj1236, Braincricket, ScottSteiner, Wikishotaro, Widr, Machdeep,
21.9. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 153

Ciro.santilli, Mikeiysnake, Dorje108, Anupmehra, Theopolisme, MerlIwBot, BlooddRose, Helpful Pixie Bot, Novusuna, Olaniyob, Billy-
bobjow, Leo3232, Elochai26, Jubobro, Ieditpagesincorrectly, Bibcode Bot, Psaup09, Lowercase sigmabot, Saurabhagat, BG19bot,
Physics1717171, Brannan.brouse, ThisLaughingGuyRightHere, Happyboy2011, Hashem sfarim, The Mark of the Beast, Northamer-
ica1000, Declan12321, Cyberpower678, BobTheBuilder1997, Yowhatsupdude, Metricopolus, Solomon7968, Mark Arsten, Bigsean0300,
Chander, Guythundar, Joydeep, Trevayne08, Roopydoop55, Aranea Mortem, Jamessweeen, F=q(E+v^B), Vagigi, DARIO SEVERI,
Snow Blizzard, Hipsupful, Laye Mehta, Glacialfox, Winston Trechane, In11Chaudri, Achowat, Bfong2828, PinkShinyRose, Tm14, Lieu-
tenant of Melkor, Penguinstorm300, Pkj61, Williamxu26, Jnracv, Samwalton9, Lbkt, Kisokj, Bakkedal, Cyberbot II, StopTheCrackpots,
Layzeeboi, Callum Inglis, Davidwhite18, Macven, Khazar2, Adwaele, Gdrg22, BuzyBody, BrightStarSky, Dexbot, Webclient101, Autis-
ticCatnip, Garuda0001, William.winkworth, Belief action, Harrycol123, Saehry, Matthewrobertolson, Jamesx12345, Josophie, Brirush,
Athomeinkobe, Thepalerider2012, JustAMuggle, Reatlas, Joeinwiki, Mmcev106, Darvii, Loganfalco, Everymorning, Jakec, Rod Pierce,
Backendgaming, DavidLeighEllis, Geometriccentaur, Rauledc, Eapbar, Ryomaiinsai12345, Pr.malek, LieutenantLatvia, Quadratic for-
mula, Desswarrior, Ray brock, The Herald, Shawny J, DrYusuf786, Bubblynoah, JWNoctis, Asherkirschbaum, W. P. Uzer, Cfunkera,
SJ Defender, Melquiades Babilonia, Bojo1498, Atticus Finch28, Dfranz1012, PhuongAlex, JaconaFrere, 15petedc, Adamtt9, Aspaas-
Bekkelund, QuantumMatt101, Htp0020, Derenek, Russainbiaed na, Internucleotide, Emmaellix, Renegade469, Nikrulz07, HiYahhFriend,
Johntrollston1233, BethNaught, HolLak456, Castielsbloodyface, Trackteur, Black789Green456, Kinetic37, Theskru, DaleReese1962,
Zazzi01, Garfield Garfield, Potatomuncher2000, 3primetime3, 420noscopekills, HMSLavender, The Original Bob, EvilLair, 427454LSX,
ChamithN, SA 13 Bro, Suman Chatterjee DHEP, HelloFriendsOfPlanetEarth, Y-S.Ko, Zppix, Audiorew, Trentln1852, CheeseButtery,
Blackbeast75, Justdausualtf, Whijus19, Govindaharihari, Rubbish computer, Dubsir, Iazyges, Lanzdsey, Tetra quark, Isambard King-
dom, Rohin2002, Bloodorange1234, Harsh mahesheka, Skipfortyfour, Username12345678901011121314151617181920, Camisboss5,
WebdriverHead, SamuelFey666, Cnbr15, Amccann421, Jerodlycett, KasparBot, MintyTurtle01, Peter Richard Obama, Conana007, An-
archyte, Sweepy, Pengyulong7, TheDoctor07, Tropicalkitty, Shinobabu20081996, C.Gesualdo, Jpskycak, Murph9000, CAPTAIN RAJU,
Babymissfortune, TerraCodes, Javathunderman, MB, Kschmit90, Seventhorbitday, GamersUnite, 420BlazeItPhaggot, Jejef, Ryan-
merl8, Matthewadinatajapati, Fthatshiit, Thierry Dugnolle, Zackwright07, InternetArchiveBot, RedExplosiveswiki, Urmomisdumb69, Jon-
ahSpars, KGirlTrucker81, GreenC bot, Eisengetribe13, PlayGatered, ThePlatypusofDoom, Eep03, Fmadd, RunnyAmiga, PANDA12346,
Jmcgnh, Konic004, Mindopener420, Imsarvesh18, Ecnomercy, Bender the Bot, Ebizo5209, Mramoeba, Joeypeeps, Zane Kata, Pot-
holet, Je shcin, Cyrus noto3at bulaga, MysterKitty, Dajonsmmns, Moosealinee, T.J.M, Friendshipofdeath25, VeritasLaureate, Ni-
hal7799622906, Asgbe, KehrerK, WP 3456, Suhasbhokare, Cosmicpixelgames, Bennv3771, Jake.hoer, Omi patil, , Pratik tan-
gade, Amk7313, Eschuelk000, Wadneyare2017, DrHelenK, Bigboy42, A010101, Michpod, Quatumdoodoo, All Knowing Time Demon,
Mackestu55, SALUTATIONEZZZ, Ghori mazail, CybFox and Anonymous: 1857
Markov chain Source: https://en.wikipedia.org/wiki/Markov_chain?oldid=768623230 Contributors: AxelBoldt, The Anome, Ed Poor,
Deb, Olivier, Drseudo, Jdlh, Michael Hardy, Dan Koehl, Kku, Tomi, Chadloder, Tregoweth, Ejrh, Samuelsen, Angela, Andres, Bjcairns,
A5, Charles Matthews, Timwi, Dysprosia, Furrykef, Finlay McWalter, Aliekens, Phil Boswell, Merriam~enwiki, LaurentPerrinet, Hadal,
Qlmatrix, Ruakh, Seth Ilys, Wile E. Heresiarch, Magic Window, Tea2min, Centrx, Giftlite, Gwalla, Christopher Parham, DavidCary,
Telemakh0s, BenFrantzDale, Soundray~enwiki, Dratman, Mellum, Leonard G., Elmindreda, Dj Vu, Neilc, Anthonywong, Andycjp,
Altarego, Gzuckier, Karol Langner, Alex Cohn, Cihan, K-squire, Ehudshapira, Urhixidur, Shiftchange, Luqui, Ericamick, Xezbeth, Ben-
der235, Kjoonlee, MisterSheik, El C, Shanes, SgtThroat, O18, C S, Jumbuck, Storm Rider, Msh210, Denis.arnaud, Gintautasm, Arthena,
Neonumbers, Sligocki, Julovi, Mbloore, Danhash, Jheald, RainbowOfLight, Boyd Steere, Rotring, Kenyon, Oleg Alexandrov, Marasmu-
sine, Linas, Mindmatrix, LOL, Aaron McDaid, OdedSchramm, Wikiklrsc, Isnow, Joke137, Stoni, Rjwilmsi, Salix alba, Miserlou, Cww,
Klortho, GnniX, JYOuyang, Chobot, WriterHound, Timboe, YurikBot, Wavelength, Grubber, Stunetii, Archelon, Gaius Cornelius, Pseu-
domonas, Thane, Gareth Jones, Ino5hiro, Schmock, Tachs, Jmchen, Hirak 99, Mythobeast, Lt-wiki-bot, Cedar101, Digfarenough, Tyre-
nius, DmitriyV, Shepard, Rdbrady, MartinGugino, SmackBot, RDBury, FocalPoint, Tom Lougheed, Karl Stroetmann, Shabda, Frymaster,
Vvarkey, Chris the speller, MK8, Indy90~enwiki, Andreyf, Roscelese, Droll, DoctorW, Thomas Bliem, DHN-bot~enwiki, Jdthood, Svein
Olav Nyberg, Zoonfafer, TheKMan, LouScheer, Ddon, Dreadstar, Hl, Eggstone, Jbergquist, Henning Makholm, Andrei Stroe, Argle-
bargleIV, Wvbailey, Loodog, Khim1, Jim.belk, Nijdam, JHunterJ, Slakr, Doczilla, Drae, MTSbot~enwiki, Negrulio, Yoderj, Iridescent,
JMK, W0le, Eassin, Matthew Meta, Tawkerbot2, Ylloh, Trevor.tombe, CmdrObot, Shorespirit, Nicolaennio, DavidFHoughton, Reques-
tion, Ezrakilty, Cosy, Katya0133, Myasuda, ChrisKennedy, Peterdjones, Skittleys, Josephorourke, Fij, Quibik, HitroMilanese, Mikewax,
Zalgo, Grubbiv, Epbr123, Headbomb, Rkrish67, Mikeeg555, Nick Number, Urdutext, Mmortal03, LachlanA, Gioto, Quintote, Lovibond,
Mack2, Ncauthor, Alhenawy, JAnDbot, Harish victory, CosineKitty, BrotherE, F.Shelley, Coffee2theorems, Magioladitis, Leandro79,
Brusegadi, Bubba hotep, SSZ, A3nm, David Eppstein, JaGa, Eeera, Ekotkie, Seanpor, Michael.Clerx, Sarma.bhs, Gaidheal1, Steve98052,
Faridani, Tarotcards, Quantling, JonMcLoone, Policron, Doctahdrey, DavidCBryant, HyDeckar, DorganBot, SirHolo, TWiStErRob,
Yayavar, Mistercupcake, VolkovBot, JohnBlackburne, LokiClock, Maghnus, TXiKiBoT, Egkauston, Someguy1221, UnitedStatesian,
Ankitpatel715, Greg searle, Falcon8765, Forwardmeasure, Paulthree, Canavalia, Fcady2007, Maxlittle2007, SieBot, Ph0t0phobic, IradBG,
Maurizio.polito, BusError, Lightmouse, Fratrep, OKBot, AlanUS, Randomblue, Melcombe, SanderEvers, ClueBot, Rks22, Snigbrook, Bo-
bathon71, MATThematical, Jim 14159, Boing! said Zebedee, Idempotent, Nanmus, Ecov, Bender2k14, Spmeyn, Aurelius173, Terra Xin,
Stypex, SchreiberBike, Citrus Lover, Aardvark23a, 1ForTheMoney, Siniestra, Qwfp, Rmerks, XLinkBot, Efexan~enwiki, Autotomic,
Shepelyansky, Jeferman, Voice In The Wilderness, Tayste, Addbot, DOI bot, Cuaxdon, MrOllie, Download, Xicer9, Dyaa, OlEnglish, Jar-
ble, Greyhood, Lnummeli, Luckas-bot, Yobot, Ht686rg90, Ptbotgourou, Ljaun, Nallimbot, Bryanjohnston, AnomieBOT, ModernMajor,
Rubinbot, AdjustShift, Materialscientist, Citation bot, ArthurBot, Xqbot, Dithridge, Jtsch, Drilnoth, Jergling, NOrbeck, GrouchoBot, Und-
soweiter, Eed3si9n, FrescoBot, Olexa Riznyk, Gausseliminering, Syngola, Planetmarshall, Citation bot 1, MorganGreen, Greender, I dream
of horses, Kiefer.Wolfowitz, Bpiwowar, Stpasha, RedBot, Izmirlig, So nazzer its pav, Vaahteramen Eemeli, Icfaldu, Kokoklems, Ptj tsub-
asa, RC Howe, Begoon, Duoduoduo, Yunesj, RjwilmsiBot, Xphileprof, PeterWT, Dewritech, Wikipelli, Dcirovic, Julienbarlan, Kazantsev,
Quondum, L Kensington, Sigma0 1, 123forman, ClueBot NG, BarrelProof, Makrai, JXtra, Mpaa, Jj1236, Kdcoo, Theinactivist, Capodis-
tria, Helpful Pixie Bot, Ricardohz, Wbm1058, BG19bot, Andytango, Jafusmaximus, Nunoxic, Samoon, Guanhomer, Austinprince, Pro-
tein Chemist, CitationCleanerBot, Mocahante, Kritboy, Manoguru, M.berlinkov, BillBucket, Lucyinthesky45, Sofia karampataki, Vu2swx,
Pratyya Ghosh, Cyberbot II, Ideafarmcity, Ethlew, Dexbot, Mogism, Imdadasad, Numbermaniac, Sixstring91, Y256, RMCD bot, Ma-
gusapollo, Limit-theorem, Cosmicraga, 7804j, Nbrader, PierreYvesLouis, TreeFell, Uociucamin, Pokechu22, Myconix, Iancj88, Sangdon
Lee, Eugenio Lpez Cortegano, Elenktik, Geitonaki, Sergio Yaksic, Quenhitran, Improbable keeler, Ambrosia0, Ixthings 69, Monkbot,
Sherloco, Yikkayaya, Abacenis, Technetium-99, JiyiZhou, Jh4mit, Firstinshow, Rjason1991, Sourabhghurye, Evermore345, KasparBot,
Ikkohm, Baking Soda, InternetArchiveBot, Latex-yow, Arif212, GreenC bot, Fmadd, Bender the Bot, Raghummr, Moritz Kohls, Sir Rin,
Aleekaror, Lkulakova, Edthat2, Magic links bot, Deniskelleher, Jemonggg and Anonymous: 466
Density matrix Source: https://en.wikipedia.org/wiki/Density_matrix?oldid=769112974 Contributors: AxelBoldt, The Anome, Tarquin,
FlorianMarquardt, Modemac, Michael Hardy, Delirium, Looxix~enwiki, Popas11, AugPi, Phys, Shahard~enwiki, Bkell, Giftlite, Lock-
eownzj00, CSTAR, Creidieki, Frau Holle, Chris Howard, V79, Floorsheim, Phobos~enwiki, Chtito, Oleg Alexandrov, Isnow, BD2412,
154 CHAPTER 21. SYMMETRIC MATRIX

Nanite, Ketiltrout, Rjwilmsi, J S Lundeen, Chobot, Tone, YurikBot, Ugha, Wavelength, Archelon, Aaron Brenneman, Oakwood, Ne-
trapt, Sbyrnes321, SmackBot, Betacommand, Chris the speller, Colonies Chris, Vina-iwbot~enwiki, Mets501, Andreas Rejbrand, Heqs,
Mct mht, Michael C Price, Thijs!bot, Headbomb, Marek69, Gnixon, Shlomi Hillel, B7582, Damonturney, Uisqebaugh, RogierBrussee,
Jakob.scholbach, MetsBot, GIrving, Victor Blacus, Marek.zukowski, Cuzkatzimhut, VolkovBot, LokiClock, Saibod, Ocsenave, YohanN7,
SieBot, Randy Kryn, DFRussia, Brews ohare, SchreiberBike, DumZiBoT, Hess88, Addbot, Dr. Universe, Legobot, Luckas-bot, Yobot,
Niout, Li3939108, Unara, Citation bot, Zeroimpl, LouriePieterse, GrouchoBot, Pradameinho, , FrescoBot, Caridin, Citation bot 1,
MastiBot, Aadagger, Jordgette, Havresylt, Mauriachigar, ZroBot, Bamyers99, AManWithNoPlan, Zueignung, RockMagnetist, Chester
Markel, Physics is all gnomes, Zak.estrada, Helpful Pixie Bot, Bibcode Bot, JpMarat, SrijitaK, Pracec, Mathphysman, Monkbot, Oisguad,
Luis Goslin, SSA7471, Integrvl, EntropicPrinciple, Zachb97 and Anonymous: 73
Matrix (mathematics) Source: https://en.wikipedia.org/wiki/Matrix_(mathematics)?oldid=769419035 Contributors: AxelBoldt, Tarquin,
Tbackstr, Hajhouse, XJaM, Ramin Nakisa, Stevertigo, Patrick, Michael Hardy, Wshun, Cole Kitchen, SGBailey, Chinju, Zeno Gantner,
Dcljr, Ejrh, Looxix~enwiki, Muriel Gottrop~enwiki, Angela, , Poor Yorick, Rmilson, Andres, Schneelocke, Charles Matthews,
Dysprosia, Jitse Niesen, Lou Sander, Dtgm, Bevo, J D, Francs2000, Robbot, Mazin07, Sander123, Chrism, Fredrik, R3m0t, Gandalf61,
MathMartin, Sverdrup, Rasmus Faber, Bkell, Paul Murray, Neckro, HaeB, Tea2min, Tosha, Giftlite, Jao, Arved, BenFrantzDale, Neto-
holic, Herbee, Dissident, Dratman, Michael Devore, Waltpohl, Duncharris, Macrakis, Utcursch, Alexf, Antandrus, MarkSweep, Profvk,
Wiml, Urhixidur, Sam nead, Azuredu, Barnaby dawson, Porges, PhotoBox, Shahab, Rich Farmbrough, FiP, ArnoldReinhold, Pavel Voze-
nilek, Paul August, Bender235, ZeroOne, El C, Rgdboer, JRM, NetBot, The strategy freak, La goutte de pluie, Obradovic Goran, Mdd,
Tsirel, LutzL, Landroni, Jumbuck, Jigen III, Alansohn, ABCD, Fritzpoll, Wanderingstan, Mlm42, Jheald, Simone, RJFJR, Dirac1933,
AN(Ger), Adrian.benko, Oleg Alexandrov, Nessalc, Woohookitty, Igny, LOL, Webdinger, David Haslam, UbiquitousUK, Username314,
Tabletop, Waldir, Prashanthns, Mandarax, Qwertyus, SixWingedSeraph, Grammarbot, Porcher, Sjakkalle, Koavf, Salix alba, Joti~enwiki,
Watcharakorn, SchuminWeb, Old Moonraker, RexNL, Jrtayloriv, Krun, Fresheneesz, Srleer, Vonkje, Masnevets, NevilleDNZ, Chobot,
Krishnavedala, Karch, DVdm, Bgwhite, YurikBot, Wavelength, Borgx, RussBot, Michael Slone, Bhny, NawlinWiki, Rick Norwood,
Jfheche, 48v, Bayle Shanks, Jimmyre, Misza13, Samuel Huang, Merosonox, DeadEyeArrow, Bota47, Glich, Szhaider, Ms2ger, Jezz-
abr, Leptictidium, Mythobeast, Spondoolicks, Alasdair, Lunch, Sardanaphalus, SmackBot, RDBury, CyclePat, KocjoBot~enwiki, Jagged
85, GoonerW, Minglai, Scott Paeth, Gilliam, Skizzik, Saros136, Chris the speller, Optikos, Bduke, Silly rabbit, DHN-bot~enwiki, Colonies
Chris, Darth Panda, Scwlong, Foxjwill, Can't sleep, clown will eat me, Smallbones, KaiserbBot, Rrburke, Mhym, SundarBot, Jon Aw-
brey, Tesseran, Aghitza, The undertow, Lambiam, Wvbailey, Attys, Nat2, Cronholm144, Terry Bollinger, Nijdam, Aleenf1, IronGar-
goyle, Jacobdyer, WhiteHatLurker, Beetstra, Kaarebrandt, Mets501, Neddyseagoon, Dr.K., P199, MTSbot~enwiki, Quaeler, Rschwieb,
Levineps, JMK, Tawkerbot2, Dlohcierekim, DKqwerty, Dan1679, Propower, CRGreathouse, CBM, JohnCD, INVERTED, SelfStudy-
Buddy, HalJor, MC10, Pascal.Tesson, Bkgoodman, Alucard (Dr.), Juansempere, Codetiger, Bellayet, , Epbr123, Paragon12321,
Markus Pössel, Aeriform, Gamer007, Headbomb, Marek69, RobHar, Urdutext, AntiVandalBot, Lself, Jj137, Hermel, Oatmealcook-
iemon, Dhrm77, JAnDbot, Fullverse, MER-C, The Transhumanist, Yanngerotin~enwiki, Bennybp, VoABot II, Fusionmix, T@nn, JNW,
Jakob.scholbach, Rivertorch, EagleFan, JJ Harrison, Sullivan.t.j, David Eppstein, User A1, ANONYMOUS COWARD0xC0DE, JoergenB,
Philg88, Nevit, Hbent, Gjd001, Doccolinni, Yodalee327, R'n'B, Alfred Legrand, J.delanoy, Rlsheehan, Maurice Carbonaro, Richard777,
Wayp123, Toghrul Talibzadeh, Aqwis, It Is Me Here, Cole the ninja, TomyDuby, Peskydan, AntiSpamBot, JonMcLoone, Policron, Doug4,
Fylwind, Kevinecahill, Ben R. Thomas, CardinalDan, OktayD, Egghead06, X!, Malik Shabazz, UnicornTapestry, Shiggity, VolkovBot,
Dark123, JohnBlackburne, LokiClock, VasilievVV, DoorsAjar, TXiKiBoT, Hlevkin, Rei-bot, Anonymous Dissident, D23042304, PaulTa-
nenbaum, LeaveSleaves, BigDunc, Wolfrock, Surajx, Wdrev, Brianga, Dmcq, KjellG, AlleborgoBot, Symane, Anoko moonlight, W4chris,
Typoer, Neparis, T-9000, D. Recorder, ChrisMiddleton, GirasoleDE, Dogah, SieBot, Ivan tambuk, Bachcell, Gerakibot, Cwkmail,
Yintan, Radon210, Elcobbola, Blueclaw, Paolo.dL, Oxymoron83, Ddxc, Oculi, Manway, AlanUS, Anchor Link Bot, Rinconsoleao, Denis-
arona, Canglesea, Myrvin, DEMcAdams, ClueBot, Sural, Wpoely86, Remag Kee, SuperHamster, LizardJr8, Masterpiece2000, Excirial,
Da rulz07, Bender2k14, Ftbhrygvn, Muhandes, Brews ohare, Tyler, Livius3, Jotterbot, Hans Adler, Manco Capac, MiraiWarren, Qwfp,
Johnuniq, TimothyRias, Lakeworks, XLinkBot, Marc van Leeuwen, Rror, AndreNatas, Jaan Vajakas, Porphyro, Stephen Poppitt, Addbot,
Proofreader77, Deepmath, RPHv, Steve.jaramillov~enwiki, WardenWalk, Jccwiki, CactusWriter, Mohamed Magdy, MrOllie, Tide rolls,
Gail, Jarble, CountryBot, LuK3, Luckas-bot, Yobot, Senator Palpatine, QueenCake, TestEditBot, AnomieBOT, Autarkaw, Gazzawi, Ar-
chon 2488, IDangerMouse, MattTait, Kingpin13, Materialscientist, Citation bot, Wrelwser43, LilHelpa, FactSpewer, Xqbot, Capricorn42,
Drilnoth, HHahn, El Caro, BrainFRZ, J04n, Nickmn, RibotBOT, Cerniagigante, Smallman12q, WaysToEscape, Much noise, LucienBOT,
Tobby72, VS6507, Recognizance, Sławomir Biały, Izzedine, IT2000, HJ Mitchell, Sae1962, Jamesooders, Cafreen, Citation bot 1, Swords-
mankirby, I dream of horses, Kiefer.Wolfowitz, MarcelB612, NoFlyingCars, RedBot, RobinK, Kallikanzarid, Jordgette, ItsZippy, Vairoj,
Rentzepopoulos, SeoMac, MathInclined, The last username left was taken, Earthandmoon, Birat lamichhane, Katovatzschyn, Soupjvc,
Sfbaldbear, Salvio giuliano, Cowpig, Mandolinface, EmausBot, Lkh2099, Nurath224, Primefac, DesmondSteppe, RIS cody, Slawekb,
Gclink, Quondum, Chocochipmun, Jadzia2341, U+003F, Rcorcs, , Maschen, Babababoshka, Adjointh, Donner60,
Pun, JFB80, Anita5192, Petrb, Mikhail Ryazanov, ClueBot NG, Wcherowi, Michael P. Barnett, Rtucker913, Satellizer, Rank Penguin,
Tyrantbrian, Frietjes, Dsperlich, Helpful Pixie Bot, Rxnt, Christian Matt, MarcoPotok, BG19bot, Wiki13, Muscularmussel, MusikAni-
mal, JMtB03, Brad7777, Ren Vpenk, Sofia karampataki, BattyBot, Freesodas, IkamusumeFan, Lucaspentzlp, OwenGage, Enterprisey,
Dexbot, Mark L MacDonald, Numbermaniac, Frosty, Gordino110, JustAMuggle, Reatlas, Acetotyce, Debouch, Wamiq, Ugog Nizdast,
Zenibus, SwimmerOfAwesome, Jianhui67, OrthogonalFrog, Albert Maosa, Airwoz, Derpghvdyj, Mezafo, Botha42, CarnivorousBunny,
Xxhihi, Fzzle, Sordin, Username89911998, Gronk Oz, Hidrolandense, Ansathas, Kellywacko, Frost.derec, Norbornene, Solid Frog, Lo-
raof, Cleaner 1, JArnold99, Anson Law Sum Kiu, Mutantoe, Kavya l, Graboy, Minima2014, Mikeloud, H.dryad, Yrtnasd, Skpandey12,
Kavyalat9, Fmadd, Pictomania, Chuckwoodjohn, Nickfury95, Tompop888, CarlosGonz27 and Anonymous: 659
Eigenvalues and eigenvectors Source: https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors?oldid=768140060 Contributors: Tar-
quin, Gareth Owen, Tomo, Stevertigo, Patrick, Michael Hardy, Chris-martin, Cyde, Delirium, Stevenj, Dcoetzee, Dysprosia, Jitse Niesen,
Patrick0Moran, Bevo, McKay, Spikey, Shizhao, Nickshanks, Aliekens, Robbot, Josh Cherry, Moriori, Schutz, Gak, Lowellian, Gandalf61,
Timrollpickering, Sunray, Dmn, Tea2min, Adam78, Denwid, Ancheta Wis, Connelly, Giftlite, Nichalp, Haeleth, BenFrantzDale, Drat-
man, Kmote, Bovlb, Zinnmann, Jason Quinn, Jorge Stol, LucasVB, DragonySixtyseven, Pmanderson, Almit39, Edsanville, Fintor, Mike
Rosoft, Natrij, Lone Isle, Iainscott, ObsessiveMathsFreak, Rspeer, Mattrix, ZeroOne, Zaslav, Kjoonlee, Kimbly, Pedant, Eric Forste,
Brian0918, Pt, Rgdboer, Circeus, Billymac00, Kappa, Giraedata, Scott Ritchie, Blotwell, Rajah, Deryck Chan, Crust, Jakew, Varuna,
Landroni, Thornn~enwiki, Jrme, Alansohn, Jhertel, Kanie, Keenan Pepper, Katefan0, LordViD, Jefromi, Jheald, RJFJR, Reaverdrop,
Itsmine, Forderud, Oleg Alexandrov, Soultaco, Woohookitty, Shreevatsa, Igny, LOL, Dmazin, StradivariusTV, Guardian of Light, BillC,
Ruud Koot, HcorEric X, Jok2000, Tabletop, Male1979, Plrk, Waldir, Agthorr, Wayward, Jbarta, Marudubshinki, Magister Mathemati-
cae, BD2412, Abstracte, FreplySpang, Yurik, Rjwilmsi, Tyraios, NatusRoma, MarSch, Salix alba, Somesh, HappyCamper, Bdegfcunbbfv,
Boris Alexeev, Titoxd, Zylinder~enwiki, Mathbot, JYOuyang, TheDJ, Saketh, Fresheneesz, TeaDrinker, Tatpong~enwiki, Kri, Chobot,
Chrisbaird.ma, Bgwhite, Metaeducation, JPD, Rmbyoung, YurikBot, Wavelength, Mushin, Hairy Dude, Dmharvey, Michael Slone, Con-
scious, Hede2000, JabberWok, Markus Schmaus, Dotancohen, Grubber, Gaius Cornelius, NawlinWiki, Rick Norwood, Edinborgarste-
fan, Buster79, TVilkesalo~enwiki, Yahya Abdal-Aziz, Taco325i, Joelr31, Cleared as led, Capagot, Orizon, Dhollm, Vb, TDogg310,
DanBri, Crasshopper, Chichui, Ms2ger, Deeday-UK, NormDor, Bobguy7, JahJah, Kier07, Stevelinton, Adpadu, Kungfuadam, Phyrexi-
caid, Mebden, Williampoetra, GunnerJr, Amberrock, Finell, Luk, Sardanaphalus, SmackBot, Nihonjoe, Incnis Mrsi, TheRealInsomnius,
Bjelleklang, Jtwdog, InvictaHOG, Eskimbot, Oxygene123, Shai-kun, JeeAlex, Commander Keane bot, Gilliam, Jcarroll, Hitman012,
Davigoli, AhmedHan, PabloE, Oli Filth, Silly rabbit, Complexica, Adpete, Nbarth, DHN-bot~enwiki, Colonies Chris, Hongooi, Adam-
Smithee, Javalenok, Aacool, Ntjohn, Kjetil1001, RProgrammer, Riteshsood, LkNsngth, Napalm Llama, Trifon Triantallidis, Michael-
Billington, Lacatosias, Yoshigev, Gbnogkfs~enwiki, Cotterr2, J. Finkelstein, Moala, Severoon, Sebastian Klein, WhiteHatLurker, Hiiiiiiiii-
iiiiiiiiiiii, Hetar, Simon12, Iridescent, JMK, Tawkerbot2, Tiny green, Vaughan Pratt, Ahmad.tachyon, CRGreathouse, Laplacian, A civilian,
Mcstrother, Szabolcs Nagy, DavidFHoughton, Tac-Tics, Myasuda, Safalra, Mct mht, Cydebot, Benzi455, Skittleys, Christian75, Editor at
Large, Xantharius, Repliedthemockturtle, Rbanzai, Thijs!bot, Wikid77, Headbomb, Rlupsa, Marek69, Gimbeline, Nick Number, Urdu-
text, Luna Santin, Ste4k, Danger, LaQuilla, Etr52, Daytona2, Erxnmedia, JAnDbot, Dr. Nobody, Wootery, Sanchom, Je560, Thenub314,
VoABot II, Albmont, Jakob.scholbach, Rich257, Baccyak4H, Crunchy Numbers, Not a dog, MetsBot, Dirac66, David Eppstein, User
A1, ANONYMOUS COWARD0xC0DE, JaGa, GermanX, Ekwity, Dwwaddell, Sameerkale, Dima373, Bissinger, R'n'B, Johnpacklam-
bert, Leyo, Maurice Carbonaro, Yonidebot, Wayp123, Smite-Meister, Lantonov, TomyDuby, Bknittel, Jayden54, Haseldon, Veganaxos,
Lzyvzl, Policron, 83d40m, STBotD, DorganBot, Maziar.irani, PatriciaJH, H1voltage, Izno, Idioma-bot, Error9312, Larryisgood, The Duke
of Waltham, JohnBlackburne, LokiClock, Butwhatdoiknow, Hydrogravity, Gdorner, Oshwah, Like.liberation, Red Act, Rei-bot, Josp-
mathilde, Dependent Variable, Ttennebkram, Reddevyl, Zapurva, LBehounek, TerraNova whatcanidotomakethisnottoosimilartosomeoth-
ername, Aarongeller, Ajto8, Anoko moonlight, StevenJohnston, SieBot, Philgoetz, Ichakrab, Cwkmail, Flyer22 Reborn, Paolo.dL, Moon-
raker12, M4ry73, AlanUS, Anchor Link Bot, Tesi1700, Randomblue, Lalahuma, Curtdbz, Kanonkas, Madanor, ClueBot, Justin W Smith,
Plastikspork, JuPitEer, Mild Bill Hiccup, Ellliottt, TypoBoy, Timhoooey, Rchandan, Bender2k14, Muhandes, Jakarr, Arjayay, Blaner-
hoads, Kausikghatak, Urtis~enwiki, AlexCornejo, Martyulrich, Acabashi, Rubybrian, Alousybum, Crowsnest, Humanengr, Bochev, DumZ-
iBoT, One-eyed pirate, RexxS, Gtbanta, XLinkBot, Forbes72, Marc van Leeuwen, Jamesjhs, Porejide, D1ma5ad, Shishir0610, Luolimao,
MystBot, Gelsus~enwiki, Stephen Poppitt, Addbot, Fgnievinski, Protonk, EconoPhysicist, FiriBot, Favonian, OlEnglish, Zorrobot, TeH
nOmInAtOr, Wireless friend, Kevinj04, , Yobot, Timeroot, Amirobot, CinchBug, Doleszki, AnomieBOT, Piano non troppo, Cita-
tion bot, Xelnx, Xqbot, Restu20, Raamaiden, Srich32977, The suocated, GrouchoBot, RibotBOT, Alainr345, NinjaDreams, FrescoBot,
Fortdj33, Gwideman, Mxipp, Boyzindahoos, Sławomir Biały, NewEconomist, Tharthan, Tkuvho, I dream of horses, Kiefer.Wolfowitz, Cos-
mikz, MarcelB612, Hunter.moseley, Rohitphy, Foobarnix, MedicineMan555, Gunderburg, Pushkar3, Matthewmoncek, Ruslan Sharipov,
Dinamik-bot, David Binner, Skakkle, Xnn, Kedwar5, TjBot, 123Mike456Winston789, Mandolinface, EmausBot, WikitanvirBot, Joseph-
Catrambone, KHamsun, Dcirovic, Slawekb, Thecheesykid, Ouzel Ring, Chire, A930913, Quondum, AManWithNoPlan, Ms2756, Frank-
Flanagan, Anita5192, Mikhail Ryazanov, ClueBot NG, Saburr, Dfsisson, Jj1236, Frietjes, TreyGreer62, 336, Widr, Helpful Pixie Bot,
Rxnt, Curb Chain, Bibcode Bot, BG19bot, Prof McCarthy, JinPan, Krucraft, GlaedrH, Hankel operator, Rushiagr, Trombonechamp,
Zedshort, JMtB03, Brad7777, Winston Trechane, Thegreatgrabber, Kalmiopsiskid, DarafshBot, Sschongster, ChrisGualtieri, Dexbot, Jur-
genNL, Mark L MacDonald, Colonel angel, Mogism, JeAEdmonds, Amgtech, Frizzil, Jonex, Indronil Ghosh, Manixer, MartCMdeJong,
Tator2, Abottelli, EFZR090440, Popa910, DavidLeighEllis, Qiangshiweiguan, Schismata, Nigellwh, Aupi, Sherif helmy, Xdever, Lepfer-
nandes, , Pscrape, Ekman, Menthoaltum, , CanuckMonkey, GregBrimble, Monkbot, Trackteur, Ndv79, Verdana
Bold, Pratapraj11pawar, IchiroSuzuki51, Purgy Purgatorio, Loraof, Sudaharan, Carlostp12, User0x539, Wf6humboldt, VexorAbVikip-
dia, Monchisan, Csa slb, Maja123321, Denofs8, Zyvov, Yangyang0117, Sudoer41, Juz bcoz im prem, Dataclysm, Sjarrad, Fmadd, TheRe-
alPascalRascal, L8 ManeValidus, PattyLocke and Anonymous: 568
Positive-definite matrix Source: https://en.wikipedia.org/wiki/Positive-definite_matrix?oldid=768525835 Contributors: AxelBoldt,
Shd~enwiki, Torfason, Michael Hardy, Wshun, Dan Koehl, Cyp, Stevenj, Charles Matthews, Dcoetzee, Jitse Niesen, Phys, Josh Cherry,
MathMartin, Elusus, Tea2min, Giftlite, Fropu, Jorge Stol, TedPavlic, Mattrix, Bender235, Floorsheim, Pt, El C, Erik456, O18, Hespe-
rian, Blahma, PAR, Sean3000, Cburnett, Jheald, Forderud, Simetrical, Eclecticos, Btyner, Qwertyus, Sj, Strait, Kevmitch, FlaBot, Don
Gosiewski, Sodin, Chobot, Algebraist, YurikBot, Wavelength, Syth, Bruguiea, Crasshopper, Eli Osherovich, Lunch, SmackBot, Maksim-
e~enwiki, Eskimbot, Cabe6403, Njerseyguy, Drewnoakes, Nbarth, Svein Olav Nyberg, Kjetil1001, Lambiam, Tim bates, Breno, Lilily,
CRGreathouse, Myasuda, Mct mht, Thijs!bot, Lfscheidegger, LachlanA, Ben pcc, JAnDbot, MER-C, Wootery, Stangaa, Magioladitis,
JamesBWatson, Cyktsui, MetsBot, Americanhero, Tercer, Mythealias, Leyo, Maurice Carbonaro, Nathanshao, Policron, Ratfox, DavidIM-
cIntosh, Tomtheebomb, NathanHagen, PaulTanenbaum, Kkilger, Philmac, Daviddoria, AlleborgoBot, Hsbhat, PeterBFZ, Andrs Cataln,
Yahastu, Sharov, Skippydo, DesolateReality, Yoda of Borg, Jdgilbey, Wcy~enwiki, Bender2k14, Muhandes, Bluemaster, Qwfp, Dinge-
nis, Gjnaasaa, Job Inkop~enwiki, Tayste, Addbot, Cst17, Dr. Universe, PV=nRT, Luckas-bot, Yobot, Nghtwlkr, Nimrody, Legendre17,
AnomieBOT, Joule36e5, Materialscientist, ArthurBot, Bdmy, Airalcorn2, Raamaiden, Mardebikas, Pupdike, Sławomir Biały, Vineethku-
ruvath, Haein45, Avarela1965, MastiBot, Dzlot, Dividingbyzerofordummies, Ashis.csedu, Toolnut, Pfm77, Begoon, Duoduoduo, Suu-
sion of Yellow, Xnn, Wyverald, EmausBot, Wisapi, GoingBatty, Felix Homann, Chaohuang, Osociety, Jadzia2341, Wayne Slam, Zaran,
Joao Meidanis, 1, Maschen, Akseli.palen, ClueBot NG, Est nomis, Fioravante Patrone, Joel B. Lewis, Helpful Pixie Bot, Lubdone,
BG19bot, Dvomedo, Solomon7968, Intervallic, Manoguru, Msinvent, ChrisGualtieri, YFdyh-bot, Preston Kemeny, Davidcy123, Egorlar-
ionov, Frytvm, Digiuno, Webclient101, Limit-theorem, HEKrogstad, GoplaWHya, The Disambiguator, Brownerthanu, Loraof, Srinivas
tudelft, Rangdor, Rdsk2014, H1729, Fmadd, Ben300694, Jarlesog and Anonymous: 176
Cambridge University Press Source: https://en.wikipedia.org/wiki/Cambridge_University_Press?oldid=766638248 Contributors: Vicki
Rosenzweig, Ianp, Olivier, Rbrwr, Michael Hardy, Lquilter, Ronz, Notheruser, Dimadick, Pigsonthewing, Postdlf, Geogre, Giftlite, Lupin,
Duncharris, Solipsist, Stevietheman, Alexf, BozMo, Piotrus, Icairns, Jayjg, Bluap, Dirac1933, Pcpcpc, Woohookitty, Xover, David Haslam,
Tim!, Chirags, FlaBot, Nihiltres, SouthernNights, NekoDaemon, Gdrbot, Adoniscik, The Rambling Man, Mark Ironie, Daniel Mietchen,
Jpbowen, Number 57, Nikkimaria, SMcCandlish, SmackBot, JK23, Sebesta, Chris the speller, MalafayaBot, Bazonka, Ste, Andrei Stroe,
JzG, Prof02, Simongraham, Hu12, Colonel Warden, Cydebot, Danrok, Lo2u, Malleus Fatuorum, Julia Rossi, Geniac, Magioladitis, The
Anomebot2, Matt Lewis, Kathrynbooth, Axlq, It Is Me Here, Krishnachandranvn, Robertson-Glasgow, Pjv7ex, Djr13, PacicWonder-
land, GrahamHardy, Hugo999, Nikthestunned, Deor, Shortride, Mrh30, TXiKiBoT, Guillaume2303, JhsBot, Broadbot, Tamorlan, Ir-
fan82, Noveltyghost, SE7, SimonTrew, Int21h, Svick, Calatayudboy, ClueBot, Testu, CharlieRCD, Alexbot, Adimovk5, Versus22, DumZ-
iBoT, RMFan1, Captain108, BarretB, Gerhardvalentin, Addbot, Betterusername, Fgnievinski, Numbo3-bot, Kicior99, Lightbot, Legobot,
Luckas-bot, Yobot, Amirobot, We66er, Fosterjjj, Johnmanj, AnomieBOT, Momoricks, Theovetes, Galoubet, Tmwns, XYGuy, Mud-
snakezim, Glappelle, Eric Blatant, Miesianiacal, Omnipaedista, Dfhehuii, Fianara, Aaditya 7, Dorecou, Orphdrug, McAnt, FrescoBot,
OMcSpin, Filosophy, Moonraker, MondalorBot, CLC Editorial, Lotje, Leto 78, Spdiy, Cmdcam01, RjwilmsiBot, Shhhnotsoloud, Ja-
son86~enwiki, ZroBot, Solus ipse Inc., Philafrenzy, Iketsi, Spicemix, ClueBot NG, Helpful Pixie Bot, Chilliwow, Churchway, BG19bot,
Aliwal2012, YFdyh-bot, Khazar2, Mogism, Dads Knife, Rupert loup, Randykitty, A7592, Fuzzy mongoose, Acjones49, ArcticTree, Za-
cwill, Michaelthelamb, Narky Blert, Berlidam Dergilk, Jtrrs0, KasparBot, Spyros 78, BU Rob13, Permstrump, VernonF, Nguyenthibiet,
WilliamLaud, Bender the Bot, Magic links bot and Anonymous: 79

Hamiltonian (quantum mechanics) Source: https://en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)?oldid=768735012 Contributors: AxelBoldt, CYD, AstroNomer, Toby Bartels, Mjb, David spector, Heron, Stevertigo, Michael Hardy, TakuyaMurata, Sebas-
tianHelm, Looxix~enwiki, AugPi, Rossami, Timwi, Evgeni Sergeev, Phys, Philopp, Bevo, Branddobbe, MrJones, MathMartin, Tea2min,
Giftlite, Lethe, Alison, JeBobFrank, PenguiN42, HorsePunchKid, Karol Langner, Glogger, Kevyn, Mormegil, Guanabot, Masudr, Pja-
cobi, Bender235, Aranel, Marx Gomes, Alexwg, Gene Nygaard, Oleg Alexandrov, WadeSimMiser, SeventyThree, BD2412, JVz, An-
driy.vynogradov, Dirkbike, Srleer, Kri, Chobot, Roboto de Ajvol, YurikBot, Wavelength, Archelon, Daniel Mietchen, Enormousdude,
RodVance, Dkasak, Timothyarnold85, KasugaHuang, Mhardcastle, SmackBot, David Woolley, Gilliam, Colonies Chris, Beels99, Nr-
cprm2026, Daniel.o.jenkins, Akriasas, Jaganath, UncleDouggie, Freelance Intellectual, Marbini, Mct mht, Dslc, Christian75, Wompa99,
Headbomb, Xuanji, Second Quantization, 1of3, JAnDbot, Jbom1, BruceCamber, Andrej.westermann, Maurice Carbonaro, Hugh Hudson,
Telecomtom, VolkovBot, Thurth, Italian Number 21, Starseeker shkm, Pamputt, Senemmar, Quietbritishjim, BartekChom, Randy Kryn,
EoGuy, Mild Bill Hiccup, Shaded0, Zdravljica, WikHead, SilvonenBot, Subversive.sound, Truthnlove, Addbot, Listing Port, Loupeter,
Luckas-bot, Yobot, Fraggle81, Nallimbot, AnomieBOT, Rubinbot, Jim1138, Citation bot, LilHelpa, Obersachsebot, Xqbot, Craig Pem-
berton, Leafriend, Kirsim, Puzl bustr, Niels1004, Ganeshsashank, EmausBot, John of Reading, Dcirovic, Mike of Wikiworld, Jadzia2341,
Superdelocalizable, Maschen, Notolder, Doukafter, ChuispastonBot, Mikhail Ryazanov, Tr00rle, Helpful Pixie Bot, Dmichaud83, Avoca-
toBot, Ilent2, Mn-imhotep, F=q(E+v^B), ThePastIsObdurate, Jimw338, Itnice, Dexbot, Aroakley, HoboMcJoe, Biblioworm, KasparBot,
Erjas3oa, Ziaozi, Magic links bot and Anonymous: 90

Quantum state Source: https://en.wikipedia.org/wiki/Quantum_state?oldid=768300822 Contributors: RTC, Michael Hardy, Julesd, An-
dres, Laussy, Patrick0Moran, Bevo, BenRG, Bkalafut, Rorro, Papadopc, Tea2min, Giftlite, MathKnight, MichaelHaeckel, CSTAR, H
Padleckas, Elroch, Mschlindwein, Chris Howard, Freakofnurture, Hidaspal, Slipstream, Bender235, Giraedata, Geschichte, Alansohn,
Cortonin, Dan East, Ott, Woohookitty, Mpatel, Dzordzm, Colin Watson, Rjwilmsi, Mathbot, Margosbot~enwiki, Fresheneesz, Bgwhite,
Wavelength, RobotE, Bambaiah, Agent Foxtrot, Hydrargyrum, PoorLeno, Larsobrien, Modify, Sbyrnes321, A13ean, Incnis Mrsi, Ptpare,
Jutta234, Physis, Erwin, CapitalR, Petr Matas, BeenAroundAWhile, Mct mht, Phatom87, Dragons Blood, Waxigloo, Thijs!bot, Colincmr,
Headbomb, Second Quantization, Iviney, Eleuther, Bizzon, Magioladitis, Tercer, B. Wolterding, R'n'B, Hans Dunkelberg, Maurice Car-
bonaro, ARTE, Hulten, Sheliak, VolkovBot, LokiClock, Kinneytj, Thurth, TXiKiBoT, V81, Spinningspark, Kbrose, YohanN7, SieBot,
Phe-bot, Jdcaneld, OKBot, Denisarona, Randy Kryn, StewartMH, ClueBot, Alksentrs, EoGuy, Rockfang, SchreiberBike, The-tenth-
zdog, TimothyRias, Dragon, SilvonenBot, RealityDysfunction, Porphyro, Stephen Poppitt, Addbot, Bob K31416, Luckas-bot, Yobot,
JTXSeldeen, AnomieBOT, Gtz, Xqbot, Pvkeller, J04n, GrouchoBot, Omnipaedista, Nathanielvirgo, Waleswatcher, WaysToEscape, ,
Chjoaygame, FrescoBot, Freddy78, Steve Quinn, Machine Elf 1735, Oxonienses, RedBot, RobinK, BasvanPelt, Heurisko, Lotje, Ea-
gleclaw6, RjwilmsiBot, Pierluigi.taddei, EmausBot, John of Reading, Gaurav biraris, Solomonfromnland, Harddk, Josve05a, Zephyrus
Tavvier, Maschen, Xronon, ClueBot NG, MelbourneStar, Theopolisme, Helpful Pixie Bot, Bibcode Bot, BG19bot, F=q(E+v^B), Gan-
itvidya, DrBugKiller, Chetan666, Jochen Burghardt, W. P. Uzer, Noix07, 7Sidz, Monkbot, Cpt Wise, Pratixit, AliShug, Tyttcfm, Inter-
netArchiveBot, tale.cohomology, AlterHollow, GreenC bot, Bender the Bot and Anonymous: 80

Hermitian matrix Source: https://en.wikipedia.org/wiki/Hermitian_matrix?oldid=764460728 Contributors: AxelBoldt, Michael Hardy,
TakuyaMurata, Looxix~enwiki, Jitse Niesen, Lzur, Connelly, Giftlite, BenFrantzDale, Fropu, TedPavlic, Qutezuce, MisterSheik, Oyz,
3mta3, Caesura, Oleg Alexandrov, Tbsmith, Oliphaunt, Smmurphy, Isnow, Milez, Obersachse, Magister Mathematicae, Rjwilmsi, Dude-
galea, Mathbot, RexNL, Chobot, Bgwhite, YurikBot, Ugha, Gene.arboit, Bruguiea, Vanished user 1029384756, Banus, Marra,
robot, SmackBot, RDBury, Incnis Mrsi, InverseHypercube, NickGarvey, Mhss, Janm67, Octahedron80, Javalenok, Rschwieb, Andreas
Rejbrand, Myasuda, MaxEnt, Mct mht, Ckhung, Escarbot, Evanpw, Catslash, Chutzpan, Qjqash3, Scot.parker, Tercer, Ynsingh, Hasel-
don, Soumya m, Policron, Arlia101, Nappyrash, Jasonevans, DorganBot, Simonjwright, Neparis, Mmakin~enwiki, Randomblue, Foot-
ballStatWhore, ClueBot, Addbot, Drevicko, Luckas-bot, Yobot, Ptbotgourou, Nallimbot, AnomieBOT, ArthurBot, Xqbot, ,
Blennow, Raamaiden, Pupdike, Calleman21, Stpasha, RedBot, FoxBot, TjBot, EmausBot, GoingBatty, Quondum, Helpful Pixie Bot,
BG19bot, CitationCleanerBot, Manoguru, JPaestpreornJeolhlna, Blackclaws, Fjnplx, HarrisonTotty, Relaxmikeb, BuzzBloom, Reformasky
org, Sankalpa Sarkar, Simplexity22, Fmadd, Bender the Bot, Tomasvasconcelos and Anonymous: 69

Trace (linear algebra) Source: https://en.wikipedia.org/wiki/Trace_(linear_algebra)?oldid=769048070 Contributors: AxelBoldt, CYD,
Tarquin, Stevertigo, Patrick, Michael Hardy, Wshun, TakuyaMurata, Charles Matthews, Dysprosia, Phys, Robbot, Benwing, Robinh,
Aetheling, Giftlite, BenFrantzDale, Lethe, Fropu, Phe, Eranb, Jewbacca, DcoetzeeBot~enwiki, Zaslav, Pt, Rgdboer, Erik456, Oyz,
Tsirel, RobertStar20, Oleg Alexandrov, Japanese Searobin, Joriki, LOL, StradivariusTV, Chochopk, Btyner, Adiel, Marudubshinki, Jsha-
dias, MarSch, Mathbot, Kri, Chobot, Algebraist, YurikBot, Wavelength, Laurentius, RobotE, Lukpank, JabberWok, Archelon, Ksyrie,
Edinborgarstefan, Bota47, Jemebius, Whaa?, Melchoir, Achab, Pokipsy76, Eskimbot, Spireguy, Octahedron80, Nbarth, Sciyoshi~enwiki,
Calc rulz, V1adis1av, Berland, Eric Olson, J. Finkelstein, WhiteHatLurker, Kaarebrandt, Spiel496, Eassin, Dan1679, Jackzhp, Myasuda,
Mct mht, Mon4, Talgalili, Keyi, Thijs!bot, Egrin, AntiVandalBot, Salgueiro~enwiki, MER-C, CattleGirl, Swpb, Email4mobile, Sulli-
van.t.j, Tercer, Dima373, R'n'B, Samikrc, Haseldon, Policron, Fylwind, Ged.R, Cuzkatzimhut, LokiClock, Moogwrench, Sue Rangell,
Seminalist, Katzmik, SieBot, Thehotelambush, ClueBot, Wpoely86, Kruusamgi, Saretakis, Addbot, Zorrobot, Legobot, Luckas-bot,
Yobot, Ptbotgourou, Amirobot, Kan8eDie, Nallimbot, Dinnertimeok, T, Nathanielvirgo, FrescoBot, LucienBOT, DrilBot, Duoduoduo,
Siteswapper, EmausBot, John of Reading, Wikipelli, Ebrambot, Quondum, ChuispastonBot, Randallbritten, Jaycech3n, ClueBot NG, Nine-
teen O'Clock, Antiqueight, AvocatoBot, Vkpd11, EdwardH, Jeremy112233, Timothy Gu, Gundamlh, Mjohnsondg, Jose Brox, Lucie911,
Comp.arch, Elkhorn9079, Kdmeaney, LHSPhantom, Apolanco115, BenJmaths, Fmadd and Anonymous: 128

Symmetric matrix Source: https://en.wikipedia.org/wiki/Symmetric_matrix?oldid=759983577 Contributors: AxelBoldt, Tarquin,
Patrick, Michael Hardy, TakuyaMurata, Stevenj, AugPi, Ideyal, Charles Matthews, Jitse Niesen, Sabbut, Phil Boswell, Robbot, Math-
Martin, Tosha, Giftlite, BenFrantzDale, Mr Bound, TedPavlic, Mecanismo, Bender235, Syp, Burn, Oleg Alexandrov, LOL, Rjwilmsi,
FlaBot, Chobot, Algebraist, YurikBot, KSmrq, Rsrikanth05, PrologFan, Ligand, Tob~enwiki, Pred, SmackBot, Maksim-e~enwiki, Tcol-
gan001, Octahedron80, Javalenok, Lhf, Pcgomes, Nishkid64, Jim.belk, Michael Kinyon, Mathsci, Myasuda, Mct mht, Thijs!bot, Vector-
Posse, Salgueiro~enwiki, Avjoska, Haseldon, Policron, Kyap, GregWoodhouse, Ocolon, Nagy, Zhar, Neparis, Anarchic Fox, The Thing
That Should Not Be, Muhandes, SchreiberBike, MidwestGeek, Addbot, DOI bot, , Ale66, Legobot, Luckas-bot, Yobot, AnomieBOT,
9258fahskh917fas, Jmattax, Carthradge, Askarovdaulet, FrescoBot, Telementor, Pz0513, Brentoids, Citation bot 1, Rapsar, Bourbakista,
Super7luv, The tree stump, Condmatstrel, Redav, Ebrambot, Suslindisambiguator, Quondum, AManWithNoPlan, Mjbmrbot, ClueBot
NG, Wmmrmchl, Koertefa, CitationCleanerBot, LoganFromSA, BattyBot, Kodiologist, Deltahedron, Makecat-bot, Trompedo, Zoydb,
Ethically Yours, JuanManwell, Fmadd, Tomasvasconcelos and Anonymous: 83
21.9. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 157

21.9.2 Images
File:AAMarkov.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/70/AAMarkov.jpg License: Public domain Contributors: Russian-language source, 1964. Original artist: Unknown
File:ASA_conference_2008_-_13.JPG Source: https://upload.wikimedia.org/wikipedia/commons/5/57/ASA_conference_2008_-_13.
JPG License: CC BY-SA 3.0 Contributors: Own work (taken by myself) Original artist: myself (User:Piotrus)
File:Area_parallellogram_as_determinant.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/Area_parallellogram_
as_determinant.svg License: Public domain Contributors: Own work, created with Inkscape Original artist: Jitse Niesen
File:Bloch_Sphere.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f4/Bloch_Sphere.svg License: CC BY-SA 3.0 Con-
tributors: Own work Original artist: Glosser.ca
File:CUPPress2.jpg Source: https://upload.wikimedia.org/wikipedia/en/a/a6/CUPPress2.jpg License: CC-BY-SA-3.0 Contributors:
I McAnt created this work entirely by myself.
Original artist:
McAnt (talk)
File:CamPress1.jpg Source: https://upload.wikimedia.org/wikipedia/en/a/a2/CamPress1.jpg License: CC-BY-SA-3.0 Contributors:
I McAnt created this work entirely by myself.
Original artist:
McAnt (talk)
File:Cambridge_University_Press_Letters_Patent.jpg Source: https://upload.wikimedia.org/wikipedia/en/8/8b/Cambridge_
University_Press_Letters_Patent.jpg License: Public domain Contributors: ? Original artist: ?
File:Cambridge_University_Press_logo.svg Source: https://upload.wikimedia.org/wikipedia/en/1/11/Cambridge_University_Press_
logo.svg License: Fair use Contributors:
The logo is from the Annual Report for the year ending 30 April 2010 on cambridge.org website. Original artist: ?
File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: PD Contributors: ? Origi-
nal artist: ?
File:Determinant_example.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a7/Determinant_example.svg License: CC
BY-SA 3.0 Contributors: Own work Original artist: Krishnavedala
File:Diagram_for_spin_dynamics.png Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Diagram_for_spin_dynamics.
png License: CC BY-SA 4.0 Contributors: Own work Original artist: Brunolucatto
File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The
Tango! Desktop Project. Original artist:
The people from the Tango! project. And according to the meta-data in the file, specifically: Andreas Nilsson, and Jakub Steiner (although minimally).
File:Eigenfaces.png Source: https://upload.wikimedia.org/wikipedia/commons/6/67/Eigenfaces.png License: Attribution Contributors: ?
Original artist: ?
File:Eigenvalue_equation.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/58/Eigenvalue_equation.svg License: GFDL
Contributors: This vector image was created with Inkscape. Original artist: Lyudmil Antonov Lantonov 16:35, 13 March 2008 (UTC)
File:Eigenvectors.gif Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Eigenvectors.gif License: Public domain Contrib-
utors: Own work Original artist: Kieff
File:Ellipse_in_coordinate_system_with_semi-axes_labelled.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8e/
Ellipse_in_coordinate_system_with_semi-axes_labelled.svg License: CC BY-SA 3.0 Contributors: Own work Original artist:
Jakob.scholbach
File:Felix_Bloch,_Stanford_University.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/0b/Felix_Bloch%2C_
Stanford_University.jpg License: CC BY 3.0 Contributors: Stanford News Service Original artist: Stanford University / Courtesy Stanford
News Service
File:Felix_Bloch_1950s.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9c/Felix_Bloch_1950s.jpg License: Public do-
main Contributors: http://www.gettyimages.co.uk/detail/news-photo/portrait-of-swiss-physicist-felix-bloch-leaning-on-a-news-photo/
158744802 Original artist: Unknown (Mondadori Publishers)
File:Finance_Markov_chain_example_state_space.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/95/Finance_
Markov_chain_example_state_space.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Gareth Jones
File:Financial_Markov_process.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a6/Financial_Markov_process.svg
License: CC BY-SA 3.0 Contributors: Own work Original artist: Gareth Jones
File:Flag_of_Brazil.svg Source: https://upload.wikimedia.org/wikipedia/en/0/05/Flag_of_Brazil.svg License: PD Contributors: ? Origi-
nal artist: ?
File:Flip_map.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3f/Flip_map.svg License: CC BY-SA 3.0 Contributors:
derived from File:Rotation_by_pi_over_6.svg Original artist: Jakob.scholbach
File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-by-
sa-3.0 Contributors: ? Original artist: ?
158 CHAPTER 21. SYMMETRIC MATRIX

File:GaussianScatterPCA.png Source: https://upload.wikimedia.org/wikipedia/commons/1/15/GaussianScatterPCA.png License: CC-
BY-SA-3.0 Contributors:
PNG version of gigantic SVG
Own work (Original caption: I created this work entirely by myself. (Originally uploaded on en.wikipedia) -)
Original artist: Ben FrantzDale (talk) (Transferred by ILCyborg)
File:HAtomOrbitals.png Source: https://upload.wikimedia.org/wikipedia/commons/c/cf/HAtomOrbitals.png License: CC-BY-SA-3.0
Contributors: ? Original artist: ?
File:Homothety_in_two_dim.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2b/Homothety_in_two_dim.svg License:
GFDL Contributors: This vector image was created with Inkscape. Original artist: Lyudmil Antonov -Lantonov 16:36, 13 March 2008
(UTC)
File:Hydrogen_Density_Plots.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e7/Hydrogen_Density_Plots.png Li-
cense: Public domain Contributors: the English language Wikipedia (log).
Original artist: PoorLeno (talk)
File:Hyperbola2_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d9/Hyperbola2_SVG.svg License: CC BY-SA
3.0 Contributors: Own work Original artist: IkamusumeFan
File:Infinite_potential_well.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/27/Infinite_potential_well.svg License:
Public domain Contributors: Created by bdesham in Inkscape. Original artist: Benjamin D. Esham (bdesham)
File:Intensities_vs_transition_probabilities.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/07/Intensities_vs_
transition_probabilities.svg License: CC BY-SA 3.0 Contributors: Own work, nach Heller, Lindenberg, Nuske, Schriever: Stochastische
Systeme, de Gruyter, Berlin 1978 Original artist: Erzbischof
File:Jordan_blocks.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4f/Jordan_blocks.svg License: CC BY-SA 3.0 Con-
tributors: Own work Original artist: Jakob.scholbach
File:Labelled_undirected_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a5/Labelled_undirected_graph.svg
License: CC BY-SA 3.0 Contributors: derived from http://en.wikipedia.org/wiki/File:6n-graph2.svg Original artist: Jakob.scholbach
File:Landau1910.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/5e/Landau1910.jpg License: Public domain Contributors: Russian-language journal, no. 4(95) Original artist: Unknown
File:Lock-green.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/65/Lock-green.svg License: CC0 Contributors: en:File:
Free-to-read_lock_75.svg Original artist: User:Trappist the monk
File:Loudspeaker.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8a/Loudspeaker.svg License: Public domain Contrib-
utors: New version of Image:Loudspeaker.png, by AzaToth and compressed by Hautala Original artist: Nethac DIU, waves corrected by
Zoid
File:Markov_chain_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/29/Markov_chain_SVG.svg License: CC
BY-SA 3.0 Contributors: Own work
Original artist: IkamusumeFan
File:Markov_chain_extremly_simple1.png Source: https://upload.wikimedia.org/wikipedia/commons/f/f3/Markov_chain_extremly_
simple1.png License: CC BY-SA 4.0 Contributors: Own work Original artist: Elenktik
File:Markovkate_01.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2b/Markovkate_01.svg License: CC BY-SA 3.0
Contributors: Own work Original artist: Joxemai4
File:Matrix.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bb/Matrix.svg License: GFDL Contributors: Own work
Original artist: Lakeworks
File:Matrix_multiplication_diagram_2.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/eb/Matrix_multiplication_
diagram_2.svg License: CC-BY-SA-3.0 Contributors: This file was derived from: Matrix multiplication diagram.svg
Original artist: File:Matrix multiplication diagram.svg:User:Bilou
File:Max_Planck_(1858-1947).jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a7/Max_Planck_%281858-1947%29.jpg License: Public domain Contributors: http://www.sil.si.edu/digitalcollections/hst/scientific-identity/CF/display_results.cfm?alpha_sort=p Original artist: Unknown
File:Mchain_simple_corrected_C1.png Source: https://upload.wikimedia.org/wikipedia/commons/0/0b/Mchain_simple_corrected_
C1.png License: CC BY-SA 4.0 Contributors: Own work Original artist: Elenktik
File:Mode_Shape_of_a_Tuning_Fork_at_Eigenfrequency_440.09_Hz.gif Source: https://upload.wikimedia.org/wikipedia/
commons/8/82/Mode_Shape_of_a_Tuning_Fork_at_Eigenfrequency_440.09_Hz.gif License: CC BY-SA 4.0 Contributors: Own work
Original artist: Sudoer41
File:Model_of_nitrogen-vacancy_center_in_diamond.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/71/Model_
of_nitrogen-vacancy_center_in_diamond.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Materialscientist
File:Mona_Lisa_eigenvector_grid.png Source: https://upload.wikimedia.org/wikipedia/commons/3/3c/Mona_Lisa_eigenvector_grid.
png License: CC0 Contributors: Image:Mona Lisa-restored.jpg Original artist: TreyGreer62
File:Mvchain_approx_C2.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5b/Mvchain_approx_C2.png License: CC BY-SA 4.0 Contributors: Own work Original artist: Elenktik
File:NVenergy.JPG Source: https://upload.wikimedia.org/wikipedia/commons/3/3c/NVenergy.JPG License: CC BY-SA 3.0 Contribu-
tors: Own work Original artist: Materialscientist
File:NVple.JPG Source: https://upload.wikimedia.org/wikipedia/commons/8/85/NVple.JPG License: CC BY-SA 3.0 Contributors: Own
work Original artist: Materialscientist
File:Nobel_Laureate_Sir_Anthony_James_Leggett_in_2007.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/09/
Nobel_Laureate_Sir_Anthony_James_Leggett_in_2007.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Kenneth
C. Zirkel
File:Nobel_Prize.png Source: https://upload.wikimedia.org/wikipedia/en/e/ed/Nobel_Prize.png License: ? Contributors:
Derivative of File:NobelPrize.JPG Original artist:
Photograph: Jonathunder. Medal: Erik Lindberg (1873-1966)
File:Nuvola_apps_bookcase.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a5/Nuvola_apps_bookcase.svg License: LGPL Contributors: The source code of this SVG is valid. Original artist: Peter Kemp
File:Nuvola_apps_edu_mathematics_blue-p.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Nuvola_apps_edu_
mathematics_blue-p.svg License: GPL Contributors: Derivative work from Image:Nuvola apps edu mathematics.png and Image:Nuvola
apps edu mathematics-p.svg Original artist: David Vignoni (original icon); Flamurai (SVG convertion); bayo (color)
File:Nuvola_apps_kaboodle.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1b/Nuvola_apps_kaboodle.svg License:
LGPL Contributors: http://ftp.gnome.org/pub/GNOME/sources/gnome-themes-extras/0.9/gnome-themes-extras-0.9.0.tar.gz Original
artist: David Vignoni / ICON KING
File:People_icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/People_icon.svg License: CC0 Contributors: Open-
Clipart Original artist: OpenClipart
File:Pitt_Building_Picture_21_07_2010.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a6/Pitt_Building_Picture_
21_07_2010.jpg License: Public domain Contributors: It is a picture taken by myself on 21st July 2010. Original artist: Hannah Gregory, picture taken by Hannah Gregory
File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors: ?
Original artist: ?
File:Qm_step_pot_temp.png Source: https://upload.wikimedia.org/wikipedia/commons/8/87/Qm_step_pot_temp.png License: Public
domain Contributors: Own work Original artist: F=q(E+v^B)
File:QuantumHarmonicOscillatorAnimation.gif Source: https://upload.wikimedia.org/wikipedia/commons/9/90/
QuantumHarmonicOscillatorAnimation.gif License: CC0 Contributors: Own work Original artist: Sbyrnes321
File:Question_dropshade.png Source: https://upload.wikimedia.org/wikipedia/commons/d/dd/Question_dropshade.png License: Public
domain Contributors: Image created by JRM Original artist: JRM
File:RR5110-0083R.png Source: https://upload.wikimedia.org/wikipedia/commons/c/c8/RR5110-0083R.png License: Public domain
Contributors: (2008, pdf) Original artist:
File:Rotation.png Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Rotation.png License: Public domain Contributors:
Own work Original artist: Underdark
File:Rotation_by_pi_over_6.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8e/Rotation_by_pi_over_6.svg License:
Public domain Contributors: Own work using Inkscape Original artist: RobHar
File:Rtd_seq_v3.gif Source: https://upload.wikimedia.org/wikipedia/commons/5/51/Rtd_seq_v3.gif License: CC BY 3.0 Contributors:
Tool:Resonant Tunneling Diode Simulation with NEGF simulator on www.nanoHUB.org. Link: http://nanohub.org/resources/8799 Orig-
inal artist: Saumitra R Mehrotra & Gerhard Klimeck
File:Saddle_Point_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0d/Saddle_Point_SVG.svg License: CC BY-
SA 3.0 Contributors: Own work
Original artist: IkamusumeFan
File:Scaling_by_1.5.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c7/Scaling_by_1.5.svg License: Public domain
Contributors: Own work using Inkscape Original artist: RobHar
File:Shear.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2a/Shear.svg License: GFDL Contributors: This vector image
was created with Inkscape. Original artist: Lyudmil Antonov --Lantonov 09:13, 17 March 2008 (UTC)
File:Solvay_conference_1927.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/6e/Solvay_conference_1927.jpg Li-
cense: Public domain Contributors: http://w3.pppl.gov/ Original artist: Benjamin Couprie
File:Sound-icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/47/Sound-icon.svg License: LGPL Contributors:
Derivative work from Silsor's version Original artist: Crystal SVG icon set
File:Squeeze_r=1.5.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/67/Squeeze_r%3D1.5.svg License: Public domain
Contributors: Own work Original artist: RobHar
File:Stylised_Lithium_Atom.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6f/Stylised_atom_with_three_Bohr_
model_orbits_and_stylised_nucleus.svg License: CC-BY-SA-3.0 Contributors: based off of Image:Stylised Lithium Atom.png by Halfdan.
Original artist: SVG by Indolences. Recoloring and ironing out some glitches done by Rainer Klute.
File:Symbol_book_class2.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/89/Symbol_book_class2.svg License: CC
BY-SA 2.5 Contributors: Made by Lokal_Profil by combining: Original artist: Lokal_Profil
File:Text_document_with_red_question_mark.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg
from the Tango project. Original artist: Benjamin D. Esham (bdesham)
File:Transition_graph_pac-man.png Source: https://upload.wikimedia.org/wikipedia/commons/9/9a/Transition_graph_pac-man.png
License: CC BY 4.0 Contributors: Own work Original artist: Moritz Kohls
File:UA035-10.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9c/UA035-10.jpg License: Public domain Contributors:
http://www.wnsstamps.post/en/stamps?search%5Blike%5D=UA035.10 Original artist: Ukrainian State Enterprise of Posts Ukrposhta
File:Unequal_scaling.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/be/Unequal_scaling.svg License: GFDL Contrib-
utors: This vector image was created with Inkscape. Original artist: Lyudmil Antonov -Lantonov 16:37, 13 March 2008 (UTC)
File:University_of_Cambridge_coat_of_arms_official.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/99/
University_of_Cambridge_coat_of_arms_official.svg License: Public domain Contributors: University of Cambridge Original artist:
University of Cambridge
File:VerticalShear_m=1.25.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/92/VerticalShear_m%3D1.25.svg License:
Public domain Contributors: Own work using Inkscape Original artist: RobHar
File:Vertical_polarization.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Vertical_polarization.svg License: CC
BY 3.0 Contributors: File:Polarimeter (Optical rotation).svg Original artist: Kaidor
File:Wiener_process_3d.png Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Wiener_process_3d.png License: CC-BY-
SA-3.0 Contributors: The description as originally from Wikipedia. Original artist: Original uploader was Sullivan.t.j at English Wikipedia.
File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License: CC-BY-SA-3.0 Contributors: This file was derived from Wiki letter w.svg. Original artist: Derivative work by Thumperward
File:Wikibooks-logo-en-noslogan.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Wikibooks-logo-en-noslogan.
svg License: CC BY-SA 3.0 Contributors: Own work Original artist: User:Bastique, User:Ramac et al.
File:Wikibooks-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikibooks-logo.svg License: CC BY-SA 3.0
Contributors: Own work Original artist: User:Bastique, User:Ramac et al.
File:Wikinews-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/24/Wikinews-logo.svg License: CC BY-SA 3.0
Contributors: This is a cropped version of Image:Wikinews-logo-en.png. Original artist: Vectorized by Simon 01:05, 2 August 2006 (UTC)
Updated by Time3000 17 April 2007 to use official Wikinews colours and appear correctly on dark backgrounds. Originally uploaded by
Simon.
File:Wikiquote-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikiquote-logo.svg License: Public domain
Contributors: Own work Original artist: Rei-artur
File:Wikisource-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Wikisource-logo.svg License: CC BY-SA 3.0
Contributors: Rei-artur Original artist: Nicholas Moreau
File:Wikiversity-logo-Snorky.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1b/Wikiversity-logo-en.svg License:
CC BY-SA 3.0 Contributors: Own work Original artist: Snorky
File:Wikiversity-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Wikiversity-logo.svg License: CC BY-SA 3.0
Contributors: Snorky (optimized and cleaned up by verdy_p) Original artist: Snorky (optimized and cleaned up by verdy_p)
File:Wiktionary-logo-v2.svg Source: https://upload.wikimedia.org/wikipedia/en/0/06/Wiktionary-logo-v2.svg License: CC-BY-SA-3.0
Contributors: ? Original artist: ?

21.9.3 Content license


Creative Commons Attribution-Share Alike 3.0
