
Kirchhoff's circuit laws

From Wikipedia, the free encyclopedia


Kirchhoff's circuit laws are two equalities that deal with the conservation of charge and
energy in electrical circuits, and were first described in 1845 by Gustav Kirchhoff. Widely
used in electrical engineering, they are also called Kirchhoff's rules or simply Kirchhoff's
laws (see also Kirchhoff's laws for other meanings of that term).

Both circuit rules can be directly derived from Maxwell's equations, but Kirchhoff preceded
Maxwell and instead generalized work by Georg Ohm.

Contents

• 1 Kirchhoff's current law (KCL)


• 2 Changing charge density
o 2.1 Uses
• 3 Kirchhoff's voltage law (KVL)
o 3.1 Electric field and electric potential
• 4 See also
• 5 References

• 6 External links

Kirchhoff's current law (KCL)

The current entering any junction is equal to the current leaving that junction. i1 + i4 = i2 + i3

This law is also called Kirchhoff's point rule, Kirchhoff's junction rule (or nodal rule),
and Kirchhoff's first rule.

The principle of conservation of electric charge implies that:


At any node (junction) in an electrical circuit, the sum of currents flowing into that
node is equal to the sum of currents flowing out of that node.
or
The algebraic sum of currents in a network of conductors meeting at a point is zero.
(Assuming that current entering the junction is taken as positive and current leaving
the junction is taken as negative).

Recalling that current is a signed (positive or negative) quantity reflecting direction towards or away from a node, this principle can be stated as:

    ∑ Ik = 0   (k = 1, 2, …, n)

where n is the total number of branches with currents flowing towards or away from the node.

This formula is also valid for complex currents:

    ∑ Ĩk = 0   (k = 1, 2, …, n)

The law is based on the conservation of charge, whereby the charge (measured in coulombs) is the product of the current (in amperes) and the time (in seconds).
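As an illustration, the signed-sum form of KCL can be checked numerically; the branch currents below are hypothetical values chosen only for the example:

```python
# Kirchhoff's current law: the signed sum of branch currents at a node is zero.
# Currents flowing into the node are taken as positive, currents flowing out
# as negative. The branch values here are hypothetical, for illustration only.

def kcl_residual(currents):
    """Return the algebraic sum of the branch currents at a node (amperes)."""
    return sum(currents)

# Node with i1 = 2 A and i4 = 3 A entering, i2 = 1 A and i3 = 4 A leaving:
branch_currents = [2.0, 3.0, -1.0, -4.0]
print(kcl_residual(branch_currents))  # a result of 0.0 means KCL is satisfied
```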

Changing charge density


Physically speaking, the restriction regarding the "capacitor plate" means that Kirchhoff's current law is only valid if the charge density remains constant at the point to which it is applied. This is normally not a problem because of the strength of electrostatic forces: any charge buildup would cause repulsive forces to disperse the charges.

However, a charge build-up can occur in a capacitor, where the charge is typically spread
over wide parallel plates, with a physical break in the circuit that prevents the positive and
negative charge accumulations over the two plates from coming together and cancelling. In
this case, the sum of the currents flowing into one plate of the capacitor is not zero, but rather
is equal to the rate of charge accumulation. However, if the displacement current dD/dt is
included, Kirchhoff's current law once again holds. (This is really only required if one wants
to apply the current law to a point on a capacitor plate. In circuit analyses, however, the
capacitor as a whole is typically treated as a unit, in which case the ordinary current law holds
since exactly the current that enters the capacitor on the one side leaves it on the other side.)

More technically, Kirchhoff's current law can be found by taking the divergence of Ampère's law with Maxwell's correction and combining it with Gauss's law, yielding:

    ∇ · J = −∂ρ/∂t

This is simply the charge conservation equation (in integral form, it says that the current
flowing out of a closed surface is equal to the rate of loss of charge within the enclosed
volume (Divergence theorem)). Kirchhoff's current law is equivalent to the statement that the
divergence of the current is zero, true for time-invariant ρ, or always true if the displacement
current is included with J.

Uses

A matrix version of Kirchhoff's current law is the basis of most circuit simulation software,
such as SPICE.
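As a sketch of that idea (not actual SPICE code), nodal analysis writes KCL at each non-ground node and solves the resulting conductance equation for the node voltages. The voltage-divider values below are hypothetical:

```python
# Nodal analysis (the matrix form of KCL) for a hypothetical voltage divider:
# a 10 V source drives node 1 through R1 = 1 kOhm; R2 = 2 kOhm ties node 1 to
# ground. KCL at node 1: (v1 - Vs)/R1 + v1/R2 = 0, i.e. G*v1 = Vs/R1.

Vs = 10.0      # source voltage, volts
R1 = 1000.0    # ohms
R2 = 2000.0    # ohms

G = 1.0 / R1 + 1.0 / R2   # node conductance (a 1x1 "G matrix")
i = Vs / R1               # equivalent current injected into the node

v1 = i / G                # solve G * v1 = i
print(round(v1, 3))       # node voltage; equals Vs * R2 / (R1 + R2)
```

A real simulator builds the same kind of equation, G·v = i, with one row per node and solves it by sparse linear algebra.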

Kirchhoff's voltage law (KVL)

The sum of all the voltages around the loop is equal to zero. v1 + v2 + v3 + v4 = 0

This law is also called Kirchhoff's second law, Kirchhoff's loop (or mesh) rule, and
Kirchhoff's second rule.

The principle of conservation of energy implies that

The directed sum of the electrical potential differences (voltage) around any closed
circuit must be zero.
or
More simply, the sum of the emfs in any closed loop is equivalent to the sum of the
potential drops in that loop.
or
The algebraic sum of the products of the resistances of the conductors and the currents
in them in a closed loop is equal to the total emf available in that loop.

Similarly to KCL, it can be stated as:

    ∑ Vk = 0   (k = 1, 2, …, n)

Here, n is the total number of voltages measured. The voltages may also be complex:

    ∑ Ṽk = 0

This law is based on the conservation of the energy given to (or taken from) a charge by the potential field (not including energy lost to dissipation). Given a potential field, a charge that has completed a closed loop neither gains nor loses energy, since it has returned to its initial potential level.

This law holds true even when resistance (which causes dissipation of energy) is present in a circuit. Its validity in this case can be understood by noting that a charge does not, in fact, return to its starting point: owing to the dissipation of energy, it simply terminates at the negative terminal rather than the positive one. All the energy supplied by the potential difference is consumed by the resistance, which in turn loses it as heat.

To summarize, Kirchhoff's voltage law has nothing to do with gain or loss of energy by
electronic components (resistors, capacitors, etc). It is a law referring to the potential field
generated by voltage sources. In this potential field, regardless of what electronic components
are present, the gain or loss in "energy given by the potential field" must be zero when a
charge completes a closed loop.
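A minimal numerical sketch of KVL, using hypothetical values for a one-loop series circuit:

```python
# Kirchhoff's voltage law: the directed sum of the potential differences around
# a closed loop is zero. Rises are taken as positive, drops as negative.
# Hypothetical loop: a 9 V source in series with resistors dropping 3 V and 6 V.

def kvl_residual(voltages):
    """Return the directed sum of the voltages around a loop (volts)."""
    return sum(voltages)

loop = [9.0, -3.0, -6.0]   # source rise, then the two resistor drops
print(kvl_residual(loop))  # 0.0 when KVL is satisfied
```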

Electric field and electric potential

Kirchhoff's voltage law as stated above is equivalent to the statement that a single-valued
electric potential can be assigned to each point in the circuit (in the same way that any
conservative vector field can be represented as the gradient of a scalar potential). Then the
sum of the changes in this potential that occur as one makes an imaginary traverse around any
closed loop in the circuit should be equal to zero.

This could be viewed as a consequence of the principle of conservation of energy. Otherwise,


it would be possible to build a perpetual motion machine that passed a current in a circle
around the circuit.

Considering that electric potential is defined as a line integral over an electric field, Kirchhoff's voltage law can be expressed equivalently as

    ∮C E · dℓ = 0

which states that the line integral of the electric field around closed loop C is zero.

In order to return to the more special form, this integral can be "cut in pieces" in order to get
the voltage at specific components.
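This equivalence can be checked numerically for a conservative field. The potential below is a hypothetical choice; the loop integral of E = −∇φ around a closed curve should vanish, which is KVL in field form:

```python
# For a conservative (charge-generated) field E = -grad(phi), the line
# integral of E around any closed loop vanishes, which is KVL in field form.
# Hypothetical potential: phi(x, y) = x**2 - 3*y (volts); loop: unit circle.

import math

def E(x, y):
    # E = -grad(phi) for phi = x^2 - 3y; the y-argument is unused here
    return (-2.0 * x, 3.0)

n = 100000
total = 0.0
for i in range(n):
    t = 2.0 * math.pi * i / n
    x, y = math.cos(t), math.sin(t)          # point on the loop
    dx, dy = -math.sin(t), math.cos(t)       # tangent direction
    ex, ey = E(x, y)
    total += (ex * dx + ey * dy) * (2.0 * math.pi / n)

print(round(total, 9))  # approximately 0, as required for a conservative field
```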

This is a simplification of Faraday's law of induction for the special case where there is no
fluctuating magnetic field linking the closed loop. Therefore, it practically suffices for
explaining circuits containing only resistors and capacitors.

In the presence of a changing magnetic field the electric field is not conservative and therefore cannot be described by a pure scalar potential: the line integral of the electric field around the circuit is not zero. This is because energy is being transferred from the magnetic field to
the current (or vice versa). In order to "fix" Kirchhoff's voltage law for circuits containing
inductors, an effective potential drop, or electromotive force (emf), is associated with each
inductance of the circuit, exactly equal to the amount by which the line integral of the electric
field is not zero by Faraday's law of induction.



History
Electromagnetic induction was discovered independently by Michael Faraday and Joseph
Henry in 1831; however, Faraday was the first to publish the results of his experiments.[2][3]

Faraday's disk

In Faraday's first experimental demonstration of electromagnetic induction (August 1831), he


wrapped two wires around opposite sides of an iron torus (an arrangement similar to a
modern transformer). Based on his assessment of recently discovered properties of
electromagnets, he expected that when current started to flow in one wire, a sort of wave
would travel through the ring and cause some electrical effect on the opposite side. He
plugged one wire into a galvanometer, and watched it as he connected the other wire to a
battery. Indeed, he saw a transient current (which he called a "wave of electricity") when he
connected the wire to the battery, and another when he disconnected it.[4] Within two months,
Faraday had found several other manifestations of electromagnetic induction. For example,
he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and
he generated a steady (DC) current by rotating a copper disk near a bar magnet with a sliding
electrical lead ("Faraday's disk").[5]

Faraday explained electromagnetic induction using a concept he called lines of force.


However, scientists at the time widely rejected his theoretical ideas, mainly because they
were not formulated mathematically.[6] An exception was Maxwell, who used Faraday's ideas
as the basis of his quantitative electromagnetic theory.[6][7][8] In Maxwell's papers, Faraday's
law of induction was expressed as a calculus equation, in a form recognizable today.

Lenz's law, formulated by Heinrich Lenz in 1834, gives the direction of the induced electromotive force and current resulting from electromagnetic induction, in terms of the flux through the circuit (elaborated upon in the examples below).
Faraday's experiment showing induction between coils of wire: The liquid battery (right)
provides a current which flows through the small coil (A), creating a magnetic field. When
the coils are stationary, no current is induced. But when the small coil is moved in or out of
the large coil (B), the magnetic flux through the large coil changes, inducing a current which
is detected by the galvanometer (G).[9]

Faraday's law as two different phenomena


Some physicists have remarked that Faraday's law is a single equation describing two
different phenomena: The motional EMF generated by a magnetic force on a moving wire,
and the transformer EMF generated by an electric force due to a changing magnetic field.
James Clerk Maxwell drew attention to this fact in his 1861 paper On Physical Lines of
Force. In the latter half of part II of that paper, Maxwell gives a separate physical explanation
for each of the two phenomena. A reference to these two aspects of electromagnetic induction
is made in some modern textbooks.[10] As Richard Feynman states:[11]

So the "flux rule" that the emf in a circuit is equal to the rate of change of the magnetic flux through the circuit applies whether the flux changes because the field changes or because the circuit moves (or both).... Yet in our explanation of the rule we have used two completely distinct laws for the two cases – v × B for "circuit moves" and ∇ × E = −∂B/∂t for "field changes".

We know of no other place in physics where such a simple and accurate general principle requires for its real understanding an analysis in terms of two different phenomena.

– Richard P. Feynman , The Feynman Lectures on Physics

Reflection on this apparent dichotomy was one of the principal paths that led Einstein to
develop special relativity:

It is known that Maxwell’s electrodynamics—as usually understood at the present time—when applied to moving bodies, leads to asymmetries which do not appear to be inherent in
the phenomena. Take, for example, the reciprocal electrodynamic action of a magnet and a
conductor. The observable phenomenon here depends only on the relative motion of the
conductor and the magnet, whereas the customary view draws a sharp distinction between the
two cases in which either the one or the other of these bodies is in motion. For if the magnet
is in motion and the conductor at rest, there arises in the neighbourhood of the magnet an
electric field with a certain definite energy, producing a current at the places where parts of
the conductor are situated. But if the magnet is stationary and the conductor in motion, no
electric field arises in the neighbourhood of the magnet. In the conductor, however, we find
an electromotive force, to which in itself there is no corresponding energy, but which gives
rise—assuming equality of relative motion in the two cases discussed—to electric currents of
the same path and intensity as those produced by the electric forces in the former case.

– Albert Einstein, On the Electrodynamics of Moving Bodies[12]

Flux through a surface and EMF around a loop

The definition of surface integral relies on splitting the surface Σ into small surface elements.
Each element is associated with a vector dA of magnitude equal to the area of the element
and with direction normal to the element and pointing outward.

A vector field F(r, t) defined throughout space, and a surface Σ bounded by curve ∂Σ moving
with velocity v over which the field is integrated.

Faraday's law of induction makes use of the magnetic flux ΦB through a surface Σ, defined by an integral over the surface:

    ΦB = ∫∫Σ(t) B(r, t) · dA

where dA is an element of surface area of the moving surface Σ(t), B is the magnetic field,
and B·dA is a vector dot product. The surface is considered to have a "mouth" outlined by a
closed curve denoted ∂Σ(t). When the flux changes, Faraday's law of induction says that the work done (per unit charge) moving a test charge around the closed curve ∂Σ(t), called the electromotive force (EMF), is given by:

    ℰ = −dΦB/dt

where ℰ is the magnitude of the electromotive force (EMF) in volts and ΦB is the magnetic flux in webers. The direction of the electromotive force is given by Lenz's law.

For a tightly wound coil of wire, composed of N identical loops, each with the same ΦB, Faraday's law of induction states that

    ℰ = −N dΦB/dt

where N is the number of turns of wire and ΦB is the magnetic flux in webers through a single
loop.
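For a sense of magnitude, the N-turn form can be evaluated directly; the coil and field values below are hypothetical:

```python
# EMF of an N-turn coil from Faraday's law, emf = -N * dPhi/dt.
# Hypothetical coil: N = 100 turns of loop area 0.01 m^2, in a uniform field
# ramping linearly from 0 T to 0.5 T over 0.1 s (so dPhi/dt is constant).

N = 100          # turns
area = 0.01      # m^2
dB = 0.5         # change in flux density, tesla
dt = 0.1         # seconds over which the change occurs

dphi_dt = dB * area / dt      # rate of change of flux through one loop, Wb/s
emf = -N * dphi_dt            # volts; the sign follows Lenz's law
print(emf)                    # about -5 V for these values
```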

In choosing a path ∂Σ(t) to find EMF, the path must satisfy the basic requirements that (i) it is
a closed path, and (ii) the path must capture the relative motion of the parts of the circuit (the
origin of the t-dependence in ∂Σ(t) ). It is not a requirement that the path follow a line of
current flow, but of course the EMF that is found using the flux law will be the EMF around
the chosen path. If a current path is not followed, the EMF might not be the EMF driving the
current.

Example: Spatially varying B-field

Figure 3: Closed rectangular wire loop moving along x-axis at velocity v in magnetic field B
that varies with position x.

Consider the case in Figure 3 of a closed rectangular loop of wire in the xy-plane translated in
the x-direction at velocity v. Thus, the center of the loop at xC satisfies v = dxC / dt. The loop
has length ℓ in the y-direction and width w in the x-direction. A time-independent but
spatially varying magnetic field B(x) points in the z-direction. The magnetic field on the left
side is B( xC − w / 2), and on the right side is B( xC + w / 2). The electromotive force is to be
found by using either the Lorentz force law or equivalently by using Faraday's induction law
above.

Lorentz force law method


A charge q in the wire on the left side of the loop experiences a Lorentz force q v × B k = −q
v B(xC − w / 2) j   ( j, k unit vectors in the y- and z-directions; see vector cross product),
leading to an EMF (work per unit charge) of v ℓ B(xC − w / 2) along the length of the left side
of the loop. On the right side of the loop the same argument shows the EMF to be v ℓ B(xC +
w / 2). The two EMF's oppose each other, both pushing positive charge toward the bottom of
the loop. In the case where the B-field increases with increase in x, the force on the right side
is largest, and the current will be clockwise: using the right-hand rule, the B-field generated
by the current opposes the impressed field.[13] The EMF driving the current must increase as we move counterclockwise (opposite to the current). Adding the EMF's in a counterclockwise tour of the loop we find

    ℰ = vℓ [B(xC + w/2) − B(xC − w/2)]

Faraday's law method

At any position of the loop the magnetic flux through the loop is

    ΦB = ±ℓ ∫ B(x) dx   (integrated over xC − w/2 ≤ x ≤ xC + w/2)

The sign choice is decided by whether the normal to the surface points in the same direction as B, or in the opposite direction. If we take the normal to the surface as pointing in the same direction as the B-field of the induced current, this sign is negative. The time derivative of the flux is then (using the chain rule of differentiation, or the general form of the Leibniz rule for differentiation of an integral):

    dΦB/dt = −vℓ [B(xC + w/2) − B(xC − w/2)]

(where v = dxC/dt is the rate of motion of the loop in the x-direction), leading to:

    ℰ = −dΦB/dt = vℓ [B(xC + w/2) − B(xC − w/2)]

as before.
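The agreement of the two methods can be verified numerically. The linear field profile B(x) = B0 + kx used below is a hypothetical choice; the flux is differentiated by a finite difference and compared against the Lorentz-force expression:

```python
# Spatially varying B-field example: EMF of a rectangular loop (length l in y,
# width w in x) moving at velocity v, computed two ways.
# Hypothetical field profile: B(x) = B0 + k*x (tesla), pointing in z.

B0, k = 0.2, 0.05          # field parameters (T, T/m); illustrative values
l, w, v = 0.3, 0.1, 2.0    # loop length (m), width (m), speed (m/s)

def B(x):
    return B0 + k * x

def flux(xc):
    # Phi = l * integral of B(x) dx across the loop width, by the midpoint rule
    n = 1000
    dx = w / n
    return l * sum(B(xc - w/2 + (j + 0.5) * dx) * dx for j in range(n))

xc = 1.0                   # loop centre position (m)
dt = 1e-6                  # small time step for the finite difference
emf_flux = -(flux(xc + v * dt) - flux(xc)) / dt         # -dPhi/dt
emf_lorentz = v * l * (B(xc + w/2) - B(xc - w/2))       # Lorentz-force result

print(abs(emf_flux), abs(emf_lorentz))   # the magnitudes agree
```

The signs of the two results depend on the orientation conventions chosen for the loop normal and the traverse direction; the magnitudes coincide, as the text argues.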

The equivalence of these two approaches is general and, depending on the example, one or
the other method may prove more practical.

Example: Moving loop in uniform B-field


Figure 4: Rectangular wire loop rotating at angular velocity ω in radially outward pointing
magnetic field B of fixed magnitude. Current is collected by brushes attached to top and
bottom discs, which have conducting rims.

Figure 4 shows a spindle formed of two discs with conducting rims and a conducting loop
attached vertically between these rims. The entire assembly spins in a magnetic field that points radially outward and has the same magnitude at every angular position. A radially oriented collecting return loop picks up current from the conducting rims. At the location of
the collecting return loop, the radial B-field lies in the plane of the collecting loop, so the
collecting loop contributes no flux to the circuit. The electromotive force is to be found
directly and by using Faraday's law above.

Lorentz force law method

In this case the Lorentz force drives the current in the two vertical arms of the moving loop
downward, so current flows from the top disc to the bottom disc. In the conducting rims of
the discs, the Lorentz force is perpendicular to the rim, so no EMF is generated in the rims,
nor in the horizontal portions of the moving loop. Current is transmitted from the bottom rim
to the top rim through the external return loop, which is oriented so the B-field is in its plane.
Thus, the Lorentz force in the return loop is perpendicular to the loop, and no EMF is
generated in this return loop. Traversing the current path in the direction opposite to the current flow, work is done against the Lorentz force only in the vertical arms of the moving loop, where the force per unit charge has magnitude vB.

Consequently, the EMF is

    ℰ = vBℓ = Bℓrω

where ℓ is the vertical length of the loop, and the velocity is related to the angular rate of rotation by v = rω, with r the radius of the cylinder. Notice that the same work is done on any path
that rotates with the loop and connects the upper and lower rim.

Faraday's law method

See also: Faraday paradox


An intuitively appealing but mistaken approach to using the flux rule would say the flux
through the circuit was just ΦB = B w ℓ, where w = width of the moving loop. This number is
time-independent, so the approach predicts incorrectly that no EMF is generated. The flaw in
this argument is that it fails to consider the entire current path, which is a closed loop.

To use the flux rule, we have to look at the entire current path, which includes the path
through the rims in the top and bottom discs. We can choose an arbitrary closed path through
the rims and the rotating loop, and the flux law will find the EMF around the chosen path.
Any path that has a segment attached to the rotating loop captures the relative motion of the
parts of the circuit.

As an example path, let's traverse the circuit in the direction of rotation in the top disc, and in
the direction opposite to the direction of rotation in the bottom disc (shown by arrows in
Figure 4). In this case, for the moving loop at an angle θ from the collecting loop, a portion of the cylinder of area A = rℓθ is part of the circuit. This area is perpendicular to the B-field, and so contributes to the flux an amount:

    ΦB = −Brℓθ

where the sign is negative because the right-hand rule suggests the B-field generated by the current loop is opposite in direction to the applied B field. As this is the only time-dependent portion of the flux, the flux law predicts an EMF of

    ℰ = −dΦB/dt = Brℓ (dθ/dt) = Brℓω = Bvℓ

in agreement with the Lorentz force law calculation.

Now let's try a different path. Follow a path traversing the rims via the opposite choice of
segments. Then the coupled flux would decrease as θ increased, but the right-hand rule
would suggest the current loop added to the applied B-field, so the EMF around this path is
the same as for the first path. Any mixture of return paths leads to the same result for EMF,
so it is actually immaterial which path is followed.

Direct evaluation of the change in flux

Figure 5: A simplified version of Figure 4. The loop slides with velocity v in a stationary,
homogeneous B-field.
The use of a closed path to find EMF as done above appears to depend upon details of the
path geometry. In contrast, the Lorentz-law approach is independent of such restrictions. The
following discussion is intended to provide a better understanding of the equivalence of paths
and escape the particulars of path selection when using the flux law.

Figure 5 is an idealization of Figure 4 with the cylinder unwrapped onto a plane. The same
path-related analysis works, but a simplification is suggested. The time-independent aspects
of the circuit cannot affect the time-rate-of-change of flux. For example, at a constant
velocity of sliding the loop, the details of current flow through the loop are not time
dependent. Instead of concern over details of the closed loop selected to find the EMF, one
can focus on the area of B-field swept out by the moving loop. This suggestion amounts to
finding the rate at which flux is cut by the circuit.[14] That notion provides direct evaluation of
the rate of change of flux, without concern over the time-independent details of various path
choices around the circuit. Just as with the Lorentz law approach, it is clear that any two
paths attached to the sliding loop, but differing in how they cross the loop, produce the same
rate-of-change of flux.

In Figure 5 the area swept out in unit time is simply dA/dt = vℓ, regardless of the details of the selected closed path, so Faraday's law of induction provides the EMF as:[15]

    ℰ = Bvℓ

This path independence of EMF shows that if the sliding loop is replaced by a solid
conducting plate, or even some complex warped surface, the analysis is the same: find the
flux in the area swept out by the moving portion of the circuit. In a similar way, if the sliding
loop in the drum generator of Figure 4 is replaced by a 360° solid conducting cylinder, the
swept area calculation is exactly the same as for the case with only a loop. That is, the EMF
predicted by Faraday's law is exactly the same for the case with a cylinder with solid
conducting walls or, for that matter, a cylinder with a cheese grater for walls. Notice, though,
that the current that flows as a result of this EMF will not be the same because the resistance
of the circuit determines the current.

The Maxwell-Faraday equation

Figure 6: An illustration of Kelvin-Stokes theorem with surface Σ its boundary ∂Σ and


orientation n set by the right-hand rule.
A changing magnetic field creates an electric field; this phenomenon is described by the Maxwell-Faraday equation:[16]

    ∇ × E = −∂B/∂t

where:

∇ × denotes curl
E is the electric field
B is the magnetic field

This equation appears in modern sets of Maxwell's equations and is often referred to as
Faraday's law. However, because it contains only partial time derivatives, its application is
restricted to situations where the test charge is stationary in a time varying magnetic field. It
does not account for electromagnetic induction in situations where a charged particle is
moving in a magnetic field.

It also can be written in an integral form by the Kelvin-Stokes theorem:[17]

    ∮∂Σ E · dℓ = −∫∫Σ (∂B/∂t) · dA

where the movement of the derivative before the integration requires a time-independent
surface Σ (considered in this context to be part of the interpretation of the partial derivative),
and as indicated in Figure 6:

Σ is a surface bounded by the closed contour ∂Σ; both Σ and ∂Σ are fixed,
independent of time
E is the electric field,
dℓ is an infinitesimal vector element of the contour ∂Σ,
B is the magnetic field.
dA is an infinitesimal vector element of surface Σ , whose magnitude is the area of an
infinitesimal patch of surface, and whose direction is orthogonal to that surface patch.

Both dℓ and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as
explained in the article Kelvin-Stokes theorem. For a planar surface Σ, a positive path element dℓ of curve ∂Σ is defined by the right-hand rule as one that curls with the fingers of the right hand when the thumb points in the direction of the normal n to the surface Σ.

The integral around ∂Σ is called a path integral or line integral. The surface integral at the
right-hand side of the Maxwell-Faraday equation is the explicit expression for the magnetic
flux ΦB through Σ. Notice that a nonzero path integral for E is different from the behavior of
the electric field generated by charges. A charge-generated E-field can be expressed as the
gradient of a scalar field that is a solution to Poisson's equation, and has a zero path integral.
See gradient theorem.
The integral equation is true for any path ∂Σ through space, and any surface Σ for which that
path is a boundary. Note, however, that ∂Σ and Σ are understood not to vary in time in this
formula. This integral form cannot treat motional EMF because Σ is time-independent.
Notice as well that this equation makes no reference to EMF, and indeed cannot do so
without introduction of the Lorentz force law to enable a calculation of work.

Figure 7: Area swept out by vector element dℓ of curve ∂Σ in time dt when moving with
velocity v.

Using the complete Lorentz force per unit charge, E + v × B, to calculate the EMF, a statement of Faraday's law of induction more general than the integral form of the Maxwell-Faraday equation is (see Lorentz force):

    ℰ = −dΦB/dt = ∮∂Σ(t) (E + v × B) · dℓ

where ∂Σ(t) is the moving closed path bounding the moving surface Σ(t), and v is the
velocity of movement. See Figure 2. Notice that the ordinary time derivative is used, not a
partial time derivative, implying the time variation of Σ(t) must be included in the
differentiation. In the integrand the element of the curve dℓ moves with velocity v.

Figure 7 provides an interpretation of the magnetic force contribution to the EMF on the left
side of the above equation. The area swept out by segment dℓ of curve ∂Σ in time dt when moving with velocity v is (see geometric meaning of cross-product):

    dA = v dt × dℓ

so the change in magnetic flux ΔΦB through the portion of the surface enclosed by ∂Σ in time dt is:

    ΔΦB = B · (v dt × dℓ) = −(v × B) · dℓ dt

and if we add these ΔΦB-contributions around the loop for all segments dℓ, we obtain the magnetic force contribution to Faraday's law. That is, this term is related to motional EMF.
Example: viewpoint of a moving observer

See also: Moving magnet and conductor problem

Revisiting the example of Figure 3 in a moving frame of reference brings out the close
connection between E- and B-fields, and between motional and induced EMF's.[18] Imagine an
observer of the loop moving with the loop. The observer calculates the EMF around the loop
using both the Lorentz force law and Faraday's law of induction. Because this observer
moves with the loop, the observer sees no movement of the loop, and zero v × B. However,
because the B-field varies with position x, the moving observer sees a time-varying magnetic field, namely:

    B = B(xC + vt) k

where k is a unit vector pointing in the z-direction.[19]

Lorentz force law version

The Maxwell-Faraday equation says the moving observer sees an electric field Ey in the y-direction given by:

    ∂Ey/∂x = −∂B/∂t = −v dB/dx

Here the chain rule is used:

    ∂B/∂t = (dB/dx)(d(xC + vt)/dt) = v dB/dx

Solving for Ey, to within a constant that contributes nothing to an integral around the loop,

    Ey = −v B(x)
Using the Lorentz force law, which here has only an electric field component, the observer finds the EMF around the loop at a time t to be:

    ℰ = vℓ [B(xC + vt + w/2) − B(xC + vt − w/2)]

which is exactly the same result found by the stationary observer, who sees the centroid xC
has advanced to a position xC + v t. However, the moving observer obtained the result under
the impression that the Lorentz force had only an electric component, while the stationary
observer thought the force had only a magnetic component.

Faraday's law of induction


Using Faraday's law of induction, the observer moving with xC sees a changing magnetic
flux, but the loop does not appear to move: the center of the loop xC is fixed because the
moving observer is moving with the loop. The flux is then:

    ΦB = −ℓ ∫ B(x + vt) dx   (integrated over xC − w/2 ≤ x ≤ xC + w/2)

where the minus sign comes from the normal to the surface pointing oppositely to the applied B-field. The EMF from Faraday's law of induction is now:

    ℰ = −dΦB/dt = vℓ [B(xC + vt + w/2) − B(xC + vt − w/2)]
the same result. The time derivative passes through the integration because the limits of
integration have no time dependence. Again, the chain rule was used to convert the time
derivative to an x-derivative.

The stationary observer thought the EMF was a motional EMF, while the moving observer
thought it was an induced EMF.[20]

Electrical generator

Figure 8: Faraday's disc electric generator. The disc rotates with angular rate ω, sweeping the
conducting radius circularly in the static magnetic field B. The magnetic Lorentz force v × B
drives the current along the conducting radius to the conducting rim, and from there the
circuit completes through the lower brush and the axle supporting the disc. Thus, current is
generated from mechanical motion.
Main article: electrical generator

The EMF generated by Faraday's law of induction due to relative movement of a circuit and a
magnetic field is the phenomenon underlying electrical generators. When a permanent
magnet is moved relative to a conductor, or vice versa, an electromotive force is created. If
the wire is connected through an electrical load, current will flow, and thus electrical energy
is generated, converting the mechanical energy of motion to electrical energy. For example,
the drum generator is based upon Figure 4. A different implementation of this idea is the
Faraday's disc, shown in simplified form in Figure 8. Note that either the analysis of Figure 5,
or direct application of the Lorentz force law, shows that a solid conducting disc works the
same way.

In the Faraday's disc example, the disc is rotated in a uniform magnetic field perpendicular to
the disc, causing a current to flow in the radial arm due to the Lorentz force. It is interesting
to understand how it arises that mechanical work is necessary to drive this current. When the
generated current flows through the conducting rim, a magnetic field is generated by this
current through Ampere's circuital law (labeled "induced B" in Figure 8). The rim thus
becomes an electromagnet that resists rotation of the disc (an example of Lenz's law). On the
far side of the figure, the return current flows from the rotating arm through the far side of the
rim to the bottom brush. The B-field induced by this return current opposes the applied B-
field, tending to decrease the flux through that side of the circuit, opposing the increase in
flux due to rotation. On the near side of the figure, the return current flows from the rotating
arm through the near side of the rim to the bottom brush. The induced B-field increases the
flux on this side of the circuit, opposing the decrease in flux due to rotation. Thus, both sides
of the circuit generate an emf opposing the rotation. The energy required to keep the disc
moving, despite this reactive force, is exactly equal to the electrical energy generated (plus
energy wasted due to friction, Joule heating, and other inefficiencies). This behavior is
common to all generators converting mechanical energy to electrical energy.

Although Faraday's law always describes the working of electrical generators, the detailed
mechanism can differ in different cases. When the magnet is rotated around a stationary
conductor, the changing magnetic field creates an electric field, as described by the Maxwell-
Faraday equation, and that electric field pushes the charges through the wire. This case is
called an induced EMF. On the other hand, when the magnet is stationary and the conductor
is rotated, the moving charges experience a magnetic force (as described by the Lorentz force
law), and this magnetic force pushes the charges through the wire. This case is called
motional EMF. (For more information on motional EMF, induced EMF, Faraday's law, and
the Lorentz force, see above example, and see Griffiths.)[21]

[edit] Electrical motor


Main article: electrical motor

An electrical generator can be run "backwards" to become a motor. For example, with the
Faraday disc, suppose a DC current is driven through the conducting radial arm by a voltage.
Then by the Lorentz force law, this traveling charge experiences a force in the magnetic field
B that will turn the disc in a direction given by Fleming's left hand rule. In the absence of
irreversible effects, like friction or Joule heating, the disc turns at the rate necessary to make
d ΦB / dt equal to the voltage driving the current.
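This balance between drive voltage and back-EMF can be sketched numerically. The sketch below assumes a uniform axial field and uses the standard result (not stated in this article) that the EMF of a disc of radius r spinning at angular velocity ω in a uniform field B is ½Bωr²; the function names and numeric values are illustrative only:

```python
def disc_emf(B, radius, omega):
    """EMF between the axle and rim of a conducting disc of the given
    radius, spinning at angular velocity omega (rad/s) in a uniform
    axial field B (tesla). The radial arm sweeps out area at a rate
    (1/2)*omega*r**2, so d(Phi_B)/dt = (1/2)*B*omega*r**2."""
    return 0.5 * B * omega * radius**2

def steady_state_omega(V, B, radius):
    """Spin rate at which the back-EMF balances the drive voltage V,
    i.e. d(Phi_B)/dt = V (neglecting friction and Joule heating)."""
    return 2.0 * V / (B * radius**2)

# A 0.1 m disc in a 0.5 T field driven at 1.0 V settles where the
# back-EMF equals the drive voltage:
omega = steady_state_omega(1.0, 0.5, 0.1)
back_emf = disc_emf(0.5, 0.1, omega)
```

At the steady state, `back_emf` equals the 1.0 V drive, which is exactly the condition d ΦB / dt = V described above.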

[edit] Electrical transformer


Main article: transformer
The EMF predicted by Faraday's law is also responsible for electrical transformers. When the
electric current in a loop of wire changes, the changing current creates a changing magnetic
field. A second wire in reach of this magnetic field will experience this change in magnetic
field as a change in its coupled magnetic flux, d ΦB / d t. Therefore, an electromotive force
is set up in the second loop called the induced EMF or transformer EMF. If the two ends
of this loop are connected through an electrical load, current will flow.
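The transformer EMF described here can be sketched with the standard mutual-inductance relation ΦB = M·I, so that emf = −M dI/dt. The mutual inductance M and the signal values below are illustrative assumptions, not from this article:

```python
import math

def transformer_emf(M, dI_dt):
    """EMF induced in the second loop when the current in the first
    loop changes at dI_dt (A/s), for mutual inductance M (henry):
    emf = -M * dI/dt, since the coupled flux is Phi_B = M * I."""
    return -M * dI_dt

# With a sinusoidal primary current I(t) = I0*sin(2*pi*f*t), the
# secondary EMF peaks at M * I0 * 2*pi*f:
M, I0, f = 5e-3, 2.0, 50.0   # illustrative values
peak_emf = M * I0 * 2 * math.pi * f
```

Connecting a load across the secondary then drives a current proportional to this induced EMF, as the section states.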

[edit] Magnetic flow meter


Main article: magnetic flow meter

Faraday's law is used for measuring the flow of electrically conductive liquids and slurries.
Such instruments are called magnetic flow meters. The induced voltage ε generated in the
magnetic field B due to a conductive liquid moving at velocity v is given by:

    ε = Bℓv

where ℓ is the distance between electrodes in the magnetic flow meter.
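The motional-EMF relation ε = Bℓv can be sketched directly; the function name and the numeric values below are illustrative only:

```python
def flowmeter_voltage(B, electrode_gap, v):
    """Motional EMF across the electrodes of a magnetic flow meter:
    epsilon = B * l * v, where l is the distance between the
    electrodes, B the applied field (tesla), and v the liquid's
    velocity (m/s) perpendicular to B."""
    return B * electrode_gap * v

# e.g. a liquid at 2 m/s in a 0.1 T field with electrodes 0.05 m apart:
voltage = flowmeter_voltage(0.1, 0.05, 2.0)  # ~0.01 V
```

The voltage scales linearly with flow speed, which is what makes the instrument a practical flow sensor.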

[edit] Parasitic induction and waste heating


All metal objects moving in relation to a static magnetic field will experience inductive
power flow, as do all stationary metal objects in relation to a moving magnetic field. These
power flows are occasionally undesirable, resulting in flowing electric current at very low
voltage and heating of the metal.

There are a number of methods employed to control these undesirable inductive effects.

• Electromagnets in electric motors, generators, and transformers do not use solid
metal, but instead use thin sheets of metal plate, called laminations. These thin plates
reduce the parasitic eddy currents, as described below.
• Inductive coils in electronics typically use magnetic cores to minimize parasitic
current flow. They are a mixture of metal powder plus a resin binder that can hold any
shape. The binder prevents parasitic current flow through the powdered metal.

[edit] Electromagnet laminations


Eddy currents occur when a solid metallic mass is rotated in a magnetic field, because the
outer portion of the metal cuts more lines of force than the inner portion; the induced
electromotive force, not being uniform, tends to set up currents between the points of greatest
and least potential. Eddy currents consume a considerable amount of energy and often cause
a harmful rise in temperature.[22]

Only five laminations or plates are shown in this example, so as to show the subdivision of
the eddy currents. In practical use, the number of laminations or punchings ranges from 40 to
66 per inch, and brings the eddy current loss down to about one percent. While the plates can
be separated by insulation, the voltage is so low that the natural rust/oxide coating of the
plates is enough to prevent current flow across the laminations.[23]

This is a rotor approximately 20mm in diameter from a DC motor used in a CD player. Note
the laminations of the electromagnet pole pieces, used to limit parasitic inductive losses.

[edit] Parasitic induction within inductors

In this illustration, a solid copper bar inductor on a rotating armature is just passing under the
tip of the pole piece N of the field magnet. Note the uneven distribution of the lines of force
across the bar inductor. The magnetic field is more concentrated and thus stronger on the left
edge of the copper bar (a,b) while the field is weaker on the right edge (c,d). Since the two
edges of the bar move with the same velocity, this difference in field strength across the bar
creates whirls or current eddies within the copper bar.[24]
This is one of the reasons why high voltage devices tend to be more efficient than low
voltage devices. High voltage devices use many turns of small-gauge wire in motors,
generators, and transformers. These many small turns of inductor wire in the electromagnet
break up the eddy flows that can form within the large, thick inductors of low voltage, high
current devices.

Gauss's law
From Wikipedia, the free encyclopedia


This article is about Gauss's law concerning the electric field. For an analogous law
concerning the magnetic field, see Gauss's law for magnetism. For an analogous law
concerning the gravitational field, see Gauss's law for gravity. For Gauss's theorem, a
general theorem relevant to all of these laws, see Divergence theorem.


In physics, Gauss's law, also known as Gauss's flux theorem, is a law relating the
distribution of electric charge to the resulting electric field. Gauss's law states that:

The electric flux through any closed surface is proportional to the enclosed electric charge.[1]

The law was formulated by Carl Friedrich Gauss in 1835, but was not published until 1867.[2]
It is one of four of Maxwell's equations which form the basis of classical electrodynamics,
the other three being Gauss's law for magnetism, Faraday's law of induction, and Ampère's
law with Maxwell's correction. Gauss's law can be used to derive Coulomb's law,[3] and vice
versa.

Gauss's law may be expressed in its integral form:

    ∮S E · dA = Q/ε0

where the left-hand side of the equation is a surface integral denoting the electric flux through
a closed surface S, and the right-hand side of the equation is the total charge enclosed by S
divided by the electric constant.

Gauss's law also has a differential form:

    ∇ · E = ρ/ε0

where ∇ · E is the divergence of the electric field, and ρ is the charge density.

The integral and differential forms are related by the divergence theorem, also called Gauss's
theorem. Each of these forms can also be expressed two ways: In terms of a relation between
the electric field E and the total electric charge, or in terms of the electric displacement field
D and the free electric charge.

Gauss's law has a close mathematical similarity with a number of laws in other areas of
physics, such as Gauss's law for magnetism and Gauss's law for gravity. In fact, any "inverse-
square law" can be formulated in a way similar to Gauss's law: For example, Gauss's law
itself is essentially equivalent to the inverse-square Coulomb's law, and Gauss's law for
gravity is essentially equivalent to the inverse-square Newton's law of gravity.

Gauss's law can be used to demonstrate that the electric field inside a Faraday cage
containing no charges is zero. Gauss's law is something of an electrical analogue of Ampère's law, which
deals with magnetism.

Contents
[hide]

• 1 In terms of total charge


o 1.1 Integral form
 1.1.1 Applying the integral form
o 1.2 Differential form
o 1.3 Equivalence of integral and differential forms
• 2 In terms of free charge
o 2.1 Free versus bound charge
o 2.2 Integral form
o 2.3 Differential form
• 3 Equivalence of total and free charge statements
o 3.1 In linear materials
• 4 Relation to Coulomb's law
o 4.1 Deriving Gauss's law from Coulomb's law
o 4.2 Deriving Coulomb's law from Gauss's law
• 5 See also
• 6 Notes
• 7 References

• 8 External links

[edit] In terms of total charge


[edit] Integral form

For a volume V with surface S, Gauss's law states that

    ΦE,S = Q/ε0

where ΦE,S is the electric flux through S, Q is the total charge inside V, and ε0 is the electric
constant. The electric flux is given by a surface integral over S:

    ΦE,S = ∮S E · dA

where E is the electric field, dA is a vector representing an infinitesimal element of area,[note 1]
and · represents the dot product.

[edit] Applying the integral form

Main article: Gaussian surface


See also: Capacitance#Gauss's law

If the electric field is known everywhere, Gauss's law makes it quite easy, in principle, to find
the distribution of electric charge: The charge in any given region can be deduced by
integrating the electric field to find the flux.

However, much more often, it is the reverse problem that needs to be solved: The electric
charge distribution is known, and the electric field needs to be computed. This is much more
difficult, since knowing the total flux through a given surface gives almost no information
about the electric field, which could pass in and out of the surface in arbitrarily complicated
patterns.

An exception is if there is some symmetry in the situation, which mandates that the electric
field passes through the surface in a uniform way. Then, if the total flux is known, the field
itself can be deduced at every point. Common examples of symmetries which lend
themselves to Gauss's law include cylindrical symmetry, planar symmetry, and spherical
symmetry. See the article Gaussian surface for examples where these symmetries are
exploited to compute electric fields.
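The spherically symmetric case can be sketched concretely. For a uniformly charged sphere, a concentric spherical Gaussian surface makes the flux integral collapse to E times the surface area; the helper below and its numeric values are illustrative only:

```python
import math

EPS0 = 8.854187817e-12  # electric constant, F/m

def sphere_field(Q, R, r):
    """Radial E-field of a uniformly charged sphere (total charge Q,
    radius R), from the integral form of Gauss's law applied to a
    concentric spherical Gaussian surface of radius r:
        E * 4*pi*r**2 = Q_enclosed / eps0.
    Spherical symmetry makes E uniform over the surface, so the flux
    integral reduces to E times the surface area."""
    # inside the sphere, only the charge within radius r is enclosed
    q_enc = Q if r >= R else Q * (r / R) ** 3
    return q_enc / (4 * math.pi * EPS0 * r ** 2)
```

Outside the sphere the result is identical to that of a point charge at the center; inside, the field grows linearly with r, vanishing at the center.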

[edit] Differential form

In differential form, Gauss's law states:

    ∇ · E = ρ/ε0

where ∇ · denotes divergence, E is the electric field, ρ is the total electric charge density
(including both free and bound charge), and ε0 is the electric constant. This is mathematically
equivalent to the integral form, because of the divergence theorem.

[edit] Equivalence of integral and differential forms

Main article: Divergence theorem

The integral and differential forms are mathematically equivalent, by the divergence theorem.
Here is the argument more specifically:

The integral form of Gauss's law is:

    ∮S E · dA = Q/ε0

for any closed surface S containing charge Q. By the divergence theorem, this equation is
equivalent to:

    ∫V (∇ · E) dV = Q/ε0

for any volume V containing charge Q. By the relation between charge and charge density,
this equation is equivalent to:

    ∫V (∇ · E) dV = ∫V (ρ/ε0) dV

for any volume V. In order for this equation to be simultaneously true for every possible
volume V, it is necessary (and sufficient) for the integrands to be equal everywhere.
Therefore, this equation is equivalent to:

    ∇ · E = ρ/ε0

Thus the integral and differential forms are equivalent.
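The shape-independence of the integral form can also be checked numerically: the flux of a point charge's field through any closed surface enclosing it is Q/ε0 regardless of the surface's size. The sketch below integrates over a cube by the midpoint rule; the helper name and values are illustrative only:

```python
import math

EPS0 = 8.854187817e-12  # electric constant, F/m

def flux_through_cube(Q, half_side, n=200):
    """Midpoint-rule surface integral of E . dA for a point charge Q
    at the origin, over a cube of the given half-side. Gauss's law
    predicts the total flux is Q/eps0, independent of the cube's size."""
    a = half_side
    h = 2.0 * a / n
    face_flux = 0.0
    for i in range(n):
        for j in range(n):
            u = -a + (i + 0.5) * h
            v = -a + (j + 0.5) * h
            r2 = u * u + v * v + a * a
            # normal component of E on the face z = +a:
            Ez = Q * a / (4.0 * math.pi * EPS0 * r2 ** 1.5)
            face_flux += Ez * h * h
    return 6.0 * face_flux  # by symmetry, all six faces contribute equally

# The cube's size drops out, as Gauss's law requires:
# flux_through_cube(1e-9, 1.0) and flux_through_cube(1e-9, 3.0)
# both approximate Q/eps0.
```

Unlike the spherical case, no symmetry argument gives the field on the cube's surface in closed form, which is why the integral must be done numerically here.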


[edit] In terms of free charge
[edit] Free versus bound charge

Main article: Electric polarization

The electric charge that arises in the simplest textbook situations would be classified as "free
charge"—for example, the charge which is transferred in static electricity, or the charge on a
capacitor plate. In contrast, "bound charge" arises only in the context of dielectric
(polarizable) materials. (All materials are polarizable to some extent.) When such materials
are placed in an external electric field, the electrons remain bound to their respective atoms,
but shift a microscopic distance in response to the field, so that they're more on one side of
the atom than the other. All these microscopic displacements add up to give a macroscopic
net charge distribution, and this constitutes the "bound charge".

Although microscopically, all charge is fundamentally the same, there are often practical
reasons for wanting to treat bound charge differently from free charge. The result is that the
more "fundamental" Gauss's law, in terms of E, is sometimes put into the equivalent form
below, which is in terms of D and the free charge only.

[edit] Integral form

This formulation of Gauss's law states that, for any volume V in space, with surface S, the
following equation holds:

    ΦD,S = Qfree

where ΦD,S is the flux of the electric displacement field D through S, and Qfree is the free
charge contained in V. The flux ΦD,S is defined analogously to the flux ΦE,S of the electric
field E through S. Specifically, it is given by the surface integral

    ΦD,S = ∮S D · dA

[edit] Differential form

The differential form of Gauss's law, involving free charge only, states:

    ∇ · D = ρfree

where ∇ · D is the divergence of the electric displacement field, and ρfree is the free electric
charge density.

The differential form and integral form are mathematically equivalent. The proof primarily
involves the divergence theorem.

[edit] Equivalence of total and free charge statements


[show]Proof that the formulations of Gauss's law in terms of free
charge are equivalent to the formulations involving total charge.

[edit] In linear materials

In homogeneous, isotropic, nondispersive, linear materials, there is a simple relationship
between E and D:

    D = εE

where ε is the permittivity of the material. Under these circumstances, there is yet another
pair of equivalent formulations of Gauss's law:

    ΦE,S = Qfree/ε   and   ∇ · E = ρfree/ε

[edit] Relation to Coulomb's law


[edit] Deriving Gauss's law from Coulomb's law

Gauss's law can be derived from Coulomb's law, which states that the electric field due to a
stationary point charge is:

    E(r) = (q / (4πε0)) (er / r²)

where

    er is the radial unit vector,
    r is the radius, |r|,
    ε0 is the electric constant,
    q is the charge of the particle, which is assumed to be located at the origin.

Using the expression from Coulomb's law, we get the total field at r by using an integral to
sum the field at r due to the infinitesimal charge at each other point s in space, to give

    E(r) = (1 / (4πε0)) ∫ ρ(s) (r − s) / |r − s|³ d³s

where ρ is the charge density. If we take the divergence of both sides of this equation with
respect to r, and use the known theorem[5]

    ∇ · (s / |s|³) = 4π δ(s)

where δ(s) is the Dirac delta function, the result is

    ∇ · E(r) = (1/ε0) ∫ ρ(s) δ(r − s) d³s

Using the "sifting property" of the Dirac delta function, we arrive at

    ∇ · E(r) = ρ(r)/ε0

which is the differential form of Gauss's law, as desired.

Note that since Coulomb's law only applies to stationary charges, there is no reason to expect
Gauss's law to hold for moving charges based on this derivation alone. In fact, Gauss's law
does hold for moving charges, and in this respect Gauss's law is more general than Coulomb's
law.

[edit] Deriving Coulomb's law from Gauss's law

Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's
law does not give any information regarding the curl of E (see Helmholtz decomposition and
Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in
addition, that the electric field from a point charge is spherically-symmetric (this assumption,
like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if
the charge is in motion).

Taking S in the integral form of Gauss's law to be a spherical surface of radius r, centered at
the point charge Q, we have

    ∮S E · dA = Q/ε0

By the assumption of spherical symmetry, the integrand is a constant which can be taken out
of the integral. The result is

    4πr² r̂ · E(r) = Q/ε0

where r̂ is a unit vector pointing radially away from the charge. Again by spherical symmetry,
E points in the radial direction, and so we get

    E(r) = (Q / (4πε0)) (r̂ / r²)

which is essentially equivalent to Coulomb's law. Thus the inverse-square law dependence of
the electric field in Coulomb's law follows from Gauss's law.
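The inverse-square dependence is easy to check numerically; the helper name and charge value below are illustrative only:

```python
import math

EPS0 = 8.854187817e-12  # electric constant, F/m

def point_charge_field(Q, r):
    """Field magnitude at distance r from a point charge Q: Gauss's
    law over a sphere of radius r gives E * 4*pi*r**2 = Q/eps0, so
    E = Q / (4*pi*eps0*r**2), which is Coulomb's law."""
    return Q / (4.0 * math.pi * EPS0 * r ** 2)

# Doubling the distance quarters the field (inverse-square dependence):
E_near = point_charge_field(1e-9, 0.1)
E_far = point_charge_field(1e-9, 0.2)  # E_near / E_far == 4
```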

[edit] See also


• Method of image charges
• Uniqueness theorem

[edit] Notes
1. ^ More specifically, the infinitesimal area is thought of as planar and with area dA. The vector
dA is normal to this area element and has magnitude dA.[4]

[edit] References
1. ^ Serway, Raymond A. (1996). Physics for Scientists and Engineers with Modern Physics
(4th ed.). p. 687.
2. ^ Bellone, Enrico (1980). A World on Paper: Studies on the Second Scientific Revolution.
3. ^ Halliday, David; Resnick, Robert (1970). Fundamentals of Physics. John Wiley & Sons,
Inc. pp. 452–53.
4. ^ Matthews, Paul (1998). Vector Calculus. Springer. ISBN 3540761802.
5. ^ See, for example, Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.).
Prentice Hall. p. 50. ISBN 0-13-805326-X.

Jackson, John David (1999). Classical Electrodynamics (3rd ed.). New York: Wiley. ISBN 0-
471-30932-X.

[edit] External links


• MIT Video Lecture Series (30 x 50 minute lectures)- Electricity and Magnetism
Taught by Professor Walter Lewin.
• section on Gauss's law in an online textbook
• MISN-0-132 Gauss's Law for Spherical Symmetry (PDF file) by Peter Signell for
Project PHYSNET.
• MISN-0-133 Gauss's Law Applied to Cylindrical and Planar Charge Distributions
(PDF file) by Peter Signell for Project PHYSNET.
• The inverse cube law for dipoles (PDF file) by Eng. Xavier Borg

Retrieved from "http://en.wikipedia.org/wiki/Gauss%27s_law"



• This page was last modified on 23 June 2010 at 23:25.


• Text is available under the Creative Commons Attribution-ShareAlike License;
additional terms may apply. See Terms of Use for details.
Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit
organization.


Maxwell's equations
From Wikipedia, the free encyclopedia


For thermodynamic relations, see Maxwell relations.


Maxwell's equations are a set of four partial differential equations that relate the electric and
magnetic fields to their sources, charge density and current density. These equations can be
combined to show that light is an electromagnetic wave. Individually, the equations are
known as Gauss's law, Gauss's law for magnetism, Faraday's law of induction, and Ampère's
law with Maxwell's correction. The set of equations is named after James Clerk Maxwell.

These four equations, together with the Lorentz force law are the complete set of laws of
classical electromagnetism. The Lorentz force law itself was derived by Maxwell, under the
name of Equation for Electromotive Force, and was one of an earlier set of eight equations
by Maxwell.
Contents
[hide]

• 1 Conceptual description
• 2 General formulation
• 3 History
o 3.1 The term Maxwell's equations
o 3.2 Maxwell's On Physical Lines of Force (1861)
o 3.3 Maxwell's A Dynamical Theory of the Electromagnetic Field (1864)
o 3.4 A Treatise on Electricity and Magnetism (1873)
• 4 Maxwell's equations and matter
o 4.1 Bound charge and current
 4.1.1 Proof that the two general formulations are equivalent
o 4.2 Constitutive relations
 4.2.1 Case without magnetic or dielectric materials
 4.2.2 Case of linear materials
 4.2.3 General case
 4.2.4 Maxwell's equations in terms of E and B for linear materials
 4.2.5 Calculation of constitutive relations
o 4.3 In vacuum
o 4.4 With magnetic monopoles
• 5 Boundary conditions: using Maxwell's equations
• 6 CGS units
• 7 Special relativity
o 7.1 Historical developments
o 7.2 Covariant formulation of Maxwell's equations
• 8 Potentials
• 9 Four-potential
• 10 Differential forms
o 10.1 Conceptual insight from this formulation
• 11 Classical electrodynamics as the curvature of a line bundle
• 12 Curved spacetime
o 12.1 Traditional formulation
o 12.2 Formulation in terms of differential forms
• 13 See also
• 14 Notes
• 15 References
• 16 Further reading
o 16.1 Journal articles
o 16.2 University level textbooks
 16.2.1 Undergraduate
 16.2.2 Graduate
 16.2.3 Older classics
 16.2.4 Computational techniques
• 17 External links
o 17.1 Modern treatments
o 17.2 Historical
o 17.3 Other

[edit] Conceptual description


This section will conceptually describe each of the four Maxwell's equations, and also how
they link together to explain the origin of electromagnetic radiation such as light. The exact
equations are set out in later sections of this article.

• Gauss' law describes how an electric field is generated by electric charges: The
electric field tends to point away from positive charges and towards negative charges.
More technically, it relates the electric flux through any hypothetical closed
"Gaussian surface" to the electric charge within the surface.

• Gauss' law for magnetism states that there are no "magnetic charges" (also called
magnetic monopoles), analogous to electric charges.[1] Instead the magnetic field is
generated by a configuration called a dipole, which has no magnetic charge but
resembles a positive and negative charge inseparably bound together. Equivalent
technical statements are that the total magnetic flux through any Gaussian surface is
zero, or that the magnetic field is a solenoidal vector field.

An Wang's magnetic core memory (1954) is an application of Ampere's law. Each core stores
one bit of data.

• Faraday's law describes how a changing magnetic field can create ("induce") an
electric field.[1] This aspect of electromagnetic induction is the operating principle
behind many electric generators: A bar magnet is rotated to create a changing
magnetic field, which in turn generates an electric field in a nearby wire. (Note: The
"Faraday's law" that occurs in Maxwell's equations is a bit different from the version
originally written by Michael Faraday. Both versions are equally true laws of physics,
but they have different scope, for example whether "motional EMF" is included. See
Faraday's law of induction for details.)

• Ampère's law with Maxwell's correction states that magnetic fields can be generated
in two ways: by electrical current (this was the original "Ampère's law") and by
changing electric fields (this was "Maxwell's correction").

Maxwell's correction to Ampère's law is particularly important: it means that a changing
magnetic field creates an electric field, and a changing electric field creates a magnetic
field.[1][2] Therefore, these equations allow self-sustaining "electromagnetic waves" to travel
through empty space (see electromagnetic wave equation).

The speed calculated for electromagnetic waves, which could be predicted from experiments
on charges and currents,[note 1] exactly matches the speed of light; indeed, light is one form of
electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the
connection between electromagnetic waves and light in 1864, thereby unifying the
previously-separate fields of electromagnetism and optics.
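This unification can be sketched in one line: the wave speed predicted by Maxwell's equations is c = 1/√(μ0 ε0), computable from the two constants measurable in experiments on charges and currents. (The values below are the standard pre-2019 SI values of the magnetic and electric constants.)

```python
import math

MU0 = 4.0e-7 * math.pi    # magnetic constant, H/m (pre-2019 defined value)
EPS0 = 8.854187817e-12    # electric constant, F/m

# Maxwell's equations predict electromagnetic waves propagating at
# c = 1/sqrt(mu0 * eps0); the result matches the measured speed of light.
c = 1.0 / math.sqrt(MU0 * EPS0)  # ~2.998e8 m/s
```

That a purely electrical calculation yields the optically measured speed of light is precisely Maxwell's 1864 argument that light is an electromagnetic wave.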

[edit] General formulation


The equations in this section are given in SI units. Unlike the equations of mechanics (for
example), Maxwell's equations do not keep the same form in other unit systems. Though the general
form remains the same, various definitions get changed and different constants appear at
different places. Other than SI (used in engineering), the units commonly used are Gaussian
units (based on the cgs system and considered to have some theoretical advantages over SI[3]),
Lorentz-Heaviside units (used mainly in particle physics) and Planck units (used in
theoretical physics). See below for CGS-Gaussian units.

Two equivalent, general formulations of Maxwell's equations follow. The first separates
bound charge and bound current (which arise in the context of dielectric and/or magnetized
materials) from free charge and free current (the more conventional type of charge and
current). This separation is useful for calculations involving dielectric or magnetized
materials. The second formulation treats all charge equally, combining free and bound charge
into total charge (and likewise with current). This is the more fundamental or microscopic
point of view, and is particularly useful when no dielectric or magnetic material is present.
More details, and a proof that these two formulations are mathematically equivalent, are
given in section 4.

Symbols in bold represent vector quantities, and symbols in italics represent scalar
quantities. The definitions of terms used in the two tables of equations are given in another
table immediately following.

Formulation in terms of free charge and current

Name                          Differential form            Integral form

Gauss's law                   ∇ · D = ρfree                ∮S D · dA = Qfree
Gauss's law for magnetism     ∇ · B = 0                    ∮S B · dA = 0
Maxwell–Faraday equation      ∇ × E = −∂B/∂t               ∮∂S E · dℓ = −d ΦB / dt
(Faraday's law of induction)
Ampère's circuital law        ∇ × H = Jfree + ∂D/∂t        ∮∂S H · dℓ = Ifree + d ΦD / dt
(with Maxwell's correction)

Formulation in terms of total charge and current[note 2]

Name                          Differential form            Integral form

Gauss's law                   ∇ · E = ρ/ε0                 ∮S E · dA = Q/ε0
Gauss's law for magnetism     ∇ · B = 0                    ∮S B · dA = 0
Maxwell–Faraday equation      ∇ × E = −∂B/∂t               ∮∂S E · dℓ = −d ΦB / dt
(Faraday's law of induction)
Ampère's circuital law        ∇ × B = μ0 J + μ0 ε0 ∂E/∂t   ∮∂S B · dℓ = μ0 I + μ0 ε0 d ΦE / dt
(with Maxwell's correction)

The following table provides the meaning of each symbol and the SI unit of measure:

Definitions and units

E: electric field, also called the electric field intensity; SI unit: volt per meter or,
equivalently, newton per coulomb.
B: magnetic field, also called the magnetic induction, the magnetic field density, or the
magnetic flux density; SI unit: tesla or, equivalently, weber per square meter or volt-second
per square meter.
D: electric displacement field, also called the electric induction or the electric flux density;
SI unit: coulombs per square meter or, equivalently, newtons per volt-meter.
H: magnetizing field, also called the auxiliary magnetic field, the magnetic field intensity,
or the magnetic field; SI unit: amperes per meter.
∇ ·: the divergence operator; ∇ ×: the curl operator; SI unit: per meter (factor contributed
by applying either operator).
∂/∂t: partial derivative with respect to time; SI unit: per second (factor contributed by
applying the operator).
dA: differential vector element of surface area A, with infinitesimally small magnitude and
direction normal to surface S; SI unit: square meters.
dℓ: differential vector element of path length tangential to the path/curve; SI unit: meters.
ε0: permittivity of free space, also called the electric constant, a universal constant; SI unit:
farads per meter.
μ0: permeability of free space, also called the magnetic constant, a universal constant; SI
unit: henries per meter or, equivalently, newtons per ampere squared.
ρfree: free charge density (not including bound charge); SI unit: coulombs per cubic meter.
ρ: total charge density (including both free and bound charge); SI unit: coulombs per cubic
meter.
Jfree: free current density (not including bound current); SI unit: amperes per square meter.
J: total current density (including both free and bound current); SI unit: amperes per square
meter.
Qfree: net free electric charge within the three-dimensional volume V (not including bound
charge); SI unit: coulombs.
Q: net electric charge within the three-dimensional volume V (including both free and
bound charge); SI unit: coulombs.
∮∂S E · dℓ: line integral of the electric field along the boundary ∂S of a surface S (∂S is
always a closed curve); SI unit: joules per coulomb.
∮∂S B · dℓ: line integral of the magnetic field over the closed boundary ∂S of the surface S;
SI unit: tesla-meters.
∮S E · dA: the electric flux (surface integral of the electric field) through the (closed)
surface S (the boundary of the volume V); SI unit: joule-meters per coulomb.
∮S B · dA: the magnetic flux (surface integral of the magnetic B-field) through the (closed)
surface S (the boundary of the volume V); SI unit: tesla meters squared or webers.
ΦB: magnetic flux through any surface S, not necessarily closed; SI unit: webers or,
equivalently, volt-seconds.
ΦE: electric flux through any surface S, not necessarily closed; SI unit: joule-meters per
coulomb.
ΦD: flux of the electric displacement field through any surface S, not necessarily closed; SI
unit: coulombs.
Ifree: net free electric current passing through the surface S (not including bound current);
SI unit: amperes.
I: net electric current passing through the surface S (including both free and bound
current); SI unit: amperes.
Maxwell's equations are generally applied to macroscopic averages of the fields, which vary
wildly on a microscopic scale in the vicinity of individual atoms (where they undergo
quantum mechanical effects as well). It is only in this averaged sense that one can define
quantities such as the permittivity and permeability of a material. At the microscopic level,
Maxwell's equations, ignoring quantum effects, describe fields, charges and currents in free
space—but at this level of detail one must include all charges, even those at an atomic level,
generally an intractable problem.

[edit] History
Although James Clerk Maxwell is said by some not to be the originator of these equations, he
nevertheless derived them independently in conjunction with his molecular vortex model of
Faraday's "lines of force". In doing so, he made an important addition to Ampère's circuital
law.

All four of what are now described as Maxwell's equations can be found in recognizable form
(albeit without any trace of a vector notation, let alone ∇) in his 1861 paper On Physical
Lines of Force, in his 1865 paper A Dynamical Theory of the Electromagnetic Field, and also
in vol. 2 of Maxwell's "A Treatise on Electricity & Magnetism", published in 1873, in
Chapter IX, entitled "General Equations of the Electromagnetic Field". This book by
Maxwell pre-dates publications by Heaviside, Hertz and others.

The physicist Richard Feynman predicted that, "The American Civil War will pale into
provincial insignificance in comparison with this important scientific event of the same
decade."[5]

[edit] The term Maxwell's equations

The term Maxwell's equations originally applied to a set of eight equations published by
Maxwell in 1865, but nowadays applies to modified versions of four of these equations that
were grouped together in 1884 by Oliver Heaviside,[6] concurrently with similar work by
Willard Gibbs and Heinrich Hertz.[7] These equations were also known variously as the
Hertz-Heaviside equations and the Maxwell-Hertz equations,[6] and are sometimes still known
as the Maxwell–Heaviside equations.[8]

Maxwell's contribution to science in producing these equations lies in the correction he made
to Ampère's circuital law in his 1861 paper On Physical Lines of Force. He added the
displacement current term to Ampère's circuital law and this enabled him to derive the
electromagnetic wave equation in his later 1865 paper A Dynamical Theory of the
Electromagnetic Field and demonstrate the fact that light is an electromagnetic wave. This
fact was then later confirmed experimentally by Heinrich Hertz in 1887.

The concept of fields was introduced by, among others, Faraday. Albert Einstein wrote:

The precise formulation of the time-space laws was the work of Maxwell. Imagine his
feelings when the differential equations he had formulated proved to him that electromagnetic
fields spread in the form of polarised waves, and at the speed of light! To few men in the
world has such an experience been vouchsafed… it took physicists some decades to grasp the
full significance of Maxwell's discovery, so bold was the leap that his genius forced upon the
conceptions of his fellow-workers
—(Science, May 24, 1940)

The equations were called by some the Hertz-Heaviside equations, but later Einstein referred
to them as the Maxwell-Hertz equations.[6] However, in 1940 Einstein referred to the
equations as Maxwell's equations in "The Fundamentals of Theoretical Physics" published in
the Washington periodical Science, May 24, 1940.

Heaviside worked to eliminate the potentials (electrostatic potential and vector potential) that
Maxwell had used as the central concepts in his equations;[6] this effort was somewhat
controversial,[9] though it was understood by 1884 that the potentials must propagate at the
speed of light like the fields, unlike the then-prevailing conception of the gravitational
potential as an instantaneous action at a distance.[7] Modern analysis of, for example, radio antennas,
makes full use of Maxwell's vector and scalar potentials to separate the variables, a common
technique used in formulating the solutions of differential equations. However, the potentials
can be introduced by algebraic manipulation of the four fundamental equations.

The net result of Heaviside's work was the symmetrical duplex set of four equations,[6] all of
which originated in Maxwell's previous publications, in particular Maxwell's 1861 paper On
Physical Lines of Force, the 1865 paper A Dynamical Theory of the Electromagnetic Field
and the Treatise. The fourth was a partial time derivative version of Faraday's law of
induction that doesn't include motionally induced EMF; this version is often termed the
Maxwell-Faraday equation or Faraday's law in differential form to keep clear the distinction
from Faraday's law of induction, though it expresses the same law.[10][11]

[edit] Maxwell's On Physical Lines of Force (1861)

The four modern day Maxwell's equations appeared throughout Maxwell's 1861 paper On
Physical Lines of Force:

i. Equation (56) in Maxwell's 1861 paper is ∇ · B = 0.
ii. Equation (112) is Ampère's circuital law with Maxwell's displacement current added.
It is the addition of displacement current that is the most significant aspect of
Maxwell's work in electromagnetism, as it enabled him to later derive the
electromagnetic wave equation in his 1865 paper A Dynamical Theory of the
Electromagnetic Field, and hence show that light is an electromagnetic wave. It is
therefore this aspect of Maxwell's work which gives the equations their full
significance. (Interestingly, Kirchhoff derived the telegrapher's equations in 1857
without using displacement current. But he did use Poisson's equation and the
equation of continuity which are the mathematical ingredients of the displacement
current. Nevertheless, Kirchhoff believed his equations to be applicable only inside an
electric wire and so he is not credited with having discovered that light is an
electromagnetic wave).
iii. Equation (115) is Gauss's law.
iv. Equation (54) is an equation that Oliver Heaviside referred to as 'Faraday's law'. This
equation caters for the time-varying aspect of electromagnetic induction, but not for
the motionally induced aspect, whereas Faraday's original flux law caters for both
aspects. Maxwell deals with the motionally dependent aspect of electromagnetic
induction, v × B, at equation (77). Equation (77), which is the same as equation (D) in
the original eight Maxwell's equations listed below, corresponds to all intents and
purposes to the modern-day force law F = q ( E + v × B ), which sits adjacent to
Maxwell's equations and bears the name Lorentz force, even though Maxwell derived
it when Lorentz was still a young boy.

The difference between the B and the H vectors can be traced back to Maxwell's 1855 paper
entitled On Faraday's Lines of Force, which was read to the Cambridge Philosophical
Society. The paper presented a simplified model of Faraday's work, and how the two
phenomena were related. He reduced all of the current knowledge into a linked set of
differential equations.

Figure of Maxwell's molecular vortex model. For a uniform magnetic field, the field lines
point outward from the display screen, as can be observed from the black dots in the middle
of the hexagons. The vortex of each hexagonal molecule rotates counter-clockwise. The small
green circles are clockwise-rotating particles sandwiched between the molecular vortices.

It is later clarified in his concept of a sea of molecular vortices that appears in his 1861 paper
On Physical Lines of Force. Within that context, H represented pure vorticity (spin),
whereas B was a weighted vorticity that was weighted for the density of the vortex sea.
Maxwell considered magnetic permeability µ to be a measure of the density of the vortex sea.
Hence the relationship,

(1) B = µH (magnetic induction current causes a magnetic current density)

was essentially a rotational analogy to the linear electric current relationship,

(2) J = ρv (electric convection current)

where ρ is electric charge density. B was seen as a kind of magnetic current of vortices
aligned in their axial planes, with H being the circumferential velocity of the vortices. With µ
representing vortex density, it follows that the product of µ with vorticity H leads to the
magnetic field denoted as B.

The electric current equation can be viewed as a convective current of electric charge that
involves linear motion. By analogy, the magnetic equation is an inductive current involving
spin. There is no linear motion in the inductive current along the direction of the B vector.
The magnetic inductive current represents lines of force. In particular, it represents lines of
inverse square law force.

The extension of the above considerations confirms that where B is to H, and where J is to ρ,
then it necessarily follows from Gauss's law and from the equation of continuity of charge
that E is to D. i.e. B parallels with E, whereas H parallels with D.

[edit] Maxwell's A Dynamical Theory of the Electromagnetic Field (1864)

Main article: A Dynamical Theory of the Electromagnetic Field

In 1864 Maxwell published A Dynamical Theory of the Electromagnetic Field in which he
showed that light was an electromagnetic phenomenon. Confusion over the term "Maxwell's
equations" is exacerbated because it is also sometimes used for a set of eight equations that
appeared in Part III of Maxwell's 1864 paper A Dynamical Theory of the Electromagnetic
Field, entitled "General Equations of the Electromagnetic Field,"[12] a confusion compounded
by the writing of six of those eight equations as three separate equations (one for each of the
Cartesian axes), resulting in twenty equations and twenty unknowns. (As noted above, this
terminology is not common: Modern references to the term "Maxwell's equations" refer to
the Heaviside restatements.)

The eight original Maxwell's equations can be written in modern vector notation as follows:

(A) The law of total currents

J_tot = J + ∂D/∂t

(B) The equation of magnetic force

µH = ∇ × A

(C) Ampère's circuital law

∇ × H = J_tot

(D) Electromotive force created by convection, induction, and by static electricity. (This is in
effect the Lorentz force)

E = µ v × H − ∂A/∂t − ∇φ

(E) The electric elasticity equation

E = (1/ε) D

(F) Ohm's law

E = (1/σ) J

(G) Gauss's law

∇ · D = ρ

(H) Equation of continuity

∇ · J = −∂ρ/∂t

or equivalently

∇ · J_tot = 0

Notation
H is the magnetizing field, which Maxwell called the magnetic intensity.
J is the electric current density (with J_tot being the total current including
displacement current).[note 3]
D is the displacement field (called the electric displacement by Maxwell).
ρ is the free charge density (called the quantity of free electricity by Maxwell).
A is the magnetic vector potential (called the angular impulse by Maxwell).
E is called the electromotive force by Maxwell. The term electromotive force is
nowadays used for voltage, but it is clear from the context that Maxwell's meaning
corresponded more to the modern term electric field.
φ is the electric potential (which Maxwell also called electric potential).
σ is the electrical conductivity (Maxwell called the inverse of conductivity the
specific resistance, what is now called the resistivity).

It is interesting to note the µ v × H term that appears in equation D. Equation D is therefore
effectively the Lorentz force, similarly to equation (77) of his 1861 paper (see above).

When Maxwell derives the electromagnetic wave equation in his 1865 paper, he uses
equation D to cater for electromagnetic induction rather than Faraday's law of induction
which is used in modern textbooks. (Faraday's law itself does not appear among his
equations.) However, Maxwell drops the µ v × H term from equation D when he is deriving
the electromagnetic wave equation, as he considers the situation only from the rest frame.

[edit] A Treatise on Electricity and Magnetism (1873)

In A Treatise on Electricity and Magnetism, an 1873 textbook on electromagnetism written
by James Clerk Maxwell, the equations are compiled into two sets.

The first set is

The second set is


[edit] Maxwell's equations and matter
[edit] Bound charge and current

Main articles: Bound charge and Magnetization current

Left: A schematic view of how an assembly of microscopic dipoles appears like a
macroscopically separated pair of charged sheets, as shown at top and bottom (these sheets
are not intended to be viewed as originating the electric field that causes the dipole alignment,
but as a representation equivalent to the dipole array); Right: How an assembly of
microscopic current loops appears as a macroscopically circulating current loop. Inside the
boundaries, the individual contributions tend to cancel, but at the boundaries no cancellation
occurs.

If an electric field is applied to a dielectric material, each of the molecules responds by
forming a microscopic electric dipole—its atomic nucleus will move a tiny distance in the
direction of the field, while its electrons will move a tiny distance in the opposite direction.
This is called polarization of the material. In an idealized situation like that shown in the
figure, the distribution of charge that results from these tiny movements turns out to be
identical (outside the material) to having a layer of positive charge on one side of the
material, and a layer of negative charge on the other side (a macroscopic separation of
charge) even though all of the charges involved are bound to individual molecules. The
volume polarization P is a result of bound charge. (Mathematically, once physical
approximation has established the electric dipole density P based upon the underlying
behavior of atoms, the surface charge that is equivalent to the material with its internal
polarization is provided by the divergence theorem applied to a region straddling the interface
between the material and the surrounding vacuum.)[13][14]

Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are
intrinsically linked to the angular momentum of the atoms' components, most notably their
electrons. The connection to angular momentum suggests the picture of an assembly of
microscopic current loops. Outside the material, an assembly of such microscopic current
loops is not different from a macroscopic current circulating around the material's surface,
despite the fact that no individual magnetic moment is traveling a large distance. The bound
currents can be described using M. (Mathematically, once physical approximation has
established the magnetic dipole density based upon the underlying behavior of atoms, the
surface current that is equivalent to the material with its internal magnetization is provided by
Stokes' theorem applied to a path straddling the interface between the material and the
surrounding vacuum.)[15][16]
These ideas suggest that for some situations the microscopic details of the atomic and
electronic behavior can be treated in a simplified fashion that ignores many details on a fine
scale that may be unimportant to understanding matters on a grosser scale. That notion
underlies the bound/free partition of behavior.

[edit] Proof that the two general formulations are equivalent

In this section, a simple proof is outlined which shows that the two alternate general
formulations of Maxwell's equations given above are mathematically equivalent.

The relation between polarization, magnetization, bound charge, and bound current is as
follows:

ρ_b = −∇ · P

J_b = ∇ × M + ∂P/∂t

where P and M are polarization and magnetization, and ρb and Jb are bound charge and
current, respectively. Plugging in these relations, it can be easily demonstrated that the two
formulations of Maxwell's equations given above are precisely equivalent.
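The bound sources defined this way conserve charge identically, which is what makes the substitution consistent. The following is a quick symbolic check of that fact (a sketch using the sympy library; the component functions Px, …, Mz are arbitrary placeholders, not quantities from the article):

```python
# Symbolic check (sympy) that the bound charge rho_b = -div P and the bound
# current J_b = dP/dt + curl M satisfy the continuity equation
# d(rho_b)/dt + div J_b = 0 for arbitrary smooth P and M.
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
Px, Py, Pz = [sp.Function(n)(x, y, z, t) for n in ('Px', 'Py', 'Pz')]
Mx, My, Mz = [sp.Function(n)(x, y, z, t) for n in ('Mx', 'My', 'Mz')]

rho_b = -(sp.diff(Px, x) + sp.diff(Py, y) + sp.diff(Pz, z))

# J_b = dP/dt + curl M, written out in Cartesian components
Jbx = sp.diff(Px, t) + sp.diff(Mz, y) - sp.diff(My, z)
Jby = sp.diff(Py, t) + sp.diff(Mx, z) - sp.diff(Mz, x)
Jbz = sp.diff(Pz, t) + sp.diff(My, x) - sp.diff(Mx, y)
div_Jb = sp.diff(Jbx, x) + sp.diff(Jby, y) + sp.diff(Jbz, z)

residual = sp.simplify(sp.diff(rho_b, t) + div_Jb)
print(residual)  # 0: bound charge is conserved identically
```

The residual vanishes for any smooth P and M because mixed partial derivatives commute and the divergence of a curl is zero.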

[edit] Constitutive relations

In order to apply Maxwell's equations (the formulation in terms of free/bound charge and
current using D and H), it is necessary to specify the relations between D and E, and H and
B.

Finding relations between these fields is another way to say that to solve Maxwell's equations
by employing the free/bound partition of charges and currents, one needs the properties of the
materials relating the response of bound currents and bound charges to the fields applied to
these materials.[note 4] These relations may be empirical (based directly upon measurements),
or theoretical (based upon statistical mechanics, transport theory or other tools of condensed
matter physics). The detail employed may be macroscopic or microscopic, depending upon
the level necessary to the problem under scrutiny. These material properties specifying the
response of bound charge and current to the field are called constitutive relations, and
correspond physically to how much polarization and magnetization a material acquires in the
presence of electromagnetic fields.

Once the responses of bound currents and charges are related to the fields, Maxwell's
equations can be fully formulated in terms of the E- and B-fields alone, with only the free
charges and currents appearing explicitly in the equations.

[edit] Case without magnetic or dielectric materials

In the absence of magnetic or dielectric materials, the relations are simple:

D = ε0E ,    H = B/μ0
where ε0 and μ0 are two universal constants, called the permittivity of free space and
permeability of free space, respectively.

[edit] Case of linear materials

In a linear, isotropic, nondispersive, uniform material, the relations are also straightforward:

D = εE ,    H = B/μ
where ε and μ are constants (which depend on the material), called the permittivity and
permeability, respectively, of the material.
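One practical consequence of these linear relations is that they fix the phase velocity of waves in the material, and hence its refractive index. A minimal numeric sketch (the values ε_r = 2.25, μ_r = 1 are assumptions, roughly those of a glass-like nonmagnetic dielectric):

```python
# Linear constitutive relations D = eps*E, B = mu*H for an illustrative
# material, and the resulting refractive index n = sqrt(eps_r * mu_r).
import math

eps0 = 8.8541878128e-12    # F/m, permittivity of free space
mu0 = 4 * math.pi * 1e-7   # H/m, permeability of free space

eps_r, mu_r = 2.25, 1.0    # assumed relative permittivity/permeability
eps, mu = eps_r * eps0, mu_r * mu0

E = 100.0                  # applied electric field magnitude (V/m)
D = eps * E                # displacement field (C/m^2)
H = 1.0                    # magnetizing field (A/m)
B = mu * H                 # magnetic field (T)

# Phase velocity v = 1/sqrt(eps*mu); refractive index n = c/v
c = 1.0 / math.sqrt(eps0 * mu0)
v = 1.0 / math.sqrt(eps * mu)
n = c / v
print(n)  # ~1.5 for eps_r = 2.25, mu_r = 1
```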

[edit] General case

For real-world materials, the constitutive relations are not simple proportionalities, except
approximately. The relations can usually still be written:

D = εE ,    H = B/μ
but ε and μ are not, in general, simple constants, but rather functions. For example, ε and μ
can depend upon:

• The strength of the fields (the case of nonlinearity, which occurs when ε and μ are
functions of E and B; see, for example, Kerr and Pockels effects),
• The direction of the fields (the case of anisotropy, birefringence, or dichroism; which
occurs when ε and μ are second-rank tensors),
• The frequency with which the fields vary (the case of dispersion, which occurs when
ε and μ are functions of frequency; see, for example, Kramers–Kronig relations).

If further there are dependencies on:

• The position inside the material (the case of a nonuniform material, which occurs
when the response of the material varies from point to point within the material, an
effect called spatial inhomogeneity; for example in a domained structure,
heterostructure or a liquid crystal, or most commonly in the situation where there are
simply multiple materials occupying different regions of space),
• The history of the fields—in a linear time-invariant material, this is equivalent to the
material dispersion mentioned above (a frequency dependence of the ε and μ), which
after Fourier transforming turns into a convolution with the fields at past times,
expressing a non-instantaneous response of the material to an applied field; in a
nonlinear or time-varying medium, the time-dependent response can be more
complicated, such as the example of a hysteresis response,

then the constitutive relations take a more complicated form:[17][18]

D(r, t) = ε0E(r, t) + P(r, t) ,    H(r, t) = B(r, t)/μ0 − M(r, t) ,

in which the permittivity and permeability functions are replaced by integrals over the more
general electric and magnetic susceptibilities.

It may be noted that man-made materials can be designed to have customized permittivity
and permeability, such as metamaterials and photonic crystals.

[edit] Maxwell's equations in terms of E and B for linear materials

Substituting in the constitutive relations above, Maxwell's equations in linear, dispersionless,
time-invariant materials (differential form only) are:

∇ · (εE) = ρ_f
∇ · B = 0
∇ × E = −∂B/∂t
∇ × (B/μ) = J_f + ∂(εE)/∂t
These are formally identical to the general formulation in terms of E and B (given above),
except that the permittivity of free space was replaced with the permittivity of the material
(see also electric displacement field, electric susceptibility and polarization density), the
permeability of free space was replaced with the permeability of the material (see also
magnetization, magnetic susceptibility and magnetic field), and only free charges and
currents are included (instead of all charges and currents). Unless that material is
homogeneous in space, ε and μ cannot be factored out of the derivative expressions on the
left-hand sides.

[edit] Calculation of constitutive relations

See also: Computational electromagnetics

The fields in Maxwell's equations are generated by charges and currents. Conversely, the
charges and currents are affected by the fields through the Lorentz force equation:

F = q(E + v × B)
where q is the charge on the particle and v is the particle velocity. (It also should be
remembered that the Lorentz force is not the only force exerted upon charged bodies, which
also may be subject to gravitational, nuclear, etc. forces.) Therefore, in both classical and
quantum physics, the precise dynamics of a system form a set of coupled differential
equations, which are almost always too complicated to be solved exactly, even at the level of
statistical mechanics.[note 5] This remark applies not only to the dynamics of free charges and
currents (which enter Maxwell's equations directly), but also the dynamics of bound charges
and currents, which enter Maxwell's equations through the constitutive equations, as
described next.
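The simplest nontrivial instance of these coupled dynamics is a single charge moving under the Lorentz force in a uniform magnetic field. The sketch below integrates that motion with the standard Boris scheme (a common choice in plasma simulation, not a method specified in this article); all parameter values are illustrative:

```python
# A single charged particle pushed by the Lorentz force F = q(E + v x B)
# in a uniform magnetic field, using the Boris integrator. Since the
# magnetic force does no work, the particle's speed should stay constant.
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

q, m = 1.0, 1.0            # charge and mass (arbitrary units)
E = (0.0, 0.0, 0.0)        # no electric field
B = (0.0, 0.0, 1.0)        # uniform magnetic field along z
v = (1.0, 0.0, 0.0)        # initial velocity in the plane normal to B
x = (0.0, 0.0, 0.0)
dt = 0.01

for _ in range(10000):
    # half electric kick
    vminus = tuple(v[i] + 0.5 * dt * (q / m) * E[i] for i in range(3))
    # magnetic rotation (exactly norm-preserving)
    t_vec = tuple(0.5 * dt * (q / m) * B[i] for i in range(3))
    t2 = sum(c * c for c in t_vec)
    s_vec = tuple(2.0 * c / (1.0 + t2) for c in t_vec)
    c1 = cross(vminus, t_vec)
    vprime = tuple(vminus[i] + c1[i] for i in range(3))
    c2 = cross(vprime, s_vec)
    vplus = tuple(vminus[i] + c2[i] for i in range(3))
    # half electric kick, then position update
    v = tuple(vplus[i] + 0.5 * dt * (q / m) * E[i] for i in range(3))
    x = tuple(x[i] + dt * v[i] for i in range(3))

speed = math.sqrt(sum(c * c for c in v))
print(round(speed, 9))  # 1.0: circular gyration at constant speed
```

The particle traces the expected circular gyro-orbit of radius m·v/(q·B); the Boris rotation step conserves the speed exactly, which is why the scheme is favored over naive integrators for this problem.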

Commonly, real materials are approximated as continuous media with bulk properties such as
the refractive index, permittivity, permeability, conductivity, and/or various susceptibilities.
These lead to the macroscopic Maxwell's equations, which are written (as given above) in
terms of free charge/current densities and D, H, E, and B (rather than E and B alone) along
with the constitutive equations relating these fields. For example, although a real material
consists of atoms whose electronic charge densities can be individually polarized by an
applied field, for most purposes behavior at the atomic scale is not relevant and the material
is approximated by an overall polarization density related to the applied field by an electric
susceptibility.

Continuum approximations of atomic-scale inhomogeneities cannot be determined from
Maxwell's equations alone, but require some type of quantum mechanical analysis such as
quantum field theory as applied to condensed matter physics. See, for example, density
functional theory, Green-Kubo relations and Green's function (many-body theory). Various
approximate transport equations have evolved, for example, the Boltzmann equation or the
Fokker-Planck equation or the Navier-Stokes equations. Some examples where these
equations are applied are magnetohydrodynamics, fluid dynamics, electrohydrodynamics,
superconductivity, and plasma modeling. An entire physical apparatus for dealing with these
matters has developed. A different set of homogenization methods (evolving from a tradition
in treating materials such as conglomerates and laminates) are based upon approximation of
an inhomogeneous material by a homogeneous effective medium[19][20] (valid for excitations
with wavelengths much larger than the scale of the inhomogeneity).[21][22][23][24]

Theoretical results have their place, but often require fitting to experiment. Continuum-
approximation properties of many real materials rely upon measurement,[25] for example,
ellipsometry measurements.

In practice, some material properties have a negligible impact in particular circumstances,
permitting neglect of small effects. For example: optical nonlinearities can be neglected for
low field strengths; material dispersion is unimportant where frequency is limited to a narrow
bandwidth; material absorption can be neglected for wavelengths where a material is
transparent; and metals with finite conductivity often are approximated at microwave or
longer wavelengths as perfect metals with infinite conductivity (forming hard barriers with
zero skin depth of field penetration).

And, of course, some situations demand that Maxwell's equations and the Lorentz force be
combined with other forces that are not electromagnetic. An obvious example is gravity. A
more subtle example, which applies where electrical forces are weakened due to charge
balance in a solid or a molecule, is the Casimir force from quantum electrodynamics.[26]

The connection of Maxwell's equations to the rest of the physical world is via the
fundamental charges and currents. These charges and currents are a response of their sources
to electric and magnetic fields and to other forces. The determination of these responses
involves the properties of physical materials.
[edit] In vacuum

Further information: Electromagnetic wave equation and Sinusoidal plane-wave solutions of
the electromagnetic wave equation

Start with the equations appropriate for the case without dielectric or magnetic materials.
Then assume a vacuum: no charges (ρ = 0) and no currents (J = 0). We then have:

∇ · E = 0
∇ · B = 0
∇ × E = −∂B/∂t
∇ × B = μ0ε0 ∂E/∂t

One set of solutions to these equations takes the form of traveling sinusoidal plane waves,
with the directions of the electric and magnetic fields being orthogonal to one another and to
the direction of travel. The two fields are in phase, traveling at the speed

c = 1/√(μ0ε0)

In fact, Maxwell's equations explain how these waves can physically propagate through
space. The changing magnetic field creates a changing electric field through Faraday's law. In
turn, that electric field creates a changing magnetic field through Maxwell's correction to
Ampère's law. This perpetual cycle allows these waves, now known as electromagnetic
radiation, to move through space at velocity c.
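The predicted propagation speed c = 1/√(μ0ε0) can be reproduced directly from the two SI vacuum constants:

```python
# Compute the speed predicted by Maxwell's equations, c = 1/sqrt(mu0*eps0),
# from the SI values of the vacuum permittivity and permeability.
import math

eps0 = 8.8541878128e-12   # F/m, permittivity of free space
mu0 = 1.25663706212e-6    # H/m, permeability of free space (~4*pi*1e-7)

c = 1.0 / math.sqrt(mu0 * eps0)
print(c)  # ~2.99792458e8 m/s, the measured speed of light in vacuum
```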

Maxwell knew from an 1856 Leyden jar experiment by Wilhelm Eduard Weber and Rudolf
Kohlrausch that c was very close to the measured speed of light in vacuum (already known
at the time), and concluded (correctly) that light is a form of electromagnetic radiation.

[edit] With magnetic monopoles

Maxwell's equations of electromagnetism relate the electric and magnetic fields to the
motions of electric charges. The standard form of the equations provides for an electric
charge, but posits no magnetic charge, in accordance with the fact that magnetic charge has
never been observed and may not exist. There is no known magnetic analog of an electron,
although scientists have recently described behavior in a crystalline state of matter known as
spin ice that has macroscopic behavior like that of magnetic monopoles.[27][28] Except for
this, the equations are symmetric under interchange of the electric and magnetic fields. In
fact, symmetric equations can be written when all charges are zero, and this is how the wave
equation is derived (see immediately above).

Fully symmetric equations can also be written if one allows for the possibility of magnetic
charges.[29] With the inclusion of a variable for these magnetic charges, say ρ_m, there will
also be a "magnetic current" variable in the equations, J_m. The extended Maxwell's
equations, simplified by nondimensionalization via Planck units, are as follows:
Name                          Without magnetic monopoles   With magnetic monopoles (hypothetical)
Gauss's law:                  ∇ · E = 4πρ_e                ∇ · E = 4πρ_e
Gauss's law for magnetism:    ∇ · B = 0                    ∇ · B = 4πρ_m
Maxwell–Faraday equation
(Faraday's law of induction): −∇ × E = ∂B/∂t               −∇ × E = ∂B/∂t + 4πJ_m
Ampère's law
(with Maxwell's extension):   ∇ × B = ∂E/∂t + 4πJ_e        ∇ × B = ∂E/∂t + 4πJ_e

Note: the bivector notation embodies the sign swap, and these four equations can be written as only one
equation.

If magnetic charges do not exist, or if they are not present in a region, then the new variables
are zero, and the symmetric equations reduce to the conventional equations of
electromagnetism such as ∇ · B = 0.

[edit] Boundary conditions: using Maxwell's equations


Although Maxwell's equations apply throughout space and time, practical problems are finite
and solutions to Maxwell's equations inside the solution region are joined to the remainder of
the universe through boundary conditions[30][31][32] and started in time using initial
conditions.[33]

In particular, in a region without any free currents or free charges, the electromagnetic fields
in the region originate elsewhere, and are introduced via boundary and/or initial conditions.
An example of this type is an electromagnetic scattering problem, where an electromagnetic
wave originating outside the scattering region is scattered by a target, and the scattered
electromagnetic wave is analyzed for the information it contains about the target by virtue of
the interaction with the target during scattering.[34]

In some cases, like waveguides or cavity resonators, the solution region is largely isolated
from the universe, for example, by metallic walls, and boundary conditions at the walls
define the fields with influence of the outside world confined to the input/output ends of the
structure.[35] In other cases, the universe at large sometimes is approximated by an artificial
absorbing boundary,[36][37][38] or, for example for radiating antennas or communication
satellites, these boundary conditions can take the form of asymptotic limits imposed upon the
solution.[39] In addition, for example in an optical fiber or thin-film optics, the solution region
often is broken up into subregions with their own simplified properties, and the solutions in
each subregion must be joined to each other across the subregion interfaces using boundary
conditions.[40][41][42] A particular example of this use of boundary conditions is the replacement
of a material with a volume polarization by a charged surface layer, or of a material with a
volume magnetization by a surface current, as described in the section Bound charge and
current.
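As a concrete instance of joining solutions across a subregion interface: at a charge-free boundary between two dielectrics, the tangential component of E and the normal component of D are continuous. A minimal numeric sketch (the permittivities and field values are assumptions chosen for illustration):

```python
# Joining fields across a planar dielectric interface with normal along z:
# with no free surface charge, E_tangential and D_normal are continuous.
eps1_r, eps2_r = 2.0, 5.0      # assumed relative permittivities on each side

E1 = (3.0, 0.0, 4.0)           # field in medium 1 (V/m); z is the normal

# Continuity conditions determine the field in medium 2:
Ex2, Ey2 = E1[0], E1[1]        # tangential E continuous across the boundary
Ez2 = eps1_r * E1[2] / eps2_r  # D_z = eps * E_z continuous across the boundary
E2 = (Ex2, Ey2, Ez2)
print(E2)  # (3.0, 0.0, 1.6): the normal component is reduced by eps1/eps2
```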
Following are some links of a general nature concerning boundary value problems: Examples
of boundary value problems, Sturm-Liouville theory, Dirichlet boundary condition, Neumann
boundary condition, mixed boundary condition, Cauchy boundary condition, Sommerfeld
radiation condition. Needless to say, one must choose the boundary conditions appropriate to
the problem being solved. See also Kempel[43] and the book by Friedman.[44]

[edit] CGS units


The preceding equations are given in the International System of Units, or SI for short. The
related CGS system of units defines its electromagnetic units in terms of centimeters, grams,
and seconds in several variants. In one of those variants, called Gaussian units, the equations
take the following form:[45]

∇ · D = 4πρ_f
∇ · B = 0
∇ × E = −(1/c) ∂B/∂t
∇ × H = (1/c) ∂D/∂t + (4π/c) J_f

where c is the speed of light in a vacuum. For the electromagnetic field in a vacuum,
assuming that there is no current or electric charge present in the vacuum, the equations
become:

∇ · E = 0
∇ · B = 0
∇ × E = −(1/c) ∂B/∂t
∇ × B = (1/c) ∂E/∂t

In this system of units the relation between electric displacement field, electric field and
polarization density is:

D = E + 4πP

And likewise the relation between magnetic induction, magnetic field and total magnetization
is:

B = H + 4πM

In the linear approximation, the electric susceptibility and magnetic susceptibility can be
defined so that:

P = χ_e E ,    M = χ_m H

(Note that although the susceptibilities are dimensionless numbers in both cgs and SI, they
have different values in the two unit systems, by a factor of 4π.) The permittivity and
permeability are:

ε = 1 + 4πχ_e ,    μ = 1 + 4πχ_m

so that

D = εE ,    B = μH

In vacuum, one has the simple relations ε = μ = 1, D=E, and B=H.
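The factor-of-4π bookkeeping between the two unit systems can be checked numerically: the same physical material has χ_SI = 4π·χ_Gaussian, and both conventions yield the same dimensionless relative permittivity. (The susceptibility value below is an arbitrary illustration.)

```python
# Check that the Gaussian relation eps = 1 + 4*pi*chi and the SI relation
# eps_r = 1 + chi agree once chi is converted by the 4*pi factor.
import math

chi_gaussian = 0.0995                       # illustrative susceptibility, Gaussian units
chi_si = 4 * math.pi * chi_gaussian         # same material, SI convention

eps_gaussian = 1 + 4 * math.pi * chi_gaussian  # Gaussian: eps = 1 + 4*pi*chi
eps_si = 1 + chi_si                            # SI: eps_r = 1 + chi
print(eps_gaussian, eps_si)  # identical values
```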

The force exerted upon a charged particle by the electric field and magnetic field is given by
the Lorentz force equation:

F = q (E + (v/c) × B)

where q is the charge on the particle and v is the particle velocity. This is slightly different
from the SI-unit expression above. For example, here the magnetic field B has the same units
as the electric field E.

Some equations in the article are given in Gaussian units but not SI or vice-versa.
Fortunately, there are general rules to convert from one to the other; see the article Gaussian
units for details.

[edit] Special relativity


Main article: Classical electromagnetism and special relativity

Maxwell's equations have a close relation to special relativity: Not only were Maxwell's
equations a crucial part of the historical development of special relativity, but also, special
relativity has motivated a compact mathematical formulation of Maxwell's equations, in
terms of covariant tensors.

[edit] Historical developments

Main article: History of special relativity

Maxwell's electromagnetic wave equation applied only in what he believed to be the rest
frame of the luminiferous medium because he didn't use the v×B term of his equation (D)
when he derived it. Maxwell's idea of the luminiferous medium was that it consisted of
aethereal vortices aligned solenoidally along their rotation axes.

The American scientist A.A. Michelson set out to determine the velocity of the earth through
the luminiferous medium aether using a light wave interferometer that he had invented. When
the Michelson-Morley experiment was conducted by Edward Morley and Albert Abraham
Michelson in 1887, it produced a null result for the change of the velocity of light due to the
Earth's motion through the hypothesized aether. This null result was in line with the theory
that was proposed in 1845 by George Stokes which suggested that the aether was entrained
with the Earth's orbital motion.
Hendrik Lorentz objected to Stokes' aether drag model and, along with George FitzGerald
and Joseph Larmor, he suggested another approach. Both Larmor (1897) and Lorentz (1899,
1904) derived the Lorentz transformation (so named by Henri Poincaré) as one under which
Maxwell's equations were invariant. Poincaré (1900) analyzed the coordination of moving
clocks by exchanging light signals. He also established mathematically the group property of
the Lorentz transformation (Poincaré 1905).

This culminated in Albert Einstein's theory of special relativity, which postulated the absence
of any absolute rest frame, dismissed the aether as unnecessary (a bold idea that occurred to
neither Lorentz nor Poincaré), and established the invariance of Maxwell's equations in all
inertial frames of reference, in contrast to the famous Newtonian equations for classical
mechanics. But the transformations between two different inertial frames had to correspond
to Lorentz' equations and not — as formerly believed — to those of Galileo (called Galilean
transformations).[46] Indeed, Maxwell's equations played a key role in Einstein's famous paper
on special relativity; for example, in the opening paragraph of the paper, he motivated his
theory by noting that a description of a conductor moving with respect to a magnet must
generate a consistent set of fields irrespective of whether the force is calculated in the rest
frame of the magnet or that of the conductor.[47]

General relativity has also had a close relationship with Maxwell's equations. For example,
Kaluza and Klein showed in the 1920s that Maxwell's equations can be derived by extending
general relativity into five dimensions. This strategy of using higher dimensions to unify
different forces remains an active area of research in particle physics.

[edit] Covariant formulation of Maxwell's equations

Main article: Covariant formulation of classical electromagnetism

In special relativity, in order to more clearly express the fact that Maxwell's equations in
vacuo take the same form in any inertial coordinate system, Maxwell's equations are written
in terms of four-vectors and tensors in the "manifestly covariant" form. The purely spatial
components of the following are in SI units.

One ingredient in this formulation is the electromagnetic tensor, a rank-2 covariant
antisymmetric tensor combining the electric and magnetic fields:

\[ F_{\alpha\beta} = \begin{pmatrix} 0 & -E_x/c & -E_y/c & -E_z/c \\ E_x/c & 0 & B_z & -B_y \\ E_y/c & -B_z & 0 & B_x \\ E_z/c & B_y & -B_x & 0 \end{pmatrix}, \]

and the result of raising its indices,

\[ F^{\alpha\beta} = \eta^{\alpha\gamma}\, F_{\gamma\delta}\, \eta^{\delta\beta}. \]

The other ingredient is the four-current \( J^\alpha = (c\rho, J_x, J_y, J_z) \), where ρ is the
charge density and J is the current density.

With these ingredients, Maxwell's equations can be written:

\[ \partial_\beta F^{\alpha\beta} = \mu_0 J^\alpha \]

and

\[ \partial_\gamma F_{\alpha\beta} + \partial_\beta F_{\gamma\alpha} + \partial_\alpha F_{\beta\gamma} = 0. \]

The first tensor equation is an expression of the two inhomogeneous Maxwell's equations,
Gauss's law and Ampere's law with Maxwell's correction. The second equation is an
expression of the two homogeneous equations, Faraday's law of induction and Gauss's law
for magnetism. It is equivalent to

\[ \partial_\beta \left( \tfrac{1}{2}\, \epsilon^{\alpha\beta\gamma\delta} F_{\gamma\delta} \right) = 0, \]

where \( \epsilon^{\alpha\beta\gamma\delta} \) is the contravariant version of the Levi-Civita symbol and
\( \partial_\alpha = \partial/\partial x^\alpha \) is the 4-gradient. In the tensor equations above, repeated indices are
summed over according to the Einstein summation convention. Upper and lower components
of a vector, \( v^\alpha \) and \( v_\alpha \) respectively, are interchanged with the fundamental tensor g,
e.g., g = η = diag(−1,+1,+1,+1).
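The index gymnastics above can be checked numerically. The sketch below (an illustration, not part of the article) builds the covariant field tensor in the sign convention F_{0i} = −E_i, with metric η = diag(−1, +1, +1, +1) and units where c = 1, raises both indices, and verifies that only the electric components change sign:

```python
import numpy as np

# Sketch, assuming the convention F_{0i} = -E_i, eta = diag(-1,+1,+1,+1), c = 1.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

E = np.array([1.0, 2.0, 3.0])   # arbitrary sample field values
B = np.array([4.0, 5.0, 6.0])

# Covariant electromagnetic tensor F_{ab}
F = np.array([
    [ 0.0,  -E[0], -E[1], -E[2]],
    [ E[0],   0.0,  B[2], -B[1]],
    [ E[1], -B[2],   0.0,  B[0]],
    [ E[2],  B[1], -B[0],   0.0],
])

# Raise both indices: F^{ab} = eta^{ac} F_{cd} eta^{db}
# (eta is diagonal and its own inverse, so plain matrix products suffice).
F_up = eta @ F @ eta

assert np.allclose(F_up.T, -F_up)            # antisymmetry is preserved
assert np.allclose(F_up[0, 1:], E)           # electric components flip sign
assert np.allclose(F_up[1:, 1:], F[1:, 1:])  # magnetic block is unchanged
print("index raising consistent")
```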

Alternative covariant presentations of Maxwell's equations also exist, for example in terms of
the four-potential; see Covariant formulation of classical electromagnetism for details.

In geometric algebra, these equations simplify to the single equation

\[ \nabla F = J, \]

where F is the electromagnetic field bivector and J the spacetime current (in natural units).

[edit] Potentials
Main article: Mathematical descriptions of the electromagnetic field

Maxwell's equations can be written in an alternative form involving the electric potential
(also called scalar potential) and the magnetic potential (also called vector potential), as
follows.[18] (The following equations are valid in the absence of dielectric and magnetic
materials; if such materials are present, they are valid as long as bound charge and bound
current are included in the total charge and current densities.)
First, Gauss's law for magnetism states:

\[ \nabla \cdot \mathbf{B} = 0. \]

By Helmholtz's theorem, B can be written in terms of a vector field A, called the magnetic
potential:

\[ \mathbf{B} = \nabla \times \mathbf{A}. \]

Second, plugging this into Faraday's law, we get:

\[ \nabla \times \left( \mathbf{E} + \frac{\partial \mathbf{A}}{\partial t} \right) = 0. \]

By Helmholtz's theorem, the quantity in parentheses can be written in terms of a scalar
function φ, called the electric potential:

\[ \mathbf{E} + \frac{\partial \mathbf{A}}{\partial t} = -\nabla \varphi. \]

Combining these definitions with the remaining two Maxwell's equations (Gauss's law and
Ampere's law with Maxwell's correction) yields:

\[ \nabla^2 \varphi + \frac{\partial}{\partial t} \left( \nabla \cdot \mathbf{A} \right) = -\frac{\rho}{\varepsilon_0} \]

\[ \left( \nabla^2 \mathbf{A} - \frac{1}{c^2} \frac{\partial^2 \mathbf{A}}{\partial t^2} \right) - \nabla \left( \nabla \cdot \mathbf{A} + \frac{1}{c^2} \frac{\partial \varphi}{\partial t} \right) = -\mu_0 \mathbf{J}. \]
These equations, taken together, are as powerful and complete as Maxwell's equations.
Moreover, if we work only with the potentials and ignore the fields, the problem has been
reduced somewhat, as the electric and magnetic fields each have three components which
need to be solved for (six components altogether), while the electric and magnetic potentials
have only four components altogether.
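The claim that potentials automatically satisfy the two homogeneous Maxwell equations can be verified symbolically. A minimal SymPy sketch (the helper functions grad, curl and div are defined here for the check, not taken from the article):

```python
import sympy as sp

# Sketch: derive E and B from arbitrary potentials and verify that the
# two homogeneous Maxwell equations hold identically.
x, y, z, t = sp.symbols('x y z t')
phi = sp.Function('phi')(x, y, z, t)
A = [sp.Function(name)(x, y, z, t) for name in ('Ax', 'Ay', 'Az')]

def grad(f):
    return [sp.diff(f, v) for v in (x, y, z)]

def curl(V):
    return [sp.diff(V[2], y) - sp.diff(V[1], z),
            sp.diff(V[0], z) - sp.diff(V[2], x),
            sp.diff(V[1], x) - sp.diff(V[0], y)]

def div(V):
    return sp.diff(V[0], x) + sp.diff(V[1], y) + sp.diff(V[2], z)

E = [-g - sp.diff(a, t) for g, a in zip(grad(phi), A)]  # E = -grad(phi) - dA/dt
B = curl(A)                                             # B = curl(A)

assert sp.simplify(div(B)) == 0                         # Gauss's law for magnetism
faraday = [sp.simplify(ce + sp.diff(b, t)) for ce, b in zip(curl(E), B)]
assert all(f == 0 for f in faraday)                     # Faraday's law
print("homogeneous equations hold identically")
```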

Many different choices of A and φ are consistent with a given E and B, making these choices
physically equivalent – a flexibility known as gauge freedom. Suitable choice of A and φ can
simplify these equations, or can adapt them to suit a particular situation. For more
information, see the article gauge freedom.
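Gauge freedom can likewise be checked symbolically: for an arbitrary gauge function χ(x, y, z, t), the transformed potentials produce exactly the same E and B. A hedged SymPy sketch (symbol names are illustrative):

```python
import sympy as sp

# Sketch: the gauge transformation A -> A + grad(chi), phi -> phi - dchi/dt
# leaves E and B unchanged for any smooth chi(x, y, z, t).
x, y, z, t = sp.symbols('x y z t')
coords = (x, y, z)
phi = sp.Function('phi')(x, y, z, t)
chi = sp.Function('chi')(x, y, z, t)
A = [sp.Function(name)(x, y, z, t) for name in ('Ax', 'Ay', 'Az')]

def fields(phi_, A_):
    # E = -grad(phi) - dA/dt, followed by B = curl(A); six components total
    E = [-sp.diff(phi_, v) - sp.diff(a, t) for v, a in zip(coords, A_)]
    B = [sp.diff(A_[2], y) - sp.diff(A_[1], z),
         sp.diff(A_[0], z) - sp.diff(A_[2], x),
         sp.diff(A_[1], x) - sp.diff(A_[0], y)]
    return E + B

A2 = [a + sp.diff(chi, v) for a, v in zip(A, coords)]
phi2 = phi - sp.diff(chi, t)

same = [sp.simplify(u - w) for u, w in zip(fields(phi, A), fields(phi2, A2))]
assert all(s == 0 for s in same)
print("E and B are gauge invariant")
```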

[edit] Four-potential
The two equations that represent the potentials can be reduced to one manifestly Lorentz-
invariant equation, using four-vectors: the four-current defined by

\[ J^\alpha = (c\rho, \mathbf{j}), \]

formed from the current density j and charge density ρ, and the electromagnetic four-
potential defined by

\[ A^\alpha = (\varphi/c, \mathbf{A}), \]

formed from the vector potential A and the scalar potential φ. The resulting single equation,
due to Arnold Sommerfeld, a generalization of an equation due to Bernhard Riemann and
known as the Riemann-Sommerfeld equation[48] or the covariant form of the Maxwell-Lorentz
equations,[49] is:

\[ \Box A^\alpha = \mu_0 J^\alpha, \]

valid when the potentials satisfy the Lorenz gauge condition \( \partial_\alpha A^\alpha = 0 \). Here

\[ \Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 \]

is the d'Alembertian operator, or four-Laplacian, sometimes written \( \partial^2 \) or
\( \partial_\alpha \partial^\alpha \) (up to the metric-signature convention), where \( \partial_\alpha \) is the four-gradient.
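In the Lorenz gauge the potentials thus obey a wave equation, whose simplest source-free solutions are plane waves with the vacuum dispersion relation ω = ck. A small SymPy check of this (a sketch, not from the article):

```python
import sympy as sp

# Sketch: a scalar plane wave A = exp(i(k z - w t)) solves the vacuum
# wave equation (Laplacian - (1/c^2) d^2/dt^2) A = 0 exactly when w = c k.
z, t, k, c = sp.symbols('z t k c', positive=True)
w = c * k
A = sp.exp(sp.I * (k * z - w * t))

# d'Alembertian applied to a wave depending only on z and t
box_A = sp.diff(A, z, 2) - sp.diff(A, t, 2) / c**2
assert sp.simplify(box_A) == 0
print("plane wave satisfies the wave equation")
```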

[edit] Differential forms


In free space, where ε = ε0 and μ = μ0 are constant everywhere, Maxwell's equations simplify
considerably once the language of differential geometry and differential forms is used. In
what follows, cgs-Gaussian units, not SI units, are used. (To convert to SI, see here.) The
electric and magnetic fields are now jointly described by a 2-form F in a 4-dimensional
spacetime manifold. Maxwell's equations then reduce to the Bianchi identity

\[ \mathrm{d}F = 0, \]

where d denotes the exterior derivative (a natural differential operator acting on forms,
independent of coordinates and metric), and the source equation

\[ \mathrm{d}{*F} = J, \]

where the (dual) Hodge star operator * is a linear transformation from the space of 2-forms to
the space of (4−2)-forms defined by the metric in Minkowski space (in four dimensions even
by any metric conformal to this metric), and the fields are in natural units where 1/4πε0 =
1. Here, the 3-form J is called the electric current form or current 3-form, satisfying the
continuity equation

\[ \mathrm{d}J = 0. \]
The current 3-form can be integrated over a 3-dimensional space-time region. The physical
interpretation of this integral is the charge in that region if it is spacelike, or the amount of
charge that flows through a surface in a certain amount of time if that region is a spacelike
surface cross a timelike interval. As the exterior derivative is defined on any manifold, the
differential form version of the Bianchi identity makes sense for any 4-dimensional manifold,
whereas the source equation is defined if the manifold is oriented and has a Lorentz metric. In
particular the differential form version of the Maxwell equations are a convenient and
intuitive formulation of the Maxwell equations in general relativity.
In a linear, macroscopic theory, the influence of matter on the electromagnetic field is
described through a more general linear transformation in the space of 2-forms. We call

\[ C: \Lambda^2 \ni F \mapsto G \in \Lambda^2 \]

the constitutive transformation. The role of this transformation is comparable to the Hodge
duality transformation. The Maxwell equations in the presence of matter then become:

\[ \mathrm{d}F = 0, \qquad \mathrm{d}G = J, \]

where the current 3-form J still satisfies the continuity equation dJ = 0.

When the fields are expressed as linear combinations (of exterior products) of basis forms
\( \mathrm{d}x^p \wedge \mathrm{d}x^q \),

\[ F = \tfrac{1}{2} F_{pq}\, \mathrm{d}x^p \wedge \mathrm{d}x^q, \qquad G = \tfrac{1}{2} G_{pq}\, \mathrm{d}x^p \wedge \mathrm{d}x^q, \]

the constitutive relation takes the form

\[ G_{pq} = C_{pq}{}^{mn} F_{mn}, \]

where the field coefficient functions are antisymmetric in the indices and the constitutive
coefficients are antisymmetric in the corresponding pairs. In particular, the Hodge duality
transformation leading to the vacuum equations discussed above is obtained by taking

\[ C_{pq}{}^{mn} = \tfrac{1}{2} \sqrt{-g}\, \epsilon_{pqrs}\, g^{rm} g^{sn}, \]

which up to scaling is the only invariant tensor of this type that can be defined with the
metric.

In this formulation, electromagnetism generalises immediately to any 4-dimensional oriented
manifold or, with small adaptations, any manifold, requiring not even a metric. Thus the
expression of Maxwell's equations in terms of differential forms leads to a further notational
and conceptual simplification. Whereas Maxwell's equations could be written as two tensor
equations instead of eight scalar equations, from which the propagation of electromagnetic
disturbances and the continuity equation could be derived with a little effort, using
differential forms leads to an even simpler derivation of these results.

[edit] Conceptual insight from this formulation

On the conceptual side, from the point of view of physics, this shows that the second and
third Maxwell equations should be grouped together and called the homogeneous ones, and
be seen as geometric identities expressing nothing other than that the field F derives from a
more "fundamental" potential A. The first and last ones should be seen as the dynamical
equations of motion, obtained via the Lagrangian principle of least action from the
"interaction term" A∧J (introduced through gauge covariant derivatives), coupling the field
to matter.
Often, the time derivative in the third law motivates calling this equation "dynamical", which
is somewhat misleading; in the sense of the preceding analysis, this is rather an artifact of
breaking relativistic covariance by choosing a preferred time direction. To have physical
degrees of freedom propagated by these field equations, one must include a kinetic term F∧*F
for A, and take into account the non-physical degrees of freedom, which can be removed by a
gauge transformation A → A′ = A − dα. See also gauge fixing and Faddeev-Popov ghosts.

[edit] Classical electrodynamics as the curvature of a line bundle

An elegant and intuitive way to formulate Maxwell's equations is to use complex line bundles
or principal bundles with fibre U(1). The connection ∇ on the line bundle has a curvature
F = ∇², which is a two-form that automatically satisfies dF = 0 and can be interpreted as
a field strength. If the line bundle is trivial, with flat reference connection d, we can write
∇ = d + A and F = dA, with A the 1-form composed of the electric potential and the
magnetic vector potential.

In quantum mechanics, the connection itself is used to define the dynamics of the system.
This formulation allows a natural description of the Aharonov-Bohm effect. In this
experiment, a static magnetic field runs through a long magnetic wire (e.g., an iron wire
magnetized longitudinally). Outside of this wire the magnetic induction is zero, in contrast to
the vector potential, which essentially depends on the magnetic flux through the cross-section
of the wire and does not vanish outside. Since there is no electric field either, the Maxwell
tensor F = 0 throughout the space-time region outside the tube, during the experiment. This
means by definition that the connection is flat there.

However, as mentioned, the connection depends on the magnetic field through the tube since
the holonomy along a non-contractible curve encircling the tube is the magnetic flux through
the tube in the proper units. This can be detected quantum-mechanically with a double-slit
electron diffraction experiment on an electron wave traveling around the tube. The holonomy
corresponds to an extra phase shift, which leads to a shift in the diffraction pattern. (See
Michael Murray, Line Bundles, 2002 (PDF web link) for a simple mathematical review of
this formulation. See also R. Bott, On some recent interactions between mathematics and
physics, Canadian Mathematical Bulletin, 28 (1985) no. 2 pp 129–164.)
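The holonomy phase mentioned above can be made quantitative: the extra phase acquired by an electron encircling flux Φ is eΦ/ħ, so the diffraction pattern shifts by exactly one fringe per flux quantum h/e. A small numeric illustration (not part of the article):

```python
import math

# Sketch: one flux quantum h/e through the solenoid produces an
# Aharonov-Bohm phase of exactly 2*pi, i.e. one full fringe shift.
h = 6.62607015e-34        # Planck constant, J s (2019 SI defined value)
e = 1.602176634e-19       # elementary charge, C (2019 SI defined value)
hbar = h / (2 * math.pi)

flux_quantum = h / e                     # magnetic flux quantum, Wb
phase = e * flux_quantum / hbar          # phase per flux quantum
assert abs(phase - 2 * math.pi) < 1e-12  # exactly one full fringe
print("phase per flux quantum:", phase)
```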

[edit] Curved spacetime


Main article: Maxwell's equations in curved spacetime

[edit] Traditional formulation

Matter and energy generate curvature of spacetime. This is the subject of general relativity.
Curvature of spacetime affects electrodynamics. An electromagnetic field having energy and
momentum will also generate curvature in spacetime. Maxwell's equations in curved
spacetime can be obtained by replacing the derivatives in the equations in flat spacetime with
covariant derivatives. (Whether this is the appropriate generalization requires separate
investigation.) The sourced and source-free equations become (cgs-Gaussian units):

\[ \frac{4\pi}{c} J^\alpha = \partial_\beta F^{\alpha\beta} + \Gamma^{\alpha}{}_{\mu\beta} F^{\mu\beta} + \Gamma^{\beta}{}_{\mu\beta} F^{\alpha\mu} \equiv D_\beta F^{\alpha\beta} \]

and

\[ 0 = \partial_\gamma F_{\alpha\beta} + \partial_\beta F_{\gamma\alpha} + \partial_\alpha F_{\beta\gamma} = D_\gamma F_{\alpha\beta} + D_\beta F_{\gamma\alpha} + D_\alpha F_{\beta\gamma}. \]

Here, \( \Gamma^{\alpha}{}_{\mu\beta} \) is a Christoffel symbol that characterizes the curvature of spacetime
and D_γ is the covariant derivative.

[edit] Formulation in terms of differential forms

The formulation of the Maxwell equations in terms of differential forms can be used without
change in general relativity. The equivalence of the more traditional general relativistic
formulation using the covariant derivative with the differential form formulation can be seen
as follows. Choose local coordinates xα which gives a basis of 1-forms dxα in every point of
the open set where the coordinates are defined. Using this basis and cgs-Gaussian units we
define

• The antisymmetric field tensor F_{αβ}, corresponding to the field 2-form

\[ F = \tfrac{1}{2} F_{\alpha\beta}\, \mathrm{d}x^\alpha \wedge \mathrm{d}x^\beta \]

• The current-vector infinitesimal 3-form J

\[ J = \frac{4\pi}{c}\, \frac{1}{3!}\, \sqrt{-g}\, J^\alpha\, \epsilon_{\alpha\beta\gamma\delta}\, \mathrm{d}x^\beta \wedge \mathrm{d}x^\gamma \wedge \mathrm{d}x^\delta \]

Here g is as usual the determinant of the metric tensor gαβ. A small computation that uses the
symmetry of the Christoffel symbols (i.e., the torsion-freeness of the Levi-Civita connection)
and the covariant constantness of the Hodge star operator then shows that in this coordinate
neighborhood we have:

• the Bianchi identity

\[ \mathrm{d}F = 0 \quad \Leftrightarrow \quad \partial_\gamma F_{\alpha\beta} + \partial_\beta F_{\gamma\alpha} + \partial_\alpha F_{\beta\gamma} = 0 \]

• the source equation

\[ \mathrm{d}{*F} = J \quad \Leftrightarrow \quad D_\beta F^{\alpha\beta} = \frac{4\pi}{c} J^\alpha \]

• the continuity equation

\[ \mathrm{d}J = 0 \quad \Leftrightarrow \quad D_\alpha J^\alpha = 0 \]


[edit] See also
• Abraham-Lorentz force
• Ampere's law
• Antenna (radio)
• Bremsstrahlung
• Computational electromagnetics
• Electrical generator
• Electromagnetic wave equation
• Finite-difference time-domain method
• Fresnel equations
• Green–Kubo relations
• Green's function (many-body theory)
• Interface conditions for electromagnetic fields
• Jefimenko's equations
• Kramers–Kronig relation
• Laser
• Linear response function
• Lorentz force
• Mathematical descriptions of the electromagnetic field
• Moving magnet and conductor problem
• Nonhomogeneous electromagnetic wave equation
• Photon dynamics in the double-slit experiment
• Photon polarization
• Photonic crystal
• Scattering-matrix method
• Sinusoidal plane-wave solutions of the electromagnetic wave equation
• Theoretical and experimental justification for the Schrödinger equation
• Transformer
• Waveguide
• Wheeler-Feynman time-symmetric theory for electrodynamics

[edit] Notes
1. ^ Using modern SI terminology: The electric constant can be estimated by measuring the
force between two charges and using Coulomb's law; and the magnetic constant can be
estimated by measuring the force between two current-carrying wires, and using Ampere's
force law. The product of these two, to the (-1/2) power, is the speed of electromagnetic
radiation predicted by Maxwell's equations, given in meters per second.
2. ^ In some books (e.g., in [4]), the term effective charge is used instead of total charge, while
free charge is simply called charge.
3. ^ Note that a quite different quantity, the magnetic polarization, has by decision of an
international IUPAP commission been given the same symbol J. A lowercase symbol would
therefore be preferable for the electric current density, but even then mathematicians would
still use the uppercase name for the corresponding current two-form (see below).
4. ^ The free charges and currents respond to the fields through the Lorentz force law and this
response is calculated at a fundamental level using mechanics. The response of bound charges
and currents is dealt with using grosser methods subsumed under the notions of magnetization
and polarization. Depending upon the problem, one may choose to have no free charges
whatsoever.
5. ^ These complications show there is merit in separating the Lorentz force from the main four
Maxwell equations. The four Maxwell's equations express the fields' dependence upon current
and charge, setting apart the calculation of these currents and charges. As noted in this
subsection, these calculations may well involve the Lorentz force only implicitly. Separating
these complicated considerations from the Maxwell's equations provides a useful framework.
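Note 1 above can be checked numerically: combining the separately measured magnetic and electric constants as (ε0 μ0)^(−1/2) reproduces the speed of light. A quick sketch using the pre-2019 SI values:

```python
import math

# Sketch: mu_0 (from the force between currents) and epsilon_0 (from
# Coulomb's law) combine to give the speed of electromagnetic radiation.
mu_0 = 4e-7 * math.pi            # N/A^2, pre-2019 SI defined value
epsilon_0 = 8.8541878128e-12     # F/m, CODATA value

c = (epsilon_0 * mu_0) ** -0.5   # product to the (-1/2) power
assert abs(c - 299_792_458) < 100  # matches c in m/s to high accuracy
print("c =", c, "m/s")
```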

[edit] References
1. ^ a b c J.D. Jackson, "Maxwell's Equations" video glossary entry
2. ^ Principles of physics: a calculus-based text, by R.A. Serway, J.W. Jewett, page 809.
3. ^ David J Griffiths (1999). Introduction to electrodynamics (Third ed.). Prentice Hall.
pp. 559–562. ISBN 013805326X. http://worldcat.org/isbn/013805326X.
4. ^ U. Krey and A. Owen's Basic Theoretical Physics (Springer 2007)
5. ^ Crease, Robert. The Great Equations: Breakthroughs in Science from Pythagoras to
Heisenberg, page 133 (2008).
6. ^ a b c d e Paul J. Nahin (2002-10-09). Oliver Heaviside: the life, work, and times of an
electrical genius of the Victorian age. JHU Press. pp. 108–112. ISBN 9780801869099.
http://books.google.com/?id=e9wEntQmA0IC&pg=PA111&dq=nahin+hertz-heaviside+maxwell-hertz.
7. ^ a b Jed Z. Buchwald (1994). The creation of scientific effects: Heinrich Hertz and electric
waves. University of Chicago Press. p. 194. ISBN 9780226078885. http://books.google.com/?
id=2bDEvvGT1EYC&pg=PA194&dq=maxwell+faraday+time-derivative+vector-potential.
8. ^ Myron Evans (2001-10-05). Modern nonlinear optics. John Wiley and Sons. p. 240.
ISBN 9780471389316. http://books.google.com/?
id=9p0kK6IG94gC&pg=PA240&dq=maxwell-heaviside+equations.
9. ^ Oliver J. Lodge (November 1888). "Sketch of the Electrical Papers in Section A, at the
Recent Bath Meeting of the British Association". Electrical Engineer 7: 535.
10.^ J. R. Lalanne, F. Carmona, and L. Servant (1999-11). Optical spectroscopies of electronic
absorption. World Scientific. p. 8. ISBN 9789810238612. http://books.google.com/?
id=7rWD-TdxKkMC&pg=PA8&dq=maxwell-faraday+derivative.
11.^ Roger F. Harrington (2003-10-17). Introduction to Electromagnetic Engineering. Courier
Dover Publications. pp. 49–56. ISBN 9780486432410. http://books.google.com/?
id=ZlC2EV8zvX8C&pg=PR7&dq=maxwell-faraday-equation+law-of-induction.
12.^
http://upload.wikimedia.org/wikipedia/commons/1/19/A_Dynamical_Theory_of_the_Electro
magnetic_Field.pdf page 480.
13.^ MS Longair (2003). Theoretical Concepts in Physics (2 ed.). Cambridge University Press.
p. 127. ISBN 052152878X. http://books.google.com/?id=bA9Lp2GH6OEC&pg=PA127.
14.^ Kenneth Franklin Riley, Michael Paul Hobson, Stephen John Bence (2006). Mathematical
methods for physics and engineering (3 ed.). Cambridge University Press. p. 404.
ISBN 0521861535. http://books.google.com/?id=Mq1nlEKhNcsC&pg=PA404.
15.^ MS Longair (2003). Theoretical Concepts in Physics (2 ed.). Cambridge University Press.
pp. 119 and 127. ISBN 052152878X. http://books.google.com/?
id=bA9Lp2GH6OEC&pg=PA119.
16.^ Kenneth Franklin Riley, Michael Paul Hobson, Stephen John Bence (2006). Mathematical
methods for physics and engineering (3 ed.). Cambridge University Press. p. 406.
ISBN 0521861535. http://books.google.com/?id=Mq1nlEKhNcsC&pg=PA406.
17.^ Halevi, Peter (1992). Spatial dispersion in solids and plasmas. Amsterdam: North-Holland.
ISBN 978-0444874054.
18.^ a b Jackson, John David (1999). Classical Electrodynamics (3rd ed.). New York: Wiley.
ISBN 0-471-30932-X.
19.^ Aspnes, D.E., "Local-field effects and effective-medium theory: A microscopic
perspective," Am. J. Phys. 50, p. 704-709 (1982).
20.^ Habib Ammari & Hyeonbae Kang (2006). Inverse problems, multi-scale analysis and
effective medium theory : workshop in Seoul, Inverse problems, multi-scale analysis, and
homogenization, June 22–24, 2005, Seoul National University, Seoul, Korea. Providence RI:
American Mathematical Society. pp. 282. ISBN 0821839683. http://books.google.com/?
id=dK7JwVPbUkMC&printsec=frontcover&dq=%22effective+medium%22.
21.^ O. C. Zienkiewicz, Robert Leroy Taylor, J. Z. Zhu, Perumal Nithiarasu (2005). The Finite
Element Method (Sixth ed.). Oxford UK: Butterworth-Heinemann. p. 550 ff.
ISBN 0750663219. http://books.google.com/?
id=rvbSmooh8Y4C&printsec=frontcover&dq=finite+element+inauthor:Zienkiewicz.
22.^ N. Bakhvalov and G. Panasenko, Homogenization: Averaging Processes in Periodic Media
(Kluwer: Dordrecht, 1989); V. V. Jikov, S. M. Kozlov and O. A. Oleinik, Homogenization of
Differential Operators and Integral Functionals (Springer: Berlin, 1994).
23.^ Vitaliy Lomakin, Steinberg BZ, Heyman E, & Felsen LB (2003). "Multiresolution
Homogenization of Field and Network Formulations for Multiscale Laminate Dielectric
Slabs". IEEE Transactions on Antennas and Propagation 51 (10): 2761 ff.
doi:10.1109/TAP.2003.816356. http://www.ece.ucsd.edu/~vitaliy/A8.pdf.
24.^ AC Gilbert (Ronald R Coifman, Editor) (2000-05). Topics in Analysis and Its Applications:
Selected Theses. Singapore: World Scientific Publishing Company. p. 155.
ISBN 9810240945. http://books.google.com/?
id=d4MOYN5DjNUC&printsec=frontcover&dq=homogenization+date:2000-2009.
25.^ Edward D. Palik & Ghosh G (1998). Handbook of Optical Constants of Solids. London
UK: Academic Press. pp. 1114. ISBN 0125444222. http://books.google.com/?
id=AkakoCPhDFUC&dq=optical+constants+inauthor:Palik.
26.^ F Capasso, JN Munday, D. Iannuzzi & HB Chen Casimir forces and quantum
electrodynamical torques: physics and nanomechanics
27.^ http://www.sciencemag.org/cgi/content/abstract/1178868
28.^ http://www.nature.com/nature/journal/v461/n7266/full/nature08500.html
29.^ "IEEEGHN: Maxwell's Equations". Ieeeghn.org.
http://www.ieeeghn.org/wiki/index.php/Maxwell%27s_Equations. Retrieved 2008-10-19.
30.^ Peter Monk (2003). Finite Element Methods for Maxwell's Equations. Oxford UK:
Oxford University Press. p. 1 ff. ISBN 0198508883. http://books.google.com/?
id=zI7Y1jT9pCwC&pg=PA1&dq=electromagnetism+%22boundary+conditions%22.
31.^ Thomas B. A. Senior & John Leonidas Volakis (1995-03-01). Approximate Boundary
Conditions in Electromagnetics. London UK: Institution of Electrical Engineers. p. 261 ff.
ISBN 0852968493. http://books.google.com/?
id=eOofBpuyuOkC&pg=PA261&dq=electromagnetism+%22boundary+conditions%22.
32.^ T Hagstrom (Björn Engquist & Gregory A. Kriegsmann, Eds.) (1997). Computational
Wave Propagation. Berlin: Springer. p. 1 ff. ISBN 0387948740. http://books.google.com/?
id=EdZefkIOR5cC&pg=PA1&dq=electromagnetism+%22boundary+conditions%22.
33.^ Henning F. Harmuth & Malek G. M. Hussain (1994). Propagation of Electromagnetic
Signals. Singapore: World Scientific. p. 17. ISBN 9810216890. http://books.google.com/?
id=6_CZBHzfhpMC&pg=PA45&dq=electromagnetism+%22initial+conditions%22.
34.^ Fioralba Cakoni; Colton, David L (2006). "The inverse scattering problem for an imperfect
conductor". Qualitative methods in inverse scattering theory. Springer Science & Business.
p. 61. ISBN 3540288449. http://books.google.com/?
id=7DqqOjPJetYC&pg=PR6#PPA61,M1., Khosrow Chadan et al. (1997). An introduction to
inverse scattering and inverse spectral problems. Society for Industrial and Applied
Mathematics. p. 45. ISBN 0898713870. http://books.google.com/?
id=y2rywYxsDEAC&pg=PA45.
35.^ S. F. Mahmoud (1991). Electromagnetic Waveguides: Theory and Applications.
London UK: Institution of Electrical Engineers. Chapter 2. ISBN 0863412327.
http://books.google.com/?id=toehQ7vLwAMC&pg=PA2&dq=Maxwell
%27s+equations+waveguides.
36.^ Jean-Michel Lourtioz (2005-05-23). Photonic Crystals: Towards Nanoscale Photonic
Devices. Berlin: Springer. p. 84. ISBN 354024431X. http://books.google.com/?
id=vSszZ2WuG_IC&pg=PA84&dq=electromagnetism+boundary++-element.
37.^ S. G. Johnson, Notes on Perfectly Matched Layers, online MIT course notes (Aug. 2007).
38.^ Taflove A & Hagness S C (2005). Computational Electrodynamics: The Finite-difference
Time-domain Method. Boston MA: Artech House. Chapters 6 & 7. ISBN 1580538320.
http://www.amazon.com/gp/reader/1580538320/ref=sib_dp_pop_toc?
ie=UTF8&p=S008#reader-link.
39.^ David M Cook (2002). The Theory of the Electromagnetic Field. Mineola NY: Courier
Dover Publications. p. 335 ff. ISBN 0486425673. http://books.google.com/?id=bI-
ZmZWeyhkC&pg=RA1-PA335&dq=electromagnetism+infinity+boundary+conditions.
40.^ Korada Umashankar (1989-09). Introduction to Engineering Electromagnetic Fields.
Singapore: World Scientific. p. §10.7; pp. 359ff. ISBN 9971509210.
http://books.google.com/?id=qp5qHvB_mhcC&pg=PA359&dq=electromagnetism+
%22boundary+conditions%22.
41.^ Joseph V. Stewart (2001). Intermediate Electromagnetic Theory. Singapore: World
Scientific. Chapter III, pp. 111 ff Chapter V, Chapter VI. ISBN 9810244703.
http://books.google.com/?
id=mwLI4nQ0thQC&printsec=frontcover&dq=intitle:Intermediate+intitle:electromagnetic+i
ntitle:theory.
42.^ Tai L. Chow (2006). Electromagnetic theory. Sudbury MA: Jones and Bartlett. p. 333ff and
Chapter 3: pp. 89ff. ISBN 0-7637-3827-1. http://books.google.com/?
id=dpnpMhw1zo8C&pg=PA153&dq=isbn=0763738271.
43.^ John Leonidas Volakis, Arindam Chatterjee & Leo C. Kempel (1998). Finite element
method for electromagnetics : antennas, microwave circuits, and scattering applications.
New York: Wiley IEEE. p. 79 ff. ISBN 0780334256. http://books.google.com/?
id=55q7HqnMZCsC&pg=PA79&dq=electromagnetism+%22boundary+conditions%22.
44.^ Bernard Friedman (1990). Principles and Techniques of Applied Mathematics. Mineola
NY: Dover Publications. ISBN 0486664449. http://www.amazon.com/Principles-Techniques-
Applied-Mathematics-Friedman/dp/0486664449/ref=sr_1_1?
ie=UTF8&s=books&qisbn=1207010487&sr=1-1.
45.^ Littlejohn, Robert (Fall 2007). "Gaussian, SI and Other Systems of Units in
Electromagnetic Theory" (pdf). Physics 221A, University of California, Berkeley lecture
notes. http://bohr.physics.berkeley.edu/classes/221/0708/notes/emunits.pdf. Retrieved 2008-
05-06.
46.^ U. Krey, A. Owen, Basic Theoretical Physics — A Concise Overview, Springer, Berlin and
elsewhere, 2007, ISBN 978-3-540-36804-5
47.^ "On the Electrodynamics of Moving Bodies". Fourmilab.ch.
http://www.fourmilab.ch/etexts/einstein/specrel/www/. Retrieved 2008-10-19.
48.^ Carver A. Mead (2002-08-07). Collective Electrodynamics: Quantum Foundations of
Electromagnetism. MIT Press. pp. 37–38. ISBN 9780262632607. http://books.google.com/?
id=GkDR4e2lo2MC&pg=PA37&dq=Riemann+Summerfeld.
49.^ Frederic V. Hartemann (2002). High-field electrodynamics. CRC Press. p. 102.
ISBN 9780849323782. http://books.google.com/?id=tIkflVrfkG0C&pg=PA102&dq=d
%27Alembertian+covariant-form+maxwell-lorentz.

[edit] Further reading


[edit] Journal articles
• James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field",
Philosophical Transactions of the Royal Society of London 155, 459-512 (1865).
(This article accompanied a December 8, 1864 presentation by Maxwell to the Royal
Society.)

The developments before relativity

• Joseph Larmor (1897) "On a dynamical theory of the electric and luminiferous
medium", Phil. Trans. Roy. Soc. 190, 205-300 (third and last in a series of papers with
the same name).
• Hendrik Lorentz (1899) "Simplified theory of electrical and optical phenomena in
moving systems", Proc. Acad. Science Amsterdam, I, 427-43.
• Hendrik Lorentz (1904) "Electromagnetic phenomena in a system moving with any
velocity less than that of light", Proc. Acad. Science Amsterdam, IV, 669-78.
• Henri Poincaré (1900) "La theorie de Lorentz et la Principe de Reaction", Archives
Néerlandaises, V, 253-78.
• Henri Poincaré (1901) Science and Hypothesis
• Henri Poincaré (1905) "Sur la dynamique de l'électron", Comptes rendus de
l'Académie des Sciences, 140, 1504-8.

See also:

• Macrossan, M. N. (1986) "A note on relativity before Einstein", Brit. J. Phil. Sci., 37,
232-234

[edit] University level textbooks

[edit] Undergraduate

• Feynman, Richard P. (2005). The Feynman Lectures on Physics. 2 (2nd ed.).
Addison-Wesley. ISBN 978-0805390650.
• Fleisch, Daniel (2008). A Student's Guide to Maxwell's Equations. Cambridge
University Press. ISBN 978-0521877619.
• Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall.
ISBN 0-13-805326-X.
• Hoffman, Banesh (1983). Relativity and Its Roots. W. H. Freeman.
• Krey, U.; Owen, A. (2007). Basic Theoretical Physics: A Concise Overview. Springer.
ISBN 978-3-540-36804-5. See especially part II.
• Purcell, Edward Mills (1985). Electricity and Magnetism. McGraw-Hill. ISBN 0-07-
004908-4.
• Reitz, John R.; Milford, Frederick J.; Christy, Robert W. (2008). Foundations of
Electromagnetic Theory (4th ed.). Addison Wesley. ISBN 978-0321581747.
• Sadiku, Matthew N. O. (2006). Elements of Electromagnetics (4th ed.). Oxford
University Press. ISBN 0-19-5300483.
• Schwarz, Melvin (1987). Principles of Electrodynamics. Dover. ISBN 0-486-65493-
1.
• Stevens, Charles F. (1995). The Six Core Theories of Modern Physics. MIT Press.
ISBN 0-262-69188-4.
• Tipler, Paul; Mosca, Gene (2007). Physics for Scientists and Engineers. 2 (6th ed.).
W. H. Freeman. ISBN 978-1429201339.
• Ulaby, Fawwaz T. (2007). Fundamentals of Applied Electromagnetics (5th ed.).
Pearson Education. ISBN 0-13-241326-4.

[edit] Graduate

• Jackson, J. D. (1999). Classical Electrodynamics (3rd ed.). Wiley. ISBN 0-471-30932-X.
• Panofsky, Wolfgang K. H.; Phillips, Melba (2005). Classical Electricity and
Magnetism (2nd ed.). Dover. ISBN 978-0486439242.

[edit] Older classics

• Lifshitz, Evgeny; Landau, Lev (1980). The Classical Theory of Fields (4th ed.).
Butterworth-Heinemann. ISBN 0750627689.
• Lifshitz, Evgeny; Landau, Lev; Pitaevskii, L. P. (1984). Electrodynamics of
Continuous Media (2nd ed.). Butterworth-Heinemann. ISBN 0750626348.
• Maxwell, James Clerk (1873). A Treatise on Electricity and Magnetism. Dover.
ISBN 0-486-60637-6.
• Misner, Charles W.; Thorne, Kip; Wheeler, John Archibald (1973). Gravitation. W.
H. Freeman. ISBN 0-7167-0344-0. Sets out the equations using differential forms.

[edit] Computational techniques

• Chew, W. C.; Jin, J.; Michielssen, E.; Song, J. (2001). Fast and Efficient Algorithms
in Computational Electromagnetics. Artech House. ISBN 1-58053-152-0.
• Harrington, R. F. (1993). Field Computation by Moment Methods. Wiley-IEEE Press.
ISBN 0-78031-014-4.
• Jin, J. (2002). The Finite Element Method in Electromagnetics (2nd ed.). Wiley-IEEE
Press. ISBN 0-47143-818-9.
• Lounesto, Pertti (1997). Clifford Algebras and Spinors. Cambridge University Press.
Chapter 8 sets out several variants of the equations using exterior algebra and
differential forms.
• Taflove, Allen; Hagness, Susan C. (2005). Computational Electrodynamics: The
Finite-Difference Time-Domain Method (3rd ed.). Artech House. ISBN 1-58053-832-
0.

[edit] External links


• Mathematical aspects of Maxwell's equation are discussed on the Dispersive PDE
Wiki.

[edit] Modern treatments

• Electromagnetism, B. Crowell, Fullerton College


• Lecture series: Relativity and electromagnetism, R. Fitzpatrick, University of Texas at
Austin
• Electromagnetic waves from Maxwell's equations on Project PHYSNET.
• MIT Video Lecture Series (36 x 50 minute lectures) (in .mp4 format) - Electricity and
Magnetism Taught by Professor Walter Lewin.
[edit] Historical

• James Clerk Maxwell, A Treatise on Electricity And Magnetism Vols 1 and 2 1904—
most readable edition with all corrections—Antique Books Collection suitable for free
reading online.
• Maxwell, J.C., A Treatise on Electricity And Magnetism - Volume 1 - 1873 - Posner
Memorial Collection - Carnegie Mellon University
• Maxwell, J.C., A Treatise on Electricity And Magnetism - Volume 2 - 1873 - Posner
Memorial Collection - Carnegie Mellon University
• On Faraday's Lines of Force - 1855/56 Maxwell's first paper (Part 1 & 2) - Compiled
by Blaze Labs Research (PDF)
• On Physical Lines of Force - 1861 Maxwell's 1861 paper describing magnetic lines of
Force - Predecessor to 1873 Treatise
• Maxwell, James Clerk, "A Dynamical Theory of the Electromagnetic Field",
Philosophical Transactions of the Royal Society of London 155, 459-512 (1865).
(This article accompanied a December 8, 1864 presentation by Maxwell to the Royal
Society.)
• Catt, Walton and Davidson. "The History of Displacement Current". Wireless World,
March 1979.
• Reprint from Dover Publications (ISBN 0-486-60636-8)
• Full text of 1904 Edition including full text search.
• A Dynamical Theory Of The Electromagnetic Field - 1865 Maxwell's 1865 paper
describing his 20 Equations in 20 Unknowns - Predecessor to the 1873 Treatise

[edit] Other

• Feynman's derivation of Maxwell equations and extra dimensions


• According to an article in Physicsweb, the Maxwell equations rate as "The greatest
equations ever".
• Nature Milestones: Photons -- Milestone 2 (1861) Maxwell's equations

Retrieved from "http://en.wikipedia.org/wiki/Maxwell%27s_equations"


• This page was last modified on 8 July 2010 at 19:49.



Maxwell's equations
From Wikipedia, the free encyclopedia

Jump to: navigation, search


For thermodynamic relations, see Maxwell relations.

Maxwell's equations are a set of four partial differential equations that relate the electric and
magnetic fields to their sources, charge density and current density. These equations can be
combined to show that light is an electromagnetic wave. Individually, the equations are
known as Gauss's law, Gauss's law for magnetism, Faraday's law of induction, and Ampère's
law with Maxwell's correction. The set of equations is named after James Clerk Maxwell.

These four equations, together with the Lorentz force law are the complete set of laws of
classical electromagnetism. The Lorentz force law itself was derived by Maxwell, under the
name of Equation for Electromotive Force, and was one of an earlier set of eight equations
by Maxwell.

Contents

• 1 Conceptual description
• 2 General formulation
• 3 History
o 3.1 The term Maxwell's equations
o 3.2 Maxwell's On Physical Lines of Force (1861)
o 3.3 Maxwell's A Dynamical Theory of the Electromagnetic Field (1864)
o 3.4 A Treatise on Electricity and Magnetism (1873)
• 4 Maxwell's equations and matter
o 4.1 Bound charge and current
 4.1.1 Proof that the two general formulations are equivalent
o 4.2 Constitutive relations
 4.2.1 Case without magnetic or dielectric materials
 4.2.2 Case of linear materials
 4.2.3 General case
 4.2.4 Maxwell's equations in terms of E and B for linear materials
 4.2.5 Calculation of constitutive relations
o 4.3 In vacuum
o 4.4 With magnetic monopoles
• 5 Boundary conditions: using Maxwell's equations
• 6 CGS units
• 7 Special relativity
o 7.1 Historical developments
o 7.2 Covariant formulation of Maxwell's equations
• 8 Potentials
• 9 Four-potential
• 10 Differential forms
o 10.1 Conceptual insight from this formulation
• 11 Classical electrodynamics as the curvature of a line bundle
• 12 Curved spacetime
o 12.1 Traditional formulation
o 12.2 Formulation in terms of differential forms
• 13 See also
• 14 Notes
• 15 References
• 16 Further reading
o 16.1 Journal articles
o 16.2 University level textbooks
 16.2.1 Undergraduate
 16.2.2 Graduate
 16.2.3 Older classics
 16.2.4 Computational techniques
• 17 External links
o 17.1 Modern treatments
o 17.2 Historical

o 17.3 Other

[edit] Conceptual description


This section will conceptually describe each of the four Maxwell's equations, and also how
they link together to explain the origin of electromagnetic radiation such as light. The exact
equations are set out in later sections of this article.

• Gauss' law describes how an electric field is generated by electric charges: The
electric field tends to point away from positive charges and towards negative charges.
More technically, it relates the electric flux through any hypothetical closed
"Gaussian surface" to the electric charge within the surface.

• Gauss' law for magnetism states that there are no "magnetic charges" (also called
magnetic monopoles), analogous to electric charges.[1] Instead the magnetic field is
generated by a configuration called a dipole, which has no magnetic charge but
resembles a positive and negative charge inseparably bound together. Equivalent
technical statements are that the total magnetic flux through any Gaussian surface is
zero, or that the magnetic field is a solenoidal vector field.

An Wang's magnetic core memory (1954) is an application of Ampere's law. Each core stores
one bit of data.

• Faraday's law describes how a changing magnetic field can create ("induce") an
electric field.[1] This aspect of electromagnetic induction is the operating principle
behind many electric generators: A bar magnet is rotated to create a changing
magnetic field, which in turn generates an electric field in a nearby wire. (Note: The
"Faraday's law" that occurs in Maxwell's equations is a bit different from the version
originally written by Michael Faraday. Both versions are equally true laws of physics,
but they have different scope, for example whether "motional EMF" is included. See
Faraday's law of induction for details.)

• Ampère's law with Maxwell's correction states that magnetic fields can be generated
in two ways: by electrical current (this was the original "Ampère's law") and by
changing electric fields (this was "Maxwell's correction").

Maxwell's correction to Ampère's law is particularly important: together with Faraday's law, it
means that a changing magnetic field creates an electric field, and a changing electric field
creates a magnetic field.[1][2] Therefore, these equations allow self-sustaining
"electromagnetic waves" to travel through empty space (see electromagnetic wave equation).
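Gauss's law for magnetism above (the statement that B is a solenoidal field, i.e. that its divergence vanishes everywhere) can be illustrated numerically. The sketch below, with an arbitrary illustrative dipole moment, evaluates the divergence of an ideal magnetic dipole field by central differences and finds it is zero to numerical precision:

```python
import math

mu0 = 4 * math.pi * 1e-7     # permeability of free space, H/m
m = (0.0, 0.0, 1.0)          # dipole moment along z, A*m^2 (illustrative value)

def B(x, y, z):
    # Ideal magnetic dipole field: B = mu0/(4 pi) * (3 (m . rhat) rhat - m) / r^3
    r = math.sqrt(x * x + y * y + z * z)
    rhat = (x / r, y / r, z / r)
    mdotr = sum(mi * ri for mi, ri in zip(m, rhat))
    pref = mu0 / (4 * math.pi * r ** 3)
    return tuple(pref * (3 * mdotr * ri - mi) for ri, mi in zip(rhat, m))

def divB(x, y, z, h=1e-6):
    # Divergence of B by central differences; Gauss's law for magnetism says it vanishes
    return ((B(x + h, y, z)[0] - B(x - h, y, z)[0]) +
            (B(x, y + h, z)[1] - B(x, y - h, z)[1]) +
            (B(x, y, z + h)[2] - B(x, y, z - h)[2])) / (2 * h)

print(divB(0.3, 0.2, 0.5))  # ~0: the dipole field carries no "magnetic charge"
```

The same check at any other point away from the origin gives the same result, since the dipole field is divergence-free everywhere except at the singularity.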

The speed calculated for electromagnetic waves, which could be predicted from experiments
on charges and currents,[note 1] exactly matches the speed of light; indeed, light is one form of
electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the
connection between electromagnetic waves and light in 1864, thereby unifying the
previously-separate fields of electromagnetism and optics.
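The calculation Maxwell performed can be repeated in one line: the wave speed predicted by the equations is 1/sqrt(ε₀μ₀), and plugging in the measured electric and magnetic constants reproduces the speed of light. A minimal sketch (using the pre-2019 SI values of the constants):

```python
import math

# Pre-2019 SI values: mu0 is exactly 4*pi*1e-7 H/m; eps0 is the measured electric constant
mu0 = 4 * math.pi * 1e-7        # permeability of free space, H/m
eps0 = 8.8541878128e-12         # permittivity of free space, F/m

c = 1 / math.sqrt(eps0 * mu0)   # predicted speed of electromagnetic waves, m/s
print(c)                        # ~299792458 m/s, the speed of light
```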

[edit] General formulation


The equations in this section are given in SI units. Unlike the equations of mechanics (for
example), Maxwell's equations change form in other unit systems: though the general
form remains the same, various definitions change and different constants appear in
different places. Other than SI (used in engineering), the units commonly used are Gaussian
units (based on the cgs system and considered to have some theoretical advantages over SI[3]),
Lorentz-Heaviside units (used mainly in particle physics) and Planck units (used in
theoretical physics). See below for CGS-Gaussian units.

Two equivalent, general formulations of Maxwell's equations follow. The first separates
bound charge and bound current (which arise in the context of dielectric and/or magnetized
materials) from free charge and free current (the more conventional type of charge and
current). This separation is useful for calculations involving dielectric or magnetized
materials. The second formulation treats all charge equally, combining free and bound charge
into total charge (and likewise with current). This is the more fundamental or microscopic
point of view, and is particularly useful when no dielectric or magnetic material is present.
More details, and a proof that these two formulations are mathematically equivalent, are
given in section 4.

Symbols in bold represent vector quantities, and symbols in italics represent scalar
quantities. The definitions of terms used in the two tables of equations are given in another
table immediately following.

Formulation in terms of free charge and current

• Gauss's law: ∇ · D = ρ_f   (integral form: ∮_∂V D · dA = Q_f(V))
• Gauss's law for magnetism: ∇ · B = 0   (integral form: ∮_∂V B · dA = 0)
• Maxwell–Faraday equation (Faraday's law of induction): ∇ × E = -∂B/∂t   (integral form: ∮_∂S E · dl = -dΦ_B,S/dt)
• Ampère's circuital law (with Maxwell's correction): ∇ × H = J_f + ∂D/∂t   (integral form: ∮_∂S H · dl = I_f,S + dΦ_D,S/dt)

Formulation in terms of total charge and current[note 2]

• Gauss's law: ∇ · E = ρ/ε₀   (integral form: ∮_∂V E · dA = Q(V)/ε₀)
• Gauss's law for magnetism: ∇ · B = 0   (integral form: ∮_∂V B · dA = 0)
• Maxwell–Faraday equation (Faraday's law of induction): ∇ × E = -∂B/∂t   (integral form: ∮_∂S E · dl = -dΦ_B,S/dt)
• Ampère's circuital law (with Maxwell's correction): ∇ × B = μ₀J + μ₀ε₀ ∂E/∂t   (integral form: ∮_∂S B · dl = μ₀I_S + μ₀ε₀ dΦ_E,S/dt)
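The differential equations above can be checked against an explicit solution. The sketch below verifies numerically, using arbitrary illustrative amplitude and wavenumber, that a vacuum plane wave with E along x and B along y satisfies the Maxwell–Faraday equation, i.e. (∇ × E)_y = -∂B_y/∂t, once the dispersion relation ω = ck is imposed:

```python
import math

c = 299792458.0            # speed of light, m/s
E0 = 1.0                   # wave amplitude, V/m (illustrative)
k = 2 * math.pi            # wavenumber, 1/m (illustrative)
w = c * k                  # vacuum dispersion relation: omega = c k

# Plane wave travelling along z: E along x, B along y, with |B| = |E| / c
Ex = lambda z, t: E0 * math.cos(k * z - w * t)
By = lambda z, t: (E0 / c) * math.cos(k * z - w * t)

def deriv(f, x, h):
    # simple central-difference derivative
    return (f(x + h) - f(x - h)) / (2 * h)

z, t = 0.3, 1.2e-9
curlE_y = deriv(lambda zz: Ex(zz, t), z, 1e-6)       # (curl E)_y = dEx/dz for this wave
minus_dBdt = -deriv(lambda tt: By(z, tt), t, 1e-15)  # -dBy/dt
print(curlE_y, minus_dBdt)  # the two sides of the Maxwell-Faraday equation agree
```

The agreement holds only because ω = ck; with any other ratio the wave would not solve the vacuum equations.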

The following list provides the meaning of each symbol and the SI unit of measure (the first
name given is the most common):

Definitions and units

• E: electric field, also called the electric field intensity. SI unit: volt per meter or, equivalently, newton per coulomb.
• B: magnetic field, also called the magnetic induction, the magnetic field density, or the magnetic flux density. SI unit: tesla or, equivalently, weber per square meter or volt-second per square meter.
• D: electric displacement field, also called the electric induction or the electric flux density. SI unit: coulombs per square meter or, equivalently, newtons per volt-meter.
• H: magnetizing field, also called the auxiliary magnetic field, the magnetic field intensity, or the magnetic field. SI unit: amperes per meter.
• ∇ ·: the divergence operator. Contributes a factor of per meter.
• ∇ ×: the curl operator. Contributes a factor of per meter.
• ∂/∂t: partial derivative with respect to time. Contributes a factor of per second.
• dA: differential vector element of surface area A, with infinitesimally small magnitude and direction normal to surface S. SI unit: square meters.
• dl: differential vector element of path length tangential to the path/curve. SI unit: meters.
• ε₀: permittivity of free space, also called the electric constant, a universal constant. SI unit: farads per meter.
• μ₀: permeability of free space, also called the magnetic constant, a universal constant. SI unit: henries per meter or, equivalently, newtons per ampere squared.
• ρ_f: free charge density (not including bound charge). SI unit: coulombs per cubic meter.
• ρ: total charge density (including both free and bound charge). SI unit: coulombs per cubic meter.
• J_f: free current density (not including bound current). SI unit: amperes per square meter.
• J: total current density (including both free and bound current). SI unit: amperes per square meter.
• Q_f(V): net free electric charge within the three-dimensional volume V (not including bound charge). SI unit: coulombs.
• Q(V): net electric charge within the three-dimensional volume V (including both free and bound charge). SI unit: coulombs.
• ∮_∂S E · dl: line integral of the electric field along the boundary ∂S of a surface S (∂S is always a closed curve). SI unit: joules per coulomb.
• ∮_∂S B · dl: line integral of the magnetic field over the closed boundary ∂S of the surface S. SI unit: tesla-meters.
• Φ_E,∂V: the electric flux (surface integral of the electric field) through the (closed) surface ∂V (the boundary of the volume V). SI unit: joule-meters per coulomb.
• Φ_B,∂V: the magnetic flux (surface integral of the magnetic B-field) through the (closed) surface ∂V (the boundary of the volume V). SI unit: tesla meters squared or webers.
• Φ_B,S: magnetic flux through any surface S, not necessarily closed. SI unit: webers or, equivalently, volt-seconds.
• Φ_E,S: electric flux through any surface S, not necessarily closed. SI unit: joule-meters per coulomb.
• Φ_D,S: flux of the electric displacement field through any surface S, not necessarily closed. SI unit: coulombs.
• I_f,S: net free electrical current passing through the surface S (not including bound current). SI unit: amperes.
• I_S: net electrical current passing through the surface S (including both free and bound current). SI unit: amperes.

Maxwell's equations are generally applied to macroscopic averages of the fields, which vary
wildly on a microscopic scale in the vicinity of individual atoms (where they undergo
quantum mechanical effects as well). It is only in this averaged sense that one can define
quantities such as the permittivity and permeability of a material. At microscopic level,
Maxwell's equations, ignoring quantum effects, describe fields, charges and currents in free
space—but at this level of detail one must include all charges, even those at an atomic level,
generally an intractable problem.

[edit] History
Although James Clerk Maxwell is said by some not to be the originator of these equations, he
nevertheless derived them independently in conjunction with his molecular vortex model of
Faraday's "lines of force". In doing so, he made an important addition to Ampère's circuital
law.

All four of what are now described as Maxwell's equations can be found in recognizable form
(albeit without any trace of a vector notation, let alone ∇) in his 1861 paper On Physical
Lines of Force, in his 1865 paper A Dynamical Theory of the Electromagnetic Field, and also
in vol. 2 of Maxwell's "A Treatise on Electricity & Magnetism", published in 1873, in
Chapter IX, entitled "General Equations of the Electromagnetic Field". This book by
Maxwell pre-dates publications by Heaviside, Hertz and others.

The physicist Richard Feynman predicted that, "The American Civil War will pale into
provincial insignificance in comparison with this important scientific event of the same
decade."[5]

[edit] The term Maxwell's equations

The term Maxwell's equations originally applied to a set of eight equations published by
Maxwell in 1865, but nowadays applies to modified versions of four of these equations that
were grouped together in 1884 by Oliver Heaviside,[6] concurrently with similar work by
Willard Gibbs and Heinrich Hertz.[7] These equations were also known variously as the
Hertz-Heaviside equations and the Maxwell-Hertz equations,[6] and are sometimes still known
as the Maxwell–Heaviside equations.[8]

Maxwell's contribution to science in producing these equations lies in the correction he made
to Ampère's circuital law in his 1861 paper On Physical Lines of Force. He added the
displacement current term to Ampère's circuital law and this enabled him to derive the
electromagnetic wave equation in his later 1865 paper A Dynamical Theory of the
Electromagnetic Field and demonstrate the fact that light is an electromagnetic wave. This
fact was then later confirmed experimentally by Heinrich Hertz in 1887.

The concept of fields was introduced by, among others, Faraday. Albert Einstein wrote:

The precise formulation of the time-space laws was the work of Maxwell. Imagine his
feelings when the differential equations he had formulated proved to him that electromagnetic
fields spread in the form of polarised waves, and at the speed of light! To few men in the
world has such an experience been vouchsafed ... It took physicists some decades to grasp the
full significance of Maxwell's discovery, so bold was the leap that his genius forced upon the
conceptions of his fellow-workers.
—(Science, May 24, 1940)

The equations were called by some the Hertz-Heaviside equations, but later Einstein referred
to them as the Maxwell-Hertz equations.[6] However, in 1940 Einstein referred to the
equations as Maxwell's equations in "The Fundamentals of Theoretical Physics" published in
the Washington periodical Science, May 24, 1940.

Heaviside worked to eliminate the potentials (electrostatic potential and vector potential) that
Maxwell had used as the central concepts in his equations;[6] this effort was somewhat
controversial,[9] though it was understood by 1884 that the potentials must propagate at the
speed of light like the fields, unlike the concept of instantaneous action-at-a-distance like the
then conception of gravitational potential.[7] Modern analysis of, for example, radio antennas,
makes full use of Maxwell's vector and scalar potentials to separate the variables, a common
technique used in formulating the solutions of differential equations. However the potentials
can be introduced by algebraic manipulation of the four fundamental equations.

The net result of Heaviside's work was the symmetrical duplex set of four equations,[6] all of
which originated in Maxwell's previous publications, in particular Maxwell's 1861 paper On
Physical Lines of Force, the 1865 paper A Dynamical Theory of the Electromagnetic Field
and the Treatise. The fourth was a partial time derivative version of Faraday's law of
induction that doesn't include motionally induced EMF; this version is often termed the
Maxwell-Faraday equation or Faraday's law in differential form to keep clear the distinction
from Faraday's law of induction, though it expresses the same law.[10][11]

[edit] Maxwell's On Physical Lines of Force (1861)

The four modern day Maxwell's equations appeared throughout Maxwell's 1861 paper On
Physical Lines of Force:

i. Equation (56) in Maxwell's 1861 paper is Gauss's law for magnetism, ∇ · B = 0.


ii. Equation (112) is Ampère's circuital law with Maxwell's displacement current added.
It is the addition of displacement current that is the most significant aspect of
Maxwell's work in electromagnetism, as it enabled him to later derive the
electromagnetic wave equation in his 1865 paper A Dynamical Theory of the
Electromagnetic Field, and hence show that light is an electromagnetic wave. It is
therefore this aspect of Maxwell's work which gives the equations their full
significance. (Interestingly, Kirchhoff derived the telegrapher's equations in 1857
without using displacement current. But he did use Poisson's equation and the
equation of continuity which are the mathematical ingredients of the displacement
current. Nevertheless, Kirchhoff believed his equations to be applicable only inside an
electric wire and so he is not credited with having discovered that light is an
electromagnetic wave).
iii. Equation (115) is Gauss's law.
iv. Equation (54) is an equation that Oliver Heaviside referred to as 'Faraday's law'. This
equation caters for the time varying aspect of electromagnetic induction, but not for
the motionally induced aspect, whereas Faraday's original flux law caters for both
aspects. Maxwell deals with the motionally dependent aspect of electromagnetic
induction, v × B, at equation (77). Equation (77) which is the same as equation (D) in
the original eight Maxwell's equations listed below, corresponds to all intents and
purposes to the modern day force law F = q ( E + v × B ) which sits adjacent to
Maxwell's equations and bears the name Lorentz force, even though Maxwell derived
it when Lorentz was still a young boy.

The difference between the B and the H vectors can be traced back to Maxwell's 1855 paper
entitled On Faraday's Lines of Force which was read to the Cambridge Philosophical
Society. The paper presented a simplified model of Faraday's work, and how the two
phenomena were related. He reduced all of the current knowledge into a linked set of
differential equations.

Figure of Maxwell's molecular vortex model. For a uniform magnetic field, the field lines
point outward from the display screen, as can be observed from the black dots in the middle
of the hexagons. The vortex of each hexagonal molecule rotates counter-clockwise. The small
green circles are clockwise rotating particles sandwiching between the molecular vortices.

It is later clarified in his concept of a sea of molecular vortices that appears in his 1861 paper
On Physical Lines of Force. Within that context, H represented pure vorticity (spin),
whereas B was a weighted vorticity that was weighted for the density of the vortex sea.
Maxwell considered magnetic permeability µ to be a measure of the density of the vortex sea.
Hence the relationship,

(1) Magnetic induction current causes a magnetic current density B = µH

was essentially a rotational analogy to the linear electric current relationship,

(2) Electric convection current J = ρv

where ρ is electric charge density. B was seen as a kind of magnetic current of vortices
aligned in their axial planes, with H being the circumferential velocity of the vortices. With µ
representing vortex density, it follows that the product of µ with vorticity H leads to the
magnetic field denoted as B.

The electric current equation can be viewed as a convective current of electric charge that
involves linear motion. By analogy, the magnetic equation is an inductive current involving
spin. There is no linear motion in the inductive current along the direction of the B vector.
The magnetic inductive current represents lines of force. In particular, it represents lines of
inverse square law force.

The extension of the above considerations confirms that where B is to H, and where J is to ρ,
then it necessarily follows from Gauss's law and from the equation of continuity of charge
that E is to D, i.e. B parallels with E, whereas H parallels with D.

[edit] Maxwell's A Dynamical Theory of the Electromagnetic Field (1864)

Main article: A Dynamical Theory of the Electromagnetic Field

In 1864 Maxwell published A Dynamical Theory of the Electromagnetic Field in which he
showed that light was an electromagnetic phenomenon. Confusion over the term "Maxwell's
equations" is exacerbated because it is also sometimes used for a set of eight equations that
appeared in Part III of Maxwell's 1864 paper A Dynamical Theory of the Electromagnetic
Field, entitled "General Equations of the Electromagnetic Field,"[12] a confusion compounded
by the writing of six of those eight equations as three separate equations (one for each of the
Cartesian axes), resulting in twenty equations and twenty unknowns. (As noted above, this
terminology is not common: Modern references to the term "Maxwell's equations" refer to
the Heaviside restatements.)

The eight original Maxwell's equations can be written in modern vector notation as follows:

(A) The law of total currents: J_tot = J + ∂D/∂t

(B) The equation of magnetic force: µH = ∇ × A

(C) Ampère's circuital law: ∇ × H = J_tot

(D) Electromotive force created by convection, induction, and by static electricity (this is in
effect the Lorentz force): E = µ v × H - ∂A/∂t - ∇φ

(E) The electric elasticity equation: E = (1/ε) D

(F) Ohm's law: E = (1/σ) J

(G) Gauss's law: ∇ · D = ρ

(H) Equation of continuity: ∇ · J = -∂ρ/∂t

or, equivalently, ∇ · J_tot = 0

Notation
H is the magnetizing field, which Maxwell called the magnetic intensity.
J is the electric current density (with J_tot being the total current including
displacement current).[note 3]
D is the displacement field (called the electric displacement by Maxwell).
ρ is the free charge density (called the quantity of free electricity by Maxwell).
A is the magnetic vector potential (called the angular impulse by Maxwell).
E is called the electromotive force by Maxwell. The term electromotive force is
nowadays used for voltage, but it is clear from the context that Maxwell's meaning
corresponded more to the modern term electric field.
φ is the electric potential (which Maxwell also called electric potential).
σ is the electrical conductivity (Maxwell called the inverse of conductivity the
specific resistance, what is now called the resistivity).

It is interesting to note the µ v × H term that appears in equation D. Equation D is therefore
effectively the Lorentz force, similarly to equation (77) of his 1861 paper (see above).

When Maxwell derives the electromagnetic wave equation in his 1865 paper, he uses
equation D to cater for electromagnetic induction rather than Faraday's law of induction
which is used in modern textbooks. (Faraday's law itself does not appear among his
equations.) However, Maxwell drops the µ v × H term from equation D when he is deriving
the electromagnetic wave equation, as he considers the situation only from the rest frame.

[edit] A Treatise on Electricity and Magnetism (1873)

English Wikisource has original text related to this article:


A Treatise on Electricity and Magnetism

In A Treatise on Electricity and Magnetism, an 1873 textbook on electromagnetism written
by James Clerk Maxwell, the equations are compiled into two sets.

The first set is


The second set is

[edit] Maxwell's equations and matter


[edit] Bound charge and current

Main articles: Bound charge#Bound charge and Bound current#Magnetization current

Left: A schematic view of how an assembly of microscopic dipoles appears like a


macroscopically separated pair of charged sheets, as shown at top and bottom (these sheets
are not intended to be viewed as originating the electric field that causes the dipole alignment,
but as a representation equivalent to the dipole array); Right: How an assembly of
microscopic current loops appears as a macroscopically circulating current loop. Inside the
boundaries, the individual contributions tend to cancel, but at the boundaries no cancellation
occurs.

If an electric field is applied to a dielectric material, each of the molecules responds by
forming a microscopic electric dipole—its atomic nucleus will move a tiny distance in the
direction of the field, while its electrons will move a tiny distance in the opposite direction.
This is called polarization of the material. In an idealized situation like that shown in the
figure, the distribution of charge that results from these tiny movements turns out to be
identical (outside the material) to having a layer of positive charge on one side of the
material, and a layer of negative charge on the other side (a macroscopic separation of
charge) even though all of the charges involved are bound to individual molecules. The
volume polarization P is a result of bound charge. (Mathematically, once physical
approximation has established the electric dipole density P based upon the underlying
behavior of atoms, the surface charge that is equivalent to the material with its internal
polarization is provided by the divergence theorem applied to a region straddling the interface
between the material and the surrounding vacuum.)[13][14]
Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are
intrinsically linked to the angular momentum of the atoms' components, most notably their
electrons. The connection to angular momentum suggests the picture of an assembly of
microscopic current loops. Outside the material, an assembly of such microscopic current
loops is not different from a macroscopic current circulating around the material's surface,
despite the fact that no individual magnetic moment is traveling a large distance. The bound
currents can be described using M. (Mathematically, once physical approximation has
established the magnetic dipole density based upon the underlying behavior of atoms, the
surface current that is equivalent to the material with its internal magnetization is provided by
Stokes' theorem applied to a path straddling the interface between the material and the
surrounding vacuum.)[15][16]

These ideas suggest that for some situations the microscopic details of the atomic and
electronic behavior can be treated in a simplified fashion that ignores many details on a fine
scale that may be unimportant to understanding matters on a grosser scale. That notion
underlies the bound/free partition of behavior.

[edit] Proof that the two general formulations are equivalent

In this section, a simple proof is outlined which shows that the two alternate general
formulations of Maxwell's equations given above are mathematically equivalent.

The relation between polarization, magnetization, bound charge, and bound current is as
follows:

ρ_b = -∇ · P
J_b = ∇ × M + ∂P/∂t
D = ε₀E + P
H = (1/µ₀)B - M

where P and M are polarization and magnetization, and ρ_b and J_b are bound charge and
current, respectively. Plugging in these relations, it can be easily demonstrated that the two
formulations of Maxwell's equations given above are precisely equivalent.
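The relation between bound charge and polarization can be illustrated numerically. The sketch below uses a hypothetical polarization field P = (ax, ay, az), chosen purely for illustration: its divergence is 3a everywhere, so the bound charge density should come out as -3a at every point:

```python
# Hypothetical polarization field P(x, y, z) = (a*x, a*y, a*z), in C/m^2
a = 2.0

def P(x, y, z):
    return (a * x, a * y, a * z)

def bound_charge_density(x, y, z, h=1e-6):
    # rho_b = -div P, with the divergence taken by central differences
    divP = ((P(x + h, y, z)[0] - P(x - h, y, z)[0]) +
            (P(x, y + h, z)[1] - P(x, y - h, z)[1]) +
            (P(x, y, z + h)[2] - P(x, y, z - h)[2])) / (2 * h)
    return -divP

print(bound_charge_density(0.1, -0.4, 0.7))  # ~ -6.0 = -3a, independent of position
```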

[edit] Constitutive relations

In order to apply Maxwell's equations (the formulation in terms of free/bound charge and
current using D and H), it is necessary to specify the relations between D and E, and H and
B.

Finding relations between these fields is another way to say that to solve Maxwell's equations
by employing the free/bound partition of charges and currents, one needs the properties of the
materials relating the response of bound currents and bound charges to the fields applied to
these materials.[note 4] These relations may be empirical (based directly upon measurements),
or theoretical (based upon statistical mechanics, transport theory or other tools of condensed
matter physics). The detail employed may be macroscopic or microscopic, depending upon
the level necessary to the problem under scrutiny. These material properties specifying the
response of bound charge and current to the field are called constitutive relations, and
correspond physically to how much polarization and magnetization a material acquires in the
presence of electromagnetic fields.

Once the responses of bound currents and charges are related to the fields, Maxwell's
equations can be fully formulated in terms of the E- and B-fields alone, with only the free
charges and currents appearing explicitly in the equations.

[edit] Case without magnetic or dielectric materials

In the absence of magnetic or dielectric materials, the relations are simple:

D = ε₀E,  H = B/µ₀

where ε₀ and μ₀ are two universal constants, called the permittivity of free space and
permeability of free space, respectively.

[edit] Case of linear materials

In a linear, isotropic, nondispersive, uniform material, the relations are also straightforward:

D = εE,  H = B/µ

where ε and μ are constants (which depend on the material), called the permittivity and
permeability, respectively, of the material.
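In such a linear material, the wave speed becomes 1/sqrt(εμ) instead of 1/sqrt(ε₀μ₀), and the ratio of the two speeds is the refractive index n = sqrt(ε_r μ_r). A minimal sketch, assuming illustrative values for a glass-like dielectric (relative permittivity 2.25, relative permeability 1):

```python
import math

eps0 = 8.8541878128e-12   # permittivity of free space, F/m
mu0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

# Illustrative relative permittivity and permeability for a glass-like dielectric
eps_r, mu_r = 2.25, 1.0
eps, mu = eps_r * eps0, mu_r * mu0

v = 1 / math.sqrt(eps * mu)   # phase velocity of light in the material, m/s
n = math.sqrt(eps_r * mu_r)   # refractive index n = c / v
print(n, v)                   # n = 1.5; v is two thirds of the vacuum speed
```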

[edit] General case

For real-world materials, the constitutive relations are not simple proportionalities, except
approximately. The relations can usually still be written

D = εE,  H = B/µ

but ε and μ are not, in general, simple constants, but rather functions. For example, ε and μ
can depend upon:

• The strength of the fields (the case of nonlinearity, which occurs when ε and μ are
functions of E and B; see, for example, Kerr and Pockels effects),
• The direction of the fields (the case of anisotropy, birefringence, or dichroism; which
occurs when ε and μ are second-rank tensors),
• The frequency with which the fields vary (the case of dispersion, which occurs when
ε and μ are functions of frequency; see, for example, Kramers–Kronig relations).

If further there are dependencies on:

• The position inside the material (the case of a nonuniform material, which occurs
when the response of the material varies from point to point within the material, an
effect called spatial inhomogeneity; for example in a domained structure,
heterostructure or a liquid crystal, or most commonly in the situation where there are
simply multiple materials occupying different regions of space),
• The history of the fields—in a linear time-invariant material, this is equivalent to the
material dispersion mentioned above (a frequency dependence of the ε and μ), which
after Fourier transforming turns into a convolution with the fields at past times,
expressing a non-instantaneous response of the material to an applied field; in a
nonlinear or time-varying medium, the time-dependent response can be more
complicated, such as the example of a hysteresis response,

then the constitutive relations take a more complicated form:[17][18]

in which the permittivity and permeability functions are replaced by integrals over the more
general electric and magnetic susceptibilities.

It may be noted that man-made materials can be designed to have customized permittivity
and permeability, such as metamaterials and photonic crystals.

[edit] Maxwell's equations in terms of E and B for linear materials

Substituting in the constitutive relations above, Maxwell's equations in linear, dispersionless,
time-invariant materials (differential form only) are:

These are formally identical to the general formulation in terms of E and B (given above),
except that the permittivity of free space was replaced with the permittivity of the material
(see also electric displacement field, electric susceptibility and polarization density), the
permeability of free space was replaced with the permeability of the material (see also
magnetization, magnetic susceptibility and magnetic field), and only free charges and
currents are included (instead of all charges and currents). Unless that material is
homogeneous in space, ε and μ cannot be factored out of the derivative expressions on the
left-hand sides.
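Written out, the linear-material equations described here take the following standard form (free charge ρf and free current Jf only; ε and μ stay inside the derivatives because the material may be inhomogeneous):

```latex
\nabla \cdot (\varepsilon \mathbf{E}) = \rho_f, \qquad
\nabla \cdot \mathbf{B} = 0,
\\
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \left( \frac{\mathbf{B}}{\mu} \right) = \mathbf{J}_f
  + \frac{\partial (\varepsilon \mathbf{E})}{\partial t}.
```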

[edit] Calculation of constitutive relations


See also: Computational electromagnetics

The fields in Maxwell's equations are generated by charges and currents. Conversely, the
charges and currents are affected by the fields through the Lorentz force equation:

where q is the charge on the particle and v is the particle velocity. (It also should be
remembered that the Lorentz force is not the only force exerted upon charged bodies, which
also may be subject to gravitational, nuclear, etc. forces.) Therefore, in both classical and
quantum physics, the precise dynamics of a system form a set of coupled differential
equations, which are almost always too complicated to be solved exactly, even at the level of
statistical mechanics.[note 5] This remark applies to not only the dynamics of free charges and
currents (which enter Maxwell's equations directly), but also the dynamics of bound charges
and currents, which enter Maxwell's equations through the constitutive equations, as
described next.
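The Lorentz force law just quoted, F = q(E + v × B), is simple enough to sketch directly; the following minimal Python illustration uses made-up field values purely for demonstration:

```python
# Sketch of the Lorentz force law F = q(E + v x B) described above,
# using plain Python 3-tuples; the numbers are illustrative only.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, E, v, B):
    """Force on a charge q moving with velocity v in fields E and B (SI units)."""
    vxB = cross(v, B)
    return tuple(q * (E[i] + vxB[i]) for i in range(3))

# A positive charge moving along +x through a +z magnetic field is pushed toward -y:
F = lorentz_force(1.0, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```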

Commonly, real materials are approximated as continuous media with bulk properties such as
the refractive index, permittivity, permeability, conductivity, and/or various susceptibilities.
These lead to the macroscopic Maxwell's equations, which are written (as given above) in
terms of free charge/current densities and D, H, E, and B ( rather than E and B alone ) along
with the constitutive equations relating these fields. For example, although a real material
consists of atoms whose electronic charge densities can be individually polarized by an
applied field, for most purposes behavior at the atomic scale is not relevant and the material
is approximated by an overall polarization density related to the applied field by an electric
susceptibility.

Continuum approximations of atomic-scale inhomogeneities cannot be determined from
Maxwell's equations alone, but require some type of quantum mechanical analysis such as
quantum field theory as applied to condensed matter physics. See, for example, density
functional theory, Green-Kubo relations and Green's function (many-body theory). Various
approximate transport equations have evolved, for example, the Boltzmann equation or the
Fokker-Planck equation or the Navier-Stokes equations. Some examples where these
equations are applied are magnetohydrodynamics, fluid dynamics, electrohydrodynamics,
superconductivity, plasma modeling. An entire physical apparatus for dealing with these
matters has developed. A different set of homogenization methods (evolving from a tradition
in treating materials such as conglomerates and laminates) are based upon approximation of
an inhomogeneous material by a homogeneous effective medium[19][20] (valid for excitations
with wavelengths much larger than the scale of the inhomogeneity).[21][22][23][24]

Theoretical results have their place, but often require fitting to experiment. Continuum-
approximation properties of many real materials rely upon measurement,[25] for example,
ellipsometry measurements.

In practice, some materials properties have a negligible impact in particular circumstances,
permitting neglect of small effects. For example: optical nonlinearities can be neglected for
low field strengths; material dispersion is unimportant where frequency is limited to a narrow
bandwidth; material absorption can be neglected for wavelengths where a material is
transparent; and metals with finite conductivity often are approximated at microwave or
longer wavelengths as perfect metals with infinite conductivity (forming hard barriers with
zero skin depth of field penetration).
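The "perfect metal" approximation mentioned above rests on the skin depth δ = sqrt(2 / (μσω)) being tiny compared with the geometry. A quick sketch with a textbook conductivity for copper (σ ≈ 5.8×10⁷ S/m, an assumed illustrative figure, not a value from this article):

```python
# Field-penetration (skin) depth in a good conductor, delta = sqrt(2/(mu*sigma*omega)).
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(sigma, freq_hz, mu_r=1.0):
    """Skin depth in metres for conductivity sigma (S/m) at frequency freq_hz (Hz)."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (mu_r * MU0 * sigma * omega))

delta = skin_depth(5.8e7, 1e9)  # copper at 1 GHz: on the order of 2 micrometres
```

At microwave frequencies the depth is microns, which is why treating the metal as a hard, impenetrable barrier is usually harmless.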

And, of course, some situations demand that Maxwell's equations and the Lorentz force be
combined with other forces that are not electromagnetic. An obvious example is gravity. A
more subtle example, which applies where electrical forces are weakened due to charge
balance in a solid or a molecule, is the Casimir force from quantum electrodynamics.[26]

The connection of Maxwell's equations to the rest of the physical world is via the
fundamental charges and currents. These charges and currents are a response of their sources
to electric and magnetic fields and to other forces. The determination of these responses
involves the properties of physical materials.

[edit] In vacuum

Further information: Electromagnetic wave equation and Sinusoidal plane-wave solutions of
the electromagnetic wave equation

Start with the equations appropriate for the case without dielectric or magnetic materials.
Then assume a vacuum: no charges (ρ = 0) and no currents (J = 0). We then have:

One set of solutions to these equations takes the form of traveling sinusoidal plane waves,
with the directions of the electric and magnetic fields being orthogonal to one another and to
the direction of travel. The two fields are in phase, traveling at the speed c = 1/√(μ0ε0).
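The propagation speed is fixed by the vacuum constants through c = 1/√(μ0ε0). A quick numerical check (the constant values below are standard reference figures, not taken from this article):

```python
# Verify that 1/sqrt(mu0 * eps0) reproduces the speed of light.
import math

MU0 = 4e-7 * math.pi     # vacuum permeability, H/m (pre-2019 exact SI value)
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

c = 1.0 / math.sqrt(MU0 * EPS0)  # metres per second, approximately 2.998e8
```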

In fact, Maxwell's equations explain how these waves can physically propagate through
space. The changing magnetic field creates a changing electric field through Faraday's law. In
turn, that electric field creates a changing magnetic field through Maxwell's correction to
Ampère's law. This perpetual cycle allows these waves, now known as electromagnetic
radiation, to move through space at velocity c.

Maxwell knew from an 1856 Leyden jar experiment by Wilhelm Eduard Weber and Rudolf
Kohlrausch that c was very close to the measured speed of light in vacuum (already known at
the time), and concluded (correctly) that light is a form of electromagnetic radiation.

[edit] With magnetic monopoles

Maxwell's equations of electromagnetism relate the electric and magnetic fields to the
motions of electric charges. The standard form of the equations provides for an electric
charge, but posits no magnetic charge. There is no known magnetic analog of an electron, and
an isolated magnetic charge has never been observed and may not exist; however, scientists
have recently described behavior in a crystalline state of matter known as spin ice whose
excitations behave macroscopically like magnetic monopoles.[27][28] Except for this, the
equations are symmetric under interchange of electric and magnetic field. In fact, symmetric
equations can be written when all charges are zero, and this is how the wave equation is
derived (see immediately above).

Fully symmetric equations can also be written if one allows for the possibility of magnetic
charges.[29] With the inclusion of a variable for the density of these magnetic charges, say
ρm, there will also be a "magnetic current" variable in the equations, jm. The extended
Maxwell's equations, simplified by nondimensionalization via Planck units, are as follows:

Name                                        Without magnetic monopoles    With magnetic monopoles (hypothetical)
Gauss's law
Gauss's law for magnetism
Maxwell–Faraday equation (Faraday's law of induction)
Ampère's law (with Maxwell's extension)
Note: the bivector notation embodies the sign swap, and these four equations can be written as only one
equation.
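One common way to write the symmetric equations the table refers to, sketched here in a nondimensionalized form with c = 1 (the factors of 4π and c vary with the unit convention), with ρe, je the electric and ρm, jm the hypothetical magnetic charge and current densities:

```latex
\nabla \cdot \mathbf{E} = \rho_e, \qquad
\nabla \cdot \mathbf{B} = \rho_m,
\\
-\nabla \times \mathbf{E} = \frac{\partial \mathbf{B}}{\partial t} + \mathbf{j}_m, \qquad
\nabla \times \mathbf{B} = \frac{\partial \mathbf{E}}{\partial t} + \mathbf{j}_e.
```

Setting ρm and jm to zero recovers the conventional equations, as described below.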

If magnetic charges do not exist, or if they exist but are absent from a region, then the new
variables are zero there, and the symmetric equations reduce to the conventional equations of
electromagnetism, such as ∇·B = 0.

[edit] Boundary conditions: using Maxwell's equations


Although Maxwell's equations apply throughout space and time, practical problems are finite
and solutions to Maxwell's equations inside the solution region are joined to the remainder of
the universe through boundary conditions[30][31][32] and started in time using initial
conditions.[33]

In particular, in a region without any free currents or free charges, the electromagnetic fields
in the region originate elsewhere, and are introduced via boundary and/or initial conditions.
An example of this type is an electromagnetic scattering problem, where an electromagnetic
wave originating outside the scattering region is scattered by a target, and the scattered
electromagnetic wave is analyzed for the information it contains about the target by virtue of
the interaction with the target during scattering.[34]

In some cases, like waveguides or cavity resonators, the solution region is largely isolated
from the universe, for example, by metallic walls, and boundary conditions at the walls
define the fields with influence of the outside world confined to the input/output ends of the
structure.[35] In other cases, the universe at large sometimes is approximated by an artificial
absorbing boundary,[36][37][38] or, for example for radiating antennas or communication
satellites, these boundary conditions can take the form of asymptotic limits imposed upon the
solution.[39] In addition, for example in an optical fiber or thin-film optics, the solution region
often is broken up into subregions with their own simplified properties, and the solutions in
each subregion must be joined to each other across the subregion interfaces using boundary
conditions.[40][41][42] A particular example of this use of boundary conditions is the replacement
of a material with a volume polarization by a charged surface layer, or of a material with a
volume magnetization by a surface current, as described in the section Bound charge and
current.

Following are some links of a general nature concerning boundary value problems: Examples
of boundary value problems, Sturm-Liouville theory, Dirichlet boundary condition, Neumann
boundary condition, mixed boundary condition, Cauchy boundary condition, Sommerfeld
radiation condition. Needless to say, one must choose the boundary conditions appropriate to
the problem being solved. See also Kempel[43] and the book by Friedman.[44]

[edit] CGS units


The preceding equations are given in the International System of Units, or SI for short. The
related CGS system of units defines the unit of electric current variously in terms of
centimeters, grams and seconds. In one of those variants, called Gaussian units, the equations take the
following form:[45]

where c is the speed of light in a vacuum. For the electromagnetic field in a vacuum,
assuming that there is no current or electric charge present in the vacuum, the equations
become:

In this system of units the relation between electric displacement field, electric field and
polarization density is D = E + 4πP. And likewise the relation between magnetic induction,
magnetic field and total magnetization is B = H + 4πM.

In the linear approximation, the electric susceptibility and magnetic susceptibility can be
defined so that P = χeE and M = χmH.

(Note that although the susceptibilities are dimensionless numbers in both cgs and SI, they
have different values in the two unit systems, by a factor of 4π.) The permittivity and
permeability are ε = 1 + 4πχe and μ = 1 + 4πχm, so that D = εE and B = μH.

In vacuum, one has the simple relations ε = μ = 1, D = E, and B = H.

The force exerted upon a charged particle by the electric field and magnetic field is given by
the Lorentz force equation, F = q(E + (v/c) × B), where q is the charge on the particle and v
is the particle velocity. This is slightly different from the SI-unit expression above. For
example, here the magnetic field B has the same units as the electric field E.

Some equations in the article are given in Gaussian units but not SI or vice-versa.
Fortunately, there are general rules to convert from one to the other; see the article Gaussian
units for details.

[edit] Special relativity


Main article: Classical electromagnetism and special relativity

Maxwell's equations have a close relation to special relativity: Not only were Maxwell's
equations a crucial part of the historical development of special relativity, but also, special
relativity has motivated a compact mathematical formulation of Maxwell's equations, in
terms of covariant tensors.

[edit] Historical developments

Main article: History of special relativity


Maxwell's electromagnetic wave equation applied only in what he believed to be the rest
frame of the luminiferous medium, because he did not use the v×B term of his equation (D)
when he derived it. Maxwell's idea of the luminiferous medium was that it consisted of
aethereal vortices aligned solenoidally along their rotation axes.

The American scientist A.A. Michelson set out to determine the velocity of the earth through
the luminiferous medium aether using a light wave interferometer that he had invented. When
the Michelson-Morley experiment was conducted by Edward Morley and Albert Abraham
Michelson in 1887, it produced a null result for the change of the velocity of light due to the
Earth's motion through the hypothesized aether. This null result was in line with the theory
that was proposed in 1845 by George Stokes which suggested that the aether was entrained
with the Earth's orbital motion.

Hendrik Lorentz objected to Stokes' aether drag model and, along with George FitzGerald
and Joseph Larmor, he suggested another approach. Both Larmor (1897) and Lorentz (1899,
1904) derived the Lorentz transformation (so named by Henri Poincaré) as one under which
Maxwell's equations were invariant. Poincaré (1900) analyzed the coordination of moving
clocks by exchanging light signals. He also established mathematically the group property of
the Lorentz transformation (Poincaré 1905).

This culminated in Albert Einstein's theory of special relativity, which postulated the absence
of any absolute rest frame, dismissed the aether as unnecessary (a bold idea that occurred to
neither Lorentz nor Poincaré), and established the invariance of Maxwell's equations in all
inertial frames of reference, in contrast to the famous Newtonian equations for classical
mechanics. But the transformations between two different inertial frames had to correspond
to Lorentz' equations and not — as formerly believed — to those of Galileo (called Galilean
transformations).[46] Indeed, Maxwell's equations played a key role in Einstein's famous paper
on special relativity; for example, in the opening paragraph of the paper, he motivated his
theory by noting that a description of a conductor moving with respect to a magnet must
generate a consistent set of fields irrespective of whether the force is calculated in the rest
frame of the magnet or that of the conductor.[47]

General relativity has also had a close relationship with Maxwell's equations. For example,
Kaluza and Klein showed in the 1920s that Maxwell's equations can be derived by extending
general relativity into five dimensions. This strategy of using higher dimensions to unify
different forces remains an active area of research in particle physics.

[edit] Covariant formulation of Maxwell's equations

Main article: Covariant formulation of classical electromagnetism

In special relativity, in order to more clearly express the fact that Maxwell's equations in
vacuo take the same form in any inertial coordinate system, Maxwell's equations are written
in terms of four-vectors and tensors in the "manifestly covariant" form. The purely spatial
components of the following are in SI units.

One ingredient in this formulation is the electromagnetic tensor, a rank-2 covariant
antisymmetric tensor combining the electric and magnetic fields:
and the result of raising its indices

The other ingredient is the four-current, J^α = (cρ, J), where ρ is the charge density and J is
the current density.

With these ingredients, Maxwell's equations can be written:

∂αF^αβ = μ0J^β

and

∂γF_αβ + ∂αF_βγ + ∂βF_γα = 0

The first tensor equation is an expression of the two inhomogeneous Maxwell's equations,
Gauss's law and Ampere's law with Maxwell's correction. The second equation is an
expression of the two homogeneous equations, Faraday's law of induction and Gauss's law
for magnetism. The second equation is equivalent to

ε^αβγδ ∂βF_γδ = 0

where ε^αβγδ is the contravariant version of the Levi-Civita symbol, and ∂α is the 4-gradient.
In the tensor equations above, repeated indices are summed over according to the Einstein
summation convention. We have displayed the results in several common notations. Upper and
lower components of a vector, v^α and v_α respectively, are interchanged with the fundamental
tensor g, e.g., g = η = diag(−1,+1,+1,+1).
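For concreteness, one standard presentation of the field tensor and the two tensor equations is sketched below, under the (+,−,−,−) sign convention; with the (−,+,+,+) signature displayed above, some overall signs flip, so treat this as one common convention rather than the unique form:

```latex
F_{\alpha\beta} =
\begin{pmatrix}
0 & E_x/c & E_y/c & E_z/c \\
-E_x/c & 0 & -B_z & B_y \\
-E_y/c & B_z & 0 & -B_x \\
-E_z/c & -B_y & B_x & 0
\end{pmatrix}, \qquad
\partial_\alpha F^{\alpha\beta} = \mu_0 J^\beta, \qquad
\partial_\gamma F_{\alpha\beta} + \partial_\alpha F_{\beta\gamma} + \partial_\beta F_{\gamma\alpha} = 0.
```

Taking β = 0 in the inhomogeneous equation recovers Gauss's law, and the spatial components recover Ampère's law with Maxwell's correction.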

Alternative covariant presentations of Maxwell's equations also exist, for example in terms of
the four-potential; see Covariant formulation of classical electromagnetism for details.
In Geometric algebra, these equations simplify to:

[edit] Potentials
Main article: Mathematical descriptions of the electromagnetic field

Maxwell's equations can be written in an alternative form, involving the electric potential
(also called scalar potential) and magnetic potential (also called vector potential), as follows.
[18]
(The following equations are valid in the absence of dielectric and magnetic materials; or
if such materials are present, they are valid as long as bound charge and bound current are
included in the total charge and current densities.)

First, Gauss's law for magnetism states:

By Helmholtz's theorem, B can be written in terms of a vector field A, called the magnetic
potential: B = ∇ × A.

Second, plugging this into Faraday's law, we get:

By Helmholtz's theorem, the quantity in parentheses can be written in terms of a scalar
function φ, called the electric potential:

Combining these with the remaining two Maxwell's equations yields the four relations:

These equations, taken together, are as powerful and complete as Maxwell's equations.
Moreover, if we work only with the potentials and ignore the fields, the problem has been
reduced somewhat, as the electric and magnetic fields each have three components which
need to be solved for (six components altogether), while the electric and magnetic potentials
have only four components altogether.
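A sketch of the potential definitions and the coupled relations this section refers to, in SI units (the last two lines together comprise four scalar equations, matching the four potential components):

```latex
\mathbf{B} = \nabla \times \mathbf{A}, \qquad
\mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t},
\\
\nabla^2 \varphi + \frac{\partial}{\partial t} \left( \nabla \cdot \mathbf{A} \right)
  = -\frac{\rho}{\varepsilon_0},
\\
\left( \nabla^2 \mathbf{A} - \frac{1}{c^2} \frac{\partial^2 \mathbf{A}}{\partial t^2} \right)
 - \nabla \left( \nabla \cdot \mathbf{A} + \frac{1}{c^2} \frac{\partial \varphi}{\partial t} \right)
 = -\mu_0 \mathbf{J}.
```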

Many different choices of A and φ are consistent with a given E and B, making these choices
physically equivalent – a flexibility known as gauge freedom. Suitable choice of A and φ can
simplify these equations, or can adapt them to suit a particular situation. For more
information, see the article gauge freedom.

[edit] Four-potential
The two equations that represent the potentials can be reduced to one manifestly Lorentz
invariant equation, using four-vectors: the four-current defined by

J^α = (cρ, j)

formed from the current density j and charge density ρ, and the electromagnetic
four-potential defined by

A^α = (φ/c, A)

formed from the vector potential A and the scalar potential φ. The resulting single equation,
due to Arnold Sommerfeld, a generalization of an equation due to Bernhard Riemann and
known as the Riemann-Sommerfeld equation[48] or the covariant form of the Maxwell-Lorentz
equations,[49] is:

where □ is the d'Alembertian operator, or four-Laplacian, □ = ∂α∂^α, sometimes written ∂²,
where ∂α is the four-gradient.
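Once the Lorenz gauge condition ∂αA^α = 0 is imposed, this single equation takes the familiar wave-equation form, sketched here in SI units (the sign convention for □ varies between texts):

```latex
\Box A^\mu = \mu_0 J^\mu, \qquad
\Box \equiv \partial_\alpha \partial^\alpha
  = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2.
```

Each of the four components is then an ordinary inhomogeneous wave equation sourced by the corresponding component of the four-current.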

[edit] Differential forms


In free space, where ε = ε0 and μ = μ0 are constant everywhere, Maxwell's equations simplify
considerably once the language of differential geometry and differential forms is used. In
what follows, cgs-Gaussian units, not SI units are used. (To convert to SI, see here.) The
electric and magnetic fields are now jointly described by a 2-form F in a 4-dimensional
spacetime manifold. Maxwell's equations then reduce to the Bianchi identity

where d denotes the exterior derivative — a natural coordinate and metric independent
differential operator acting on forms — and the source equation

where the (dual) Hodge star operator * is a linear transformation from the space of 2-forms to
the space of (4-2)-forms defined by the metric in Minkowski space (in four dimensions even
by any metric conformal to this metric), and the fields are in natural units where 1 / 4πε0 =
1. Here, the 3-form J is called the electric current form or current 3-form satisfying the
continuity equation

The current 3-form can be integrated over a 3-dimensional space-time region. The physical
interpretation of this integral is the charge in that region if it is spacelike, or the amount of
charge that flows through a surface in a certain amount of time if that region is a spacelike
surface cross a timelike interval. As the exterior derivative is defined on any manifold, the
differential form version of the Bianchi identity makes sense for any 4-dimensional manifold,
whereas the source equation is defined if the manifold is oriented and has a Lorentz metric. In
particular the differential form version of the Maxwell equations are a convenient and
intuitive formulation of the Maxwell equations in general relativity.
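The relations described in this section can be collected compactly (in the natural units stated above); note that applying d to the source equation reproduces the continuity equation automatically, since d² = 0:

```latex
F = \mathrm{d}A, \qquad
\mathrm{d}F = 0, \qquad
\mathrm{d}{\star}F = J, \qquad
\mathrm{d}J = \mathrm{d}^2{\star}F = 0.
```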

In a linear, macroscopic theory, the influence of matter on the electromagnetic field is
described through a more general linear transformation in the space of 2-forms. We call

the constitutive transformation. The role of this transformation is comparable to the Hodge
duality transformation. The Maxwell equations in the presence of matter then become:

where the current 3-form J still satisfies the continuity equation dJ= 0.

When the fields are expressed as linear combinations (of exterior products) of basis forms ,

the constitutive relation takes the form

where the field coefficient functions are antisymmetric in the indices and the constitutive
coefficients are antisymmetric in the corresponding pairs. In particular, the Hodge duality
transformation leading to the vacuum equations discussed above are obtained by taking

which up to scaling is the only invariant tensor of this type that can be defined with the
metric.

In this formulation, electromagnetism generalises immediately to any 4-dimensional oriented
manifold, or with small adaptations any manifold, requiring not even a metric. Thus the
expression of Maxwell's equations in terms of differential forms leads to a further notational
and conceptual simplification. Whereas Maxwell's Equations could be written as two tensor
equations instead of eight scalar equations, from which the propagation of electromagnetic
disturbances and the continuity equation could be derived with a little effort, using
differential forms leads to an even simpler derivation of these results.

[edit] Conceptual insight from this formulation

On the conceptual side, from the point of view of physics, this shows that the second and
third Maxwell equations should be grouped together, be called the homogeneous ones, and be
seen as geometric identities expressing nothing else than: the field F derives from a more
"fundamental" potential A. While the first and last one should be seen as the dynamical
equations of motion, obtained via the Lagrangian principle of least action, from the
"interaction term" A J (introduced through gauge covariant derivatives), coupling the field to
matter.

Often, the time derivative in the third law motivates calling this equation "dynamical", which
is somewhat misleading; in the sense of the preceding analysis, this is rather an artifact of
breaking relativistic covariance by choosing a preferred time direction. To have physical
degrees of freedom propagated by these field equations, one must include a kinetic term F *F
for A; and take into account the non-physical degrees of freedom which can be removed by
gauge transformation A→A' = A−dα: see also gauge fixing and Faddeev–Popov ghosts.

[edit] Classical electrodynamics as the curvature of a line bundle

An elegant and intuitive way to formulate Maxwell's equations is to use complex line bundles
or principal bundles with fibre U(1). The connection on the line bundle has a curvature
which is a two-form that automatically satisfies and can be interpreted as
a field-strength. If the line bundle is trivial with flat reference connection d we can write
and F = dA with A the 1-form composed of the electric potential and the
magnetic vector potential.

In quantum mechanics, the connection itself is used to define the dynamics of the system.
This formulation allows a natural description of the Aharonov-Bohm effect. In this
experiment, a static magnetic field runs through a long magnetic wire (e.g., an iron wire
magnetized longitudinally). Outside of this wire the magnetic induction is zero, in contrast to
the vector potential, which essentially depends on the magnetic flux through the cross-section
of the wire and does not vanish outside. Since there is no electric field either, the Maxwell
tensor F = 0 throughout the space-time region outside the tube, during the experiment. This
means by definition that the connection is flat there.

However, as mentioned, the connection depends on the magnetic field through the tube since
the holonomy along a non-contractible curve encircling the tube is the magnetic flux through
the tube in the proper units. This can be detected quantum-mechanically with a double-slit
electron diffraction experiment on an electron wave traveling around the tube. The holonomy
corresponds to an extra phase shift, which leads to a shift in the diffraction pattern. (See
Michael Murray, Line Bundles, 2002 (PDF web link) for a simple mathematical review of
this formulation. See also R. Bott, On some recent interactions between mathematics and
physics, Canadian Mathematical Bulletin, 28 (1985) no. 2 pp 129–164.)

[edit] Curved spacetime


Main article: Maxwell's equations in curved spacetime

[edit] Traditional formulation

Matter and energy generate curvature of spacetime. This is the subject of general relativity.
Curvature of spacetime affects electrodynamics. An electromagnetic field having energy and
momentum will also generate curvature in spacetime. Maxwell's equations in curved
spacetime can be obtained by replacing the derivatives in the equations in flat spacetime with
covariant derivatives. (Whether this is the appropriate generalization requires separate
investigation.) The sourced and source-free equations become (cgs-Gaussian units):

and

Here, Γ^α_βγ is a Christoffel symbol that characterizes the curvature of spacetime and Dγ is
the covariant derivative.
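Written out, the sourced equation takes the standard cgs-Gaussian form below; for an antisymmetric tensor the covariant divergence reduces to an ordinary derivative weighted by √(−g), which is where the Christoffel symbols enter, while the source-free equation keeps its flat-spacetime form:

```latex
D_\beta F^{\alpha\beta}
 = \frac{1}{\sqrt{-g}}\, \partial_\beta \!\left( \sqrt{-g}\, F^{\alpha\beta} \right)
 = \frac{4\pi}{c}\, J^\alpha, \qquad
\partial_\gamma F_{\alpha\beta} + \partial_\alpha F_{\beta\gamma} + \partial_\beta F_{\gamma\alpha} = 0.
```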

[edit] Formulation in terms of differential forms

The formulation of the Maxwell equations in terms of differential forms can be used without
change in general relativity. The equivalence of the more traditional general relativistic
formulation using the covariant derivative with the differential form formulation can be seen
as follows. Choose local coordinates xα which gives a basis of 1-forms dxα in every point of
the open set where the coordinates are defined. Using this basis and cgs-Gaussian units we
define

• The antisymmetric infinitesimal field tensor Fαβ, corresponding to the field 2-form F

• The current-vector infinitesimal 3-form J


Here g is as usual the determinant of the metric tensor gαβ. A small computation that uses the
symmetry of the Christoffel symbols (i.e., the torsion-freeness of the Levi-Civita connection)
and the covariant constantness of the Hodge star operator then shows that in this coordinate
neighborhood we have:

• the Bianchi identity

• the source equation

• the continuity equation

[edit] See also


• Abraham-Lorentz force
• Ampere's law
• Antenna (radio)
• Bremsstrahlung
• Computational electromagnetics
• Electrical generator
• Electromagnetic wave equation
• Finite-difference time-domain method
• Fresnel equations
• Green–Kubo relations
• Green's function (many-body theory)
• Interface conditions for electromagnetic fields
• Jefimenko's equations
• Kramers–Kronig relation
• Laser
• Linear response function
• Lorentz force
• Mathematical descriptions of the electromagnetic field
• Moving magnet and conductor problem
• Nonhomogeneous electromagnetic wave equation
• Photon dynamics in the double-slit experiment
• Photon polarization
• Photonic crystal
• Scattering-matrix method
• Sinusoidal plane-wave solutions of the electromagnetic wave equation
• Theoretical and experimental justification for the Schrödinger equation
• Transformer
• Waveguide
• Wheeler-Feynman time-symmetric theory for electrodynamics

[edit] Notes
1. ^ Using modern SI terminology: The electric constant can be estimated by measuring the
force between two charges and using Coulomb's law; and the magnetic constant can be
estimated by measuring the force between two current-carrying wires, and using Ampere's
force law. The product of these two, to the (-1/2) power, is the speed of electromagnetic
radiation predicted by Maxwell's equations, given in meters per second.
2. ^ In some books (e.g., in [4]), the term effective charge is used instead of total charge, while
free charge is simply called charge.
3. ^ Here it is noted that a quite different quantity, the magnetic polarization, has by decision
of an international IUPAP commission been given the same name. So for the electric current
density a name with small letters would be better. But even then the mathematicians would
still use the large-letter name for the corresponding current two-form (see below).
4. ^ The free charges and currents respond to the fields through the Lorentz force law and this
response is calculated at a fundamental level using mechanics. The response of bound charges
and currents is dealt with using grosser methods subsumed under the notions of magnetization
and polarization. Depending upon the problem, one may choose to have no free charges
whatsoever.
5. ^ These complications show there is merit in separating the Lorentz force from the main four
Maxwell equations. The four Maxwell's equations express the fields' dependence upon current
and charge, setting apart the calculation of these currents and charges. As noted in this
subsection, these calculations may well involve the Lorentz force only implicitly. Separating
these complicated considerations from the Maxwell's equations provides a useful framework.


Further reading


Journal articles

• James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field",
Philosophical Transactions of the Royal Society of London 155, 459-512 (1865).
(This article accompanied a December 8, 1864 presentation by Maxwell to the Royal
Society.)

The developments before relativity

• Joseph Larmor (1897) "On a dynamical theory of the electric and luminiferous
medium", Phil. Trans. Roy. Soc. 190, 205-300 (third and last in a series of papers with
the same name).
• Hendrik Lorentz (1899) "Simplified theory of electrical and optical phenomena in
moving systems", Proc. Acad. Science Amsterdam, I, 427-43.
• Hendrik Lorentz (1904) "Electromagnetic phenomena in a system moving with any
velocity less than that of light", Proc. Acad. Science Amsterdam, IV, 669-78.
• Henri Poincaré (1900) "La theorie de Lorentz et la Principe de Reaction", Archives
Néerlandaises, V, 253-78.
• Henri Poincaré (1901) Science and Hypothesis
• Henri Poincaré (1905) "Sur la dynamique de l'électron", Comptes rendus de
l'Académie des Sciences, 140, 1504-8.

See also:

• Macrossan, M. N. (1986) "A note on relativity before Einstein", Brit. J. Phil. Sci., 37,
232-234

University level textbooks

Undergraduate
• Feynman, Richard P. (2005). The Feynman Lectures on Physics. 2 (2nd ed.).
Addison-Wesley. ISBN 978-0805390650.
• Fleisch, Daniel (2008). A Student's Guide to Maxwell's Equations. Cambridge
University Press. ISBN 978-0521877619.
• Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall.
ISBN 0-13-805326-X.
• Hoffman, Banesh (1983). Relativity and Its Roots. W. H. Freeman.
• Krey, U.; Owen, A. (2007). Basic Theoretical Physics: A Concise Overview. Springer.
ISBN 978-3-540-36804-5. See especially part II.
• Purcell, Edward Mills (1985). Electricity and Magnetism. McGraw-Hill. ISBN 0-07-
004908-4.
• Reitz, John R.; Milford, Frederick J.; Christy, Robert W. (2008). Foundations of
Electromagnetic Theory (4th ed.). Addison Wesley. ISBN 978-0321581747.
• Sadiku, Matthew N. O. (2006). Elements of Electromagnetics (4th ed.). Oxford
University Press. ISBN 0-19-530048-3.
• Schwarz, Melvin (1987). Principles of Electrodynamics. Dover. ISBN 0-486-65493-
1.
• Stevens, Charles F. (1995). The Six Core Theories of Modern Physics. MIT Press.
ISBN 0-262-69188-4.
• Tipler, Paul; Mosca, Gene (2007). Physics for Scientists and Engineers. 2 (6th ed.).
W. H. Freeman. ISBN 978-1429201339.
• Ulaby, Fawwaz T. (2007). Fundamentals of Applied Electromagnetics (5th ed.).
Pearson Education. ISBN 0-13-241326-4.

Graduate

• Jackson, J. D. (1999). Classical Electrodynamics (3rd ed.). Wiley. ISBN 0-471-30932-X.
• Panofsky, Wolfgang K. H.; Phillips, Melba (2005). Classical Electricity and
Magnetism (2nd ed.). Dover. ISBN 978-0486439242.

Older classics

• Lifshitz, Evgeny; Landau, Lev (1980). The Classical Theory of Fields (4th ed.).
Butterworth-Heinemann. ISBN 0750627689.
• Lifshitz, Evgeny; Landau, Lev; Pitaevskii, L. P. (1984). Electrodynamics of
Continuous Media (2nd ed.). Butterworth-Heinemann. ISBN 0750626348.
• Maxwell, James Clerk (1873). A Treatise on Electricity and Magnetism. Dover.
ISBN 0-486-60637-6.
• Misner, Charles W.; Thorne, Kip; Wheeler, John Archibald (1973). Gravitation. W.
H. Freeman. ISBN 0-7167-0344-0. Sets out the equations using differential forms.

Computational techniques

• Chew, W. C.; Jin, J.; Michielssen, E.; Song, J. (2001). Fast and Efficient Algorithms
in Computational Electromagnetics. Artech House. ISBN 1-58053-152-0.
• Harrington, R. F. (1993). Field Computation by Moment Methods. Wiley-IEEE Press.
ISBN 0-78031-014-4.
• Jin, J. (2002). The Finite Element Method in Electromagnetics (2nd ed.). Wiley-IEEE
Press. ISBN 0-47143-818-9.
• Lounesto, Pertti (1997). Clifford Algebras and Spinors. Cambridge University Press.
Chapter 8 sets out several variants of the equations using exterior algebra and
differential forms.
• Taflove, Allen; Hagness, Susan C. (2005). Computational Electrodynamics: The
Finite-Difference Time-Domain Method (3rd ed.). Artech House. ISBN 1-58053-832-
0.

External links


• Mathematical aspects of Maxwell's equation are discussed on the Dispersive PDE
Wiki.

Modern treatments

• Electromagnetism, B. Crowell, Fullerton College


• Lecture series: Relativity and electromagnetism, R. Fitzpatrick, University of Texas at
Austin
• Electromagnetic waves from Maxwell's equations on Project PHYSNET.
• MIT Video Lecture Series (36 x 50 minute lectures) (in .mp4 format) - Electricity and
Magnetism Taught by Professor Walter Lewin.

Historical

• James Clerk Maxwell, A Treatise on Electricity And Magnetism Vols 1 and 2 1904—
most readable edition with all corrections—Antique Books Collection suitable for free
reading online.
• Maxwell, J.C., A Treatise on Electricity And Magnetism - Volume 1 - 1873 - Posner
Memorial Collection - Carnegie Mellon University
• Maxwell, J.C., A Treatise on Electricity And Magnetism - Volume 2 - 1873 - Posner
Memorial Collection - Carnegie Mellon University
• On Faraday's Lines of Force - 1855/56 Maxwell's first paper (Part 1 & 2) - Compiled
by Blaze Labs Research (PDF)
• On Physical Lines of Force - 1861 Maxwell's 1861 paper describing magnetic lines of
Force - Predecessor to 1873 Treatise
• Maxwell, James Clerk, "A Dynamical Theory of the Electromagnetic Field",
Philosophical Transactions of the Royal Society of London 155, 459-512 (1865).
(This article accompanied a December 8, 1864 presentation by Maxwell to the Royal
Society.)
• Catt, Walton and Davidson. "The History of Displacement Current". Wireless World,
March 1979.
• Reprint from Dover Publications (ISBN 0-486-60636-8)
• Full text of 1904 Edition including full text search.
• A Dynamical Theory Of The Electromagnetic Field - 1865 Maxwell's 1865 paper
describing his 20 Equations in 20 Unknowns - Predecessor to the 1873 Treatise

Other

• Feynman's derivation of Maxwell equations and extra dimensions


• According to an article in Physicsweb, the Maxwell equations rate as "The greatest
equations ever".
• Nature Milestones: Photons -- Milestone 2 (1861) Maxwell's equations

Retrieved from "http://en.wikipedia.org/wiki/Maxwell%27s_equations"


Categories: Electrodynamics | Electromagnetism | Equations | Partial differential equations |
Fundamental physics concepts | James Clerk Maxwell

• This page was last modified on 8 July 2010 at 19:49.



Conservation of energy
From Wikipedia, the free encyclopedia



This article is about the law of conservation of energy in physics. For sustainable energy
resources, see Energy conservation.

The law of conservation of energy is an empirical law of physics. It states that the total
amount of energy in an isolated system remains constant over time (is said to be conserved
over time). A consequence of this law is that energy can neither be created nor destroyed; it
can only be transformed from one state to another. The only thing that can happen to energy
in a closed system is that it can change form, for instance chemical energy can become
kinetic energy.
Albert Einstein's theory of relativity shows that energy and mass are the same thing, and that
neither one appears without the other. Thus in closed systems, both mass and energy are
conserved separately, just as was understood in pre-relativistic physics. The new feature of
relativistic physics is that "matter" particles (such as those constituting atoms) could be
converted to non-matter forms of energy, such as light; or kinetic and potential energy
(example: heat). However, this conversion does not affect the total mass of systems, since the
latter forms of non-matter energy still retain their mass through any such conversion.[1]

Today, conservation of “energy” refers to the conservation of the total system energy over
time. This energy includes the energy associated with the rest mass of particles and all other
forms of energy in the system. In addition the invariant mass of systems of particles (the mass
of the system as seen in its center of mass inertial frame, such as the frame in which it would
need to be weighed), is also conserved over time for any single observer, and (unlike the total
energy) is the same value for all observers. Therefore, in an isolated system, although matter
(particles with rest mass) and "pure energy" (heat and light) can be converted to one another,
both the total amount of energy and the total amount of mass of such systems remain constant
over time, as seen by any single observer. If energy in any form is allowed to escape such
systems (see binding energy) the mass of the system will decrease in correspondence with the
loss.

A consequence of the law of energy conservation is that perpetual motion machines can only
work perpetually if they deliver no energy to their surroundings. If such machines produced
more energy than was put into them, they would have to lose mass and would thus eventually
disappear, and are therefore not possible.

Contents

• 1 History
• 2 The first law of thermodynamics
• 3 Mechanics
o 3.1 Noether's theorem
o 3.2 Relativity
o 3.3 Quantum theory
• 4 See also
• 5 Notes
• 6 References
o 6.1 Modern accounts
o 6.2 History of ideas

• 7 External links

History
Ancient philosophers as far back as Thales of Miletus had inklings of the conservation of
some underlying substance of which everything is made. However, there is no particular reason to identify this with what
we know today as "mass-energy" (for example, Thales thought it was water). In 1638,
Galileo published his analysis of several situations—including the celebrated "interrupted
pendulum"—which can be described (in modern language) as conservatively converting
potential energy to kinetic energy and back again. It was Gottfried Wilhelm Leibniz during
1676–1689 who first attempted a mathematical formulation of the kind of energy which is
connected with motion (kinetic energy). Leibniz noticed that in many mechanical systems (of
several masses $m_i$, each with velocity $v_i$), the quantity

$$\sum_i m_i v_i^2$$

was conserved so long as the masses did not interact. He called this quantity the vis viva or
living force of the system. The principle represents an accurate statement of the approximate
conservation of kinetic energy in situations where there is no friction. Many physicists at that
time held that the conservation of momentum, which holds even in systems with friction, as
defined by the momentum

$$\sum_i m_i v_i,$$
was the conserved vis viva. It was later shown that, under the proper conditions, both
quantities are conserved simultaneously such as in elastic collisions.
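The simultaneous conservation of momentum and vis viva (kinetic energy) in an elastic collision can be checked numerically. The sketch below, with illustrative masses and velocities, uses the standard one-dimensional elastic-collision formulas:

```python
# Illustrative check: in a 1-D elastic collision both momentum (m*v)
# and kinetic energy (0.5*m*v**2) are conserved simultaneously.
def elastic_collision(m1, u1, m2, u2):
    """Final velocities of two bodies after a 1-D elastic collision."""
    v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

m1, u1, m2, u2 = 2.0, 3.0, 1.0, -1.0   # illustrative values
v1, v2 = elastic_collision(m1, u1, m2, u2)

p_before = m1 * u1 + m2 * u2
p_after = m1 * v1 + m2 * v2
ke_before = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2
ke_after = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2

assert abs(p_before - p_after) < 1e-12   # momentum conserved
assert abs(ke_before - ke_after) < 1e-12 # kinetic energy conserved
```

With friction or an inelastic collision only the momentum assertion would hold, which is precisely the distinction the engineers of the period were debating.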

It was largely engineers such as John Smeaton, Peter Ewart, Karl Holtzmann, Gustave-Adolphe
Hirn and Marc Seguin who objected that conservation of momentum alone was not
adequate for practical calculation and who made use of Leibniz's principle. The principle was
also championed by some chemists such as William Hyde Wollaston. Academics such as
John Playfair were quick to point out that kinetic energy is clearly not conserved. This is
obvious to a modern analysis based on the second law of thermodynamics but in the 18th and
19th centuries, the fate of the lost energy was still unknown. Gradually it came to be
suspected that the heat inevitably generated by motion under friction was another form of vis
viva. In 1783, Antoine Lavoisier and Pierre-Simon Laplace reviewed the two competing
theories of vis viva and caloric theory.[2] Count Rumford's 1798 observations of heat
generation during the boring of cannons added more weight to the view that mechanical
motion could be converted into heat, and (as importantly) that the conversion was
quantitative and could be predicted (allowing for a universal conversion constant between
kinetic energy and heat). Vis viva now started to be known as energy, after the term was first
used in that sense by Thomas Young in 1807.

The recalibration of vis viva to

$$\tfrac{1}{2}\sum_i m_i v_i^2,$$
which can be understood as finding the exact value for the kinetic energy to work conversion
constant, was largely the result of the work of Gaspard-Gustave Coriolis and Jean-Victor
Poncelet over the period 1819–1839. The former called the quantity quantité de travail
(quantity of work) and the latter, travail mécanique (mechanical work), and both championed
its use in engineering calculation.
In a paper Über die Natur der Wärme, published in the Zeitschrift für Physik in 1837, Karl
Friedrich Mohr gave one of the earliest general statements of the doctrine of the conservation
of energy in the words: "besides the 54 known chemical elements there is in the physical
world one agent only, and this is called Kraft [energy or work]. It may appear, according to
circumstances, as motion, chemical affinity, cohesion, electricity, light and magnetism; and
from any one of these forms it can be transformed into any of the others."

A key stage in the development of the modern conservation principle was the demonstration
of the mechanical equivalent of heat. The caloric theory maintained that heat could neither be
created nor destroyed but conservation of energy entails the contrary principle that heat and
mechanical work are interchangeable.

The mechanical equivalence principle was first stated in its modern form by the German
surgeon Julius Robert von Mayer.[3] Mayer reached his conclusion on a voyage to the Dutch
East Indies, where he found that his patients' blood was a deeper red because they were
consuming less oxygen, and therefore less energy, to maintain their body temperature in the
hotter climate. He had discovered that heat and mechanical work were both forms of energy,
and later, after improving his knowledge of physics, he calculated a quantitative relationship
between them.

Joule's apparatus for measuring the mechanical equivalent of heat. A descending weight
attached to a string causes a paddle immersed in water to rotate.

Meanwhile, in 1843 James Prescott Joule independently discovered the mechanical
equivalent in a series of experiments. In the most famous, now called the "Joule apparatus", a
descending weight attached to a string caused a paddle immersed in water to rotate. He
showed that the gravitational potential energy lost by the weight in descending was equal to
the thermal energy (heat) gained by the water by friction with the paddle.
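The balance Joule measured can be illustrated with a back-of-the-envelope calculation (the masses and height below are hypothetical, not Joule's actual figures): the potential energy $mgh$ lost by the descending weight reappears as heat $m_w c\,\Delta T$ in the water.

```python
# Hypothetical numbers illustrating Joule's result: gravitational
# potential energy lost by the weight equals heat gained by the water.
g = 9.81            # m/s^2, gravitational acceleration
m_weight = 10.0     # kg, descending weight (hypothetical)
h = 2.0             # m, distance descended (hypothetical)
m_water = 0.5       # kg of water in the vessel (hypothetical)
c_water = 4186.0    # J/(kg*K), specific heat capacity of water

energy = m_weight * g * h                 # J of potential energy released
delta_T = energy / (m_water * c_water)    # resulting temperature rise (K)

print(energy, delta_T)
```

The tiny temperature rise (well under 0.1 K here) shows why the experiment demanded such careful thermometry.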

Over the period 1840–1843, similar work was carried out by engineer Ludwig A. Colding
though it was little known outside his native Denmark.

Both Joule's and Mayer's work suffered from resistance and neglect but it was Joule's that,
perhaps unjustly, eventually drew the wider recognition.

For the dispute between Joule and Mayer over priority, see Mechanical equivalent of
heat: Priority

In 1844, William Robert Grove postulated a relationship between mechanics, heat, light,
electricity and magnetism by treating them all as manifestations of a single "force" (energy in
modern terms). Grove published his theories in his book The Correlation of Physical Forces.[4]
In 1847, drawing on the earlier work of Joule, Sadi Carnot and Émile Clapeyron, Hermann
von Helmholtz arrived at conclusions similar to Grove's and published his theories in his
book Über die Erhaltung der Kraft (On the Conservation of Force, 1847). The general
modern acceptance of the principle stems from this publication.

In 1877, Peter Guthrie Tait claimed that the principle originated with Sir Isaac Newton, based
on a creative reading of propositions 40 and 41 of the Philosophiae Naturalis Principia
Mathematica. This is now generally regarded as nothing more than an example of Whig
history.

The first law of thermodynamics



Entropy is a function of a quantity of heat which shows the possibility of conversion of that
heat into work.

Main article: First law of thermodynamics

For a thermodynamic system with a fixed number of particles, the first law of
thermodynamics may be stated as

$$\delta Q = \mathrm{d}U + \delta W,$$

or equivalently,

$$\mathrm{d}U = \delta Q - \delta W,$$

where δQ is the amount of energy added to the system by a heating process, δW is the
amount of energy lost by the system due to work done by the system on its surroundings and
dU is the change in the internal energy of the system.
The δ's before the heat and work terms are used to indicate that they describe an increment of
energy which is to be interpreted somewhat differently than the dU increment of internal
energy (see Inexact differential). Work and heat are processes which add or subtract energy,
while the internal energy U is a particular form of energy associated with the system. Thus
the term "heat energy" for δQ means "that amount of energy added as the result of heating"
rather than referring to a particular form of energy. Likewise, the term "work energy" for δW
means "that amount of energy lost as the result of work". The most significant result of this
distinction is the fact that one can clearly state the amount of internal energy possessed by a
thermodynamic system, but one cannot tell how much energy has flowed into or out of the
system as a result of its being heated or cooled, nor as the result of work being performed on
or by the system. In simple terms, this means that energy cannot be created or destroyed, only
converted from one form to another.
For a simple compressible system, the work performed by the system may be written

$$\delta W = P\,\mathrm{d}V,$$
where P is the pressure and dV is a small change in the volume of the system, each of which
are system variables. The heat energy may be written

$$\delta Q = T\,\mathrm{d}S,$$
where T is the temperature and dS is a small change in the entropy of the system.
Temperature and entropy are also system variables.
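As a consistency check of the first law together with $\delta W = P\,\mathrm{d}V$, one can numerically integrate $P\,\mathrm{d}V$ for an isothermal ideal-gas expansion, where $\Delta U = 0$ and hence $Q = W = nRT\ln(V_2/V_1)$. The values below are illustrative:

```python
import math

# Isothermal expansion of an ideal gas: dU = 0, so Q = W = n*R*T*ln(V2/V1).
# Verify that numerically integrating P dV reproduces the closed form.
n, R, T = 1.0, 8.314, 300.0       # mol, J/(mol*K), K (illustrative values)
V1, V2 = 1.0e-3, 2.0e-3           # m^3, initial and final volumes

def pressure(V):
    return n * R * T / V          # ideal-gas law: P = nRT/V

# Trapezoidal integration of W = integral of P dV from V1 to V2
steps = 100_000
dV = (V2 - V1) / steps
W = sum(0.5 * (pressure(V1 + i * dV) + pressure(V1 + (i + 1) * dV)) * dV
        for i in range(steps))

W_exact = n * R * T * math.log(V2 / V1)
assert abs(W - W_exact) / W_exact < 1e-6   # numeric and exact work agree
```

Since the internal energy of an ideal gas depends only on temperature, $\mathrm{d}U = 0$ here and all the heat supplied goes into work.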

Mechanics
In mechanics, conservation of energy is usually stated as

$$E = T + V = \text{constant},$$
where T is kinetic and V potential energy.

Actually this is a particular case of the more general conservation law

$$\frac{\partial L}{\partial t} = 0$$

and

$$\frac{\partial}{\partial t}\left(\sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L\right) = 0,$$
where L is the Lagrangian function. For this particular form to be valid, the following must be
true:

• The system is scleronomous (neither kinetic nor potential energy are explicit
functions of time)
• The kinetic energy is a quadratic form with regard to velocities.
• The potential energy doesn't depend on velocities.
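For a scleronomous system such as a mass on a spring, the constancy of $T + V$ can be verified in a short simulation. The sketch below (with made-up parameters) uses velocity Verlet integration, which keeps the energy error small and bounded:

```python
# Mass on a spring: check that E = T + V stays (nearly) constant.
# Velocity Verlet integration; parameters are illustrative.
m, k = 1.0, 4.0          # mass (kg) and spring constant (N/m)
x, v = 1.0, 0.0          # initial position (m) and velocity (m/s)
dt = 1e-3                # time step (s)

def energy(x, v):
    return 0.5 * m * v**2 + 0.5 * k * x**2   # kinetic + potential

E0 = energy(x, v)
a = -k * x / m           # acceleration from Hooke's law
for _ in range(10_000):  # simulate 10 seconds
    x += v * dt + 0.5 * a * dt**2
    a_new = -k * x / m
    v += 0.5 * (a + a_new) * dt
    a = a_new

assert abs(energy(x, v) - E0) / E0 < 1e-4   # energy conserved to ~0.01%
```

Adding an explicit time dependence to the potential (a driven spring, say) would break this invariance, exactly as the scleronomous condition above requires.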

Noether's theorem

Main article: Noether's theorem

The conservation of energy is a common feature in many physical theories. From a
mathematical point of view it is understood as a consequence of Noether's theorem, which
states that every continuous symmetry of a physical theory has an associated conserved
quantity; if the theory's symmetry is time invariance, then the conserved quantity is called
"energy". The energy conservation law is a consequence of the shift symmetry of time;
energy conservation is implied by the empirical fact that the laws of physics do not change
with time itself. Philosophically this can be stated as "nothing depends on time per se". In
other words, if the theory is invariant under the continuous symmetry of time translation,
then its energy (the quantity canonically conjugate to time) is conserved. Conversely, theories
which are not invariant under shifts in time (for example, systems with time-dependent
potential energy) do not exhibit conservation of energy, unless we consider them to exchange
energy with another, external system so that the theory of the enlarged system becomes time
invariant again. Since any time-varying theory can be embedded within a time-invariant
meta-theory, energy conservation can always be recovered by a suitable redefinition of what
energy is. Thus conservation of energy for finite systems is valid in such modern physical
theories as special relativity and quantum theory (including QED) in flat space-time.

Relativity

With the discovery of special relativity by Albert Einstein, energy was proposed to be one
component of an energy-momentum 4-vector. Each of the four components (one of energy
and three of momentum) of this vector is separately conserved across time, in any closed
system, as seen from any given inertial reference frame. Also conserved is the vector length
(Minkowski norm), which is the rest mass for single particles, and the invariant mass for
systems of particles (where momenta and energy are separately summed before the length is
calculated—see the article on invariant mass).

The relativistic energy of a single massive particle contains a term related to its rest mass in
addition to its kinetic energy of motion. In the limit of zero kinetic energy (or equivalently in
the rest frame) of a massive particle; or else in the center of momentum frame for objects or
systems which retain kinetic energy, the total energy of particle or object (including internal
kinetic energy in systems) is related to its rest mass or its invariant mass via the famous
equation E = mc2.

Thus, the rule of conservation of energy over time in special relativity continues to hold, so
long as the reference frame of the observer is unchanged. This applies to the total energy of
systems, although different observers disagree as to the energy value. Also conserved, and
invariant to all observers, is the invariant mass, which is the minimal system mass and energy
that can be seen by any observer, and which is defined by the energy–momentum relation.
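The invariant mass mentioned above follows from the energy-momentum relation $m^2c^4 = E^2 - |\mathbf{p}|^2c^2$. A small sketch (in units where $c = 1$) shows a characteristic consequence: two photons of equal energy moving in opposite directions are individually massless, yet the two-photon system has invariant mass $2E$.

```python
import math

# Energy-momentum relation, in units where c = 1: m^2 = E^2 - |p|^2.
def invariant_mass(particles):
    """particles: list of (E, px, py, pz) four-momenta; returns system mass."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back photons, each of energy 1 (massless individually):
photons = [(1.0, 1.0, 0.0, 0.0), (1.0, -1.0, 0.0, 0.0)]
assert abs(invariant_mass([photons[0]])) < 1e-12   # single photon: m = 0
assert abs(invariant_mass(photons) - 2.0) < 1e-12  # system: m = 2E
```

Note that momenta and energies are summed over the system before the Minkowski norm is taken, as the text describes; summing the individual masses (0 + 0) would give the wrong answer.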

In general relativity, conservation of energy-momentum is expressed with the aid of a
stress-energy-momentum pseudotensor.

Quantum theory

In quantum mechanics, energy is defined as proportional to the time derivative of the wave
function. Lack of commutativity of the time derivative operator with the time operator itself
mathematically results in an uncertainty principle for time and energy: the longer the period
of time, the more precisely energy can be defined (energy and time become a conjugate
Fourier pair).

See also


• Chaos theory
• Conservation law
• Conservation of mass
• Groundwater energy balance
• Laws of thermodynamics
• Principles of energetics
• Uncertainty principle
• Energy transformation
• Energy quality

[edit] Notes
1. ^ Taylor, Edwin F.; Wheeler, John A. (1992), Spacetime Physics, W.H. Freeman and Co.,
NY., pp. 248–9, ISBN 0-7167-2327-1 Discussion of mass remaining constant after detonation
of nuclear bombs, for example, until heat is allowed to escape.
2. ^ Lavoisier, A.L. & Laplace, P.S. (1780) "Memoir on Heat", Académie Royale des Sciences
pp4-355
3. ^ von Mayer, J.R. (1842) "Remarks on the forces of inorganic nature" in Annalen der Chemie
und Pharmacie, 43, 233
4. ^ Grove, W. R. (1874). The Correlation of Physical Forces (6th ed.). London: Longmans,
Green.

[edit] References
[edit] Modern accounts

• Goldstein, Martin, and Inge F., 1993. The Refrigerator and the Universe. Harvard
Univ. Press. A gentle introduction.
• Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman
Company. ISBN 0-7167-1088-9.
• Nolan, Peter J. (1996). Fundamentals of College Physics, 2nd ed.. William C. Brown
Publishers.
• Oxtoby & Nachtrieb (1996). Principles of Modern Chemistry, 3rd ed.. Saunders
College Publishing.
• Papineau, D. (2002). Thinking about Consciousness. Oxford: Oxford University
Press.
• Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers
(6th ed.). Brooks/Cole. ISBN 0-534-40842-7.
• Stenger, Victor J. (2000). Timeless Reality. Prometheus Books. Especially chpt. 12.
Nontechnical.
• Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations
and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4.
• Lanczos, Cornelius (1970). The Variational Principles of Mechanics. Toronto:
University of Toronto Press. ISBN 0-8020-1743-6.

[edit] History of ideas

• Brown, T.M. (1965). "Resource letter EEC-1 on the evolution of energy concepts
from Galileo to Helmholtz". American Journal of Physics 33: 759–765.
doi:10.1119/1.1970980.
• Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the
Early Industrial Age. London: Heinemann. ISBN 0-435-54150-1.
• Guillen, M. (1999). Five Equations That Changed the World. New York: Abacus.
ISBN 0-349-11064-6.
• Hiebert, E.N. (1981). Historical Roots of the Principle of Conservation of Energy.
Madison, Wis.: Ayer Co Pub. ISBN 0-405-13880-6.
• Kuhn, T.S. (1957) “Energy conservation as an example of simultaneous discovery”,
in M. Clagett (ed.) Critical Problems in the History of Science pp.321–56
• Sarton, G. (1929). "The discovery of the law of conservation of energy". Isis 13: 18–
49. doi:10.1086/346430.
• Smith, C. (1998). The Science of Energy: Cultural History of Energy Physics in
Victorian Britain. London: Heinemann. ISBN 0-485-11431-3.
• Mach, E. (1872). History and Root of the Principles of the Conservation of Energy.
Open Court Pub. Co., IL.
• Poincaré, H. (1905). Science and Hypothesis. Walter Scott Publishing Co. Ltd; Dover
reprint, 1952. ISBN 0-486-60221-4., Chapter 8, "Energy and Thermo-dynamics"

[edit] External links


• MISN-0-158 The First Law of Thermodynamics (PDF file) by Jerzy Borysowicz for
Project PHYSNET.

Retrieved from "http://en.wikipedia.org/wiki/Conservation_of_energy"


Categories: Energy in physics | Laws of thermodynamics | Conservation laws | History of
physics | History of ideas


• This page was last modified on 25 June 2010 at 01:32.


• Text is available under the Creative Commons Attribution-ShareAlike License;
additional terms may apply. See Terms of Use for details.
Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit
organization.

Mesh analysis
From Wikipedia, the free encyclopedia



Figure 1: Essential meshes of the planar circuit, labeled 1, 2, and 3. R1, R2, R3, 1/sC, and Ls
represent the impedances of the resistors, the capacitor, and the inductor in the s-domain. Vs
and Is are the values of the voltage source and current source, respectively.

Mesh analysis (sometimes referred to as loop analysis or the mesh current method) is a
method used to solve planar circuits for the voltages and currents at any place in the
circuit. Planar circuits are circuits that can be drawn on a plane with no wires crossing
each other. Mesh analysis uses Kirchhoff's voltage law to solve these planar circuits. The
advantage of using mesh analysis is that it creates a systematic approach to solving planar
circuits and reduces the number of equations needed to solve the circuit for all of the voltages
and currents.[1]

Contents

• 1 Mesh currents and essential meshes
• 2 Setting up the equations
• 3 Special cases
o 3.1 Supermesh
o 3.2 Dependent sources
• 4 See also
• 5 External links
• 6 References

[edit] Mesh currents and essential meshes

Figure 2: Circuit with Mesh Currents Labeled as i1, i2, and i3. The arrows show the direction
of the mesh current.
Mesh analysis works by arbitrarily assigning mesh currents in the essential meshes. An
essential mesh is a loop in the circuit that does not contain any other loop. When looking at a
circuit schematic, the essential meshes look like a “window pane”. Figure 1 labels the
essential meshes with one, two, and three. Once the essential meshes are found, the mesh
currents need to be labeled.[2]

A mesh current is a current that loops around the essential mesh. The mesh current might not
have a physical meaning but it is used to set up the mesh analysis equations.[1] When
assigning the mesh currents it is important to have all the mesh currents loop in the same
direction. This will help prevent errors when writing out the equations. The convention is to
have all the mesh currents looping in a clockwise direction.[2] Figure 2 shows the same circuit
shown before but with the mesh currents labeled.

The reason to use mesh currents instead of just using KCL and KVL to solve a problem is
that the mesh currents can account for any unnecessary currents that may be drawn in when
using KCL and KVL. Mesh analysis ensures that the least possible number of equations
regarding currents is used, greatly simplifying the problem.

[edit] Setting up the equations

Figure 3: Simple Circuit using Mesh Analysis

After labeling the mesh currents, one only needs to write one equation per mesh in order to
solve for all the currents in the circuit. Each equation is the sum of the voltage drops in a
complete loop of the mesh current.[2] For components other than current and voltage sources, the
voltage drop is the impedance of the component multiplied by the mesh current in
that loop. It is important to note that if a component lies between two essential meshes, its
voltage drop is the impedance of the component times the present mesh
current minus the neighboring mesh current (computing the subtraction first).[3]

If a voltage source is present within the mesh loop, the source voltage is either added or
subtracted depending on whether it is a voltage drop or a voltage rise in the direction of the mesh
current. For a current source that is not contained between two meshes, the mesh current
takes the positive or negative value of the current source, depending on whether the mesh current is in
the same or the opposite direction as the current source.[2] The following is the same circuit from
above with the equations needed to solve for all the currents in the circuit.
Once the equations are found, the system of linear equations can be solved using any
technique for solving linear equations.
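
As a worked sketch of this procedure (the circuit and component values here are hypothetical, not those of Figure 3): take two clockwise mesh currents i1 and i2, with a voltage source Vs and resistor R1 in mesh 1, R2 shared between the meshes, and R3 in mesh 2. KVL around each mesh gives a 2×2 linear system:

```python
# Mesh analysis of a hypothetical two-mesh resistive circuit.
# Mesh 1: Vs and R1; R2 is shared between meshes; Mesh 2: R3.
# KVL around each mesh (clockwise mesh currents i1, i2):
#   (R1 + R2)*i1 -        R2*i2 = Vs
#        -R2*i1 + (R2 + R3)*i2 = 0

Vs = 10.0                     # volts
R1, R2, R3 = 2.0, 4.0, 4.0    # ohms

# Solve the 2x2 system with Cramer's rule.
det = (R1 + R2) * (R2 + R3) - R2 * R2
i1 = Vs * (R2 + R3) / det
i2 = Vs * R2 / det

print(f"i1 = {i1} A, i2 = {i2} A")   # i1 = 2.5 A, i2 = 1.25 A

# Sanity check: the KVL sum around mesh 1 must equal Vs.
assert abs(R1 * i1 + R2 * (i1 - i2) - Vs) < 1e-12
```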

[edit] Special cases


There are two special cases in mesh current: supermesh and dependent sources.

[edit] Supermesh

Figure 4: Circuit with a supermesh. A supermesh occurs because the current source lies
between the essential meshes.

A supermesh occurs when a current source is contained between two essential meshes. To
handle the supermesh, first treat the circuit as if the current source is not there. This leads to
one equation that incorporates two mesh currents. Once this equation is formed, an equation
is needed that relates the two mesh currents with the current source. This will be an equation
where the current source is equal to one of the mesh currents minus the other. The following
is a simple example of dealing with a supermesh.[1]
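
A minimal numeric sketch of the supermesh procedure, with hypothetical values:

```python
# Supermesh example (hypothetical values): a current source Is sits
# between mesh 1 and mesh 2, so KVL is written around the supermesh
# (the outer loop, ignoring the current source) and one constraint
# equation ties the two mesh currents to Is.
#   Supermesh KVL:  R1*i1 + R2*i2 = Vs
#   Constraint:     i2 - i1 = Is

Vs, Is = 10.0, 2.0           # volts, amperes
R1, R2 = 1.0, 2.0            # ohms

# Substitute i2 = i1 + Is into the supermesh equation and solve for i1.
i1 = (Vs - R2 * Is) / (R1 + R2)
i2 = i1 + Is

print(f"i1 = {i1} A, i2 = {i2} A")   # i1 = 2.0 A, i2 = 4.0 A

# Check the supermesh KVL.
assert abs(R1 * i1 + R2 * i2 - Vs) < 1e-12
```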

[edit] Dependent sources

Figure 5: Circuit with dependent source. ix is the current that the dependent voltage source
depends on.
A dependent source is a current source or voltage source whose value depends on the voltage
across or current through another element in the circuit. When a dependent source is contained within an
essential mesh, it should be treated like an independent source. After the mesh
equation is formed, a dependent-source equation is needed. This equation is generally called a
constraint equation: an equation that relates the dependent source's controlling variable to the
voltage or current in the circuit that the source depends on. The following is a simple
example of a dependent source.[1]
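
A minimal numeric sketch of the constraint-equation procedure, with hypothetical values:

```python
# Dependent-source example (hypothetical values): mesh 2 contains a
# dependent voltage source of value k*ix, where the controlling
# current ix is the mesh-1 current through R1 (so the constraint is ix = i1).
#   Mesh 1:  (R1 + R2)*i1 - R2*i2         = Vs
#   Mesh 2:  -R2*i1 + (R2 + R3)*i2 + k*ix = 0

Vs = 10.0
R1, R2, R3 = 1.0, 2.0, 3.0   # ohms
k = 4.0                      # ohms (volts per ampere of ix)

# After substituting the constraint ix = i1, the system is linear in i1, i2.
a, b, e = R1 + R2, -R2, Vs            # mesh-1 row:  a*i1 + b*i2 = e
c, d, f = -R2 + k, R2 + R3, 0.0       # mesh-2 row (k folded in via ix = i1)

det = a * d - b * c
i1 = (e * d - b * f) / det
i2 = (a * f - e * c) / det

print(f"i1 = {i1:.4f} A, i2 = {i2:.4f} A")  # i1 ≈ 2.6316 A, i2 ≈ -1.0526 A
```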

[edit] See also


• Ohm's law
• Analysis of resistive circuits
• Nodal analysis
• Kirchhoff's circuit laws
• Source transformation

[edit] External links


• Mesh current method
• Three-mesh problem solver

[edit] References
1. ^ a b c d Nilsson, James W., & Riedel, Susan A. (2002). Introductory Circuits for Electrical
and Computer Engineering. New Jersey: Prentice Hall.
2. ^ a b c d Lueg, Russell E., & Reinhard, Erwin A. (1972). Basic Electronics for Engineers and
Scientists (2nd ed.). New York: International Textbook Company.
3. ^ Puckett, Russell E., & Romanowitz, Harry A. (1976). Introduction to Electronics (2nd ed.).
San Francisco: John Wiley and Sons, Inc.

Retrieved from "http://en.wikipedia.org/wiki/Mesh_analysis"


Categories: Electrical engineering | Electronic engineering | Electrical circuits | Electronic
circuits | Electronic design


• This page was last modified on 1 June 2010 at 07:25.



SPICE
From Wikipedia, the free encyclopedia



For other uses, see SPICE (disambiguation).

SPICE (Simulation Program with Integrated Circuit Emphasis)[1][2] is a general-purpose
open source analog electronic circuit simulator. It is a powerful program that is used in IC
and board-level design to check the integrity of circuit designs and to predict circuit behavior.

Contents

• 1 Introduction
• 2 Origins
• 3 Program features and structure
o 3.1 Analyses
o 3.2 Device models
o 3.3 Input and output: Netlists, schematic capture and plotting
• 4 See also
• 5 References
• 6 External links
o 6.1 Histories, original papers
o 6.2 Versions with source code available
o 6.3 Tutorials, user information

o 6.4 Applications

[edit] Introduction

Integrated circuits, unlike board-level designs composed of discrete parts, are impossible to
breadboard before manufacture. Further, the high costs of photolithographic masks and other
manufacturing prerequisites make it essential to design the circuit to be as close to perfect as
possible before the integrated circuit is first built. Simulating the circuit with SPICE is the
industry-standard way to verify circuit operation at the transistor level before committing to
manufacturing an integrated circuit.

Board-level circuit designs can often be breadboarded for testing. Even with a breadboard,
some circuit properties may not be accurate compared to the final printed wiring board, such
as parasitic resistances and capacitances. These parasitic components can often be estimated
more accurately using SPICE simulation. Also, designers may want more information about
the circuit than is available from a single mock-up. For instance, circuit performance is
affected by component manufacturing tolerances. In these cases it is common to use SPICE
to perform Monte Carlo simulations of the effect of component variations on performance, a
task which is impractical using calculations by hand for a circuit of any appreciable
complexity.

Circuit simulation programs, of which SPICE and derivatives are the most prominent, take a
text netlist describing the circuit elements (transistors, resistors, capacitors, etc.) and their
connections, and translate[3] this description into equations to be solved. The general
equations produced are nonlinear differential algebraic equations which are solved using
implicit integration methods, Newton's method and sparse matrix techniques.
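
The core numerical idea can be sketched in miniature: a Newton iteration for the DC operating point of a single resistor-diode loop. All component values and the one-dimensional solver below are illustrative assumptions, not SPICE's actual implementation.

```python
import math

# Newton's method for the DC operating point of a hypothetical circuit:
# a voltage source Vs feeds a resistor R in series with a diode to ground.
# KCL at the diode node: Isat*(exp(V/Vt) - 1) - (Vs - V)/R = 0

Vs, R = 5.0, 1000.0      # volts, ohms
Isat, Vt = 1e-12, 0.025  # diode saturation current (A), thermal voltage (V)

def f(V):                # current mismatch at the node
    return Isat * (math.exp(V / Vt) - 1.0) - (Vs - V) / R

def dfdV(V):             # its derivative (the "Jacobian" in one dimension)
    return (Isat / Vt) * math.exp(V / Vt) + 1.0 / R

V = 0.6                  # initial guess near a typical diode drop
for _ in range(50):      # Newton iteration, as in operating-point analysis
    step = f(V) / dfdV(V)
    V -= step
    if abs(step) < 1e-12:
        break

print(f"diode voltage ≈ {V:.4f} V")   # about 0.555 V for these values
```

Real simulators solve the same kind of equations for thousands of nodes at once, which is where the sparse-matrix techniques mentioned above come in.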

[edit] Origins
SPICE was developed at the Electronics Research Laboratory of the University of California,
Berkeley by Laurence Nagel with direction from his research advisor, Prof. Donald Pederson.
SPICE1 was largely a derivative of the CANCER program,[4] which Nagel had worked on
under Prof. Ronald Rohrer. CANCER was an acronym for "Computer Analysis of Nonlinear
Circuits, Excluding Radiation," a nod to the political climate at Berkeley in the 1960s: at the
time, many circuit simulators were developed under United States Department of Defense
contracts that required the capability to evaluate the radiation hardness of a circuit. When Nagel's
original advisor, Prof. Rohrer, left Berkeley, Prof. Pederson became his advisor. Pederson
insisted that CANCER, a proprietary program, be rewritten enough that restrictions could be
removed and the program could be put in the public domain.[5]

SPICE1 was first presented at a conference in 1973.[1] SPICE1 was coded in FORTRAN and
used nodal analysis to construct the circuit equations. Nodal analysis has limitations in
representing inductors, floating voltage sources and the various forms of controlled sources.
SPICE1 had relatively few circuit elements available and used a fixed-timestep transient
analysis. The real popularity of SPICE started with SPICE2[2] in 1975. SPICE2, also coded in
FORTRAN, was a much-improved program with more circuit elements, variable timestep
transient analysis using either trapezoidal or the Gear integration method (also known as
BDF), equation formulation via modified nodal analysis[6] (avoiding the limitations of nodal
analysis), and an innovative FORTRAN-based memory allocation system developed by
another graduate student, Ellis Cohen. The last FORTRAN version of SPICE was 2G.6 in
1983. SPICE3[7] was developed by Thomas Quarles (with A. Richard Newton as advisor) in
1989. It is written in C, uses the same netlist syntax, and added X Window System plotting.

As an early open source program, SPICE was widely distributed and used. Its ubiquity
became such that "to SPICE a circuit" remains synonymous with circuit simulation.[8] SPICE
source code was from the beginning distributed by UC Berkeley for a nominal charge (to
cover the cost of magnetic tape). The license originally included distribution restrictions for
countries not considered friendly to the USA, but the source code is currently covered by the
BSD license.

SPICE inspired and served as a basis for many other circuit simulation programs, in
academia, in industry, and in commercial products. The first commercial version of SPICE
was ISPICE[9], an interactive version on a timeshare service, National CSS. The most
prominent commercial versions of SPICE include HSPICE (originally commercialized by
Shawn and Kim Hailey of Meta Software, but now owned by Synopsys) and PSPICE (now
owned by Cadence Design Systems). The academic spinoffs of SPICE include XSPICE,
developed at Georgia Tech, which added mixed analog/digital "code models" for behavioral
simulation, and Cider (previously CODECS, from UC Berkeley/Oregon State Univ.) which
added semiconductor device simulation. The integrated circuit industry adopted SPICE
quickly, and until commercial versions became well developed many IC design houses had
proprietary versions of SPICE.[10] Today a few IC manufacturers, typically the larger
companies, have groups continuing to develop SPICE-based circuit simulation programs.
Among these are ADICE at Analog Devices, LTspice at Linear Technology, Mica at
Freescale Semiconductor, and TISPICE at Texas Instruments. (Other companies maintain
internal circuit simulators which are not directly based upon SPICE, among them PowerSpice
at IBM, Titan at Qimonda, Lynx at Intel Corporation, and Pstar at NXP Semiconductor.)

[edit] Program features and structure


SPICE became popular because it contained the analyses and models needed to design
integrated circuits of the time, and was robust enough and fast enough to be practical to use.[11]
Precursors to SPICE often had a single purpose: the BIAS[12] program, for example,
simulated the operating points of bipolar transistor circuits; the SLIC[13] program did only
small-signal analyses. SPICE combined operating-point solutions, transient analysis, and
various small-signal analyses with the circuit elements and device models needed to
successfully simulate many circuits.

[edit] Analyses

SPICE2 included these analyses:

• AC analysis (linear small-signal frequency domain analysis)


• DC analysis (nonlinear quiescent point calculation)
• DC transfer curve analysis (a sequence of nonlinear operating points calculated while
sweeping an input voltage or current, or a circuit parameter)
• Noise analysis (a small signal analysis done using an adjoint matrix technique which
sums uncorrelated noise currents at a chosen output point)
• Transfer function analysis (a small-signal input/output gain and impedance
calculation)
• Transient analysis (time-domain large-signal solution of nonlinear differential
algebraic equations)

Since SPICE is generally used to model nonlinear circuits, the small signal analyses are
necessarily preceded by a quiescent point calculation at which the circuit is linearized.
SPICE2 also contained code for other small-signal analyses: sensitivity analysis, pole-zero
analysis, and small-signal distortion analysis. Analysis at various temperatures was done by
automatically updating semiconductor model parameters for temperature, allowing the circuit
to be simulated at temperature extremes.
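
The linearization step can be illustrated with a minimal sketch (the bias current, thermal voltage, and resistor value are hypothetical): a diode carrying quiescent current Id has incremental resistance rd = Vt/Id, and for small signals a series resistor R driving it acts as a voltage divider:

```python
# Small-signal (AC) sketch: linearize a diode at its quiescent point.
# At a bias current Id, the diode's incremental resistance is rd = Vt/Id,
# and a series resistor R driving it forms a voltage divider for small signals.

Vt = 0.025        # thermal voltage, volts
Id = 1.0e-3       # quiescent diode current, amperes (from a DC analysis)
R = 1000.0        # series source resistance, ohms

rd = Vt / Id                  # incremental (small-signal) resistance: 25 ohms
gain = rd / (rd + R)          # small-signal voltage gain of the divider

print(f"rd = {rd} ohms, gain = {gain:.4f}")   # rd = 25.0 ohms, gain ≈ 0.0244
```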

Other circuit simulators have since added many analyses beyond those in SPICE2 to address
changing industry requirements. Parametric sweeps were added to analyze circuit
performance with changing manufacturing tolerances or operating conditions. Loop gain and
stability calculations were added for analog circuits. Harmonic balance or time-domain
steady state analyses were added for RF and switched-capacitor circuit design. However, a
public-domain circuit simulator containing the modern analyses and features needed to
become a successor in popularity to SPICE has not yet emerged.[11]

[edit] Device models

SPICE2 included many semiconductor device compact models: three levels of MOSFET
model, a combined Ebers–Moll and Gummel-Poon bipolar model, a JFET model, and a
model for a junction diode. In addition, it had many other elements: resistors, capacitors,
inductors (including coupling), independent voltage and current sources, ideal transmission
lines, and voltage and current controlled sources.

SPICE3 added more sophisticated MOSFET models, which were required due to advances in
semiconductor technology. In particular, the BSIM family of models were added, which were
also developed at UC Berkeley.

Commercial and industrial SPICE simulators have added many other device models as
technology advanced and earlier models became inaccurate. To attempt standardization of
these models so that a set of model parameters may be used in different simulators, an
industry working group was formed, the Compact Model Council[14], to choose, maintain and
promote the use of standard models. The standard models today include BSIM3, BSIM4,
BSIMSOI, PSP, HICUM, and MEXTRAM.

[edit] Input and output: Netlists, schematic capture and plotting

SPICE2 took a text netlist as input and produced line-printer listings as output, which fit with
the computing environment in 1975. These listings were either columns of numbers
corresponding to calculated outputs (typically voltages or currents), or line-printer character
"plots". SPICE3 retained the netlist for circuit description, but allowed analyses to be
controlled from a command-line interface similar to the C shell. SPICE3 also added basic X-
Window plotting, as UNIX and engineering workstations became common.

Vendors and various free software projects have added schematic capture front-ends to
SPICE, allowing a schematic diagram of the circuit to be drawn and the netlist to be
automatically generated. Also, graphical user interfaces were added for selecting the
simulations to be done and manipulating the voltage and current output vectors. In addition,
very capable graphing utilities have been added to see waveforms and graphs of parametric
dependencies. Several free versions of these extended programs are available, some as
introductory limited packages, and some without restrictions.

[edit] See also


• Articles on SPICE-like and other circuit simulators are listed in Category:Electronic
circuit simulators
• Input Output Buffer Information Specification (IBIS)
• List of free electronics circuit simulators
• Transistor models

[edit] References
1. ^ a b Nagel, L. W, and Pederson, D. O., SPICE (Simulation Program with Integrated Circuit
Emphasis), Memorandum No. ERL-M382, University of California, Berkeley, Apr. 1973
2. ^ a b Nagel, Laurence W., SPICE2: A Computer Program to Simulate Semiconductor Circuits,
Memorandum No. ERL-M520, University of California, Berkeley, May 1975
3. ^ Warwick, Colin (May 2009). "Everything you always wanted to know about SPICE* (*But
were afraid to ask)" (PDF). EMC Journal (Nutwood UK Limited) (82): 27–29. ISSN 1748-
9253. http://www.nutwooduk.co.uk/pdf/Issue82.PDF#page=27.
4. ^ Nagel, L. W., and Rohrer, R. A. (August 1971). "Computer Analysis of Nonlinear Circuits,
Excluding Radiation". IEEE Journal of Solid State Circuits SC-6: 166–182.
doi:10.1109/JSSC.1971.1050166. http://ieeexplore.ieee.org/xpls/abs_all.jsp?
arnumber=1050166.
5. ^ Perry, T. (June 1998). "Donald O. Pederson". IEEE Spectrum 35: 22–27.
doi:10.1109/6.681968. http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=681968.
6. ^ Ho, Ruehli, and Brennan (April 1974). "The Modified Nodal Approach to Network
Analysis". Proc. 1974 Int. Symposium on Circuits and Systems, San Francisco. pp. 505–509.
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1084079.
7. ^ Quarles, Thomas L., Analysis of Performance and Convergence Issues for Circuit
Simulation, Memorandum No. UCB/ERL M89/42, University of California, Berkeley, Apr.
1989.
8. ^ Pescovitz, David (2002-05-02). "1972: The release of SPICE, still the industry standard tool
for integrated circuit design". Lab Notes: Research from the Berkeley College of Engineering.
http://www.coe.berkeley.edu/labnotes/0502/history.html. Retrieved 2007-03-10.
9. ^ Vladimirescu, Andrei, SPICE -- The Third Decade, Proc. 1990 IEEE Bipolar Circuits and
Technology Meeting, Minneapolis, Sept. 1990, pp. 96–101
10.^ K. S. Kundert, The Designer's Guide to SPICE and SPECTRE, Kluwer Academic
Publishers, Boston, 1998
11.^ a b Nagel, L., Is it Time for SPICE4?, 2004 Numerical Aspects of Device and Circuit
Modeling Workshop, June 23-25, 2004, Santa Fe, New Mexico. Retrieved on 2007-11-10
12.^ McCalla and Howard (February 1971). "BIAS-3 – A program for nonlinear D.C. analysis of
bipolar transistor circuits". IEEE J. of Solid State Circuits 6 (1): 14–19.
doi:10.1109/JSSC.1971.1050153. http://ieeexplore.ieee.org/xpls/abs_all.jsp?
arnumber=1050153.
13.^ Idleman, Jenkins, McCalla and Pederson (August 1971). "SLIC—a simulator for linear
integrated circuits". IEEE J. of Solid State Circuits 6 (4): 188–203.
doi:10.1109/JSSC.1971.1050168. http://ieeexplore.ieee.org/xpls/abs_all.jsp?
arnumber=1050168.
14.^ "CMC - Compact Model Council". GEIA. http://www.geia.org/index.asp?bid=597.
[edit] External links
[edit] Histories, original papers

• The original SPICE1 paper


• L. W. Nagel's dissertation (SPICE2)
• Thomas Quarles' dissertation (SPICE3)
• Larry Nagel, "The Life of SPICE"
• A brief history of SPICE

[edit] Versions with source code available

• SPICE2 and SPICE3 at UC Berkeley


• Cider at UC Berkeley
• ngspice: SPICE3 with updates and XSPICE extensions
• tclspice: ngspice and Tcl scripting
• XSPICE at Georgia Tech

[edit] Tutorials, user information

• Comprehensive, detailed PSPICE tutorial and user guide at Wilfrid Laurier


University, Canada
• The Spice Page
• SPICE on gEDA HOWTO
• Spice 3 Userguide
• Spice 3 Quickstart Tutorial
• Designer's Guide Community
• SPICE Simulation tutorial

[edit] Applications

• Sample Spice code and output for various circuits


• NanoDotTek Report NDT14-08-2007, 12 August 2007

Retrieved from "http://en.wikipedia.org/wiki/SPICE"


Categories: Electronic design automation software | Free software programmed in C | Free
software programmed in Fortran | Simulation programming languages | Electronic circuit
simulators | Free simulation software


• This page was last modified on 10 July 2010 at 18:15.


