
Chapter C

Properties of Legendre Polynomials


C1 Definitions
The Legendre polynomials are the everywhere-regular solutions of Legendre's equation,
\[ (1 - x^2)u'' - 2xu' + mu = \big[(1 - x^2)u'\big]' + mu = 0, \tag{C.1} \]
which are possible only if
\[ m = n(n + 1), \qquad n = 0, 1, 2, \ldots \tag{C.2} \]
We write the solution for a particular value of n as \(P_n(x)\). It is a polynomial of degree n. If n is even/odd then the polynomial is even/odd. They are normalised such that \(P_n(1) = 1\).
\[ P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = (3x^2 - 1)/2, \quad P_3(x) = (5x^3 - 3x)/2. \]
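These first few polynomials are easy to check by machine. As a quick sketch (ours, not part of the original notes), the recursion \((n+1)P_{n+1} = (2n+1)xP_n - nP_{n-1}\), derived later in section C6.3, generates \(P_n\); Python's exact rational arithmetic avoids any rounding. The function name `legendre` is our own.

```python
from fractions import Fraction

def legendre(n, x):
    """P_n(x) via the Bonnet recursion (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev, p_curr = Fraction(1), x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2*k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

x = Fraction(1, 3)
print(legendre(3, x))   # (5x^3 - 3x)/2 at x = 1/3, i.e. -11/27
```

With Fraction inputs every value is exact, so the normalisation \(P_n(1) = 1\) and the even/odd parity can be confirmed literally rather than only to rounding error.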
C2 Rodrigues' Formula
They can also be represented using Rodrigues' formula
\[ P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n}(x^2 - 1)^n. \tag{C.3} \]
This can be demonstrated through the following observations.
C2.1 It's a polynomial
The right-hand side of (C.3) is a polynomial.
C2.2 It takes the value 1 at x = 1
If
\[ v(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n}(x^2 - 1)^n, \]
then, treating \((x^2 - 1)^n = (x - 1)^n (x + 1)^n\) as a product and using Leibniz's rule to differentiate n times, we have
\[ v(x) = \frac{1}{2^n n!}\Big( n!\,(x + 1)^n + \text{terms with } (x - 1) \text{ as a factor} \Big), \]
so that
\[ v(1) = \frac{n!\,2^n}{2^n n!} = 1. \]
C2.3 It satisfies the equation
Finally,
\[ (1 - x^2)v'' - 2xv' + n(n + 1)v = 0, \]
since, if \(h(x) = (1 - x^2)^n\), then \(h' = -2nx(1 - x^2)^{n-1}\), so that
\[ (1 - x^2)h' + 2nxh = 0. \]
Now differentiate n + 1 times, using Leibniz, to get
\[ (1 - x^2)h^{(n+2)} - 2(n + 1)x\,h^{(n+1)} - n(n + 1)h^{(n)} + 2nx\,h^{(n+1)} + 2n(n + 1)h^{(n)} = 0, \]
or
\[ (1 - x^2)h^{(n+2)} - 2x\,h^{(n+1)} + n(n + 1)h^{(n)} = 0. \]
As the equation is linear and \(v \propto h^{(n)}\), v satisfies the equation also.
C2.4 And that's it
Thus v(x) is proportional to the regular solution of Legendre's equation and \(v(1) = P_n(1) = 1\), so \(v(x) = P_n(x)\).
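Since everything in (C.3) is polynomial, the formula can also be verified exactly in code: build the coefficient list of \((x^2 - 1)^n\), differentiate it n times, and scale. The following sketch (helper names ours, not from the notes) uses lowest-degree-first coefficient lists.

```python
from fractions import Fraction
from math import factorial

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_diff(a):
    """Differentiate a coefficient list once."""
    return [k * a[k] for k in range(1, len(a))]

def poly_eval(a, x):
    v = 0
    for coef in reversed(a):
        v = v * x + coef
    return v

def rodrigues(n):
    """Coefficients of (1/(2^n n!)) d^n/dx^n (x^2 - 1)^n, as in (C.3)."""
    p = [1]
    for _ in range(n):
        p = poly_mul(p, [-1, 0, 1])      # multiply by (x^2 - 1)
    for _ in range(n):
        p = poly_diff(p)
    scale = Fraction(1, 2**n * factorial(n))
    return [scale * c for c in p]
```

For instance `rodrigues(3)` returns the coefficients of \((5x^3 - 3x)/2\), and `poly_eval(rodrigues(n), 1)` is exactly 1 for every n, as section C2.2 requires.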
C3 Orthogonality of Legendre Polynomials
The differential equation and boundary conditions satisfied by the Legendre polynomials form a Sturm–Liouville system (actually a generalised system, where the boundary condition amounts to insisting on regularity of the solutions at the boundaries). They should therefore satisfy the orthogonality relation
\[ \int_{-1}^{1} P_n(x)\,P_m(x)\,dx = 0, \qquad n \neq m. \tag{C.4} \]
If \(P_n\) and \(P_m\) are solutions of Legendre's equation then
\[ \big[(1 - x^2)P_n'\big]' + n(n + 1)P_n = 0, \tag{C.5} \]
\[ \big[(1 - x^2)P_m'\big]' + m(m + 1)P_m = 0. \tag{C.6} \]
Integrating the combination \(P_m \times\)(C.5) \(- P_n \times\)(C.6) gives
\[ \int_{-1}^{1} \Big( P_m\big[(1 - x^2)P_n'\big]' - P_n\big[(1 - x^2)P_m'\big]' \Big)\,dx + \big[n(n + 1) - m(m + 1)\big] \int_{-1}^{1} P_n P_m\,dx = 0. \]
Using integration by parts gives, for the first integral,
\[ \Big[ P_m (1 - x^2) P_n' \Big]_{-1}^{1} - \Big[ P_n (1 - x^2) P_m' \Big]_{-1}^{1} - \int_{-1}^{1} \underbrace{\Big( P_m'(1 - x^2)P_n' - P_n'(1 - x^2)P_m' \Big)}_{=0}\,dx = 0, \]
as \(P_{m,n}\) and their derivatives are finite at \(x = \pm 1\) (i.e. they are regular there). Hence, if \(n \neq m\),
\[ \int_{-1}^{1} P_n P_m\,dx = 0. \tag{C.7} \]
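The relation (C.7) can be spot-checked by quadrature. The sketch below is our own: composite Simpson's rule plus the three-term recursion for \(P_n\) (which appears later as (C.15)).

```python
import math

def legendre(n, x):
    """P_n(x) by the three-term recursion (k+1)P_{k+1} = (2k+1)x P_k - k P_{k-1}."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def simpson(f, a, b, m=2000):
    """Composite Simpson's rule on [a, b] with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

overlap = simpson(lambda x: legendre(2, x) * legendre(5, x), -1.0, 1.0)
```

`overlap` is zero to quadrature accuracy, and the same check works for any pair of distinct indices.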
C4 What is \(\int_{-1}^{1} P_n^2\,dx\)?
We can evaluate this integral using Rodrigues' formula. We have
\[ I_n = \int_{-1}^{1} P_n^2\,dx = \frac{1}{2^{2n}(n!)^2} \int_{-1}^{1} \frac{d^n}{dx^n}(x^2 - 1)^n \, \frac{d^n}{dx^n}(x^2 - 1)^n \, dx. \]
Integrating by parts gives
\[ \big(2^{2n}(n!)^2\big) I_n = \left[ \frac{d^{n-1}}{dx^{n-1}}(x^2 - 1)^n \, \frac{d^n}{dx^n}(x^2 - 1)^n \right]_{-1}^{1} - \int_{-1}^{1} \frac{d^{n-1}}{dx^{n-1}}(x^2 - 1)^n \, \frac{d^{n+1}}{dx^{n+1}}(x^2 - 1)^n \, dx. \]
Note that differentiating \((x^2 - 1)^n\) anything less than n times leaves an expression that has \((x^2 - 1)\) as a factor, so the first of these two terms vanishes. Similarly, integrating by parts n times gives
\[ I_n = \frac{(-1)^n}{2^{2n}(n!)^2} \int_{-1}^{1} (x^2 - 1)^n \, \frac{d^{2n}}{dx^{2n}}(x^2 - 1)^n \, dx. \]
The (2n)th derivative of the polynomial \((x^2 - 1)^n\), which has degree 2n, is (2n)!. Thus
\[ I_n = \frac{(-1)^n (2n)!}{2^{2n}(n!)^2} \int_{-1}^{1} (x^2 - 1)^n \, dx. \]
The completion of this argument is left as an exercise. One way to proceed is to use the transformation \(s = (x + 1)/2\) to transform the integral and then use a reduction formula to show that
\[ \int_{0}^{1} s^n (1 - s)^n \, ds = \frac{(n!)^2}{(2n + 1)!}. \]
The final result is
\[ \int_{-1}^{1} P_n^2\,dx = \frac{2}{2n + 1}. \tag{C.8} \]
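Both (C.8) and the suggested reduction-formula integral can be confirmed exactly, because every integral involved is of a polynomial. The sketch below (helper names ours) integrates the binomial expansions term by term.

```python
from fractions import Fraction
from math import comb, factorial

def int_x2m1(n):
    """Exact integral of (x^2 - 1)^n over [-1, 1], term by term from the binomial expansion."""
    return sum(Fraction(2 * comb(n, k) * (-1)**(n - k), 2*k + 1) for k in range(n + 1))

def beta_int(n):
    """Exact integral of s^n (1 - s)^n over [0, 1]."""
    return sum(Fraction(comb(n, k) * (-1)**k, n + k + 1) for k in range(n + 1))
```

Multiplying `int_x2m1(n)` by the prefactor \((-1)^n(2n)!/(2^{2n}(n!)^2)\) reproduces \(2/(2n+1)\) exactly for every n tried.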
C5 Generalised Fourier Series
Sturm–Liouville theory does more than guarantee the orthogonality of Legendre polynomials; it also shows that we can represent functions on \([-1, 1]\) as a sum of Legendre polynomials. Thus for suitable f(x) on \([-1, 1]\) we have the generalised Fourier series
\[ f(x) = \sum_{n=0}^{\infty} a_n P_n(x). \tag{C.9} \]
To find the coefficients \(a_n\), we multiply both sides of this expression by \(P_m(x)\) and integrate to obtain
\[ \int_{-1}^{1} P_m(x) f(x)\,dx = \sum_{n=0}^{\infty} a_n \int_{-1}^{1} P_n(x) P_m(x)\,dx = a_m \frac{2}{2m + 1}, \]
so that
\[ a_n = \left( n + \tfrac{1}{2} \right) \int_{-1}^{1} f(x) P_n(x)\,dx. \tag{C.10} \]
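As a worked example of (C.10) (our own, not from the notes): for \(f(x) = x^3\) the series terminates after two terms, since \(x^3 = \tfrac{3}{5}P_1(x) + \tfrac{2}{5}P_3(x)\). The sketch below computes the \(a_n\) exactly from polynomial coefficient lists; all helper names are ours.

```python
from fractions import Fraction

def legendre_coeffs(n):
    """Coefficient list (lowest degree first) of P_n, via the three-term recursion."""
    p0, p1 = [Fraction(1)], [Fraction(0), Fraction(1)]
    if n == 0:
        return p0
    for k in range(1, n):
        shifted = [Fraction(0)] + p1                       # x * P_k
        padded = p0 + [Fraction(0)] * (len(shifted) - len(p0))
        p0, p1 = p1, [Fraction(2*k + 1, k + 1) * s - Fraction(k, k + 1) * q
                      for s, q in zip(shifted, padded)]
    return p1

def poly_mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def integrate_m1_1(poly):
    """Exact integral over [-1, 1]: odd powers vanish, even powers give 2/(k+1)."""
    return sum((Fraction(2, k + 1) * c for k, c in enumerate(poly) if k % 2 == 0), Fraction(0))

f_poly = [Fraction(0), Fraction(0), Fraction(0), Fraction(1)]      # f(x) = x^3
coeffs = [Fraction(2*n + 1, 2) * integrate_m1_1(poly_mul(f_poly, legendre_coeffs(n)))
          for n in range(6)]
```

`coeffs` comes out as \([0, 3/5, 0, 2/5, 0, 0]\), exactly the finite expansion above.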
C6 A Generating Function for Legendre Polynomials
C6.1 Definition
We consider a function of two variables G(x, t) such that
\[ G(x, t) = \sum_{n=0}^{\infty} P_n(x)\,t^n, \tag{C.11} \]
so that the Legendre polynomials are the coefficients in the Taylor series of G(x, t) about t = 0. Our first task is to identify what the function G(x, t) actually is.
C6.2 Derivation of the generating function
We know that, in spherical polar coordinates, the function \(r^{-1}\) is harmonic away from r = 0, i.e.
\[ \nabla^2 \frac{1}{r} = \nabla^2 \frac{1}{|\mathbf{x}|} = 0. \]
It is a harmonic function independent of \(\phi\). Similarly \(1/|\mathbf{x} - \mathbf{x}_0|\) is harmonic away from \(\mathbf{x} = \mathbf{x}_0\). If \(\mathbf{x}_0\) is a unit vector in the z-direction, then
\[ |\mathbf{x} - \mathbf{x}_0|^2 = (\mathbf{x} - \mathbf{x}_0)\cdot(\mathbf{x} - \mathbf{x}_0) = \mathbf{x}\cdot\mathbf{x} - 2\,\mathbf{x}\cdot\mathbf{x}_0 + \mathbf{x}_0\cdot\mathbf{x}_0 = r^2 - 2r\cos\theta + 1. \tag{C.12} \]
So the function
\[ \frac{1}{\sqrt{r^2 - 2r\cos\theta + 1}} \]
is harmonic, regular at the origin and independent of \(\phi\). We should therefore be able to write it in the form
\[ \frac{1}{\sqrt{r^2 - 2r\cos\theta + 1}} = \sum_{n=0}^{\infty} A_n r^n P_n(\cos\theta). \tag{C.13} \]
[Figure: the position vector x, the unit vector x_0 along the z-axis, the difference x − x_0, and the angle θ between x and the z-axis.]
If we can show that \(A_n = 1\), then the replacement of \(\cos\theta\) by x and r by t gives the required result. To find \(A_n\), we evaluate the function along the positive z-axis, putting \(\cos\theta = 1\), noting that
\[ \frac{1}{\sqrt{r^2 - 2r + 1}} = \frac{1}{\sqrt{(r - 1)^2}} = \frac{1}{|r - 1|} = \frac{1}{1 - r} = 1 + r + r^2 + r^3 + \cdots, \qquad |r| < 1. \]
So
\[ \sum_{n=0}^{\infty} r^n = \sum_{n=0}^{\infty} A_n r^n P_n(1) = \sum_{n=0}^{\infty} A_n r^n, \]
so that \(A_n = 1\). Thus, with \(x = \cos\theta\) and \(t = r\),
\[ G(x, t) = \frac{1}{\sqrt{1 - 2xt + t^2}} = \sum_{n=0}^{\infty} t^n P_n(x). \tag{C.14} \]
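A quick numerical sanity check of (C.14) (ours, not from the notes): the partial sums of \(\sum P_n(x)t^n\) should converge to \((1 - 2xt + t^2)^{-1/2}\) for \(|t| < 1\), since \(|P_n(x)| \leq 1\) on \([-1, 1]\).

```python
import math

def legendre(n, x):
    """P_n(x) by the three-term recursion."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def G(x, t):
    """Closed form of the generating function (C.14)."""
    return 1.0 / math.sqrt(1.0 - 2.0*x*t + t*t)

x, t = 0.4, 0.3
partial = sum(legendre(n, x) * t**n for n in range(60))
```

Sixty terms at \(|t| = 0.3\) already agree with the closed form to machine precision.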
C6.3 Applications of the Generating Function
Generating functions can be applied in many ingenious ways, sometimes best left for examination questions. As an example, we can differentiate G(x, t) with respect to t to show that
\[ \frac{\partial G}{\partial t} = \frac{x - t}{(1 - 2xt + t^2)^{3/2}} \quad\Longrightarrow\quad (1 - 2xt + t^2)\frac{\partial G}{\partial t} = (x - t)\,G. \]
Now write G(x, t) as a sum of Legendre polynomials to get
\[ (1 - 2xt + t^2) \sum_{n=0}^{\infty} n P_n(x)\,t^{n-1} = (x - t) \sum_{n=0}^{\infty} P_n(x)\,t^n. \]
Now comparing the coefficients of \(t^0\) gives
\[ P_1(x) = x P_0(x), \]
so that, as \(P_0 = 1\), we have \(P_1 = x\), as expected. Comparing the general coefficient of \(t^n\), \(n > 0\), gives
\[ (n + 1) P_{n+1} - 2xn P_n + (n - 1) P_{n-1} = x P_n - P_{n-1}, \]
or, rearranging,
\[ (n + 1) P_{n+1} - (2n + 1)x P_n + n P_{n-1} = 0, \tag{C.15} \]
a recursion relation for Legendre polynomials.
Differentiating G(x, t) with respect to x and proceeding in a similar way yields the result
\[ P_{n+1}' - 2x P_n' + P_{n-1}' = P_n, \qquad n \geq 1. \tag{C.16} \]
Combining (C.15) and (C.16), or obtaining a relationship between \(G_x\) and \(G_t\), shows
\[ P_{n+1}' - P_{n-1}' = (2n + 1) P_n. \tag{C.17} \]
These need not be learnt.
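The relations (C.16) and (C.17) can be spot-checked numerically, estimating the derivatives by central differences. This is a rough sketch (ours), not a proof.

```python
def legendre(n, x):
    """P_n(x) by the three-term recursion (C.15)."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def dP(n, x, h=1e-6):
    """Central-difference estimate of P_n'(x)."""
    return (legendre(n, x + h) - legendre(n, x - h)) / (2 * h)

n, x = 4, 0.37
c17 = dP(n + 1, x) - dP(n - 1, x)        # should equal (2n+1) P_n(x)
```

The residuals are limited only by the finite-difference error, of order \(h^2\).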
C6.4 Solution of Laplace's equation
Remember where we first came across Legendre polynomials. If \(\nabla^2 u = 0\) and u is regular at \(\theta = 0, \pi\), in spherical polar coordinates and with \(\partial/\partial\phi = 0\), then
\[ u(r, \theta) = \sum_{n=0}^{\infty} \left( A_n r^n + \frac{B_n}{r^{n+1}} \right) P_n(\cos\theta). \tag{C.18} \]
C7 Example: Temperatures in a Sphere
The steady temperature distribution u(x) inside the sphere r = a, in spherical polar coordinates, satisfies \(\nabla^2 u = 0\). If we heat the surface of the sphere so that \(u = f(\theta)\) on r = a for some function \(f(\theta)\), what is the temperature distribution within the sphere?
The equation and boundary conditions do not depend on \(\phi\), so we know that u is of the form (C.18). Furthermore, we expect u to be finite at r = 0, so that \(B_n = 0\). We find the coefficients \(A_n\) by evaluating this on r = a. We require
\[ f(\theta) = \sum_{n=0}^{\infty} A_n a^n P_n(\cos\theta). \]
We can find \(A_n\) using the orthogonality of the polynomials (C.7). However, in (C.7) the integration is with respect to x and not \(\cos\theta\). If \(x = \cos\theta\), then \(dx = -\sin\theta\,d\theta\). The interval \(-1 \leq x \leq 1\) is the interval \(\pi \geq \theta \geq 0\). Multiply through by \(\sin\theta\,P_m(\cos\theta)\) and integrate in \(\theta\) to obtain
\[ \int_{0}^{\pi} \sin\theta\, f(\theta)\, P_m(\cos\theta)\,d\theta = \sum_{n=0}^{\infty} a^n A_n \int_{0}^{\pi} \sin\theta\, P_n(\cos\theta)\, P_m(\cos\theta)\,d\theta = \sum_{n=0}^{\infty} a^n A_n \int_{-1}^{1} P_n(x) P_m(x)\,dx = \frac{2\,a^m A_m}{2m + 1}. \tag{C.19} \]
So
\[ u(r, \theta) = \sum_{n=0}^{\infty} \left( n + \tfrac{1}{2} \right) \left( \frac{r}{a} \right)^n P_n(\cos\theta) \int_{0}^{\pi} f(\theta')\, P_n(\cos\theta') \sin\theta'\,d\theta'. \tag{C.20} \]
Let us heat the northern hemisphere and leave the southern half cold, so that \(f(\theta) = 1\) for \(0 \leq \theta \leq \pi/2\) and \(f(\theta) = 0\) for \(\pi/2 < \theta \leq \pi\). Then the integral in (C.20) is
\[ \int_{0}^{\pi/2} \sin\theta\, P_n(\cos\theta)\,d\theta = \int_{0}^{1} P_n(x)\,dx. \]
An integration of (C.17) gives
\[ (2n + 1) \int_{x}^{1} P_n(q)\,dq = \big[ P_{n+1} - P_{n-1} \big]_{x}^{1} = P_{n-1}(x) - P_{n+1}(x), \qquad n \geq 1. \]
We know that \(\int_{0}^{1} P_0(q)\,dq = 1\), so
\[ u(r, \theta) = \frac{1}{2} + \frac{1}{2} \sum_{n=1}^{\infty} \left( \frac{r}{a} \right)^n P_n(\cos\theta)\,\big( P_{n-1}(0) - P_{n+1}(0) \big). \]
Note that the temperature at the centre of the sphere (r = 0) is 1/2, as might be expected. The Legendre polynomials of odd degree are odd and so are zero at the origin, so that the coefficients in the sum are zero for even values of n. Hence
\[ u(r, \theta) = \frac{1}{2} + \frac{1}{2} \left( \frac{r}{a} \right) \sum_{m=0}^{\infty} \left( \frac{r}{a} \right)^{2m} P_{2m+1}(\cos\theta)\,\big( P_{2m}(0) - P_{2(m+1)}(0) \big). \tag{C.21} \]
We need the values at the origin of the polynomials of even degree. Putting x = 0 in (C.14) gives
\[ \sum_{n=0}^{\infty} t^n P_n(0) = \frac{1}{\sqrt{1 + t^2}} \tag{C.22} \]
\[ = 1 - \frac{1}{2}t^2 + \frac{1\cdot 3}{2\cdot 2}\frac{t^4}{2!} - \frac{1\cdot 3\cdot 5}{2\cdot 2\cdot 2}\frac{t^6}{3!} + \cdots = \sum_{m=0}^{\infty} (-1)^m \frac{(2m - 1)(2m - 3)\cdots 3\cdot 1}{2^m\,m!}\,t^{2m} = \sum_{m=0}^{\infty} (-1)^m \frac{(2m)!}{2^{2m}(m!)^2}\,t^{2m}. \]
Therefore
\[ P_{2m}(0) - P_{2(m+1)}(0) = (-1)^m \frac{(2m)!}{2^{2m}(m!)^2} - (-1)^{m+1} \frac{(2m + 2)!}{2^{2m+2}\big((m + 1)!\big)^2} = (-1)^m \frac{(2m)!}{2^{2m}(m!)^2} \left( 1 + \frac{2m + 1}{2m + 2} \right), \]
and
\[ u(r, \theta) = \frac{1}{2} + \frac{1}{2} \left( \frac{r}{a} \right) \sum_{m=0}^{\infty} \left( \frac{r}{a} \right)^{2m} P_{2m+1}(\cos\theta)\,(-1)^m \frac{(2m)!}{2^{2m}(m!)^2} \left( 1 + \frac{2m + 1}{2m + 2} \right). \]
We will evaluate this expression along the axis of the sphere. Here \(\cos\theta = \pm 1\), depending on whether we are in the northern or southern hemisphere. The polynomials appearing are odd, so take the value \(\pm 1\) at \(\pm 1\). This can be accounted for by allowing r to be negative, so that it measures the directed distance from the centre in a northerly direction:
\[ u(r) = \frac{1}{2} + \frac{1}{2} \left( \frac{r}{a} \right) \sum_{m=0}^{\infty} \left( \frac{r}{a} \right)^{2m} (-1)^m \frac{(2m)!}{2^{2m}(m!)^2} \left( 1 + \frac{2m + 1}{2m + 2} \right). \]
A graph of the solution obtained by summing this series to 1, 4, 7, 10, 13 terms is shown below.
[Figure: partial sums of the series for u(r), plotted against r/a on −1 ≤ r/a ≤ 1.]
We can see that the convergence is not good near the poles. The line that actually attains the values 0 and 1 at the poles is the exact solution
\[ u(r) = \frac{1}{2} + \frac{(r/a)}{2\sqrt{1 + (r/a)^2}} + \frac{\sqrt{1 + (r/a)^2} - 1}{2(r/a)\sqrt{1 + (r/a)^2}}. \]
This is attained as follows. Equation (C.21) tells us
\[ u(r) = \frac{1}{2} + \frac{1}{2} \left( \frac{r}{a} \right) \sum_{m=0}^{\infty} \left( \frac{r}{a} \right)^{2m} \big( P_{2m}(0) - P_{2(m+1)}(0) \big). \]
Identifying t with (r/a), and using (C.22), recognising that the sum only contains the even powers of t, gives
\[ \sum_{m=0}^{\infty} (r/a)^{2m} P_{2m}(0) - \sum_{m=0}^{\infty} (r/a)^{2m-2} P_{2m}(0) = \frac{1}{\sqrt{1 + (r/a)^2}} - \frac{1}{(r/a)^2}\,\frac{1}{\sqrt{1 + (r/a)^2}}. \]
Changing the index in the second sum, using \(P_0 = 1\), we find
\[ \sum_{m=0}^{\infty} (r/a)^{2m} \big( P_{2m}(0) - P_{2m+2}(0) \big) - \frac{1}{(r/a)^2} = \frac{1}{\sqrt{1 + (r/a)^2}} - \frac{1}{(r/a)^2}\,\frac{1}{\sqrt{1 + (r/a)^2}}. \]
And so
\[ u(r) = \frac{1}{2} + \frac{1}{2}\,\frac{r}{a} \left[ \frac{1}{\sqrt{1 + (r/a)^2}} - \frac{1}{(r/a)^2}\,\frac{1}{\sqrt{1 + (r/a)^2}} + \frac{1}{(r/a)^2} \right], \]
which simplifies to the result above.
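The agreement between the axis series and the closed form can be confirmed numerically away from the slowly convergent poles \(r = \pm a\). The sketch below (function names ours) uses \((2m)!/(2^{2m}(m!)^2) = \binom{2m}{m}/4^m\).

```python
import math

def u_series(rho, terms=200):
    """Partial sum of the axis series for u(r), with rho = r/a."""
    s = 0.0
    for m in range(terms):
        c = (-1)**m * math.comb(2*m, m) / 4.0**m   # (2m)!/(2^{2m}(m!)^2)
        s += rho**(2*m) * c * (1.0 + (2*m + 1) / (2*m + 2))
    return 0.5 + 0.5 * rho * s

def u_exact(rho):
    """Closed-form axis solution (rho != 0)."""
    q = math.sqrt(1.0 + rho*rho)
    return 0.5 + rho / (2.0*q) + (q - 1.0) / (2.0*rho*q)
```

At \(\rho = \pm 1/2\) the two agree to machine precision, and \(u(\rho) + u(-\rho) = 1\), as the half-heated boundary condition suggests.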
Chapter D
Oscillation of a circular membrane
D1 The Problem and the initial steps in its solution
D1.1 The problem
If we have a circular drum, radius a, and hit it, we will set the drum vibrating. To study this vibration, we need to solve the wave equation
\[ c^2 \nabla^2 \psi = \psi_{tt}, \qquad 0 \leq r \leq a, \tag{D.1} \]
with c the speed of wave motion in the drum's material and \(\psi\) the displacement of the drum's surface. We have the boundary condition
\[ \psi(a, \theta) = 0, \qquad 0 \leq \theta < 2\pi, \tag{D.2} \]
corresponding to the drum being fixed at its circular edge. We also expect \(\psi\) to be finite at r = 0, the drum's centre. If we hit the drum so that, at t = 0, \(\psi = 0\) and \(\psi_t(r, \theta) = f(r)\), then we must also impose this initial condition.
D1.2 Separating out the time dependence
We look for oscillatory solutions, writing
\[ \psi(r, \theta, t) = u(r, \theta)\exp(i\omega t). \tag{D.3} \]
This is equivalent to looking for solutions of the type \(\psi = u(\mathbf{x})T(t)\) and knowing in advance that the equation for T will have the form \(T'' + \omega^2 T = 0\), where \(\omega\) is related to the separation constant. This has solutions proportional to \(\cos\omega t\) and \(\sin\omega t\), and we choose to write these in exponential form. The value of \(\omega\) is the frequency of the disturbance. Doing so leads to
\[ c^2 \nabla^2 u = -\omega^2 u, \quad\text{or}\quad \nabla^2 u + \lambda^2 u = 0, \tag{D.4} \]
(after dividing through by the exponential factor). Here \(\omega = \lambda c\), and we need to solve Helmholtz's equation for the spatial part of the solution. We should expand a little on the shorthand, \(\exp(i\omega t)\), that we are using to describe the temporal part of the solution. The solution of the equation for T is \(T = A_* \cos(\omega t) + B_* \sin(\omega t)\). To write this in exponential form we can take the real part of \((A_* - iB_*)(\cos\omega t + i\sin\omega t)\), i.e. of \((A_* - iB_*)\exp(i\omega t)\). We therefore have a complex constant multiplying (D.3) which we have omitted. We can write this complex constant in modulus–argument form as \(R_* \exp(i\tau_*)\).
Note that, defining \(\lambda = \omega^2/c^2\), we would have ended up with \(\lambda\) rather than \(\lambda^2\) in (D.4), but that would lead to lots of \(\sqrt{\lambda}\)s in what follows. Note now that to find the frequency of oscillation we need to solve the eigenvalue problem (D.4), with \(u(a, \theta) = 0\) (from (D.2)) and u finite at r = 0:
\[ u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} + \lambda^2 u = 0. \tag{D.5} \]
D1.3 Separating out the θ-dependence
We are looking for solutions that are 2π-periodic in θ, as we have a circular drum. We know that if we look for solutions of the form \(u(r, \theta) = R(r)\Theta(\theta)\) then the θ-dependence will satisfy \(\Theta'' + p^2\Theta = 0\), where, to further ensure periodicity in θ, \(p^2 \geq 0\), and finally, to ensure 2π-periodicity, p = n for integer n, n = 0, 1, 2, 3, .... The case n = 0 corresponds to solutions with no θ-dependence. In general the θ-dependence of the solution is like \(\sin(n\theta)\) and \(\cos(n\theta)\). We choose to write both of these together as \(\exp(in\theta)\) (strictly \(R_n \exp(in\theta)\exp(i\sigma_n)\)) and look for solutions of the type \(u = R(r)\exp(in\theta)\). Substitution into (D.5), dividing out the exponential factor and multiplying by \(r^2\) gives
\[ r^2 R'' + r R' + (\lambda^2 r^2 - n^2) R = 0, \tag{D.6} \]
and if we write \(z = \lambda r\), \(R = w(z)\),
\[ z^2 w'' + z w' + (z^2 - n^2) w = 0. \tag{D.7} \]
We have obtained solutions of this equation for integer n through series and found that it has two independent solutions. We can then write
\[ w(z) = A_n J_n(z) + B_n Y_n(z). \tag{D.8} \]
The point z = 0 corresponds to the centre of the membrane, and we wish our solution to be analytic here. This means that we must set \(B_n = 0\), and we consider only solutions finite at z = 0, \(w = A_n J_n(z)\).
At this stage our solution is of the form
\[ \psi(r, \theta, t) = \sum_{n=0}^{\infty} A_n J_n(\lambda r)\exp(in\theta)\exp(i\lambda c t), \tag{D.9} \]
where \(A_n = R_n \exp(i\tau_*)\exp(i\sigma_n)\) are (complex) constants that we need to find so as to satisfy the initial conditions. In relating this to our definitions above we have absorbed the modulus \(R_*\) into \(R_n\). We return to this solution later, but for the present we look more closely at the properties of the solutions of Bessel's equation so as, firstly, to enable us to fix possible values of \(\lambda\) and, secondly, to express the initial conditions as generalised Fourier series.
D1.4 On Bessel Functions
D1.4a Expression as a series
For general p we have the series solution
\[ J_p(x) = \sum_{j=0}^{\infty} \frac{(-1)^j}{\Gamma(j + 1)\,\Gamma(j + p + 1)} \left( \frac{x}{2} \right)^{2j+p} \tag{D.10} \]
with \(\Gamma(z)\) the Gamma function, extending the factorial function to non-integer argument, with \(n! = \Gamma(n + 1)\). The second independent solution is \(J_{-p}(x)\) if p is not an integer. However, if p is an integer then, as \(J_{-n} = (-1)^n J_n\), these solutions are linearly dependent. This can be seen as follows. We have the series
\[ J_\nu(z) = \sum_{r=0}^{\infty} \frac{(-1)^r}{r!\,(r + \nu)!} \left( \frac{z}{2} \right)^{2r+\nu}, \qquad \nu \neq -1, -2, \ldots \]
If we look at the solution \(J_{-\nu}\), and recall that \(x! = \Gamma(1 + x)\) has singularities at \(x = -1, -2, -3, \ldots\), then we realise that the first few terms in the sum become zero as \(\nu \to n\), an integer. These terms correspond to \(r - n = -1, -2, -3, \ldots, -n\), corresponding to \(r = n - 1, n - 2, \ldots, 0\). Thus, if \(\nu = n\) is an integer we need to start the series at r = n for \(J_{-n}\). This gives
\[ J_{-n}(z) = \sum_{r=n}^{\infty} \frac{(-1)^r}{r!\,(r - n)!} \left( \frac{z}{2} \right)^{2r-n} = \sum_{r=0}^{\infty} \frac{(-1)^r (-1)^n}{(r + n)!\,r!} \left( \frac{z}{2} \right)^{2r+n} = (-1)^n J_n(z), \]
using \(r_{\text{old}} = r_{\text{new}} + n\). To cover this case also, a second linearly independent solution \(Y_p(x)\) is taken. This is defined as
\[ Y_p(x) = \frac{J_p(x)\cos p\pi - J_{-p}(x)}{\sin p\pi} \tag{D.11} \]
and the limit \(p \to n\) considered if p = n. (Note \(Y_p\) is often written \(N_p\).) For small x,
\[ J_p(x) \approx \frac{1}{\Gamma(p + 1)} \left( \frac{x}{2} \right)^p, \tag{D.12} \]
\[ Y_0(x) \approx (2/\pi)\big( \ln(x/2) + \gamma + \cdots \big), \tag{D.13} \]
\[ Y_p(x) \approx -\frac{\Gamma(p)}{\pi} \left( \frac{2}{x} \right)^p, \qquad p > 0. \tag{D.14} \]
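The series (D.10) is straightforward to evaluate numerically for \(p \geq 0\) (negative integer p would hit the Gamma poles, and is better handled via \(J_{-n} = (-1)^n J_n\)). A sketch, with a function name of our own:

```python
import math

def bessel_j(p, x, terms=60):
    """J_p(x) from the series (D.10); intended for p >= 0 and moderate x."""
    total = 0.0
    for j in range(terms):
        total += (-1)**j / (math.gamma(j + 1) * math.gamma(j + p + 1)) * (x / 2.0)**(2*j + p)
    return total

j0_at_2 = bessel_j(0, 2.0)   # about 0.2239
```

The computed values match tabulated ones, and for small x the leading behaviour follows (D.12).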
D1.4b Behaviour for large x
For large x all these solutions behave like a damped sinusoidal function, with \(w(x) \approx A\sin(x + \epsilon)/\sqrt{x}\). This is easy to demonstrate: writing \(w(x) = f(x)y(x)\) and substituting into (D.7),
\[ x^2 (f''y + 2f'y' + fy'') + x(f'y + fy') + (x^2 - n^2)fy = 0. \]
The coefficient of \(y'\) can be made zero if \(2x^2 f' + xf = 0\), giving \(f \propto x^{-1/2}\). Choosing \(f = x^{-1/2}\) and dividing by \(x^{3/2}\) gives
\[ y'' + \left( 1 + \frac{1/4 - n^2}{x^2} \right) y = 0. \]
For large x this is well approximated by \(y'' + y = 0\), so that \(y = A\sin(x + \epsilon)\), and the result follows.
[Figure: the Bessel functions J_0, ..., J_5 on 0 ≤ x ≤ 15, produced by the following command.]
Table[Plot[BesselJ[i, x], {x, 0, 15}, PlotPoints -> 50], {i, 0, 5}];Show[%]
[Figure: the Bessel functions Y_0, ..., Y_5 on 0 ≤ x ≤ 15, produced by the following command.]
Table[Plot[BesselY[i, x], {x, 0, 15}, PlotPoints -> 50], {i, 0, 5}];
Show[%,PlotRange -> {-3, 1}]
All the solutions have an infinite number of zeros. The zeros of \(J_n\) are denoted \(j_{nm}\), so that \(J_n(j_{nm}) = 0\) and \(0 < j_{n1} < j_{n2} < j_{n3} < \cdots\).
        j_n1     j_n2     j_n3      j_n4      j_n5
n = 0   2.4048   5.5201   8.6537    11.7915   14.9309
n = 1   3.8317   7.0156   10.173    13.323    16.471
n = 2   5.1356   8.4172   11.6198   14.796    17.960
...
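The tabulated zeros can be reproduced by bracketing sign changes of the series for \(J_n\) and bisecting. This is our own sketch; the brackets are read off the plots above.

```python
import math

def bessel_j(n, x, terms=80):
    """J_n(x) from the series (D.10)."""
    total = 0.0
    for j in range(terms):
        total += (-1)**j / (math.gamma(j + 1) * math.gamma(j + n + 1)) * (x / 2.0)**(2*j + n)
    return total

def bessel_zero(n, lo, hi, tol=1e-10):
    """Bisect for a root of J_n known to lie in [lo, hi]."""
    flo = bessel_j(n, lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = bessel_j(n, mid)
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example `bessel_zero(0, 2.0, 3.0)` recovers \(j_{01} \approx 2.4048\).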
D1.5 Determination of λ
We have the boundary condition \(\psi(a, \theta) = 0\), i.e. \(w(\lambda a) = 0\). Thus \(\lambda\) is determined so that \(\lambda a = j_{nm}\), or \(\lambda = j_{nm}/a\), m = 1, 2, 3, .... Thus the possible frequencies of the drum's vibration are determined as the doubly infinite family
\[ \omega = \omega_{nm} = c\,j_{nm}/a. \tag{D.15} \]
For each frequency the vibration is described by
\[ J_n(j_{nm} r/a)\exp(ic\,j_{nm} t/a)\exp(in\theta). \tag{D.16} \]
The general solution is an arbitrary linear combination of all these modes, so that (D.9) becomes the double sum
\[ \psi(r, \theta, t) = \sum_{n=0}^{\infty} \sum_{m=1}^{\infty} A_{nm} J_n(j_{nm} r/a)\exp(in\theta)\exp(ic\,j_{nm} t/a). \tag{D.17} \]
Here now \(A_{nm} = R_{nm}\exp(i\sigma_n)\exp(i\tau_m)\). The constants \(R_{nm}\) can be related to the amplitude of a particular mode, \(\sigma_n\) to its orientation relative to the line \(\theta = 0\), and \(\tau_m\) to its phase (where in the sinusoidal temporal cycle it started).
[Figures: drum mode shapes for (n, m) = (0, 1), (0, 2), (1, 2) and (3, 4), produced by commands such as the following.]
<< NumericalMathBesselZeros;n = 1; m = 2;jnm = BesselJZeros[n, m][[m]];
radial[r_] = BesselJ[n, jnm r];azimuth[theta_] = Re[Exp[I n theta]];
wrap[f_, r_, theta_] = If[r < 1, f[r, theta], 0];polarr[x_, y_] = Sqrt[x^2 + y^2];
polartheta[x_, y_] = If[x > 0, ArcTan[y/x], If[y > 0, Pi/2 + ArcTan[-x/y],
-Pi/2 - ArcTan[x/y]]];mode[r_, theta_] = radial[r]azimuth[theta];
surf = Plot3D[wrap[mode, polarr[x, y], polartheta[x,y]], {x, -1, 1},{y, -1, 1},
PlotRange -> All, PlotPoints -> {100,100}, Lighting -> False, Mesh ->False,
Axes -> False, Boxed -> False];cont = ContourPlot[wrap[mode, polarr[x, y],
polartheta[x, y]], {x, -1,1}, {y, -1, 1}, PlotRange -> All, PlotPoints-> {100, 100},
FrameTicks->None];Show[GraphicsArray[{surf, cont}]]
D1.6 Incorporating the Initial Conditions
Our initial conditions are that, at t = 0, \(\psi = 0\) and \(\psi_t = f(r)\). Putting t = 0 in (D.17) gives
\[ 0 = \sum_{n=0}^{\infty} \sum_{m=1}^{\infty} A_{nm} J_n(j_{nm} r/a)\exp(in\theta). \]
Differentiating and putting t = 0 in (D.17),
\[ f(r) = \sum_{n=0}^{\infty} \sum_{m=1}^{\infty} (ic\,j_{nm}/a)\,A_{nm} J_n(j_{nm} r/a)\exp(in\theta), \]
where \(A_{nm}\) are complex constants (as above) and it is understood that we take the real part of the sum. The right-hand side has no θ-dependence, and we can deduce that we need only the component n = 0 from the first sum. Thus we must find \(A_{0m} = A_m = R_m\exp(i\tau_m)\), say, such that, taking real parts,
\[ 0 = \sum_{m=1}^{\infty} R_m\exp(i\tau_m)\,J_0(j_{0m} r/a), \tag{D.18} \]
\[ f(r) = \sum_{m=1}^{\infty} (ic\,j_{0m}/a)\,R_m\exp(i\tau_m)\,J_0(j_{0m} r/a). \tag{D.19} \]
This can be achieved by setting \(\exp(i\tau_m) = \pm i\), remembering that \(R_m\) is real. The choice \(-i\) is driven by a desire for neatness in (D.19).
Note that we have been a little over-general in our presentation here. Right at the start we could have described the time-dependence by the solution \(\sin(\omega t)\). This is zero but has a non-zero time derivative at t = 0, and is obviously what is required for our particular initial conditions. This corresponds to our current choice of a purely imaginary value for \(\exp(i\tau_m)\). Similarly, we could have realised that, as the initial and boundary conditions had no θ-dependence, neither has the final solution, and included only the n = 0 mode from the start. If there were some θ-dependence in the initial condition, then we would deal with each Fourier component (in θ) separately. We are looking to represent f(r) as a generalised Fourier series in r, otherwise known as a Fourier–Bessel series. This is possible as the differential equation satisfied by the Bessel functions, together with the boundary conditions (finiteness at r = 0 and vanishing at r = a), form a Sturm–Liouville problem.
D1.7 Orthogonality of Bessel Functions
The set of functions \(J_n(\lambda r)\) on \(0 \leq r \leq a\) with \(\lambda = j_{nm}/a\) are eigenfunctions of the (generalised) Sturm–Liouville problem
\[ r^2 R'' + r R' + (\lambda^2 r^2 - n^2) R = 0, \qquad R(0)\ \text{finite}, \quad R(a) = 0. \tag{D.20} \]
We can rewrite this as
\[ \big( r R' \big)' + \left( \lambda^2 r - \frac{n^2}{r} \right) R = 0, \]
with solution \(R = J_n(\lambda r)\). Let \(\lambda = \lambda_i\) be such that \(J_n(\lambda_i a) = 0\), and let \(\mu\) be a different value. We have
\[ \big( r\,[J_n(\lambda_i r)]' \big)' + \left( \lambda_i^2 r - \frac{n^2}{r} \right) J_n(\lambda_i r) = 0, \tag{D.21} \]
\[ \big( r\,[J_n(\mu r)]' \big)' + \left( \mu^2 r - \frac{n^2}{r} \right) J_n(\mu r) = 0. \tag{D.22} \]
If we multiply (D.21) by \(J_n(\mu r)\), (D.22) by \(J_n(\lambda_i r)\), subtract and integrate, we get
\[ \int_0^a \Big( J_n(\mu r)\big( r\,[J_n(\lambda_i r)]' \big)' - J_n(\lambda_i r)\big( r\,[J_n(\mu r)]' \big)' \Big)\,dr + (\lambda_i^2 - \mu^2) \int_0^a r\,J_n(\lambda_i r)\,J_n(\mu r)\,dr - n^2 \int_0^a \underbrace{\big( J_n(\lambda_i r)J_n(\mu r) - J_n(\mu r)J_n(\lambda_i r) \big)}_{=0}\,\frac{dr}{r} = 0. \tag{D.23} \]
Use integration by parts on the first integral to get
\[ \Big[ J_n(\mu r)\, r\,[J_n(\lambda_i r)]' - J_n(\lambda_i r)\, r\,[J_n(\mu r)]' \Big]_0^a - \int_0^a \underbrace{\Big( [J_n(\mu r)]'\, r\,[J_n(\lambda_i r)]' - [J_n(\lambda_i r)]'\, r\,[J_n(\mu r)]' \Big)}_{=0}\,dr + (\lambda_i^2 - \mu^2) \int_0^a r\,J_n(\lambda_i r)\,J_n(\mu r)\,dr = 0. \]
Now \([J_n(\mu r)]' = \mu J_n'(\mu r)\), so, using the fact that \(J_n(\lambda_i a) = 0\), we have the result
\[ \int_0^a r\,J_n(\lambda_i r)\,J_n(\mu r)\,dr = a\,\lambda_i\, J_n'(\lambda_i a)\,\frac{J_n(\mu a)}{\mu^2 - \lambda_i^2}. \tag{D.24} \]
Thus if \(\mu \neq \lambda_i\) and \(J_n(\mu a) = 0\), i.e. \(\mu\) is another, distinct, eigenvalue, then
\[ \int_0^a r\,J_n(\lambda_i r)\,J_n(\mu r)\,dr = 0. \tag{D.25} \]
To see what happens if \(\mu = \lambda_i\), let \(\mu \to \lambda_i\) and use l'Hôpital's rule:
\[ \int_0^a r\,J_n(\lambda_i r)^2\,dr = a\,\lambda_i\, J_n'(\lambda_i a)\, \frac{\dfrac{\partial}{\partial\mu} J_n(\mu a)\Big|_{\mu=\lambda_i}}{\dfrac{\partial}{\partial\mu}(\mu^2 - \lambda_i^2)\Big|_{\mu=\lambda_i}} = \frac{a^2}{2}\big[ J_n'(\lambda_i a) \big]^2. \tag{D.26} \]
We can take \(\mu\) and \(\lambda_i\) as different values \(j_{0m}/a\) built from distinct roots of \(J_0\). We shall see in the section on the generating function that follows that
\[ J_{n-1}(x) = \frac{n}{x} J_n(x) + J_n'(x), \]
so that, with n = 0, \(J_0' = J_{-1} = -J_1\) and hence \((J_0')^2 = (J_1)^2\), and we can easily evaluate the values of \(J_1\) at the zeros of \(J_0\).
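The orthogonality relation (D.25) and the normalisation (D.26) can be checked by quadrature. In the sketch below (ours), the zeros \(j_{01}\) and \(j_{02}\) are quoted from standard tables to more places than the table in section D1.4a.

```python
import math

def bessel_j(n, x, terms=40):
    """J_n(x) from the series (D.10); adequate for the range 0 <= x <= 6 used here."""
    total = 0.0
    for j in range(terms):
        total += (-1)**j / (math.gamma(j + 1) * math.gamma(j + n + 1)) * (x / 2.0)**(2*j + n)
    return total

def simpson(f, a, b, m=2000):
    """Composite Simpson's rule on [a, b] with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# First two zeros of J_0, quoted to ten places (the table above gives 2.4048, 5.5201).
j01, j02 = 2.4048255577, 5.5200781103
a = 1.0
cross = simpson(lambda r: r * bessel_j(0, j01 * r / a) * bessel_j(0, j02 * r / a), 0.0, a)
norm = simpson(lambda r: r * bessel_j(0, j01 * r / a)**2, 0.0, a)
```

`cross` vanishes to quadrature accuracy, while `norm` matches \(\tfrac{a^2}{2}J_1(j_{01})^2\), i.e. (D.26) with \(J_0' = -J_1\).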
D1.8 The solution
We can now consider (D.19), multiply by \(r\,J_0(r\,j_{0n}/a)\) and integrate between 0 and a. Only one element of the sum survives, due to the orthogonality, and we have
\[ \int_0^a r\,J_0(r\,j_{0n}/a)\,f(r)\,dr = R_n\,(c\,j_{0n}/a)(a^2/2)\big[ J_1(j_{0n}) \big]^2, \]
giving \(R_n\). We make the substitution \(A_{0m} = R_m\exp(i\tau_m) = -iR_m\) in (D.17) and take the real part of the result to yield
\[ \psi(r, \theta, t) = \sum_{m=1}^{\infty} \frac{2\displaystyle\int_0^a r\,J_0(r\,j_{0m}/a)\,f(r)\,dr}{c\,j_{0m}\,a\,\big[ J_1(j_{0m}) \big]^2}\; J_0(r\,j_{0m}/a)\,\sin(c\,j_{0m} t/a). \tag{D.27} \]
The animation at http://www.ucl.ac.uk/~ucahdrb/MATHM242/ illustrates this solution and used the following commands
<< NumericalMathBesselZeros;f[x_] = Sin[2Pi x]; stop = 10; c = 1;j0m = BesselJZeros[0,
stop]; coef = Map[2NIntegrate[x BesselJ[0, # x] f[x], {x, 0, 1}]/(c# BesselJ[1, #]^2) &,
j0m]; soln[r_, t_] := Tr[coef*Map[BesselJ[0, r#]&, j0m]*Map[Sin[c # t] &, j0m]];
res[t_] := Module[{},wrap[f_, r_]=If[r < 1, f[r, t], 0]; polarr[x_, y_] = Sqrt[x^2 + y^2];
surf = Plot3D[wrap[soln, polarr[x, y]], {x, -1, 1}, {y, -1, 1}, PlotRange -> {-.3, .3},
PlotPoints -> {20, 20},Lighting -> False, Mesh -> False, Axes -> False, Boxed -> False]];
Export["res.gif", Table[res[i], {i, 0, 6, .01}]]
D1.8a Generating Function
There is a generating function for the Bessel functions \(J_n\). It turns out that
\[ \sum_{n=-\infty}^{\infty} J_n(x)\,t^n = \exp\left( \frac{x}{2}\left( t - \frac{1}{t} \right) \right). \tag{D.28} \]
This can be derived by considering solutions of Helmholtz's equation for G(\(\mathbf{x}\)) in Cartesian and in polar coordinates:
\[ \nabla^2 G + G = 0, \qquad G_{xx} + G_{yy} + G = 0, \qquad r^2 G_{rr} + r G_r + G_{\theta\theta} + r^2 G = 0. \]
One solution is \(G = \exp(iy)\), corresponding to a plane wave. In polar coordinates this is \(G = \exp(ir\sin\theta)\). We have seen that the solution in polar coordinates can be found by separating out the angular dependence \(\exp(in\theta)\), leaving the radial dependence satisfying Bessel's equation. The plane wave is regular at the origin, so we do not need the \(Y_n\) solutions. We know from section A8 that we should be able to write G as follows:
\[ G = \exp(ir\sin\theta) = \sum_{n=-\infty}^{\infty} A_n J_n(r)\exp(in\theta) \tag{D.29} \]
for some constants \(A_n\). If we write \(t = \exp(i\theta)\), so that \(\sin\theta = (t - t^{-1})/2i\), and replace r by x, then we have
\[ \exp\left( \frac{x}{2}\left( t - \frac{1}{t} \right) \right) = \sum_{n=-\infty}^{\infty} A_n J_n(x)\,t^n. \tag{D.30} \]
To show that the \(A_n\) are all equal to one, we use our knowledge of the expansions of \(J_n(x)\) for small x, obtained from the series solutions. From (D.12),
\[ \frac{d^n J_n}{dx^n}(0) = 2^{-n}. \]
We start by isolating a particular \(J_m\) from the sum (D.29) by effectively using the orthogonality properties of \(\cos(n\theta)\) and \(\sin(n\theta)\) which lie behind the idea of Fourier series. We multiply (D.29) by \(\exp(-im\theta)\) and integrate:
\[ \int_0^{2\pi} \exp(ir\sin\theta - im\theta)\,d\theta = \sum_{n=-\infty}^{\infty} A_n J_n(r) \int_0^{2\pi} \exp(in\theta - im\theta)\,d\theta = 2\pi A_m J_m(r). \]
Now differentiate m times with respect to r and put r = 0 to find
\[ \int_0^{2\pi} i^m (\sin^m\theta)\exp(ir\sin\theta - im\theta)\,d\theta\Big|_{r=0} = 2\pi A_m \frac{d^m J_m}{dr^m}(0), \]
i.e.
\[ \int_0^{2\pi} i^m \sin^m\theta\,\exp(-im\theta)\,d\theta = 2\pi A_m / 2^m. \]
Using the exponential form for \(\sin\theta\) gives
\[ A_m = \frac{2^m}{2\pi} \int_0^{2\pi} \frac{1}{2^m}\big( \exp i\theta - \exp(-i\theta) \big)^m \exp(-im\theta)\,d\theta = \frac{1}{2\pi} \int_0^{2\pi} \big( 1 - \exp(-2i\theta) \big)^m\,d\theta = \frac{1}{2\pi}\,2\pi = 1, \]
as the only non-zero contribution to the integral comes from integrating the first term in the binomial expansion of the integrand.
D1.8b Use of the Generating Function
1. We see that
\[ \frac{\partial G}{\partial x} = \frac{1}{2}\left( t - \frac{1}{t} \right) G, \]
so that
\[ \sum_{n=-\infty}^{\infty} t^n J_n'(x) = \frac{1}{2} \sum_{n=-\infty}^{\infty} t^{n+1} J_n(x) - \frac{1}{2} \sum_{n=-\infty}^{\infty} t^{n-1} J_n(x), \]
and, comparing coefficients of powers of t,
\[ J_n'(x) = \frac{1}{2}\big( J_{n-1}(x) - J_{n+1}(x) \big). \tag{D.31} \]
2. If we make the replacement \(t \to -1/t\), we see that
\[ G(x, -1/t) = \exp\left( \frac{x}{2}\left( -\frac{1}{t} + t \right) \right) = \exp\left( \frac{x}{2}\left( t - \frac{1}{t} \right) \right) = G(x, t), \]
so that
\[ \sum_{n=-\infty}^{\infty} J_n(x)\,\frac{(-1)^n}{t^n} = \sum_{n=-\infty}^{\infty} t^n J_n(x). \]
However, the left-hand side is also \(\sum_{n=-\infty}^{\infty} t^n J_{-n}(x)(-1)^n\), using \(-n\), rather than n, to take us through the sum. Thus
\[ \sum_{n=-\infty}^{\infty} t^n J_{-n}(x)\,(-1)^n = \sum_{n=-\infty}^{\infty} t^n J_n(x), \]
and, comparing coefficients of \(t^n\),
\[ J_{-n}(x) = (-1)^n J_n(x), \tag{D.32} \]
so that
\[ J_{-1}(x) = -J_1(x), \qquad J_{-2}(x) = J_2(x). \]
Thus from (D.31), with n = 0,
\[ J_0'(x) = -J_1(x). \tag{D.33} \]
3. If we choose to differentiate G with respect to t, then we can show
\[ \frac{2n}{x} J_n(x) = J_{n-1}(x) + J_{n+1}(x). \tag{D.34} \]
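The recurrences (D.31), (D.33) and (D.34) are easily spot-checked numerically from the series, extended to negative integer order via (D.32). A sketch (function names ours):

```python
import math

def bessel_j(n, x, terms=40):
    """J_n(x) for integer n of either sign, using J_{-n} = (-1)^n J_n for n < 0."""
    if n < 0:
        return (-1)**(-n) * bessel_j(-n, x, terms)
    total = 0.0
    for j in range(terms):
        total += (-1)**j / (math.gamma(j + 1) * math.gamma(j + n + 1)) * (x / 2.0)**(2*j + n)
    return total

def dJ(n, x, h=1e-6):
    """Central-difference estimate of J_n'(x)."""
    return (bessel_j(n, x + h) - bessel_j(n, x - h)) / (2 * h)
```

(D.34) holds to machine precision since it is an exact identity of the series, while (D.31) and (D.33) hold to the finite-difference error.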