
A Feedback Neural Network Approach for

Electromagnetic NDE Signal Inversion



Pradeep Ramuhalli, Muhammad Afzal, Kyung-Tae Hwang, Satish Udpa,
Lalita Udpa
Materials Assessment Research Group,
Department of Electrical and Computer Engineering
Iowa State University,
Ames, IA 50010, USA

Abstract: A new neural network-based approach for solving inverse problems
in magnetostatic nondestructive evaluation (NDE) is presented. The approach
employs two neural networks - a forward network and an inverse network - in
a feedback configuration for characterizing defects using magnetic flux leakage
(MFL) data. Initial simulation results show that the suggested approach is
efficient and yields promising results.


1. Introduction

The solution of inverse problems is of interest in a variety of applications ranging
from geophysical exploration to medical diagnosis and NDE. Inverse problems in NDE
involve estimation of a defect profile on the basis of information contained in a measured
NDE probe signal. This paper describes a method for reconstructing defect profiles generated
by magnetic flux leakage inspection tools that are used extensively for evaluating the
integrity of ferromagnetic components.
Solution techniques for inverse problems can be divided into two broad categories:
phenomenological and non-phenomenological approaches. Phenomenological approaches
typically employ a forward model that simulates the underlying physical process to solve the
inverse problem. The forward model is used in a feedback configuration (Figure 1) where the
output of the model is compared to the experimentally measured signal. If the prediction
error is less than some preset threshold, the initial solution is assumed to be the desired
defect profile. On the other hand, a higher error indicates the need for further refinement of
the solution. This process is carried out in an iterative manner until the solution is reached.
The second class of approaches, non-phenomenological approaches, attempts to
solve the inverse problem by using signal processing techniques. Typical methods include
calibration methods and neural networks. In the case of the latter, the problem is formulated
as a function approximation problem and the underlying function mapping is learned by a
neural network.
Methods utilizing the previously described approaches have been reported
extensively in the literature [1-3]. However, these methods have certain drawbacks. Neural
network-based techniques are open loop in nature and are capable of providing a confidence
measure for accuracy only during the training phase. The FEM-based approach is, in general,
not suitable for real-life applications due to the computational complexity involved. This
paper presents an alternative method for solving inverse problems in NDE that incorporates the
strengths of both of the above-mentioned techniques. The suggested technique is capable of
incremental learning, provides an online measure for accuracy of the defect estimate, and is
computationally efficient.

Figure 1. The phenomenological approach to solving the inverse problem.
The rest of this paper is organized as follows. Section 2 proposes the feedback neural
network scheme for defect characterization and describes the wavelet basis function neural
network (WBFNN) and the radial basis function neural network (RBFNN) used in the
scheme. Section 3 presents initial results of applying the algorithm to MFL signals. Finally,
conclusions and suggestions for future work are contained in Section 4.


2. Feedback Neural Network Approach

Inverse problems in NDE involve the estimation of defect profiles in materials. This
problem can be formulated as a function approximation problem and the solution can be
obtained using artificial neural networks. In order to retain the advantages of the two solution
techniques described earlier, and to overcome the disadvantages described, a feedback neural
network scheme is proposed here for solving the inverse problem. The feedback neural
network (FBNN) approach is depicted in Figure 2. Two neural networks are used in a
feedback configuration. The forward network predicts the signal corresponding to a defect
profile while the inverse network predicts a profile given an NDE signal. The forward
network replaces the finite element model employed in a typical phenomenological approach
and provides a reference for comparing the defect profile predicted by the inverse neural
network.
The overall approach to solving the inverse problem is as follows. The signal from a
defect of unknown profile is input to the characterization neural network to obtain an
estimate of the profile. This estimate is then input into the forward network to obtain the
corresponding prediction of the MFL signal for that estimate of the profile. If the estimated
defect profile is close to the true profile, the measured MFL signal and the predicted signal
from the forward network will be similar to each other. On the other hand, if the error
exceeds a threshold T, the training mode is invoked and the networks are retrained with the
correct defect profile-MFL signal dataset.
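To make the loop concrete, the following sketch illustrates the prediction/feedback logic under stated assumptions: `inverse_net` and `forward_net` are placeholders for the trained characterization and forward networks, `retrain` stands in for the training-mode routine that updates the networks with a correct profile/signal pair, and the squared-error measure is a simplification for illustration. None of these names come from the paper.

```python
import numpy as np

def characterize(measured_signal, inverse_net, forward_net, threshold, retrain=None):
    """Feedback prediction mode: estimate a defect profile, then validate the
    estimate by passing it back through the forward network."""
    profile_estimate = inverse_net(measured_signal)    # inverse mapping: signal -> profile
    predicted_signal = forward_net(profile_estimate)   # forward mapping: profile -> signal
    error = float(np.sum((measured_signal - predicted_signal) ** 2))
    if error > threshold and retrain is not None:
        # Error exceeds the preset threshold T: invoke the training mode with the
        # correct defect profile / MFL signal pair (from measurement and/or FEM),
        # then re-estimate the profile with the updated inverse network.
        retrain(measured_signal)
        profile_estimate = inverse_net(measured_signal)
        error = float(np.sum((measured_signal - forward_net(profile_estimate)) ** 2))
    return profile_estimate, error
```

In this sketch the forward-network prediction error doubles as the online confidence measure mentioned above: a small residual suggests the profile estimate can be trusted, while a large residual triggers retraining.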




2.1 The Forward Network

Since the forward neural network serves as a "standard" for measuring the
performance of the FBNN scheme, it must be capable of accurately estimating the signal
obtained from a variety of defect profiles. A wavelet basis function neural network is used
for implementing the forward network. The structure of a wavelet basis function network
(WBFNN) is shown in Figure 3. The WBFNN uses a multiresolution function approximation
[3] given by

$$ f(\mathbf{x}) = \sum_{k=1}^{N_L} s_k^L\,\phi_k^L(\mathbf{x}) + \sum_{j=1}^{L}\sum_{k=1}^{N_j} d_k^j\,\psi_k^j(\mathbf{x}) \qquad (1) $$

where $\phi_k^L$ are the scaling functions at the coarsest resolution $L$, $\psi_k^j$ are the wavelet functions at resolution $j$, and $s_k^L$ and $d_k^j$ are the corresponding output-layer weights.
Figure 2. Schematic of the feedback neural network approach (Prediction mode).

These networks use a single hidden layer with sets of function nodes depending on
the number of resolutions. A family of wavelets is used as the basis functions and the
network is fully interconnected. Training of WBFNN networks involves determining the
weights connecting the hidden layer nodes to the output layer nodes as well as the centers
and spreads of the basis functions. Centers of the scaling functions at the coarse (or first)
resolution are determined by using a K-means clustering algorithm while the centers of the
wavelet functions at higher (or finer) resolutions are computed using a dyadic grid. The
spreads of these functions are set proportional to the cluster sizes. The interconnection
weights are then computed using a matrix inversion step. The network used in this study
employs Mexican hat functions as the wavelets and a Gaussian function as the scaling
function.

Figure 3. The wavelet basis function neural network.
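As a rough illustration of the mapping in equation (1), the sketch below evaluates a WBFNN output from a set of scaling-function nodes (Gaussian) and wavelet nodes (Mexican hat), assuming the centers, spreads, and weights have already been obtained as described above (K-means clustering, dyadic grid, matrix inversion). The data structures and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_scaling(r):
    """Gaussian scaling function of the normalized radius r = ||x - c|| / sigma."""
    return np.exp(-0.5 * r ** 2)

def mexican_hat(r):
    """Mexican hat wavelet of the normalized radius r = ||x - c|| / sigma."""
    return (1.0 - r ** 2) * np.exp(-0.5 * r ** 2)

def wbfnn_output(x, scaling_nodes, wavelet_nodes):
    """Evaluate equation (1): sum of scaling-function terms at the coarsest
    resolution plus wavelet terms at the finer resolutions.

    scaling_nodes: list of (center, spread, weight_vector) for the coarsest level.
    wavelet_nodes: list of (center, spread, weight_vector) for the finer levels.
    Each weight_vector has one entry per output node of the network.
    """
    y = 0.0
    for c, sigma, w in scaling_nodes:
        y = y + w * gaussian_scaling(np.linalg.norm(x - c) / sigma)
    for c, sigma, w in wavelet_nodes:
        y = y + w * mexican_hat(np.linalg.norm(x - c) / sigma)
    return y
```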

2.2 The Inverse Network

A radial basis function neural network (RBFNN) is used as the inverse network for
characterizing the defect profiles. The RBFNN (Figure 4) is a three-layer function
approximation network [4]. The structure of the network is similar to that of a WBFNN. The
difference lies in the fact that the RBFNN uses a single set of basis functions (the scaling
functions in the WBFNN). The training algorithm for the RBFNN is similar to that of the
WBFNN with the centers of the basis functions determined by using a clustering algorithm.
The spread of each basis function is proportional to the cluster size. Alternatively, it may be
set to some common constant value for all bases. The output interconnection weights are
then determined using a matrix inversion step.
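The sketch below shows one plausible reading of this training procedure: K-means clustering to place the centers, spreads taken from the cluster sizes, and output weights obtained by a least-squares (pseudo-inverse) solve. Function and variable names are assumptions for illustration; SciPy's `kmeans2` is used only as a convenient clustering routine.

```python
import numpy as np
from scipy.cluster.vq import kmeans2  # any K-means implementation would do

def train_rbfnn(X, D, n_centers):
    """X: (n_samples, n_inputs) training signals, D: (n_samples, n_outputs) target profiles."""
    centers, labels = kmeans2(X, n_centers, minit='points')
    # Spread of each basis function proportional to the size of its cluster;
    # empty or single-member clusters fall back to the mean spread.
    spreads = np.array([X[labels == j].std() if np.sum(labels == j) > 1 else 0.0
                        for j in range(n_centers)])
    if np.any(spreads == 0):
        spreads[spreads == 0] = spreads[spreads > 0].mean() if np.any(spreads > 0) else 1.0
    # Hidden-layer activations (Gaussian bases), then output weights via least squares,
    # which plays the role of the matrix inversion step described above.
    dist2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    Phi = np.exp(-dist2 / (2.0 * spreads[None, :] ** 2))
    W, *_ = np.linalg.lstsq(Phi, D, rcond=None)
    return centers, spreads, W.T   # W.T has shape (n_outputs, n_hidden), i.e. the weights w_kj
```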

Figure 4. The radial basis function neural network.
Once the inverse network is trained, its parameters need to be optimized. This
process is referred to as the training mode. The goal of the optimization step is to minimize
the error due to the inverse RBFNN. Let $\Delta f$ be the error between the actual MFL signal and
the prediction of the forward network in the feedback configuration. In order for $\Delta f$ to be
zero, the characterization network must be an exact inverse of the forward network. While
the functional form of the forward network can be derived easily, obtaining its inverse
analytically is difficult, since the output of each network is a function of the number and
location of its basis function centers. The inverse is, therefore, estimated numerically.
An adaptive scheme is used to estimate the inverse of the forward network as shown
in Figure 5. This "inverse network" is used as the characterization network.
Let
$E$ = the error at the output of the inverse network in Figure 5,
$w_{kj}$ = the interconnection weight from node $j$ in the hidden layer to node $k$ in the output layer,
$\mathbf{c}_j$ = the center of the $j$th basis function (at node $j$ in the hidden layer),
$\sigma_j$ = the spread of the $j$th basis function,
$\mathbf{f}$ = the input signal,
$\mathbf{d} = (d_1, d_2, \ldots, d_k, \ldots, d_n)$ = the desired output of the RBF network, and
$\hat{\mathbf{d}} = (\hat{d}_1, \hat{d}_2, \ldots, \hat{d}_k, \ldots, \hat{d}_n)$ = the actual output of the RBF network.
Then, the error E can be defined as
$$ E = \sum_{k=1}^{N} \left(d_k - \hat{d}_k\right)^2 \qquad (2) $$
where $\hat{d}_k$ is given by
$$ \hat{d}_k = \sum_{j=1}^{l} w_{kj}\,\phi\!\left(\frac{\left\|\mathbf{f}-\mathbf{c}_j\right\|^2}{2\sigma_j^2}\right) \qquad (3) $$
and the basis function is chosen to be a Gaussian function:
$$ \phi\!\left(\frac{\left\|\mathbf{f}-\mathbf{c}_j\right\|^2}{2\sigma_j^2}\right) = \exp\!\left(-\frac{\left\|\mathbf{f}-\mathbf{c}_j\right\|^2}{2\sigma_j^2}\right) \qquad (4) $$
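Equations (3) and (4) translate directly into a short evaluation routine. The sketch below assumes the centers, spreads, and weight matrix produced by the training step sketched earlier; the names are illustrative, not the authors' code.

```python
import numpy as np

def gaussian_basis(f, c_j, sigma_j):
    """Equation (4): exp(-||f - c_j||^2 / (2 * sigma_j^2))."""
    return np.exp(-np.sum((f - c_j) ** 2) / (2.0 * sigma_j ** 2))

def rbfnn_output(f, centers, spreads, W):
    """Equation (3): d_hat_k = sum_j w_kj * phi_j(f); W has shape (n_outputs, n_hidden)."""
    phi = np.array([gaussian_basis(f, c, s) for c, s in zip(centers, spreads)])
    return W @ phi
```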
Figure 5. Feedback neural network: Training mode.
Substituting equation (3) into equation (2) and taking the derivative with respect to
the weights $w_{kj}$, we have
$$ \frac{\partial E}{\partial w_{kj}} = -2\left(d_k - \hat{d}_k\right)\phi\!\left(\frac{\left\|\mathbf{f}-\mathbf{c}_j\right\|^2}{2\sigma_j^2}\right) \qquad (5) $$
Similarly, the derivative of the error with respect to the other two parameters ($\mathbf{c}_j$ and $\sigma_j$) can
be computed as follows:
$$ \frac{\partial E}{\partial c_{ji}} = -2\sum_{k=1}^{n}\left(d_k - \hat{d}_k\right) w_{kj}\,\phi\!\left(\frac{\left\|\mathbf{f}-\mathbf{c}_j\right\|^2}{2\sigma_j^2}\right)\frac{\left(f_i - c_{ji}\right)}{\sigma_j^2} \qquad (6) $$
$$ \frac{\partial E}{\partial \sigma_j} = -2\sum_{k=1}^{n}\left(d_k - \hat{d}_k\right) w_{kj}\,\phi\!\left(\frac{\left\|\mathbf{f}-\mathbf{c}_j\right\|^2}{2\sigma_j^2}\right)\frac{\left\|\mathbf{f}-\mathbf{c}_j\right\|^2}{\sigma_j^3} \qquad (7) $$
The derivatives are then substituted into the gradient descent equation to derive the update
equations for the three parameters. The gradient descent equation is given by
$$ x_{new} = x_{old} - \eta\,\frac{\partial E}{\partial x} \qquad (8) $$
where $x$ is the parameter of interest ($w_{kj}$, $c_{ji}$ or $\sigma_j$) and $\eta$ is the learning rate.
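The update equations (5)-(8) amount to one gradient-descent step over the three parameter sets. The sketch below implements that step for a single training pair (f, d), using the shape conventions of the earlier sketches (W of shape (n_outputs, n_hidden)); the learning rate `eta` and all names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def update_inverse_network(f, d, centers, spreads, W, eta=1e-3):
    """One training-mode gradient step; returns updated (centers, spreads, W)."""
    diff = f[None, :] - centers                      # (n_hidden, n_inputs): f - c_j
    dist2 = np.sum(diff ** 2, axis=1)                # ||f - c_j||^2
    phi = np.exp(-dist2 / (2.0 * spreads ** 2))      # equation (4)
    d_hat = W @ phi                                  # equation (3)
    e = d - d_hat                                    # (d_k - d_hat_k)

    grad_W = -2.0 * np.outer(e, phi)                                            # equation (5)
    grad_c = -2.0 * ((W * phi).T @ e)[:, None] * diff / spreads[:, None] ** 2   # equation (6)
    grad_sigma = -2.0 * ((W * phi).T @ e) * dist2 / spreads ** 3                # equation (7)

    # Equation (8): x_new = x_old - eta * dE/dx for each parameter set.
    return centers - eta * grad_c, spreads - eta * grad_sigma, W - eta * grad_W
```

Looping this step over the training pairs until the error in equation (2) stops decreasing would constitute the training mode sketched in Figure 5.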
Once the characterization network is trained and optimized, the two networks are
connected in the feedback configuration shown in Figure 2. The characterization network can
then be used for predicting flaw profiles using signals obtained from defects of unknown
shape and size.
3. Results and Discussion

The algorithm was tested with the tangential component of magnetic flux leakage
(MFL) data generated by a 2D FEM employing a 100x100-element mesh. MFL techniques
are used extensively for the inspection of ferromagnetic materials where the measured signal
consists of changes in magnetic flux density as the probe scans the sample surface. A wavelet
basis function neural network (WBFNN) is used as the forward network, while a radial
basis function neural network (RBFNN) is used as the inverse network for characterization. The
WBFNN uses 3 resolution levels with 5 centers at the coarsest resolution. The centers at the
other resolutions are computed using a dyadic grid (a total of 31 hidden nodes). The number
of input nodes is equal to the number of points (100) used to approximate the defect profile.
The number of output nodes is equal to the length of each signal (also 100 points). The
RBFNN uses 150 basis functions in the hidden layer. 218 defect profile-MFL signal pairs
were used in the training set and 22 signals were used for testing with no overlap between the
two data sets.
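For reference, the stated dimensions can be collected in a small configuration record, as sketched below; the dataclass and its field names are purely illustrative, and only the numeric values come from the text.

```python
from dataclasses import dataclass

@dataclass
class FBNNConfig:
    n_profile_points: int = 100    # input nodes of the forward WBFNN (defect profile samples)
    n_signal_points: int = 100     # output nodes of the forward WBFNN (MFL signal samples)
    wbfnn_resolutions: int = 3     # resolution levels in the forward WBFNN
    wbfnn_coarse_centers: int = 5  # centers at the coarsest resolution (dyadic grid above it)
    wbfnn_hidden_nodes: int = 31   # total hidden nodes across all resolutions
    rbfnn_hidden_nodes: int = 150  # basis functions in the inverse RBFNN
    n_training_pairs: int = 218    # defect profile / MFL signal pairs used for training
    n_test_signals: int = 22       # disjoint signals used for testing

config = FBNNConfig()
```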
Figure 6 shows the results of training the forward network. The solid line shows the
true signal while the dotted line shows the neural network prediction. These plots indicate
that the forward network is capable of predicting the signal with little error. A typical
prediction result is shown in Figure 7. The solid line in Figure 7(a) shows the true signal.
This signal is applied to the RBFNN, which has not yet been optimized. The prediction
result of the RBFNN is shown in Figure 7(b). The resulting profile estimate is then applied to
the forward network, and the corresponding output is shown in Figure 7(a). The results after
optimizing the inverse network are also shown in Figures 7(a) and (b). Similar results
obtained using signals from defects with other geometries are shown in Figures 8 and 9.
Figure 6. Training results for the forward network. Panels: Signal #1 (defect width = 1, depth = 60%), Signal #10 (width = 3.4, depth = 60%), Signal #20 (width = 6.2, depth = 35%), Signal #22 (width = 6.6, depth = 60%); solid line: true signal, dotted line: predicted signal.
Figure 7. Feedback neural network results (Flaw #11: 3.8", 35%). (a) MFL signal #11: true MFL, initial prediction, optimized prediction. (b) Profile: true profile, initial prediction, final prediction.

Figure 8. Feedback neural network result (Flaw #13: 4.2", 60%). (a) MFL signal #13: true MFL, initial prediction, optimized prediction. (b) Profile: true profile, initial prediction, final prediction.
Figure 9. Feedback neural network result (Flaw #14: 4.6", 35%). (a) MFL signal #14: true MFL, initial prediction, optimized prediction. (b) Profile: true profile, initial prediction, final prediction.
These results indicate that the optimization process improves the prediction results. In
addition, the use of a forward network in a feedback configuration provides a measure of the
error in the characterization, with the error in the defect profile prediction being proportional
to the error in the signal prediction.


4. Conclusions
A feedback neural network scheme has been proposed for the solution of NDE
inverse problems. This scheme uses a closed loop design and consequently, provides an
estimate of the error during the test phase. Initial results on magnetic flux leakage data from
the inspection of gas transmission pipelines indicate that the technique is capable of
predicting the defect profiles reasonably well.
Future work will concentrate on the use of experimental data and the use of noisy
signals to test the robustness of the system. This scheme also needs to be tested for
reconstructing 3-dimensional flaws.


REFERENCES

[1] M. Yan, M. Afzal, S. Udpa, S. Mandayam, Y. Sun, L. Udpa and P. Sacks, "Iterative Algorithms for
Electromagnetic NDE Signal Inversion," ENDE '97, September 14-16, 1997, Reggio Calabria, Italy.
[2] S. Hoole, S. Subramaniam, R. Saldanha, J. Coulomb, and J. Sabonnadiere, "Inverse Problem Methodology
and Finite Elements in The Identification of Cracks, Sources, Materials and Their Geometry in
Inaccessible Locations," IEEE Transactions on Magnetics, Vol.27, No.3, 1991.
[3] K. Hwang, W. Lord, S. Mandayam, L. Udpa, S. Udpa, "A Multiresolution Approach for Characterizing
MFL Signatures from Gas Pipeline Inspections," Review of Progress in Quantitative Nondestructive
Evaluation, Plenum Press, New York, Vol.16, pp.733-739, 1997.
[4] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, Inc., New Jersey, 1994.
