Figure 2. Schematic of the feedback neural network approach (Prediction mode).
These networks use a single hidden layer containing one set of function nodes per
resolution. A family of wavelets is used as the basis functions and the
network is fully interconnected. Training of WBFNN networks involves determining the
weights connecting the hidden layer nodes to the output layer nodes as well as the centers
and spreads of the basis functions. Centers of the scaling functions at the coarse (or first)
resolution are determined by using a K-means clustering algorithm while the centers of the
wavelet functions at higher (or finer) resolutions are computed using a dyadic grid.

Figure 3. The wavelet basis function neural network.

The spreads of these functions are set proportional to the cluster sizes. The interconnection
weights are then computed using a matrix inversion step. The network used in this study
employs Mexican hat functions as the wavelet and a Gaussian function as the scaling
function.
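As a rough illustration of this recipe (K-means for the coarse centers, a dyadic grid for the finer wavelet centers, and a least-squares solve for the output weights), the sketch below trains a one-dimensional WBFNN in NumPy. The center counts, spread rule, and grid construction are simplifying assumptions for illustration, not the exact configuration used in the study:

```python
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    """K-means clustering to place the coarse-resolution scaling-function centers."""
    rng = np.random.default_rng(seed)
    c = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - c[None, :]), axis=1)
        for j in range(k):
            if np.any(lab == j):
                c[j] = x[lab == j].mean()
    return np.sort(c)

def wbfnn_design(x, coarse_c, coarse_s, levels):
    """Hidden-layer outputs: Gaussian scaling functions at the coarse
    resolution plus Mexican hat wavelets on a dyadic grid at finer ones."""
    r = (x[:, None] - coarse_c[None, :]) / coarse_s
    cols = [np.exp(-r**2)]                          # Gaussian scaling functions
    lo, hi = x.min(), x.max()
    n = len(coarse_c)
    for _ in range(levels):
        n *= 2                                      # dyadic grid: double per level
        cw = np.linspace(lo, hi, n)
        s = (hi - lo) / n
        rw = (x[:, None] - cw[None, :]) / s
        cols.append((1.0 - rw**2) * np.exp(-rw**2 / 2.0))  # Mexican hat wavelets
    return np.hstack(cols)

def train_wbfnn(x, y, k=5, levels=2):
    c = kmeans_1d(x, k)
    s = (x.max() - x.min()) / k                     # spread ~ cluster size (simplified)
    Phi = wbfnn_design(x, c, s, levels)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # the matrix-inversion step
    return c, s, w

def wbfnn_predict(x, c, s, levels, w):
    return wbfnn_design(x, c, s, levels) @ w
```

Because the basis parameters are fixed before the weights are solved for, training reduces to a single linear least-squares problem rather than an iterative search.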
2.2 The Inverse Network
A radial basis function neural network (RBFNN) is used as the inverse network for
characterizing the defect profiles. The RBFNN (Figure 4) is a three-layer function
approximation network [4]. The structure of the network is similar to that of a WBFNN. The
difference lies in the fact that the RBFNN uses a single set of basis functions (the scaling
functions in the WBFNN). The training algorithm for the RBFNN is similar to that of the
WBFNN with the centers of the basis functions determined by using a clustering algorithm.
The spread of each basis function is proportional to the cluster size. Alternatively, it may be
set to some common constant value for all bases. The output interconnection weights are
then determined using a matrix inversion step.
Figure 4. The Radial basis function neural network.
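A minimal NumPy sketch of this training procedure (clustering for the centers, cluster-size spreads with a common-constant fallback, and a pseudoinverse solve for the output weights) might look as follows. The data shapes and center count are illustrative assumptions, not the configuration used in the study:

```python
import numpy as np

def train_rbfnn(F, D, n_centers, iters=30, seed=0):
    """Train an RBFNN mapping signals F (samples x signal_len)
    to profiles D (samples x profile_len)."""
    rng = np.random.default_rng(seed)
    C = F[rng.choice(len(F), size=n_centers, replace=False)].copy()
    for _ in range(iters):                                  # clustering for centers
        lab = np.argmin(((F[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(n_centers):
            if np.any(lab == j):
                C[j] = F[lab == j].mean(axis=0)
    d2 = ((F[:, None, :] - C[None, :, :]) ** 2).sum(-1)     # squared distances to centers
    lab = np.argmin(d2, axis=1)
    # spread of each basis ~ cluster size, falling back to a common constant
    sig = np.array([np.sqrt(d2[lab == j, j].mean()) if np.any(lab == j) else 0.0
                    for j in range(n_centers)])
    sig = np.where(sig > 1e-9, sig, np.sqrt(d2.mean()))
    Phi = np.exp(-d2 / (2.0 * sig**2))                      # Gaussian basis outputs
    W = np.linalg.pinv(Phi) @ D                             # matrix-inversion step
    return C, sig, W

def rbfnn_predict(F, C, sig, W):
    d2 = ((F[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sig**2)) @ W
```

Note that the only structural difference from the WBFNN sketch above is the single set of Gaussian bases in place of the multiresolution scaling/wavelet families.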
Once the inverse network is trained, its parameters need to be optimized. This
process is referred to as the training mode. The goal of the optimization step is to minimize
the error due to the inverse RBFNN. Let \epsilon be the error between the actual MFL signal and
the prediction of the forward network in the feedback configuration. In order for \epsilon to be
zero, the characterization network must be an exact inverse of the forward network. While
the functional form of the forward network can be derived easily, obtaining its inverse
analytically is difficult, since the output of each network is a function of the number and
location of its basis function centers. The inverse is, therefore, estimated numerically.
An adaptive scheme is used to estimate the inverse of the forward network as shown
in Figure 5. This "inverse network" is used as the characterization network.
Let E = the error at the output of the inverse network in Figure 5,
w_{kj} = interconnection weight from node j in the hidden layer to node k in the output layer,
c_j = center of the j-th basis function (at node j in the hidden layer),
\sigma_j = spread of the j-th basis function,
f = the signal,
d = (d_1, d_2, ..., d_k, ..., d_n) = the desired output of the RBF network, and
\hat{d} = (\hat{d}_1, \hat{d}_2, ..., \hat{d}_k, ..., \hat{d}_n) = the actual output of the RBF network.
Then, the error E can be defined as

E = \sum_{k=1}^{N} \left( d_k - \hat{d}_k \right)^2     (2)
where \hat{d}_k is given by

\hat{d}_k = \sum_{j=1}^{l} w_{kj} \, \phi\!\left( \frac{\| f - c_j \|^2}{2 \sigma_j^2} \right)     (3)
and the basis function is chosen to be a Gaussian function:

\phi\!\left( \frac{\| f - c_j \|^2}{2 \sigma_j^2} \right) = \exp\!\left( - \frac{\| f - c_j \|^2}{2 \sigma_j^2} \right)     (4)
Figure 5. Feedback neural network: Training mode. (Forward NN and inverse NN connected in a loop; trained on signal-profile pairs (f_j, d_j) obtained from measurement and/or FEM.)
Substituting equation (3) into equation (2) and taking the derivative with respect to
the weights w_{kj}, we have

\frac{\partial E}{\partial w_{kj}} = -2 \left( d_k - \hat{d}_k \right) \phi\!\left( \frac{\| f - c_j \|^2}{2 \sigma_j^2} \right)     (5)
Similarly, the derivatives of the error with respect to the other two parameters (c_j and \sigma_j) can
be computed as follows:

\frac{\partial E}{\partial c_{ji}} = -2 \sum_{k=1}^{N} \left( d_k - \hat{d}_k \right) w_{kj} \, \phi\!\left( \frac{\| f - c_j \|^2}{2 \sigma_j^2} \right) \frac{f_i - c_{ji}}{\sigma_j^2}     (6)

\frac{\partial E}{\partial \sigma_j} = -2 \sum_{k=1}^{N} \left( d_k - \hat{d}_k \right) w_{kj} \, \phi\!\left( \frac{\| f - c_j \|^2}{2 \sigma_j^2} \right) \frac{\| f - c_j \|^2}{\sigma_j^3}     (7)
The derivatives are then substituted into the gradient descent equation to derive the update
equations for the three parameters. The gradient descent equation is given by

x_{\text{new}} = x_{\text{old}} - \eta \left( \frac{\partial E}{\partial x} \right)     (8)

where x is the parameter of interest (w_{kj}, c_{ji}, or \sigma_j) and \eta is the learning rate.
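The update step defined by equations (3)-(8) can be transcribed almost directly into code. The sketch below is a hypothetical single-sample NumPy implementation; the array shapes and learning rate are illustrative assumptions:

```python
import numpy as np

def gradient_step(f, d, C, sig, W, eta=0.01):
    """One gradient descent update of W (eq. 5), C (eq. 6) and sig (eq. 7)
    via eq. (8), for a single training pair (f, d).
    f: (m,) input signal, d: (n,) desired output,
    C: (l, m) centers, sig: (l,) spreads, W: (l, n) output weights."""
    diff = f[None, :] - C                        # f_i - c_ji, shape (l, m)
    r2 = (diff ** 2).sum(axis=1)                 # ||f - c_j||^2, shape (l,)
    phi = np.exp(-r2 / (2.0 * sig**2))           # Gaussian basis outputs, eq. (4)
    e = d - phi @ W                              # d_k - d_hat_k, eqs. (2)-(3)
    gW = -2.0 * np.outer(phi, e)                 # eq. (5)
    s = W @ e                                    # sum_k w_kj (d_k - d_hat_k), per j
    gC = -2.0 * (s * phi / sig**2)[:, None] * diff   # eq. (6)
    gsig = -2.0 * s * phi * r2 / sig**3              # eq. (7)
    # eq. (8): x_new = x_old - eta * dE/dx, for each of the three parameters
    return W - eta * gW, C - eta * gC, sig - eta * gsig
```

With a sufficiently small \eta, repeated application of this step drives the error E of equation (2) downward.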
Once the characterization network is trained and optimized, the two networks are
connected in the feedback configuration shown in Figure 1. The characterization network can
then be used for predicting flaw profiles using signals obtained from defects of unknown
shape and size.
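In prediction mode, the feedback configuration amounts to composing the two trained networks and using the signal-domain residual as a confidence measure. A schematic sketch, with the two networks standing in as plain callables (toy linear maps here, purely for illustration):

```python
import numpy as np

def feedback_predict(signal, inverse_net, forward_net):
    """Run the inverse (characterization) network, then re-predict the
    signal with the forward network; the residual indicates how much
    to trust the estimated profile."""
    profile_hat = inverse_net(signal)            # characterization step
    signal_hat = forward_net(profile_hat)        # forward network in the loop
    residual = np.linalg.norm(signal - signal_hat)
    return profile_hat, residual
```

When the characterization network is an exact inverse of the forward network, the residual vanishes; a large residual flags an unreliable profile estimate.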
3. Results and Discussions
The algorithm was tested with the tangential component of magnetic flux leakage
(MFL) data generated by a 2D FEM employing a 100x100-element mesh. MFL techniques
are used extensively for the inspection of ferromagnetic materials where the measured signal
consists of changes in magnetic flux density as the probe scans the sample surface. A wavelet
basis function neural network (WBFNN) is used as the forward network while the radial
basis function (RBFNN) network is used as the inverse network for characterization. The
WBFNN uses 3 resolution levels with 5 centers at the coarsest resolution. The centers at the
other resolutions are computed using a dyadic grid (a total of 31 hidden nodes). The number
of input nodes is equal to the number of points (100) used to approximate the defect profile.
The number of output nodes is equal to the length of each signal (also 100 points). The
RBFNN uses 150 basis functions in the hidden layer. 218 defect profile-MFL signal pairs
were used in the training set and 22 signals were used for testing with no overlap between the
two data sets.
Figure 6 shows the results of training the forward network. The solid line shows the
true signal while the dotted line shows the neural network prediction. These plots indicate
that the forward network is capable of predicting the signal with little error. A typical
prediction result is shown in Figure 7. The solid line in Figure 7(a) shows the true signal.
This signal is applied to the RBFNN, which has not yet been optimized. The prediction
result of the RBFNN network is shown in Figure 7 (b). The resulting signal is then applied to
the forward network. The corresponding output is shown in Figure 7 (a). The results after
optimizing the inverse network are also shown in Figures 7 (a) and (b). Similar results
obtained using signals from defects with other geometries are shown in Figures 8 and 9.
Figure 6. Training results for the forward network. Four panels: Signal #1 (defect width = 1, depth = 60%), Signal #10 (defect width = 3.4, depth = 60%), Signal #20 (defect width = 6.2, depth = 35%), and Signal #22 (defect width = 6.6, depth = 60%). Solid line: true signal; dotted line: predicted signal.
Figure 7. Feedback neural network results (Flaw #11: 3.8", 35%). (a) MFL signal #11: true MFL, initial prediction, optimized prediction. (b) Profile: true profile, initial prediction, final prediction.
Figure 8. Feedback neural network result (Flaw #13: 4.2", 60%). (a) MFL signal #13: true MFL, initial prediction, optimized prediction. (b) Profile: true profile, initial prediction, final prediction.
Figure 9. Feedback neural network result (Flaw #14: 4.6", 35%). (a) MFL signal #14: true MFL, initial prediction, optimized prediction. (b) Profile: true profile, initial prediction, final prediction.
These results indicate that the optimization process improves the prediction results. In
addition, the use of a forward network in a feedback configuration provides a measure of the
error in the characterization, with the error in the defect profile prediction being proportional
to the error in the signal prediction.
4. Conclusions
A feedback neural network scheme has been proposed for the solution of NDE
inverse problems. This scheme uses a closed loop design and consequently, provides an
estimate of the error during the test phase. Initial results on magnetic flux leakage data from
the inspection of gas transmission pipelines indicate that the technique is capable of
predicting the defect profiles reasonably well.
Future work will concentrate on the use of experimental data and of noisy
signals to test the robustness of the system. The scheme also needs to be tested on the
reconstruction of 3-dimensional flaws.
REFERENCES
[1] M. Yan, M. Afzal, S. Udpa, S. Mandayam, Y. Sun, L. Udpa and P. Sacks, "Iterative Algorithms for
Electromagnetic NDE Signal Inversion," ENDE '97, September 14-16, 1997, Reggio Calabria, Italy.
[2] S. Hoole, S. Subramaniam, R. Saldanha, J. Coulomb, and J. Sabonnadiere, "Inverse Problem Methodology
and Finite Elements in the Identification of Cracks, Sources, Materials and Their Geometry in
Inaccessible Locations," IEEE Transactions on Magnetics, Vol. 27, No. 3, 1991.
[3] K. Hwang, W. Lord, S. Mandayam, L. Udpa, S. Udpa, "A Multiresolution Approach for Characterizing
MFL Signatures from Gas Pipeline Inspections," Review of Progress in Quantitative Nondestructive
Evaluation, Plenum Press, New York, Vol. 16, pp. 733-739, 1997.
[4] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, Inc., New Jersey, 1994.