
Artificial Neural Networks

- Application

Peter Andras
peter.andras@ncl.ac.uk
www.staff.ncl.ac.uk/peter.andras/lectures

Overview
1. Application principles
2. Problem
3. Neural network solution

Application principles
The solution of a problem must be as simple as possible.

Complicated solutions waste time and resources. If a problem can be solved with a small, easily computed look-up table, that is preferable to a complex neural network with many layers trained by back-propagation.

Application principles
Speed is crucial for computer game applications.

If possible, on-line neural network solutions should be avoided, because they consume a lot of time. Preferably, neural networks should be applied in an off-line fashion, so that the learning phase does not happen during game-playing time.

Application principles
On-line neural network solutions should be very simple.

Neural networks with many layers should be avoided, if possible. Complex learning algorithms should be avoided. If possible, a priori knowledge should be used to set the initial parameters, so that only very short training is needed for optimal performance.

Application principles
All the available data should be collected about the problem.

Having redundant data is usually a smaller problem than not having the necessary data.

The data should be partitioned into training, validation and testing data.

Application principles
The neural network solution of a problem should be selected from a large enough pool of potential solutions.

Because of the nature of neural networks, if only a single solution is built, it is likely not to be the optimal one.

If a pool of potential solutions is generated and trained, it is more likely that one close to the optimal solution is found.

Problem
Control:

[Figure: plot of a controlled variable fluctuating around a target value over time]

The objective is to maintain some variable in a given range (possibly around a fixed value) by changing the values of other, directly modifiable (controllable) variables. Example: keeping a stick vertical on a finger by moving your arm, such that the stick doesn't fall.
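To make the control setting concrete, here is a minimal sketch, not taken from the slides: a loop that keeps a variable near a target value by adjusting a controllable input. The toy plant model, the gain and the noise level are illustrative assumptions.

```python
# Minimal sketch (assumptions: toy plant, proportional gain, noise level).
import random

target = 1.0     # desired value of the controlled variable
state = 0.6      # current value of the controlled variable
gain = 0.5       # proportional controller gain (assumed)

for step in range(20):
    error = target - state
    control = gain * error                      # adjust the controllable variable
    # toy plant: the state moves with the control input plus a small disturbance
    state += control + random.uniform(-0.02, 0.02)
    print(f"step {step:2d}  state = {state:.3f}")
```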

Problem
Movement control:

How to move the parts (e.g., legs, arms, head) of an animated figure that moves on some terrain, using various types of movements (e.g., walks, runs, jumps) ?

Problem
Problem analysis:
- variables
- modularisation into sub-problems
- objectives
- data collection

Problem
Simple problems need simple solutions.

If the animated figure has only a few components, moves on simple terrains, and is intended to do a few simple moves (e.g., two types of leg and arm movements, no head movement), the movement control can be described by a few rules.

Problem
Example rule for a simple problem:

IF (left_leg IS forward) AND (right_leg IS backward)
THEN right_leg CHANGES TO forward; left_leg CHANGES TO backward
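The rule above can be expressed directly in code. The following is a small illustrative sketch, not from the slides; the state names and the update function are assumptions made for the example.

```python
# Minimal rule-based gait controller sketch (names are assumptions).
def step_legs(state):
    """Swap the legs according to the IF-THEN rule on the slide."""
    if state["left_leg"] == "forward" and state["right_leg"] == "backward":
        return {"left_leg": "backward", "right_leg": "forward"}
    if state["left_leg"] == "backward" and state["right_leg"] == "forward":
        return {"left_leg": "forward", "right_leg": "backward"}
    return state  # no rule applies

state = {"left_leg": "forward", "right_leg": "backward"}
for _ in range(4):
    state = step_legs(state)
    print(state)
```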

Problem
Controlling complex movements needs complex rules. Complex rules by simple solutions:

[Diagram: a table of condition combinations (A1-A4, B1-B3) mapped to movements (M1-M4, M1a)]

Simple solutions end up with a very complex structure.

Problem
Complex solutions by complex methods:

[Figure: Variable B plotted against Variable A, with a fitted curve]

Approximation of the functional relationship by a neural network.

Neural network solution


Problem specification:
- input and output variables
- other specifications (e.g., smoothness)

Example: desired movement parameters for given input values
t    x1     x2     x3     y1     y2
1    0.105  0.133  0.685  0.851  0.083
2    0.060  0.465  0.292  0.999  0.059
3    0.754  0.789  0.732  1.498  0.965
4    0.892  0.894  0.969  1.421  1.125
5    0.414  0.869  0.567  1.253  0.480
6    0.881  0.519  0.047  1.471  1.265
7    0.171  0.767  0.581  1.003  0.164
8    0.447  0.224  0.009  1.103  0.491
9    0.966  0.270  0.621  1.565  1.035
10   0.593  0.016  0.623  1.166  0.487
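For use in the later steps, the example data above can be held as arrays. This is a minimal sketch; the array names X and Y are assumptions.

```python
# The example data table as NumPy arrays (rows t = 1..10).
import numpy as np

X = np.array([  # columns: x1, x2, x3
    [0.105, 0.133, 0.685],
    [0.060, 0.465, 0.292],
    [0.754, 0.789, 0.732],
    [0.892, 0.894, 0.969],
    [0.414, 0.869, 0.567],
    [0.881, 0.519, 0.047],
    [0.171, 0.767, 0.581],
    [0.447, 0.224, 0.009],
    [0.966, 0.270, 0.621],
    [0.593, 0.016, 0.623],
])
Y = np.array([  # columns: y1, y2
    [0.851, 0.083],
    [0.999, 0.059],
    [1.498, 0.965],
    [1.421, 1.125],
    [1.253, 0.480],
    [1.471, 1.265],
    [1.003, 0.164],
    [1.103, 0.491],
    [1.565, 1.035],
    [1.166, 0.487],
])
```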

Neural network solution


Problem modularisation: separating sub-problems that are solved separately.

Example: the movements should be separated on the basis of causal independence and connectedness:

- separate solutions for y1 and y2 if they are causally independent
- a joint solution if they are interdependent
- a connected solution if one is causally dependent on the other

Neural network solution


Data collection and organization: training, validation and testing data sets

Example:
- Training set: ~75% of the data
- Validation set: ~10% of the data
- Testing set: ~5% of the data
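A minimal sketch of such a partition, using the training and validation fractions from the slide (the remaining samples are given to the testing set); the function name and shuffling seed are assumptions.

```python
# Shuffle the samples and split them into training, validation and testing sets.
import numpy as np

def split_data(X, Y, train_frac=0.75, valid_frac=0.10, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))              # shuffled sample indices
    n_train = int(train_frac * len(X))
    n_valid = int(valid_frac * len(X))
    train = idx[:n_train]
    valid = idx[n_train:n_train + n_valid]
    test = idx[n_train + n_valid:]             # remainder used for testing
    return (X[train], Y[train]), (X[valid], Y[valid]), (X[test], Y[test])
```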

Neural network solution


Solution design: neural network model selection

Example: a network with inputs x1, x2, x3 and output y_out, built from Gaussian neurons:

f(x) = e^{-\frac{\|x - w\|^2}{2 a^2}}
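The Gaussian neuron above written out as a function; a minimal sketch, where w is the neuron's centre and a its width (the example centre and width values are assumptions).

```python
# Gaussian neuron: f(x) = exp(-||x - w||^2 / (2 a^2))
import numpy as np

def gaussian_neuron(x, w, a):
    return np.exp(-np.linalg.norm(x - w) ** 2 / (2.0 * a ** 2))

# usage: activation of one neuron for the first data row
print(gaussian_neuron(np.array([0.105, 0.133, 0.685]),
                      w=np.array([0.5, 0.5, 0.5]), a=0.4))
```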

Neural network solution


Generation of a pool of candidate models.

Example: pairs of candidate networks (W1, W2), (W3, W4), ..., (W19, W20), each network being a weighted sum of Gaussian neurons:

y^1_{out} = \sum_{k=1}^{4} w^1_k \, e^{-\frac{\|x - c_{k,1}\|^2}{2 (a_{k,1})^2}} ;  y^2_{out} = \sum_{k=1}^{4} w^2_k \, e^{-\frac{\|x - c_{k,2}\|^2}{2 (a_{k,2})^2}}

y^3_{out} = \sum_{k=1}^{4} w^3_k \, e^{-\frac{\|x - c_{k,3}\|^2}{2 (a_{k,3})^2}} ;  y^4_{out} = \sum_{k=1}^{4} w^4_k \, e^{-\frac{\|x - c_{k,4}\|^2}{2 (a_{k,4})^2}}

...

y^{19}_{out} = \sum_{k=1}^{4} w^{19}_k \, e^{-\frac{\|x - c_{k,19}\|^2}{2 (a_{k,19})^2}} ;  y^{20}_{out} = \sum_{k=1}^{4} w^{20}_k \, e^{-\frac{\|x - c_{k,20}\|^2}{2 (a_{k,20})^2}}
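A sketch of generating such a pool, assuming the structure in the formulas above (four Gaussian neurons per network, three inputs); the initialisation ranges and function names are assumptions.

```python
# Generate a pool of candidate Gaussian (RBF) networks with random parameters.
import numpy as np

def make_network(n_neurons, n_inputs, rng):
    return {
        "c": rng.uniform(0.0, 1.0, size=(n_neurons, n_inputs)),  # centres c_k
        "a": rng.uniform(0.2, 0.8, size=n_neurons),              # widths a_k
        "w": rng.normal(0.0, 0.5, size=n_neurons),               # output weights w_k
    }

def network_output(net, x):
    # y_out = sum_k w_k * exp(-||x - c_k||^2 / (2 a_k^2))
    d2 = np.sum((net["c"] - x) ** 2, axis=1)
    return np.sum(net["w"] * np.exp(-d2 / (2.0 * net["a"] ** 2)))

rng = np.random.default_rng(0)
pool = [make_network(n_neurons=4, n_inputs=3, rng=rng) for _ in range(20)]
```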

Neural network solution


Learning the task from the data:
- we apply the learning algorithm to each network from the solution pool
- we use the training data set

Example (early in training):

x(1) = (0.105, 0.133, 0.685)

y^1_{out}(1) = w^1_1 f_1(x(1)) + w^1_2 f_2(x(1)) + w^1_3 f_3(x(1)) + w^1_4 f_4(x(1)) = 0.997

E = (y^1_{out}(1) - y_1(1))^2 = (0.997 - 0.851)^2 = 0.0213

w^1_{1,new} = w^1_1 - c \cdot 0.146 \cdot f_1(x(1))

... (after further training)

x(1) = (0.105, 0.133, 0.685)

y^1_{out}(1) = w^1_1 f_1(x(1)) + w^1_2 f_2(x(1)) + w^1_3 f_3(x(1)) + w^1_4 f_4(x(1)) = 0.847

E = (y^1_{out}(1) - y_1(1))^2 = (0.847 - 0.851)^2 = 0.000016

w^1_{1,new} = w^1_1 - c \cdot (-0.004) \cdot f_1(x(1))
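One gradient-descent step of this kind, sketched in code. It reuses the network dictionary from the earlier pool sketch; the learning rate c and the single-sample update are assumptions.

```python
# One training step: w_new = w - c * (y_out - y) * f_k(x), as on the slide.
import numpy as np

def train_step(net, x, y_target, c=0.1):
    d2 = np.sum((net["c"] - x) ** 2, axis=1)
    f = np.exp(-d2 / (2.0 * net["a"] ** 2))   # neuron activations f_k(x)
    y_out = np.dot(net["w"], f)
    error = y_out - y_target
    net["w"] -= c * error * f                  # update each output weight
    return error ** 2                          # squared error before the update
```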

Neural network solution


Learning the task from the data:
[Figure: the network's output surface before learning (left) and after learning (right)]

Neural network solution


Neural network solution selection: each candidate solution is tested with the validation data, and the best-performing network is selected.

[Figure: validation output surfaces of Network 4, Network 7, and Network 11]
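A sketch of this selection step, continuing from the earlier sketches (it assumes the pool, network_output, and validation arrays X_valid, Y_valid for one output variable are already defined):

```python
# Select the candidate with the lowest mean squared error on the validation set.
import numpy as np

def validation_error(net, X_valid, Y_valid):
    preds = np.array([network_output(net, x) for x in X_valid])
    return np.mean((preds - Y_valid) ** 2)

best_net = min(pool, key=lambda net: validation_error(net, X_valid, Y_valid))
```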

Neural network solution


Choosing a solution representation:
- the solution can be represented directly as a neural network, by specifying the parameters of the neurons
- alternatively, the solution can be represented as a multi-dimensional look-up table
- the representation should allow fast use of the solution within the application
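A sketch of the look-up table option: precomputing a 3-D table from a trained network so the game does not evaluate the network at run time. The grid resolution and nearest-neighbour lookup are assumptions; network_output comes from the earlier sketch.

```python
# Precompute a look-up table over the input cube [0,1]^3 and query it quickly.
import numpy as np

def build_lookup_table(net, resolution=16):
    grid = np.linspace(0.0, 1.0, resolution)
    table = np.empty((resolution, resolution, resolution))
    for i, x1 in enumerate(grid):
        for j, x2 in enumerate(grid):
            for k, x3 in enumerate(grid):
                table[i, j, k] = network_output(net, np.array([x1, x2, x3]))
    return grid, table

def lookup(grid, table, x):
    # nearest-neighbour lookup: closest grid index in each dimension
    idx = [int(np.argmin(np.abs(grid - xi))) for xi in x]
    return table[idx[0], idx[1], idx[2]]
```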

Summary
Neural network solutions should be kept as simple as possible. For the sake of game speed, neural networks should preferably be applied off-line. A large data set should be collected and divided into training, validation, and testing data. Neural networks are well suited as solutions of complex problems. A pool of candidate solutions should be generated, and the best candidate should be selected using the validation data. The solution should be represented in a way that allows fast application.

Questions
1. Are the immune cells part of the nervous system?
2. Can an artificial neuron receive inhibitory and excitatory inputs?
3. Do the Gaussian neurons use a sigmoidal activation function?
4. Can we use general optimisation methods to calculate the weights of neural networks with a single nonlinear layer?
5. Does the application of neural networks increase the speed of simple games?
6. Should we have a validation data set when we train neural networks?
