
Ms. Ruchi
Research Paper on Basic of Artificial Neural Network
Department of Computer Science and Technology
Lovely Professional University (Jalandhar, Punjab)
ruchi.giggled@gmail.com

Abstract-
An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. This paper gives a brief introduction to biological and artificial neural networks, their basic functions, working, and architecture. It also covers basic learning techniques.
INTRODUCTION

1. What is an Artificial Neural Network?

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons.

The basic unit of the human nervous system is the neuron. Neurons connect with each other to process data. A neuron consists of three main parts: the dendrites, which accept input; the soma, which is the central processing part; and the axon, which forwards the neuron's output to other neurons. This output may serve as input to other neurons or may be the final output. Generally, a neuron is connected to about 10,000 other neurons in the nervous system.

Artificial neural networks are computational models inspired by biological neural networks and are used for processing large numbers of inputs. Nodes play the role that neurons play in biological neural networks. A hypothetical node with its basic parts is shown in the accompanying figure (single neuron).

2. Architecture of neural networks
1) Feed-forward networks
Feed-forward ANNs allow signals to travel one way only: from input to output. There is no feedback (loops); i.e., the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.
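This one-way signal flow can be sketched in a few lines of Python. The network below is a hypothetical 2-2-1 example; the weights, layer sizes, and tanh transfer function are arbitrary illustrative choices, not values from the paper.

```python
import math

def forward(x, layers):
    """Propagate a signal one way through the network: each layer's
    output feeds the next layer and never loops back."""
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# A 2-input -> 2-hidden -> 1-output network (weights chosen arbitrarily).
layers = [
    ([[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                    # output layer
]
print(forward([1.0, 0.0], layers))
```

Because no layer's output is ever fed back, a single left-to-right pass fully determines the network's output.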


Example of a feed-forward network.

2) Feedback networks
Feedback networks can have signals travelling in both directions by introducing loops in the network. Feedback networks are very powerful and can get extremely complicated. Feedback networks are dynamic; their 'state' is changing continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found. Feedback architectures are also referred to as interactive or recurrent, although the latter term is often used to denote feedback connections in single-layer organisations.

Example of a feedback network.

3. Working of ANN:

The other part of the art of using neural networks revolves around the myriad of ways these individual neurons can be clustered together. This clustering occurs in the human mind in such a way that information can be processed in a dynamic, interactive, and self-organizing way. Biologically, neural networks are constructed in a three-dimensional world from microscopic components. These neurons seem capable of nearly unrestricted interconnections. That is not true of any proposed, or existing, man-made network. Integrated circuits, using current technology, are two-dimensional devices with a limited number of layers for interconnection. This physical reality restrains the types, and scope, of artificial neural networks that can be implemented in silicon. Currently, neural networks are the simple clustering of primitive artificial neurons. This clustering occurs by creating layers which are then connected to one another. How these layers connect is the other part of the "art" of engineering networks to resolve real-world problems.

Basically, all artificial neural networks have a similar structure or topology. In that structure, some of the neurons interface with the real world to receive inputs, while other neurons provide the real world with the network's outputs. This output might be the particular character that the network thinks that it has scanned or the particular image it thinks is being viewed. All the rest of the neurons are hidden from view. But a neural network is more than a bunch of neurons. Some early researchers tried to simply connect neurons in a random manner, without much success. Now, it is known that even the brains of snails are structured devices. One of the easiest ways to design a structure is to create layers of elements. It is the grouping of these neurons into layers, the connections between these layers, and the summation and transfer functions that comprise a functioning neural network. The general terms used to describe these characteristics are common to all networks. Although there are useful networks which contain only one layer, or even one element, most applications require networks that contain at least the three normal types of layers - input, hidden, and output. The layer of input neurons receives the data either from input files or directly from electronic sensors in real-time applications. The output layer sends information directly to the outside world, to a secondary computer process, or to other devices such as a mechanical control system. Between these two layers can be many hidden layers. These internal layers contain many of the neurons in various interconnected structures. The inputs and outputs of each of these hidden neurons simply go to other neurons. In most networks, each neuron in a hidden layer receives the signals from all of the neurons in the layer above it, typically an input layer. After a neuron performs its function, it passes its output to all of the neurons in the layer below it, providing a feed-forward path to the output.
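The feedback dynamics described in section 2, where the network state keeps changing until it reaches an equilibrium point, can be sketched with a small Hopfield-style update loop. This is an illustrative sketch only: the weight matrix, the +/-1 states, and the threshold rule are my own arbitrary choices, not taken from the paper.

```python
def step(state, weights):
    """One synchronous update: every unit recomputes its +/-1 state
    from the weighted sum of all units (the feedback connections)."""
    return [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
            for row in weights]

# A symmetric weight matrix, a condition under which such networks settle.
weights = [[0, 1, -1],
           [1, 0, 1],
           [-1, 1, 0]]

state = [-1, 1, 1]
for _ in range(10):                # iterate until the state stops changing
    new = step(state, weights)
    if new == state:               # equilibrium point reached
        break
    state = new
print(state)
```

Once the state stops changing, the network stays at that equilibrium until the input (here, the initial state) is changed and a new equilibrium must be found.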
4. Training an Artificial Neural Network:

Once a network has been structured for a particular application, that network is ready to be trained. To start this process, the initial weights are chosen randomly. Then the training, or learning, begins. There are two approaches to training - supervised and unsupervised.

4.1 Supervised training

In supervised training, both the inputs and the outputs are provided. The network then processes the inputs and compares its resulting outputs against the desired outputs. Errors are then propagated back through the system, causing the system to adjust the weights which control the network. This process occurs over and over as the weights are continually tweaked. The set of data which enables the training is called the "training set." During the training of a network, the same set of data is processed many times as the connection weights are ever refined. Current commercial network development packages provide tools to monitor how well an artificial neural network is converging on the ability to predict the right answer. These tools allow the training process to go on for days, stopping only when the system reaches some statistically desired point, or accuracy. However, some networks never learn. This could be because the input data does not contain the specific information from which the desired output is derived. Networks also don't converge if there is not enough data to enable complete learning. Ideally, there should be enough data so that part of the data can be held back as a test. Many layered networks with multiple nodes are capable of memorizing data. To determine whether the system is simply memorizing its data in some non-significant way, supervised training needs to hold back a set of data to be used to test the system after it has undergone its training. If a network simply can't solve the problem, the designer then has to review the inputs and outputs, the number of layers, the number of elements per layer, the connections between the layers, the summation, transfer, and training functions, and even the initial weights themselves. The changes required to create a successful network constitute a process wherein the "art" of neural networking occurs. Another part of the designer's creativity governs the rules of training. There are many laws (algorithms) used to implement the adaptive feedback required to adjust the weights during training. The most common technique is backward-error propagation, more commonly known as back-propagation. Yet training is not just a technique. It involves a "feel," and conscious analysis, to ensure that the network is not overtrained. Initially, an artificial neural network configures itself with the general statistical trends of the data. Later, it continues to "learn" about other aspects of the data which may be spurious from a general viewpoint. When finally the system has been correctly trained, and no further learning is needed, the weights can, if desired, be "frozen." In some systems this finalized network is then turned into hardware so that it can be fast. Other systems don't lock themselves in but continue to learn while in production use.

4.2 Unsupervised training

The other type of training is called unsupervised training. In unsupervised training, the network is provided with inputs but not with desired outputs. The system itself must then decide what features it will use to group the input data. This is often referred to as self-organization or adaption. At the present time, unsupervised learning is not well understood. This adaption to the environment is the promise which would enable science-fiction types of robots to continually learn on their own as they encounter new situations and new environments. Life is filled with situations where exact training sets do not exist. Some of these situations involve military action, where new combat techniques and new weapons might be encountered. Because of this unexpected aspect to life and the human desire to be prepared, there continues to be research into, and hope for, this field. Yet, at the present time, the vast bulk of neural network work is in systems with supervised learning, and supervised learning is achieving results.

5. An engineering approach

An artificial neuron is a device with many inputs and one output. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong in the taught list of input patterns, the firing rule is used to determine whether to fire or not.

A simple neuron.

5.1 Firing rules

The firing rule is an important concept in neural networks and accounts for their high flexibility. A firing rule determines how one calculates whether a neuron should fire for any input pattern. It relates to all the input patterns, not only the ones on which the node was trained.

A simple firing rule can be implemented by using the Hamming distance technique. The rule goes as follows:

Take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set). Then the patterns not in the collection cause the node to fire if, on comparison, they have more input elements in common with the 'nearest' pattern in the 1-taught set than with the 'nearest' pattern in the 0-taught set. If there is a tie, then the pattern remains in the undefined state.
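This rule translates almost directly into code. The sketch below is a minimal implementation (the helper names are my own) applied to the paper's 3-input example: a neuron taught to fire on 111 and 101 and not to fire on 000 and 001.

```python
def hamming(a, b):
    """Number of positions in which two bit-patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def fire(pattern, one_taught, zero_taught):
    """Apply the firing rule: fire (1) if the pattern is closer to the
    1-taught set, don't fire (0) if closer to the 0-taught set, and
    stay undefined ("0/1") on a tie."""
    d1 = min(hamming(pattern, p) for p in one_taught)
    d0 = min(hamming(pattern, p) for p in zero_taught)
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return "0/1"

# Taught to output 1 on 111 and 101, and 0 on 000 and 001.
one_taught = ["111", "101"]
zero_taught = ["000", "001"]

for p in ["000", "001", "010", "011", "100", "101", "110", "111"]:
    print(p, fire(p, one_taught, zero_taught))
```

Run over all eight patterns, this reproduces the generalised truth table: 010 maps to 0, while 011 and 100 remain undefined (0/1).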

For example, a 3-input neuron is taught to output 1 when the input (X1, X2 and X3) is 111 or 101, and to output 0 when the input is 000 or 001. Then, before applying the firing rule, the truth table is:

X1:  0 0 0   0   1   1 1   1
X2:  0 0 1   1   0   0 1   1
X3:  0 1 0   1   0   1 0   1
OUT: 0 0 0/1 0/1 0/1 1 0/1 1

As an example of the way the firing rule is applied, take the pattern 010. It differs from 000 in 1 element, from 001 in 2 elements, from 101 in 3 elements, and from 111 in 2 elements. Therefore, the 'nearest' pattern is 000, which belongs in the 0-taught set. Thus the firing rule requires that the neuron should not fire when the input is 010. On the other hand, 011 is equally distant from two taught patterns that have different outputs, and thus the output stays undefined (0/1).

By applying the firing rule in every column, the following truth table is obtained:

X1:  0 0 0 0   1   1 1 1
X2:  0 0 1 1   0   0 1 1
X3:  0 1 0 1   0   1 0 1
OUT: 0 0 0 0/1 0/1 1 1 1

The difference between the two truth tables is called the generalisation of the neuron. Therefore the firing rule gives the neuron a sense of similarity and enables it to respond 'sensibly' to patterns not seen during training.

B. Pattern Recognition - an example

An important application of neural networks is pattern recognition. Pattern recognition can be implemented by using a feed-forward neural network that has been trained accordingly. During training, the network is trained to associate outputs with input patterns. When the network is used, it identifies the input pattern and tries to output the associated output pattern. The power of neural networks comes to life when a pattern that has no output associated with it is given as an input. In this case, the network gives the output that corresponds to a taught input pattern that is least different from the given pattern.

Figure (a)

The network of figure (a) is trained to recognize the patterns T and H. The associated output patterns are all black and all white respectively. If we represent black squares with 0 and white squares with 1, then the truth tables for the 3 neurones after generalisation are:

Top neuron:
X11: 0 0 0 0 1 1 1 1
X12: 0 0 1 1 0 0 1 1
X13: 0 1 0 1 0 1 0 1
OUT: 0 0 1 1 0 0 1 1

Middle neuron:
X21: 0 0   1 0   0   1 0   1
X22: 0 0   1 1   0   0 1   1
X23: 0 1   0 1   0   1 0   1
OUT: 1 0/1 1 0/1 0/1 0 0/1 0

Bottom neuron:
X31: 0 0 0 0 1 1 1 1
X32: 0 0 1 1 0 0 1 1
X33: 0 1 0 1 0 1 0 1
OUT: 1 0 1 1 0 0 1 0

From these tables the following associations can be extracted. In the first case, it is obvious that the output should be all blacks, since the input pattern is almost the same as the 'T' pattern. In the second case, it is likewise obvious that the output should be all whites, since the input pattern is almost the same as the 'H' pattern. In the third case, the top row is 2 errors away from a T and 3 from an H, so the top output is black. The middle row is 1 error away from both T and H, so the output is random. The bottom row is 1 error away from T and 2 away from H; therefore the output is black. The total output of the network is still in favour of the T shape.

II. Applications of neural networks

Neural networks can perform tasks that are easy for a human but difficult for a machine:

Aerospace: Autopilot aircraft, aircraft fault detection.

Automotive: Automobile guidance systems.

Military: Weapon orientation and steering, target tracking, object discrimination, facial recognition, signal/image identification.

Electronics: Code sequence prediction, IC chip layout, chip failure analysis, machine vision, voice synthesis.

Financial: Real estate appraisal, loan advisor, mortgage screening, corporate bond rating, portfolio trading program, corporate financial analysis, currency value prediction, document readers, credit application evaluators.

Industrial: Manufacturing process control, product design and analysis, quality inspection systems, welding quality analysis, paper quality prediction, chemical product design analysis, dynamic modeling of chemical process systems, machine maintenance analysis, project bidding, planning, and management.

Medical: Cancer cell analysis, EEG and ECG analysis, prosthetic design, transplant time optimizer.

Speech: Speech recognition, speech classification, text-to-speech conversion.

Telecommunications: Image and data compression, automated information services, real-time spoken language translation.

Transportation: Truck brake system diagnosis, vehicle scheduling, routing systems.

Software: Pattern recognition in facial recognition, optical character recognition.

Signal Processing: Neural networks can be trained to process an audio signal and filter it appropriately in hearing aids.

Control: ANNs are often used to make steering decisions for physical vehicles.

Anomaly Detection: As ANNs are expert at recognizing patterns, they can also be trained to generate an output when something unusual occurs that misfits the pattern.

Neural networks also contribute to other areas of research such as neurology and psychology. They are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain.

Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.

Finally, even though neural networks have a huge potential, we will only get the best out of them when they are integrated with computing, AI, fuzzy logic and related subjects.
Key advantages of neural networks:

ANNs have some key advantages that make them most suitable for certain problems and situations:

1. ANNs have the ability to learn and model non-linear and complex relationships, which is really important because, in real life, many of the relationships between inputs and outputs are non-linear as well as complex.

2. ANNs can generalize: after learning from the initial inputs and their relationships, they can infer unseen relationships on unseen data as well, thus making the model generalize and predict on unseen data.

3. Unlike many other prediction techniques, ANNs do not impose any restrictions on the input variables (such as how they should be distributed). Additionally, many studies have shown that ANNs can better model heteroskedasticity, i.e. data with high volatility and non-constant variance, given their ability to learn hidden relationships in the data without imposing any fixed relationships. This is very useful in financial time-series forecasting (e.g. stock prices), where data volatility is very high.
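The first advantage, learning a non-linear input-output relationship, can be illustrated by training a tiny network on XOR (a relationship no linear model can capture) using plain back-propagation, as described in section 4.1. This is a minimal sketch; the architecture, seed, epoch count, and learning rate are my own arbitrary choices.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: a non-linear relationship between inputs and output.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# 2 inputs -> 2 hidden units -> 1 output; initial weights chosen randomly.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # incl. bias
w_o = [random.uniform(-1, 1) for _ in range(3)]                      # incl. bias

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
lr = 0.5
for _ in range(10000):                     # weights continually tweaked
    for x, t in data:
        h, y = forward(x)
        # propagate the error back and adjust the weights
        d_y = (y - t) * y * (1 - y)
        for j in range(2):
            d_h = d_y * w_o[j] * h[j] * (1 - h[j])
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            w_h[j][2] -= lr * d_h
        w_o[0] -= lr * d_y * h[0]
        w_o[1] -= lr * d_y * h[1]
        w_o[2] -= lr * d_y
print(before, "->", loss())
```

With this seed the fit may land in a local minimum; the point is only that the squared error falls as the weights are repeatedly adjusted against the desired outputs.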

Conclusion-

The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm in order to perform a specific task; i.e., there is no need to understand the internal mechanisms of that task. They are also very well suited to real-time systems because of their fast response and computational times, which are due to their parallel architecture.
