The Urdu
character set inherits properties from several other languages, such as Persian,
Arabic and Sanskrit. Handwritten Urdu characters are highly cursive and complex
in nature, and hence more difficult to recognize. The Urdu script contains
39 letters and ten numerals, some of which are similar in shape, making them
harder to classify. Some characters are looped, some are open, and some
carry rhombus-shaped dots, from a single dot up to three. Most of the
characters are differentiated by the position of these dots as well as by
their own position within the word. The absence of a baseline and the direction of
writing further set the Urdu language apart. An Urdu character can take
up to four different shapes depending on its context. These
exclusive characteristics of Urdu characters make their classification and
recognition an open and challenging problem for researchers. The present work involves two main phases for the recognition
of handwritten Urdu characters. In the first phase, the characters of the Urdu script
are preprocessed, skeletonized and then modelled as B-Spline curves whose
dominant points can be computed from the control points of the curve, as
described in the previous part of this work [2]. The characters are split into
36 blocks, which are scaled to a standard size and then thinned so as to obtain
the skeletal patterns of the characters to be recognized. The second phase consists of a neural network architecture
to which the characters to be classified are fed as input.

 

3. ARTIFICIAL NEURAL NETWORKS


Inspired by the biological neuron
system and its functions, artificial neural networks have evolved into an exciting
mechanism for instructing computers to behave like humans, and they are proving
to be impressive tools in numerous applications and fields such as speech
recognition, image processing and character recognition. Artificial neural
networks were introduced by McCulloch and Pitts in 1943. Neural networks
are composed of simple units called neurons which operate in parallel. As in
the natural neural system, the network function is determined by the
connections between units. A neural network can be trained to perform a
specific function by altering the values of the weights between nodes. ANNs
encompass many trainable algorithms which have the capability to learn from training data
and solve various complex problems.

SINGLE NEURON STRUCTURE

The single neuron structure applies
a transfer function to the weighted linear combination of its input
neurons. The structure is depicted in fig.2 as follows:

Using training data (input–target
pairs), the weights of the neuron can be iteratively adjusted towards a local or global
optimum. Optimum weights in the least-squares sense were derived by Widrow and
Hoff [9]; the resulting algorithm was called the LMS algorithm, is commonly known
as the Widrow-Hoff rule, and has become widely accepted. In the LMS
algorithm, the network weights are moved along the
negative of the gradient of the
performance function. Specifically, after each iteration
or epoch (a new set of input–target
pairs) the weights are adjusted according to the
following rule.

Let A = [A1, A2, A3, ..., An]^T be the input vector and W = [w1, w2, w3, ..., wn]^T
the vector of associated weights; then the output Y of the neuron is given by:

Y = A1w1 + A2w2 + A3w3 + ... + Anwn                       (1)

and the transfer function is

B = f(Y)
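The single-neuron computation above can be sketched in a few lines of Python; the input and weight values here are illustrative assumptions, not data from this work:

```python
# Minimal sketch of the single-neuron computation described above.
# Y is the weighted linear combination of the inputs (equation (1)),
# and B = f(Y) is the transfer-function output.
def neuron(inputs, weights, f=lambda y: y):
    """Compute Y = A1*w1 + ... + An*wn and return B = f(Y)."""
    Y = sum(a * w for a, w in zip(inputs, weights))
    return f(Y)

A = [1.0, 0.5, -2.0]   # example input vector A (assumed values)
W = [0.2, 0.4, 0.1]    # example associated weights W (assumed values)
B = neuron(A, W)       # identity transfer function: B = Y = 0.2
```

Here the transfer function defaults to the identity; any nonlinear function (e.g. a sigmoid) can be passed in its place.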

After every iteration, the
weights are updated by

W = W + μeA        (2)

where μ is the learning rate and e is the error between the target and the
actual output. This is known as the Widrow-Hoff rule
[9] and is a commonly used algorithm for training neural networks.
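A minimal Python sketch of the Widrow-Hoff update in equation (2); the learning rate, input pattern and target here are assumptions chosen for illustration:

```python
# Sketch of the Widrow-Hoff (LMS) update W = W + mu*e*A, where
# e = target - output. Learning rate mu and the data are assumptions.
def lms_step(W, A, target, mu=0.1):
    y = sum(a * w for a, w in zip(A, W))      # neuron output Y
    e = target - y                            # error term e
    return [w + mu * e * a for w, a in zip(W, A)]

# Repeated updates drive the neuron's output towards the target.
W = [0.0, 0.0]
for _ in range(100):
    W = lms_step(W, [1.0, 0.5], 1.0)
```

After enough iterations the weighted sum for the training input converges to the target value, reflecting the gradient-descent behaviour described above.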

MULTI LAYER NEURAL NETWORK

An artificial neural network
which makes use of more than one layer to accomplish the task of mapping the
input neurons to the target output is known as a multi-layer neural network.
The design of a multi-layer ANN with two layers is presented in fig.2:

An artificial neural network which
employs the Widrow-Hoff learning rule and nonlinear transfer functions is
called a backpropagation neural network; it can approximate nearly any function with
a finite number of discontinuities. A properly trained backpropagation network
produces reasonable results for inputs it has never seen before.
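As a rough illustration of a two-layer network trained with backpropagation and a nonlinear (sigmoid) transfer function, the following Python sketch learns a toy logical-AND task; the architecture, learning rate and data are assumptions for demonstration and do not reflect the network used for Urdu character recognition:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, b1, W2, b2):
    """Forward pass: 2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output."""
    h = [sigmoid(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
         for j in range(len(W1))]
    y = sigmoid(sum(w * hj for w, hj in zip(W2, h)) + b2)
    return h, y

# Toy training data: logical AND (illustrative, not the Urdu dataset).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 0.5  # assumed learning rate

def total_error():
    return sum((forward(x, W1, b1, W2, b2)[1] - t) ** 2 for x, t in data)

before = total_error()
for _ in range(2000):
    for x, t in data:
        h, y = forward(x, W1, b1, W2, b2)
        # Backpropagate the error: output-layer delta, then hidden deltas
        # (computed with the pre-update output weights).
        d_out = (y - t) * y * (1 - y)
        d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        W2 = [W2[j] - lr * d_out * h[j] for j in range(2)]
        b2 -= lr * d_out
        for j in range(2):
            W1[j] = [W1[j][i] - lr * d_hid[j] * x[i] for i in range(2)]
            b1[j] -= lr * d_hid[j]
after = total_error()
```

The squared error over the training set decreases as the weights move along the negative gradient, mirroring the Widrow-Hoff behaviour extended through the hidden layer.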