The Urdu character set inherits properties from several other languages, such as Persian, Arabic, and Sanskrit. Handwritten Urdu characters are highly cursive and complex in nature, and hence more difficult to recognize. The Urdu script contains 39 letters and ten numerals, some of which are similar in shape, making them harder to classify. Some characters are looped, some are open, and some carry rhombus-shaped dots ranging from one to three. Most characters are differentiated by the position of their dots as well as by their own position within the word. The absence of a baseline and the direction of writing further distinguish the Urdu language. An Urdu character changes its shape, up to a maximum of four forms, with a change in its context. These exclusive characteristics of Urdu characters make their classification and recognition an open challenge for researchers.

The ongoing work involves two main phases for the recognition of handwritten Urdu characters. In the first phase, the characters of the Urdu script are preprocessed, skeletonized, and then modelled as B-spline curves whose dominant points can be computed from the control points of the curve, as described in the previous part of this work [2]. The characters are split into 36 blocks, which are scaled to a standard size and then thinned so as to obtain the skeletal patterns of the characters to be recognized. The second phase consists of the neural network architecture to which the characters to be classified are fed as input [3].


ARTIFICIAL NEURAL NETWORKS

Inspired by the biological neuron system and its functions, artificial neural networks have evolved as an exciting mechanism for instructing computers to behave like humans, and they are proving to be impressive tools in numerous applications and fields such as speech recognition, image processing, and character recognition. Artificial neural networks were introduced by McCulloch and Pitts in 1943. Neural networks are composed of simple units called neurons which operate in parallel. As in the natural neural system, the network function is determined by the connections between units. A neural network can be trained to perform a specific function by altering the values of the weights between nodes. The ANN field comprises many trainable algorithms which have the capability to learn from training data and to solve various complex problems.

SINGLE NEURON STRUCTURE

The single neuron structure gives a transfer function which is equal to the weighted linear combination of the input neurons. The structure is depicted in fig. 2. Using training data (input-target pairs), the weights of the neuron can be iteratively adjusted towards a local or global optimum. Optimum weights in the least-squares sense were derived by Widrow and Hoff [9]; the resulting algorithm is called the LMS algorithm, is commonly known as the Widrow-Hoff rule, and has become widely accepted.
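The weighted linear combination and the iterative Widrow-Hoff (LMS) weight adjustment described above can be sketched as follows. The toy data, learning rate, and epoch count here are illustrative assumptions, not details from this work:

```python
# Minimal sketch of a single linear neuron trained with the
# Widrow-Hoff (LMS) rule. Symbols follow the text: A is the input
# vector, W the weights, U the learning rate, e the error.
import numpy as np

def lms_train(inputs, targets, lr=0.05, epochs=100):
    """Move the weights along the negative error gradient, one
    input-target pair at a time."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=inputs.shape[1])
    for _ in range(epochs):
        for a, t in zip(inputs, targets):
            y = a @ w            # output: weighted linear combination
            e = t - y            # error between target and output
            w = w + lr * e * a   # Widrow-Hoff update: W = W + U*e*A
    return w

# Toy problem (assumed for illustration): learn y = 2*x1 - x2.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
T = np.array([2.0, -1.0, 1.0, 3.0])
w = lms_train(A, T)
print(np.round(w, 2))  # weights approach [2., -1.]
```

Because the toy targets are an exact linear function of the inputs, the least-squares optimum here is the generating weights themselves, and LMS converges to them.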

In the LMS algorithm, the network weights are moved along the negative of the gradient of the performance function. Specifically, after each iteration or epoch (a new set of input-target pairs), the weights are adjusted according to the following rule. Let A = [A1, A2, A3, ..., An]^T be the input vector and W = [w1, w2, w3, ..., wn]^T be the associated weights; then the output Y of the neuron is given by:

Y = A1w1 + A2w2 + A3w3 + ... + Anwn        (1)

and the transfer function is B = f(Y). After every iteration, the weights can be altered by

W = W + UeA        (2)

where U is the learning rate and e the error. This is known as the Widrow-Hoff rule [9] and is a commonly used algorithm for training neural networks.

MULTI LAYER NEURAL NETWORK

An artificial neural network which makes use of more than one layer to accomplish the job of mapping the input neurons to the target output is known as a multi-layer neural network. The design of a multi-layer ANN showing two layers has been presented in fig. 2. An artificial neural network which employs the Widrow-Hoff learning rule and nonlinear transfer functions is called a backpropagation neural network, and it can estimate nearly any function with a finite number of discontinuities. A backpropagation network, when accurately trained, produces good results even for inputs it has never seen before.
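A minimal sketch of such a two-layer backpropagation network is given below. The 2-4-1 architecture, sigmoid transfer function, learning rate, and XOR toy task are illustrative assumptions, not details from this work:

```python
# Sketch of a two-layer network trained by backpropagating the
# output error through a nonlinear (sigmoid) transfer function.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR truth table: a small task no single-layer network can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))
lr = 0.5

losses = []
for _ in range(10000):
    H = sigmoid(X @ W1 + b1)              # hidden-layer activations
    Y = sigmoid(H @ W2 + b2)              # network output
    losses.append(float(np.mean((Y - T) ** 2)))
    dY = (Y - T) * Y * (1 - Y)            # output delta (chain rule)
    dH = (dY @ W2.T) * H * (1 - H)        # delta backpropagated to hidden layer
    W2 -= lr * H.T @ dY                   # gradient-descent weight updates
    b2 -= lr * dY.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0, keepdims=True)

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The hidden layer's nonlinearity is what lets the network represent the non-linearly-separable XOR mapping; the mean squared error falls steadily as the weights are adjusted.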