Classification by Backpropagation

What is backpropagation?

 Backpropagation is a neural network learning algorithm. The neural networks field was originally kindled by psychologists and neurobiologists who sought to develop and test computational analogs of neurons. Roughly speaking, a neural network is a set of connected input/output units in which each connection has a weight associated with it. During the learning phase, the network learns by adjusting the weights so as to be able to predict the correct class label of the input tuples. Neural network learning is also referred to as connectionist learning due to the connections between units.
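As a concrete (and deliberately tiny) illustration of such a weighted unit, the sketch below computes a weighted sum of its inputs and passes it through a sigmoid activation. The weight values, the bias, and the choice of sigmoid are illustrative assumptions, not taken from any particular network.

```python
import numpy as np

def sigmoid(z):
    """Squash a real-valued sum into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# One unit with three input connections; each connection carries a weight.
# The numbers here are illustrative placeholders.
weights = np.array([0.4, -0.2, 0.1])
bias = 0.05

def unit_output(x):
    """Weighted sum of the inputs, plus a bias, followed by the activation."""
    return sigmoid(np.dot(weights, x) + bias)

print(unit_output(np.array([1.0, 0.5, -1.0])))  # a value between 0 and 1
```

Learning consists of nudging weights like these so that the unit's output moves closer to the correct class label.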

How does backpropagation work?

Let us take a look at how backpropagation works. In the example considered here, the network has four layers: an input layer, two hidden layers (hidden layer I and hidden layer II), and a final output layer.

So, the three main kinds of layers are:

  1. Input layer
  2. Hidden layer
  3. Output layer

Each layer has its own role: it takes the values it receives, transforms them, and passes the result forward, so that the network as a whole can produce the desired output. Let us now go through the details needed to summarize this algorithm.


The following steps summarize the functioning of the backpropagation approach.

  1. The input layer receives the input x
  2. The input is modeled using the weights w
  3. Each hidden layer calculates its output and passes it forward, until the result is ready at the output layer
  4. The difference between the actual output and the desired output is known as the error
  5. Go back from the output layer through the hidden layers and adjust the weights so that this error is reduced in future runs (a minimal code sketch of these steps follows this list)
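The sketch below walks through these five steps for a minimal network with a single hidden layer. Everything in it is an illustrative assumption: the layer sizes (3 inputs, 4 hidden units, 1 output), the sigmoid activation, the squared-error measure of the error, and the learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative layer sizes: 3 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0.0, 0.5, size=(3, 4))   # weights w between input and hidden layer
W2 = rng.normal(0.0, 0.5, size=(4, 1))   # weights w between hidden and output layer

def forward(x):
    """Steps 1-3: the input x flows through the hidden layer to the output layer."""
    h = sigmoid(x @ W1)        # hidden layer output
    y_hat = sigmoid(h @ W2)    # value ready at the output layer
    return h, y_hat

def backward(x, y, lr=0.5):
    """Steps 4-5: measure the error, then go back and adjust the weights."""
    global W1, W2
    h, y_hat = forward(x)
    error = y - y_hat                         # step 4: desired minus actual output
    # Step 5: gradient-style updates, using the sigmoid derivative s * (1 - s).
    delta_out = error * y_hat * (1 - y_hat)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ delta_out
    W1 += lr * x.T @ delta_hid
    return float(np.mean(error ** 2))         # squared error, for monitoring
```

Each call to backward carries out steps 1 to 5 once for a single training tuple.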

This process is repeated until we get the desired output, that is, until the error is acceptably small. The training phase is supervised: the desired output (the class label) is known for every training tuple. Once the model is stable, it is used in production.
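Continuing the same sketch (it reuses numpy, forward and backward from the code above), the loop below repeats the forward/backward passes over a small, made-up supervised data set until the error is small, and then classifies a tuple with a plain forward pass, which is all that happens in production. The data, the error threshold and the epoch limit are illustrative choices.

```python
# Made-up supervised training data: 3 attributes per tuple, with the desired
# class label equal to the OR of the first two attributes.
X = np.array([[0., 0., 1.],
              [0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 1.]])
Y = np.array([[0.], [1.], [1.], [1.]])

# Repeat the forward/backward passes until the error is small enough
# (or an epoch limit is reached).
for epoch in range(20000):
    mse = sum(backward(X[i:i+1], Y[i:i+1]) for i in range(len(X))) / len(X)
    if mse < 1e-3:
        break

# Once the model is stable, prediction is just a forward pass.
_, y_hat = forward(np.array([[0., 1., 1.]]))
print(round(float(y_hat[0, 0])))   # expected to print 1 once the error is small
```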
