3.3 An alternative network with two layers

The neural network on this page is trained with the same data and has a structure similar to the previous neural network: it again has three neurons in the first layer. This time, however, the output layer contains two neurons instead of one.
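To make this structure concrete, here is a minimal sketch of such a two-layer forward pass in Python. The number of inputs (two), the weight and threshold values, and the sigmoid activation are assumptions for illustration only; the actual values are learned during training and shown in the interactive figure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical weights and thresholds; the real values are learned during training.
W1 = np.array([[ 0.8, -0.5],        # 3 first-layer neurons, 2 inputs (assumed)
               [-0.3,  0.9],
               [ 0.6,  0.4]])
t1 = np.array([0.2, -0.1, 0.3])     # one threshold per first-layer neuron

W2 = np.array([[ 0.7, -0.6,  0.5],  # 2 output neurons, 3 first-layer neurons
               [-0.4,  0.8, -0.2]])
t2 = np.array([0.1, -0.3])

def forward(x):
    h = sigmoid(W1 @ x - t1)  # first layer: weighted sum minus threshold, then activation
    y = sigmoid(W2 @ h - t2)  # output layer: two neurons, one per class
    return y

print(forward(np.array([1.0, 0.0])))  # after training, close to [1, 0] or [0, 1]
```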

There is a good reason for this: instead of a single output neuron that has to output values around 0 for one class and values around 1 for the other, you can use two output neurons for the two classes, one neuron per class. The upper (first) output neuron should be 1 when a pattern of the first class is present, and the lower (second) output neuron should be 1 when a pattern of the second class is present. In each case, the other neuron (or all other neurons, if there are more classes) should output the desired value 0.

This is also known as one-hot encoding. It is not only more practical but generally delivers better results in classification, particularly when dealing with many classes. Neural networks used for image recognition, for example, work with up to 1000 different classes. Such networks have 1000 output neurons, one per class. Because the network is trained to output 1 only for the neuron that represents the correct class (while all other neurons should remain at 0), the class of an image can later be identified simply as the neuron with the highest output value.
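As a small illustration of one-hot encoding and of reading off the class from the largest output, the following sketch uses made-up values; only the idea (target vectors of 0s with a single 1, and taking the index of the maximum output) comes from the text above.

```python
import numpy as np

def one_hot(class_index, num_classes):
    """Target vector that is 1 for the given class and 0 everywhere else."""
    target = np.zeros(num_classes)
    target[class_index] = 1.0
    return target

print(one_hot(1, 2))               # [0. 1.] -> desired output for the second class

# After training, the predicted class is the output neuron with the largest value.
outputs = np.array([0.08, 0.93])   # made-up network outputs for one pattern
print(np.argmax(outputs))          # 1 -> second class
```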

The neural network in the interactive figure therefore returns 1 0 for the class that corresponded to 0 in the previous example, and 0 1 for the class that corresponded to 1 in the previous example.

Instructions

  • Use the checkboxes to select which training data should be loaded.
  • Click New to load the selected data set: the preset values for a simple separation, a new randomly chosen Boolean function, new random numbers, or circle values.
  • Click Train to start the training.
  • At the end of the training, click Train again to continue training.
  • Click on the numerical values on the left-hand side of the interactive figure. The selected line from the input x is highlighted in green and the calculation of the neural network for this pattern is displayed.
  • Click in the empty space in the figure to hide the calculation again.
  • Use the calculation of the activation function further down on this page to recalculate everything on your own. Enter the result of the subtraction (activation minus threshold) as the x-value.
  • In the figure on the left, blue stands for negative values and red stands for positive values.

Tasks

Load several different examples and train the neural network with the given patterns. Try to understand how the output is calculated and how the neural network manages to calculate the correct class (0 1 or 1 0) for a pattern.

Calculation of the activation function:
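The interactive calculator itself cannot be reproduced here. As a stand-in, the following sketch assumes the logistic (sigmoid) activation function and shows how the x-value described in the instructions (activation minus threshold) is turned into a neuron output; the numerical values are made up.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Example: a neuron with weighted-sum activation 1.4 and threshold 0.5 (made-up values).
x = 1.4 - 0.5      # activation minus threshold, the x-value from the instructions
print(sigmoid(x))  # approximately 0.71
```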
