
artificial neural network algorithm

(*-*)浩 (Original)
2019-06-04 10:00:41

Many artificial neural network algorithms are widely used in intelligent information processing systems, especially the following four: the ART network, the LVQ network, the Kohonen network, and the Hopfield network.


The following is a detailed introduction to these four algorithms:

1. Adaptive Resonance Theory (ART) Network

The Adaptive Resonance Theory (ART) network exists in several variants. An ART-1 network contains two layers, an input layer and an output layer. The two layers are fully interconnected, with connections running in both the forward (bottom-up) and feedback (top-down) directions.

ART-1 training is continuous (online) and consists of the following steps:

(1) Initialize all top-down (vigilance) weights of every output neuron to 1. An output neuron whose vigilance weights are all 1 is called an independent (uncommitted) neuron, because it does not yet represent any pattern type.

(2) Present a new input pattern x.

(3) Let all output neurons participate in the activation competition.

(4) Find the winning output neuron among the competitors, that is, the neuron for which x·W is largest. At the start of training, or when no better output neuron exists, the winner may be an independent neuron.

(5) Check whether the input pattern x is sufficiently similar to the vigilance vector V of the winning neuron, computing a similarity ratio r.

(6) If r ≥ ρ, where ρ is the vigilance parameter, resonance exists: go to step (7). Otherwise, temporarily remove the winning neuron from further competition and return to step (4), repeating this process until resonance is found or no competing neurons remain.
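The steps above can be sketched in Python. This is a minimal, simplified illustration, not a full ART-1 implementation: the function name, the similarity ratio r = Σ(V·x)/Σx, the "fast learning" AND update, and the bottom-up weight formula with parameter L are common textbook choices assumed here, not taken from this article.

```python
import numpy as np

def art1_train(patterns, n_categories, rho=0.7, L=2.0):
    """Simplified ART-1 sketch: binary patterns, vigilance test, fast learning."""
    n_features = patterns.shape[1]
    V = np.ones((n_categories, n_features))        # step (1): vigilance vectors all 1
    W = L * V / (L - 1.0 + n_features)             # bottom-up weights derived from V
    labels = []
    for x in patterns:
        disabled = np.zeros(n_categories, dtype=bool)
        while True:
            scores = W @ x                         # steps (3)-(4): competition on x.W
            scores[disabled] = -np.inf
            j = int(np.argmax(scores))             # winning output neuron
            r = np.sum(V[j] * x) / max(np.sum(x), 1)  # step (5): similarity ratio
            if r >= rho:                           # step (6): resonance
                V[j] = V[j] * x                    # fast learning: logical AND
                W[j] = L * V[j] / (L - 1.0 + np.sum(V[j]))
                labels.append(j)
                break
            disabled[j] = True                     # suppress winner, re-compete
            if disabled.all():
                labels.append(-1)                  # no category available
                break
    return np.array(labels), V
```

Presenting two identical patterns followed by a disjoint one assigns the first two to one category neuron and the third to a fresh independent neuron.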


2. Learning Vector Quantization (LVQ) Network

A learning vector quantization (LVQ) network consists of three layers of neurons: an input layer, a hidden (competitive) layer, and an output layer. The network is fully connected between the input and hidden layers and partially connected between the hidden and output layers, with each output neuron connected to a different group of hidden neurons.

The simplest LVQ training steps are as follows:

(1) Initialize the weights of the reference vectors.

(2) Provide the network with a training input pattern.

(3) Calculate the Euclidean distance between the input pattern and each reference vector.

(4) Update the weights of the reference vector closest to the input pattern (that is, the reference vector of the winning hidden neuron). If the winning hidden neuron is connected to the output neuron whose class matches that of the input pattern, move its reference vector closer to the input pattern; otherwise, move it away from the input pattern.

(5) Go to step (2) and repeat this process with a new training input pattern until all training patterns are correctly classified or a termination criterion is met.
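The training loop above corresponds to the classic LVQ1 rule and can be sketched as follows. The function name, learning rate, and fixed epoch count are illustrative assumptions; the article itself does not fix these details.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Simplified LVQ1 sketch: attract or repel the winning reference vector."""
    W = np.array(prototypes, dtype=float)        # step (1): initial reference vectors
    for _ in range(epochs):
        for x, label in zip(X, y):               # step (2): present training patterns
            d = np.linalg.norm(W - x, axis=1)    # step (3): Euclidean distances
            j = int(np.argmin(d))                # winning hidden neuron
            if proto_labels[j] == label:
                W[j] += lr * (x - W[j])          # step (4): same class, attract
            else:
                W[j] -= lr * (x - W[j])          # different class, repel
    return W
```

Training on two small clusters pulls each prototype toward the cluster whose class it represents, so new points are then classified by the nearest reference vector.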

3. Kohonen Network

A Kohonen network, or self-organizing feature map, contains two layers: an input buffer layer that receives the input pattern, and an output layer. The output-layer neurons are usually arranged in a regular two-dimensional array, and each output neuron is connected to all input neurons. The connection weights form the components of the reference vector associated with the corresponding output neuron.

Training a Kohonen network includes the following steps:

(1) Initialize the reference vectors of all output neurons with small random values.

(2) Provide the network with a training input pattern.

(3) Determine the winning output neuron, that is, the neuron whose reference vector is closest to the input pattern. The Euclidean distance between the reference vector and the input vector is often used as a distance measurement.

(4) Update the reference vector of the winning neuron and those of its neighbors, moving them closer to the input vector. The winning neuron's reference vector receives the largest adjustment, and the adjustment shrinks for neurons farther away. The neighborhood size also shrinks as training proceeds, so that by the end of training only the winning neuron's reference vector is adjusted.
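The Kohonen training steps can be sketched in Python. The Gaussian neighborhood function and the linear decay schedules for the learning rate and neighborhood radius are common choices assumed for illustration; the article does not specify them.

```python
import numpy as np

def som_train(X, grid=(3, 3), epochs=30, lr0=0.5, sigma0=1.0, seed=0):
    """Simplified self-organizing map sketch on a 2-D neuron grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.random((rows * cols, X.shape[1]))     # step (1): random reference vectors
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)             # decaying learning rate
        sigma = max(sigma0 * (1.0 - t / epochs), 0.05)  # shrinking neighborhood
        for x in X:                               # step (2): present training patterns
            j = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # step (3): winner
            d2 = np.sum((coords - coords[j]) ** 2, axis=1)     # grid distances
            h = np.exp(-d2 / (2.0 * sigma ** 2))  # step (4): neighborhood weighting
            W += lr * h[:, None] * (x - W)        # winner adjusted most
    return W
```

After training, each input pattern should lie close to the reference vector of its winning neuron (a small quantization error).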

4. Hopfield Network

The Hopfield network is a typical recurrent network that usually accepts only binary (0 or 1) or bipolar (+1 or -1) inputs. It contains a single layer of neurons, each connected to all the other neurons, forming a recurrent structure.
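A minimal bipolar Hopfield sketch can illustrate this recurrent structure. The Hebbian outer-product storage rule and asynchronous sign updates are standard textbook choices assumed here; the article only describes the topology.

```python
import numpy as np

def hopfield_recall(patterns, probe, steps=5):
    """Simplified Hopfield sketch: Hebbian storage, asynchronous bipolar updates."""
    P = np.array(patterns, dtype=float)   # stored bipolar (+1/-1) patterns
    n = P.shape[1]
    W = (P.T @ P) / n                     # Hebbian outer-product weight matrix
    np.fill_diagonal(W, 0.0)              # no self-connections
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        for i in range(n):                # asynchronous update of each neuron
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s
```

Because every neuron feeds back into all the others, a probe with one corrupted bit settles back into the stored pattern.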

