
Let’s talk about image recognition: Recurrent Neural Network

WBOY | 2023-04-08

This article is reproduced from the WeChat public account "Living in the Information Age". To reprint it, please contact that account.

A Recurrent Neural Network (RNN) is mainly used for sequence data. It is called "recurrent" because the current output of a sequence also depends on earlier outputs: the network remembers information from previous time steps and applies it to the current computation. Unlike a convolutional neural network, the neurons in the hidden layer of an RNN are connected to one another; a hidden neuron's input consists of both the output of the input layer and the output of the hidden neurons at the previous time step. Although RNNs have achieved remarkable results, they have shortcomings and limitations, such as difficult training, limited accuracy, low efficiency, and long training times. Improved models based on the RNN have therefore been developed, such as Long Short-Term Memory (LSTM), the bidirectional RNN, the bidirectional LSTM, and the GRU. These improved RNN models have shown outstanding results in image recognition and are widely used. Taking the LSTM network as an example, we introduce its main network structure below.
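The recurrence described above can be sketched as a single update rule: the new hidden state is computed from the current input and the previous hidden state. The following minimal numpy sketch (dimensions and weight names are illustrative, not from the article) shows one vanilla RNN step applied over a short sequence:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One vanilla RNN step: the new hidden state depends on the
    current input x_t AND the previous hidden state h_prev."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Toy dimensions (hypothetical): 3-dim inputs, 4-dim hidden state
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(4, 3))
W_hh = rng.normal(size=(4, 4))
b_h = np.zeros(4)

h = np.zeros(4)                    # initial hidden state
for x in rng.normal(size=(5, 3)):  # a sequence of 5 inputs
    h = rnn_step(x, h, W_xh, W_hh, b_h)
print(h.shape)  # (4,)
```

Because `W_hh @ h_prev` feeds each step's output back into the next step, gradients are multiplied by `W_hh` repeatedly during training, which is where the vanishing/exploding gradient problem comes from.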

Long Short-Term Memory (LSTM) alleviates the vanishing and exploding gradient problems of the RNN and can learn long-term dependencies. Its structure is shown below.

[Figure: LSTM network structure]

LSTM has three gates that allow information to selectively pass through: the forget gate, the input gate, and the output gate. The forget gate determines what information can pass through the cell. It is implemented by a sigmoid layer whose input is the previous hidden state h_(t-1) and the current input x_t, and whose output is a vector of values between (0, 1), representing the proportion of each piece of information allowed through: 0 means "let nothing pass", 1 means "let everything pass".

f_t = σ(W_f · [h_(t-1), x_t] + b_f)

The input gate determines how much new information is added to the cell state; a tanh layer generates a vector of candidate values to be added.

i_t = σ(W_i · [h_(t-1), x_t] + b_i),  C̃_t = tanh(W_C · [h_(t-1), x_t] + b_C)

The cell state is then updated:

C_t = f_t ⊙ C_(t-1) + i_t ⊙ C̃_t

The output gate determines which part of the cell state is output:

o_t = σ(W_o · [h_(t-1), x_t] + b_o),  h_t = o_t ⊙ tanh(C_t)
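Putting the three gates together, one forward step of an LSTM cell can be sketched in numpy. This follows the standard LSTM formulation (weight packing and variable names are my own choices, not from the article): the four gate pre-activations are computed from the concatenated [h_prev, x_t], then the cell state is updated and the new hidden state is emitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step. W maps the concatenated [h_prev, x_t] to the
    four gate pre-activations (forget, input, candidate, output)."""
    z = W @ np.concatenate([h_prev, x_t]) + b
    H = h_prev.size
    f = sigmoid(z[0:H])            # forget gate: what to keep of c_prev
    i = sigmoid(z[H:2*H])          # input gate: how much new info to add
    c_tilde = np.tanh(z[2*H:3*H])  # candidate cell content
    o = sigmoid(z[3*H:4*H])        # output gate
    c = f * c_prev + i * c_tilde   # update the cell state
    h = o * np.tanh(c)             # new hidden state
    return h, c

# Toy dimensions (hypothetical): 3-dim inputs, 4-dim hidden state
rng = np.random.default_rng(1)
H, D = 4, 3
W = rng.normal(size=(4 * H, H + D))
b = np.zeros(4 * H)

h = np.zeros(H)
c = np.zeros(H)
for x in rng.normal(size=(6, D)):  # a sequence of 6 inputs
    h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)  # (4,) (4,)
```

The key point is the cell-state update `c = f * c_prev + i * c_tilde`: because it is additive rather than a repeated matrix multiplication, gradients can flow through many time steps without vanishing as quickly as in a vanilla RNN.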

The GRU network model also addresses the vanishing and exploding gradient problems of the RNN and can learn long-term dependencies; it is a variant of the LSTM. Its structure is simpler than the LSTM's, it has fewer parameters, and its training time is shorter. It is also widely used in speech recognition, image captioning, natural language processing, and other scenarios.
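The GRU's simplification can be seen in code: it uses two gates (update and reset) instead of three, and it merges the cell state and hidden state into one vector. A sketch of one GRU step, following the standard formulation (weight names and dimensions are illustrative, not from the article):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, Wz, Wr, Wh, bz, br, bh):
    """One GRU step: two gates (update z, reset r) and a single
    hidden state instead of the LSTM's separate cell state."""
    hx = np.concatenate([h_prev, x_t])
    z = sigmoid(Wz @ hx + bz)  # update gate: how much to replace
    r = sigmoid(Wr @ hx + br)  # reset gate: how much history to use
    h_tilde = np.tanh(Wh @ np.concatenate([r * h_prev, x_t]) + bh)
    return (1 - z) * h_prev + z * h_tilde  # blend old and new state

# Toy dimensions (hypothetical): 3-dim inputs, 4-dim hidden state
rng = np.random.default_rng(2)
H, D = 4, 3
Wz, Wr, Wh = (rng.normal(size=(H, H + D)) for _ in range(3))
bz = br = bh = np.zeros(H)

h = np.zeros(H)
for x in rng.normal(size=(6, D)):  # a sequence of 6 inputs
    h = gru_step(x, h, Wz, Wr, Wh, bz, br, bh)
print(h.shape)  # (4,)
```

With three weight matrices of shape (H, H+D) versus the LSTM's four, the GRU carries roughly three-quarters of the parameters for the same hidden size, which is why it typically trains faster.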

