This article gives a brief introduction to creating a neural network model in Python. It may serve as a useful reference for readers who are new to the topic.
Summary: Curious about how neural networks work? Give it a try. The best way to understand how neural networks work is to create a simple neural network yourself.
Neural networks (NN), also known as artificial neural networks (ANN), are a class of machine learning algorithms loosely inspired by biological neural networks. Today, neural networks are widely used in fields such as computer vision and natural language processing. As the German machine learning expert Andrey Bulezyuk put it, "Neural networks are revolutionizing machine learning because they can effectively model complex abstractions across a wide range of disciplines and industries with little human involvement."
In general, an artificial neural network consists of the following components:
An input layer that receives data and passes it on;
One or more hidden layers;
An output layer;
Weights between the layers;
An activation function used in each hidden layer.
In this tutorial, a simple sigmoid activation function is used. Note, however, that in deep neural network models the sigmoid activation function is generally not the first choice, because it is prone to the vanishing-gradient problem.
In addition, there are several different types of artificial neural networks, such as feedforward neural networks, convolutional neural networks, and recurrent neural networks. This article takes a simple feedforward (perceptron-style) neural network as an example. In this type of network, data flows directly from the input to the output, which is called the forward-propagation process.
Training a feedforward network usually requires the backpropagation algorithm, which in turn requires a set of corresponding inputs and outputs for the network. When input data is passed to a neuron, it is processed and the resulting output is passed on to the next layer.
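To make the forward-propagation idea concrete, here is a minimal sketch of a single neuron's forward pass (the numbers and variable names are purely illustrative, not part of this article's example):

import numpy as np

# Illustrative values: three inputs feeding a single output neuron
inputs = np.array([0.0, 1.0, 1.0])
weights = np.array([0.5, -0.3, 0.8])

# Forward propagation: weighted sum of the inputs, then the sigmoid activation
weighted_sum = np.dot(inputs, weights)
output = 1 / (1 + np.exp(-weighted_sum))
print(output)  # a value between 0 and 1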
[Figure: a simple neural network structure]
In addition, the best way to understand how a neural network works is to learn how to build one from scratch, without relying on any toolbox. In this article, we will demonstrate how to create a simple neural network using Python.
The table below shows the problem we will solve: four training examples, plus a new situation whose output we want to predict.

Input 1   Input 2   Input 3   Output
0         0         1         0
1         1         1         1
1         0         1         1
0         1         1         0
1         0         0         ?
We will train the neural network so that it can predict the correct output value when given a new set of data.
As you can see from the table, the output value is always equal to the first value in the input. Therefore, we can expect the output (?) for the new situation to be 1.
Let's see if we can get the same result using some Python code.
We will create a NeuralNetwork class in Python to train the neuron to make accurate predictions. The class also contains a few helper functions. We will not use a neural network library for this simple example; instead, we will import only the basic NumPy library to assist with the calculations.
The NumPy library is a basic library for working with numerical data. We will use the following four of its functions (illustrated briefly in the sketch after this list):
exp: computes the natural exponential;
array: creates arrays (matrices);
dot: performs matrix multiplication;
random: generates random numbers.
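A quick, self-contained illustration of these four functions (the values are arbitrary):

import numpy as np

print(np.exp(1))                  # natural exponential: e**1 ≈ 2.718
a = np.array([[1, 2], [3, 4]])    # create a 2x2 matrix
b = np.array([[5], [6]])          # create a 2x1 matrix
print(np.dot(a, b))               # matrix multiplication -> [[17], [39]]
print(np.random.random((2, 1)))   # 2x1 matrix of random numbers in [0, 1)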
We will use the sigmoid function, which draws an "S"-shaped curve, as the activation function of the neural network created in this article.
This function maps any value to a value between 0 and 1, and it helps us normalize the weighted sum of the inputs.
After this, we will create the derivative of the sigmoid function to help calculate the adjustments to the weights.
The derivative can be computed from the output of the sigmoid function itself: if the output value is "x", then its derivative is x * (1 - x).
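A minimal sketch of these two helper functions, in the same form used later in the full class:

import numpy as np

def sigmoid(x):
    # Maps any value to a value between 0 and 1
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # x is assumed to already be a sigmoid output, so the derivative is x * (1 - x)
    return x * (1 - x)

print(sigmoid(0))               # 0.5
print(sigmoid_derivative(0.5))  # 0.25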
Training the model is the stage in which we teach the neural network to make accurate predictions. Each input has a weight, positive or negative; inputs with large positive or large negative weights have a greater influence on the resulting output.
Note that when the model is initially trained, each weight is initialized with a random number.
The following is the training process in the neural network example problem constructed in this article:
1. Take the inputs from the training data set, adjust them according to their weights, and pass them through the network layer by layer to compute the output;
2. Calculate the error, that is, the difference between the output predicted by the neuron and the expected output in the training data set;
3. Based on this error, use the error-weighted derivative formula to make small adjustments to the weights;
4. Repeat this process 15,000 times; in each iteration the whole training set is processed at once. A minimal sketch of one such iteration is shown after this list.
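Here is a minimal, self-contained sketch of that training loop, using the same data shapes as the full listing further below (the helper functions are the ones described above):

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1 - x)

# Training data: four examples with three inputs each, and the expected outputs
training_inputs = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
training_outputs = np.array([[0, 1, 1, 0]]).T

np.random.seed(1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1  # 3x1 weights in [-1, 1]

for iteration in range(15000):
    # Step 1: forward pass through the single layer
    output = sigmoid(np.dot(training_inputs, synaptic_weights))
    # Step 2: error between the expected and the predicted output
    error = training_outputs - output
    # Step 3: error-weighted derivative gives the weight adjustments
    adjustments = np.dot(training_inputs.T, error * sigmoid_derivative(output))
    synaptic_weights += adjustments

print(synaptic_weights)  # weights after training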
Here, we use the ".T" attribute to transpose the matrix, so the output values are stored as a 4x1 column vector rather than a row.
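For example, this is what ".T" does to the output array (a small illustrative sketch):

import numpy as np

training_outputs = np.array([[0, 1, 1, 0]]).T  # transpose a 1x4 row into a 4x1 column
print(training_outputs.shape)  # (4, 1)
print(training_outputs)
# [[0]
#  [1]
#  [1]
#  [0]]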
Eventually, the weights of the neuron will be optimized for the provided training data. Once the output of the neural network matches the expected output, the training is complete and the network can make accurate predictions. This is how backpropagation works.
Finally, here is the complete code for creating this neural network in a Python project, including initializing the NeuralNetwork class and running the whole program:
import numpy as np

class NeuralNetwork():

    def __init__(self):
        # Set the random seed for reproducibility
        np.random.seed(1)
        # Initialize the weights as a 3x1 matrix with values in [-1, 1] and mean 0
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        # Apply the sigmoid activation function
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # Compute the derivative of the sigmoid function
        return x * (1 - x)

    def train(self, training_inputs, training_outputs, training_iterations):
        # Train the model
        for iteration in range(training_iterations):
            # Get the output
            output = self.think(training_inputs)
            # Compute the error
            error = training_outputs - output
            # Fine-tune the weights
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments

    def think(self, inputs):
        # Pass the inputs through the network to get the output
        # Convert to floating-point data type
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

if __name__ == "__main__":
    # Initialize the neural network class
    neural_network = NeuralNetwork()
    print("Beginning Randomly Generated Weights: ")
    print(neural_network.synaptic_weights)

    # Training data
    training_inputs = np.array([[0, 0, 1],
                                [1, 1, 1],
                                [1, 0, 1],
                                [0, 1, 1]])
    training_outputs = np.array([[0, 1, 1, 0]]).T

    # Start training
    neural_network.train(training_inputs, training_outputs, 15000)

    print("Ending Weights After Training: ")
    print(neural_network.synaptic_weights)

    user_input_one = str(input("User Input One: "))
    user_input_two = str(input("User Input Two: "))
    user_input_three = str(input("User Input Three: "))

    print("Considering New Situation: ", user_input_one, user_input_two, user_input_three)
    print("New Output data: ")
    print(neural_network.think(np.array([user_input_one, user_input_two, user_input_three])))
    print("Wow, we did it!")
Running the code prints the randomly generated initial weights, the weights after training, and then the network's prediction for the input values you enter.
This is the simple neural network we managed to create. The network starts by assigning itself some random weights, and then it trains itself using the training examples.
So if the new sample input [1, 0, 0] appears, its output value is about 0.9999584. The expected correct answer is 1, so the two are very close. Considering that the sigmoid function only asymptotically approaches 1, this small error is acceptable.
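For instance, instead of typing the values interactively, the trained network from the listing above can be queried directly (a minimal sketch; the exact output depends on the random seed and the number of iterations, and the article reports roughly 0.9999584):

# Assumes the NeuralNetwork class from the listing above has already been defined
neural_network = NeuralNetwork()
training_inputs = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_outputs = np.array([[0, 1, 1, 0]]).T
neural_network.train(training_inputs, training_outputs, 15000)
print(neural_network.think(np.array([1, 0, 0])))  # very close to 1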
In addition, this article only uses a single-layer neural network to perform a simple task. What would happen if we put thousands of these artificial neural networks together? Could we imitate human thinking exactly? The answer is yes, but it is currently difficult to implement; the result can only be said to be very similar. Readers who are interested in this can read material related to deep learning.