How to create a simple neural network using PyTorch
PyTorch is a Python-based deep learning framework for building a wide range of neural networks. This article shows how to use PyTorch to build a simple neural network and provides a complete code example.
First, we need to install PyTorch. It can be installed from the command line with the following command:
pip install torch
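You can optionally confirm the installation from a Python shell; the values printed below are only examples and will depend on your environment:

import torch

print(torch.__version__)          # installed PyTorch version, e.g. 2.x
print(torch.cuda.is_available())  # True only if a CUDA-capable GPU is configured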
Next, we will use PyTorch to build a simple fully connected neural network for a binary classification task. The network will have two hidden layers with 10 neurons each, use the sigmoid activation function, and be trained with the binary cross-entropy loss function (BCELoss).
The following is the complete code:
import torch
import torch.nn as nn
import torch.optim as optim

# Define the neural network model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(2, 10)   # first hidden layer
        self.fc2 = nn.Linear(10, 10)  # second hidden layer
        self.fc3 = nn.Linear(10, 1)   # output layer

    def forward(self, x):
        x = torch.sigmoid(self.fc1(x))
        x = torch.sigmoid(self.fc2(x))
        x = torch.sigmoid(self.fc3(x))
        return x

# Create the dataset (the XOR truth table)
X = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=torch.float32)
y = torch.tensor([[0], [1], [1], [0]], dtype=torch.float32)

# Create an instance of the network
net = Net()

# Define the loss function and optimizer
criterion = nn.BCELoss()
optimizer = optim.SGD(net.parameters(), lr=0.1)

# Train the network
for epoch in range(10000):
    optimizer.zero_grad()
    output = net(X)
    loss = criterion(output, y)
    loss.backward()
    optimizer.step()

    # Print the training loss every 1000 epochs
    if epoch % 1000 == 0:
        print('Epoch {}: loss = {}'.format(epoch, loss.item()))

# Use the trained network to make predictions
with torch.no_grad():
    output = net(X)
    predicted = (output > 0.5).float()
    print('Predicted: {}\n'.format(predicted))
In the code above, we first define a class named Net that inherits from nn.Module. This class contains all the layers of the network: three fully connected layers, of which the first two are hidden layers and the last one is the output layer.
The Net class also defines a forward method that describes the forward pass of the network: the output of each layer is passed through the sigmoid activation function before being fed to the next layer.
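As a side note, the same 2-10-10-1 architecture can be written more compactly with nn.Sequential; the following is only an equivalent sketch, not the code this article uses:

import torch.nn as nn

# Equivalent sketch of the same architecture using nn.Sequential
net_seq = nn.Sequential(
    nn.Linear(2, 10), nn.Sigmoid(),   # first hidden layer
    nn.Linear(10, 10), nn.Sigmoid(),  # second hidden layer
    nn.Linear(10, 1), nn.Sigmoid(),   # output layer
)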
Next, we create a dataset containing four samples, each with two features (the XOR truth table). We then create a network instance named net and choose BCELoss as the loss function and SGD as the optimizer.
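Before training, a quick sanity check of the tensor shapes and the untrained output can be helpful; the exact numbers will differ from run to run because the weights are initialized randomly:

# Shapes of the inputs and labels
print(X.shape, y.shape)   # torch.Size([4, 2]) torch.Size([4, 1])

# Untrained predictions: roughly 0.5 for every sample before any training
with torch.no_grad():
    print(net(X))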
Then we train the network. In each epoch, we first zero the optimizer's gradients (PyTorch accumulates gradients across calls to backward() by default), then pass the dataset X through the network to get the output. We compute the loss, perform backpropagation, and finally update the network parameters with the optimizer. The training loss is printed every 1000 epochs.
After training completes, we use the torch.no_grad context manager to make predictions on the dataset, threshold the outputs at 0.5, and print the four resulting predictions.
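If you also want a simple accuracy figure, you can compare the thresholded predictions against the labels; this small addition assumes the predicted and y tensors from the script above:

# Fraction of samples where the thresholded prediction matches the label
accuracy = (predicted == y).float().mean().item()
print('Accuracy: {:.0%}'.format(accuracy))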
This is a simple example demonstrating how to build a basic neural network using PyTorch. PyTorch provides many tools and functions to help us build and train neural networks more easily.