
Deep Learning with PHP and PyTorch

WBOY (Original)
2023-06-19 14:43:38

Deep learning is an important branch of artificial intelligence that has received increasing attention in recent years. To carry out deep learning research and build applications, it is usually necessary to rely on a deep learning framework. In this article, we will introduce how to use PHP and PyTorch for deep learning.

1. What is PyTorch

PyTorch is an open source machine learning framework developed by Facebook that helps us quickly create and train deep learning models. A distinguishing feature of PyTorch is its use of dynamic computation graphs for model training and optimization, which makes it easier to build complex deep learning models flexibly. PyTorch also ships with a wealth of pre-trained models and algorithms, which makes deep learning research and application development more convenient.

2. Why use PHP and PyTorch

Python is by far the most widely used language in the field of deep learning. It has a wealth of third-party libraries and open source tools that make it easy to build and deploy deep learning models. PHP, meanwhile, is another widely used programming language that is very popular for web application and website development. Combining PHP and PyTorch lets us bring deep learning models into web applications and websites to provide various intelligent features. For example, we can embed a deep learning model into a web application to implement functions such as face recognition and image classification, and interact with the front end through PHP to give users a better experience.

3. Using PHP and PyTorch for deep learning

Below, we will introduce how to use PHP and PyTorch for deep learning.

  1. Installing PyTorch

Before we begin, we need to install the PyTorch library. You can refer to PyTorch's official documentation for installation: https://pytorch.org/get-started/locally/.
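For example, on many systems a CPU-only build can be installed with pip (the exact command depends on your platform and CUDA version, so check the selector in the official documentation):

pip install torch torchvision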

  2. Writing a Python script

Next, we will write a simple Python script to create and train a deep learning model. This model is used to classify handwritten digits.

First, we need to import the PyTorch library and other necessary libraries:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

Then, define a neural network model:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Two convolutional layers for feature extraction
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.dropout = nn.Dropout2d()
        # Two fully connected layers for classification
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        # Convolution -> max pooling -> ReLU, twice (with dropout on the second block)
        x = nn.functional.relu(nn.functional.max_pool2d(self.conv1(x), 2))
        x = nn.functional.relu(nn.functional.max_pool2d(self.dropout(self.conv2(x)), 2))
        # Flatten the feature maps (20 channels x 4 x 4 = 320 values per image)
        x = x.view(-1, 320)
        x = nn.functional.relu(self.fc1(x))
        x = nn.functional.dropout(x, training=self.training)
        x = self.fc2(x)
        # Log-probabilities over the 10 digit classes
        return nn.functional.log_softmax(x, dim=1)

This neural network consists of two convolutional layers and two fully connected layers. The convolutional layers extract features from the input image, and the fully connected layers produce the classification result. During forward propagation we use ReLU as the activation function, and max pooling and dropout help the model generalize better.
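As a quick sanity check (a small sketch, not part of the training script itself), we can pass a dummy batch of MNIST-sized images through the network and confirm that it produces one log-probability per digit class:

dummy = torch.randn(4, 1, 28, 28)   # a hypothetical batch of 4 single-channel 28x28 images
output = Net()(dummy)
print(output.shape)                 # expected: torch.Size([4, 10])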

Next, we need to define some hyperparameters and training parameters:

batch_size = 64
learning_rate = 0.01
momentum = 0.5
epochs = 10

In this example, we optimize the model with mini-batch stochastic gradient descent (SGD) with momentum. In each epoch, the training data is split into batches and the model is updated batch by batch. During training, we compute and record the loss and accuracy on both the training and test sets.

# MNIST data loaders: convert images to tensors and normalize with the dataset mean/std
train_loader = DataLoader(
    datasets.MNIST('./data', train=True, download=True, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=batch_size, shuffle=True)
test_loader = DataLoader(
    datasets.MNIST('./data', train=False, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=batch_size, shuffle=True)

model = Net()
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum)

train_loss_history = []
train_acc_history = []
test_loss_history = []
test_acc_history = []

for epoch in range(1, epochs + 1):
    # Train
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        output = model(data)
        loss = nn.functional.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % 10 == 0:
            print('Epoch [{}/{}], Train Batch: [{}/{}], Train Loss: {:.6f}'.format(epoch, epochs, batch_idx, len(train_loader), loss.item()))
    # Evaluate
    model.eval()
    train_loss = 0
    train_correct = 0
    test_loss = 0
    test_correct = 0
    with torch.no_grad():
        for data, target in train_loader:
            output = model(data)
            train_loss += nn.functional.nll_loss(output, target, reduction='sum').item()
            pred = output.argmax(dim=1, keepdim=True)
            train_correct += pred.eq(target.view_as(pred)).sum().item()
        train_loss /= len(train_loader.dataset)
        train_acc = 100. * train_correct / len(train_loader.dataset)
        train_loss_history.append(train_loss)
        train_acc_history.append(train_acc)
        for data, target in test_loader:
            output = model(data)
            test_loss += nn.functional.nll_loss(output, target, reduction='sum').item()
            pred = output.argmax(dim=1, keepdim=True)
            test_correct += pred.eq(target.view_as(pred)).sum().item()
        test_loss /= len(test_loader.dataset)
        test_acc = 100. * test_correct / len(test_loader.dataset)
        test_loss_history.append(test_loss)
        test_acc_history.append(test_acc)
        print('Epoch {}: Train Loss: {:.6f}, Train Acc: {:.2f}%, Test Loss: {:.6f}, Test Acc: {:.2f}%'.format(epoch, train_loss, train_acc, test_loss, test_acc))
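
Note that this script trains the model but never saves it to disk. If the trained weights should be reusable later, for example by a separate inference script called from PHP, you could persist them at the end of train.py; the file name mnist_cnn.pt below is just an assumed example:

# Save the trained weights so another script can reload them later
torch.save(model.state_dict(), 'mnist_cnn.pt')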

  3. Using PHP to call the Python script

Now that we have created and trained a simple deep learning model, we will introduce how to use PHP to call this Python script and put the trained model to practical use.

We can use PHP's exec function to call the Python script, for example:

$output = exec("python train.py 2>&1", $output_array);

This call executes the train.py script and stores its output lines in the $output_array array (exec itself returns only the last line of output). Because exec blocks until the script finishes, it cannot show progress while a long training run is still going; to stream the output in real time we can instead read the process with popen and flush each line to the browser, for example:

echo '<pre>';
$handle = popen("python train.py 2>&1", "r");
while (!feof($handle)) {
    // Read one line of the script's output and send it to the browser immediately
    echo htmlspecialchars(fgets($handle));
    flush();
}
pclose($handle);
echo '</pre>';

In this way, we can integrate the deep learning model into our PHP application and use it to provide various intelligent features.
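To illustrate, a hypothetical predict.py script (not part of the original article) could reload the saved weights, classify a single image whose path is passed on the command line, and print the predicted digit:

import sys
import torch
from PIL import Image
from torchvision import transforms

# The Net class from the training script is assumed to be available here
# (copied into this file or imported from a module).

model = Net()
model.load_state_dict(torch.load('mnist_cnn.pt'))  # assumed weight file saved by train.py
model.eval()

# Preprocess the input image the same way as the MNIST training data
transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((28, 28)),
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])

image = transform(Image.open(sys.argv[1])).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    prediction = model(image).argmax(dim=1).item()
print(prediction)

On the PHP side, the same exec pattern shown above could then be used, for example $digit = exec("python predict.py " . escapeshellarg($imagePath));, to obtain the predicted class as a string.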

4. Summary

This article introduced how to use PHP and PyTorch for deep learning: we created and trained a simple handwritten digit classification model and showed how to call it from a PHP application. In this way, deep learning models can be applied to all kinds of web applications and websites to provide more intelligent functions and services.

