How to do hyperparameter grid search for a PyTorch model using scikit-learn?

scikit-learn is one of the most widely used machine learning libraries in Python, and PyTorch gives us convenient building blocks for deep learning models. Can their advantages be combined? In this article, we will cover how to use the grid search function in scikit-learn to tune the hyperparameters of a PyTorch deep learning model:

  • How to wrap a PyTorch model for use with scikit-learn and how to use Grid search
  • How to grid search common neural network parameters, such as learning rate, dropout, epochs, number of neurons
  • How to define your own hyperparameter tuning experiments on your own projects

How to use PyTorch models in scikit-learn

One of the easiest ways to make PyTorch models available in scikit-learn is to use the skorch package. This package provides a scikit-learn compatible API for PyTorch models. In skorch, there are NeuralNetClassifier for classification neural networks and NeuralNetRegressor for regression neural networks.

pip install skorch

To use these wrappers, you must define your PyTorch model as a class that subclasses nn.Module, and then pass the class to the module argument when constructing NeuralNetClassifier. For example:

class MyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        ...

    def forward(self, x):
        ...
        return x

# create the skorch wrapper
model = NeuralNetClassifier(
    module=MyClassifier
)

The constructor of the NeuralNetClassifier class can also take arguments that are normally passed to the model.fit() call (the method that runs the training loop in scikit-learn models), such as the number of epochs and the batch size. For example:

model = NeuralNetClassifier(
    module=MyClassifier,
    max_epochs=150,
    batch_size=10
)

The NeuralNetClassifier constructor can also accept new arguments that are forwarded to the constructor of your model class; the requirement is that they must be prefixed with module__ (two underscores). These parameters may have default values in the module's constructor, but they are overridden when the wrapper instantiates the model. For example:

import torch.nn as nn
from skorch import NeuralNetClassifier

class SonarClassifier(nn.Module):
    def __init__(self, n_layers=3):
        super().__init__()
        self.layers = []
        self.acts = []
        for i in range(n_layers):
            self.layers.append(nn.Linear(60, 60))
            self.acts.append(nn.ReLU())
            self.add_module(f"layer{i}", self.layers[-1])
            self.add_module(f"act{i}", self.acts[-1])
        self.output = nn.Linear(60, 1)

    def forward(self, x):
        for layer, act in zip(self.layers, self.acts):
            x = act(layer(x))
        x = self.output(x)
        return x

model = NeuralNetClassifier(
    module=SonarClassifier,
    max_epochs=150,
    batch_size=10,
    module__n_layers=2
)

We can verify the results by initializing a model and printing:

print(model.initialize())

# The output looks like this:
<class 'skorch.classifier.NeuralNetClassifier'>[initialized](
  module_=SonarClassifier(
    (layer0): Linear(in_features=60, out_features=60, bias=True)
    (act0): ReLU()
    (layer1): Linear(in_features=60, out_features=60, bias=True)
    (act1): ReLU()
    (output): Linear(in_features=60, out_features=1, bias=True)
  ),
)

Using grid search in scikit-learn

Grid search is a model hyperparameter optimization technique. It simply exhausts every combination of the supplied hyperparameter values and reports the one that gives the best score. In scikit-learn, the GridSearchCV class provides this technique. When constructing this class, a dictionary of hyperparameters must be provided in the param_grid argument: a map from model parameter names to arrays of values to try.

By default, GridSearchCV optimizes the estimator's own score (typically accuracy for classifiers), but other metrics can be specified through the scoring argument of the GridSearchCV constructor. GridSearchCV builds and evaluates one model for each parameter combination, using cross-validation to score each one; the number of folds is set through the cv argument (3-fold in these examples).

The following is an example of defining a simple grid search:

param_grid = {
    'max_epochs': [10, 20, 30]
}
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, Y)

By setting the n_jobs argument in the GridSearchCV constructor to -1, all cores on the machine will be used. Otherwise the grid search runs in a single process, which is slower on multi-core CPUs.

After running, you can access the outcome of the grid search in the result object returned by grid.fit(). best_score_ provides the best score observed during the optimization, and best_params_ describes the combination of parameters that achieved the best results.
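As a minimal sketch (assuming a wrapped skorch model and the X, y tensors prepared as in the listings below), the scoring metric and number of folds can also be made explicit and the results inspected afterwards:

from sklearn.model_selection import GridSearchCV

# a minimal sketch: `model`, `X` and `y` are assumed to exist as in the listings below
param_grid = {'max_epochs': [10, 20, 30]}
grid = GridSearchCV(
    estimator=model,
    param_grid=param_grid,
    scoring='accuracy',  # make the metric explicit instead of relying on the estimator's default score
    cv=3,                # 3-fold cross-validation
    n_jobs=-1,
)
grid_result = grid.fit(X, y)
print(grid_result.best_score_, grid_result.best_params_)
print(grid_result.cv_results_['mean_test_score'])  # one mean score per parameter combination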

Example Problem Description

All of our examples are demonstrated on a small standard machine learning dataset: the Pima Indians diabetes onset classification dataset. It is a small dataset whose attributes are all numeric and easy to handle.
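As a quick sketch (assuming the same pima-indians-diabetes.csv file used in the listings below), you can load the data with NumPy and check its shape before modeling:

import numpy as np

# a sketch: eight numeric input columns, binary outcome in the last column
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
print(dataset.shape)  # typically (768, 9) for this dataset
print(dataset[:3])    # peek at the first few rows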

How to tune the batch size and number of training epochs

In the first simple example, we will introduce how to tune the batch size and the number of epochs used when fitting the network.

We will simply evaluate batch sizes from 10 to 100 together with 10, 50, and 100 training epochs. The complete code listing is as follows:

import random
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from skorch import NeuralNetClassifier
from sklearn.model_selection import GridSearchCV

# load the dataset, split into input (X) and output (y) variables
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
X = dataset[:, 0:8]
y = dataset[:, 8]
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32).reshape(-1, 1)

# PyTorch classifier
class PimaClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 12)
        self.act = nn.ReLU()
        self.output = nn.Linear(12, 1)
        self.prob = nn.Sigmoid()

    def forward(self, x):
        x = self.act(self.layer(x))
        x = self.prob(self.output(x))
        return x

# create model with skorch
model = NeuralNetClassifier(
    PimaClassifier,
    criterion=nn.BCELoss,
    optimizer=optim.Adam,
    verbose=False
)

# define the grid search parameters
param_grid = {
    'batch_size': [10, 20, 40, 60, 80, 100],
    'max_epochs': [10, 50, 100]
}
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))

The results are as follows:

Best: 0.714844 using {'batch_size': 10, 'max_epochs': 100}
 0.665365 (0.020505) with: {'batch_size': 10, 'max_epochs': 10}
 0.588542 (0.168055) with: {'batch_size': 10, 'max_epochs': 50}
 0.714844 (0.032369) with: {'batch_size': 10, 'max_epochs': 100}
 0.671875 (0.022326) with: {'batch_size': 20, 'max_epochs': 10}
 0.696615 (0.008027) with: {'batch_size': 20, 'max_epochs': 50}
 0.714844 (0.019918) with: {'batch_size': 20, 'max_epochs': 100}
 0.666667 (0.009744) with: {'batch_size': 40, 'max_epochs': 10}
 0.687500 (0.033603) with: {'batch_size': 40, 'max_epochs': 50}
 0.707031 (0.024910) with: {'batch_size': 40, 'max_epochs': 100}
 0.667969 (0.014616) with: {'batch_size': 60, 'max_epochs': 10}
 0.694010 (0.036966) with: {'batch_size': 60, 'max_epochs': 50}
 0.694010 (0.042473) with: {'batch_size': 60, 'max_epochs': 100}
 0.670573 (0.023939) with: {'batch_size': 80, 'max_epochs': 10}
 0.674479 (0.020752) with: {'batch_size': 80, 'max_epochs': 50}
 0.703125 (0.026107) with: {'batch_size': 80, 'max_epochs': 100}
 0.680990 (0.014382) with: {'batch_size': 100, 'max_epochs': 10}
 0.670573 (0.013279) with: {'batch_size': 100, 'max_epochs': 50}
 0.687500 (0.017758) with: {'batch_size': 100, 'max_epochs': 100}

You can see that 'batch_size': 10 with 'max_epochs': 100 achieved the best result of about 71% accuracy.

How to adjust the training optimizer

Let's take a look at how to tune the training optimizer. We know there are many optimizers to choose from, such as SGD, Adam, and others, so how do we choose?

The complete code is as follows:

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from skorch import NeuralNetClassifier
from sklearn.model_selection import GridSearchCV

# load the dataset, split into input (X) and output (y) variables
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
X = dataset[:, 0:8]
y = dataset[:, 8]
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32).reshape(-1, 1)

# PyTorch classifier
class PimaClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 12)
        self.act = nn.ReLU()
        self.output = nn.Linear(12, 1)
        self.prob = nn.Sigmoid()

    def forward(self, x):
        x = self.act(self.layer(x))
        x = self.prob(self.output(x))
        return x

# create model with skorch
model = NeuralNetClassifier(
    PimaClassifier,
    criterion=nn.BCELoss,
    max_epochs=100,
    batch_size=10,
    verbose=False
)

# define the grid search parameters
param_grid = {
    'optimizer': [optim.SGD, optim.RMSprop, optim.Adagrad, optim.Adadelta,
                  optim.Adam, optim.Adamax, optim.NAdam],
}
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))

The output is as follows:

Best: 0.721354 using {'optimizer': <class 'torch.optim.adamax.Adamax'>}
 0.674479 (0.036828) with: {'optimizer': <class 'torch.optim.sgd.SGD'>}
 0.700521 (0.043303) with: {'optimizer': <class 'torch.optim.rmsprop.RMSprop'>}
 0.682292 (0.027126) with: {'optimizer': <class 'torch.optim.adagrad.Adagrad'>}
 0.572917 (0.051560) with: {'optimizer': <class 'torch.optim.adadelta.Adadelta'>}
 0.714844 (0.030758) with: {'optimizer': <class 'torch.optim.adam.Adam'>}
 0.721354 (0.019225) with: {'optimizer': <class 'torch.optim.adamax.Adamax'>}
 0.709635 (0.024360) with: {'optimizer': <class 'torch.optim.nadam.NAdam'>}

It can be seen that the Adamax optimization algorithm is the best for our model and data set, and the accuracy is approximately 72%.

How to adjust the learning rate

Although PyTorch's learning rate schedulers let us adjust the learning rate dynamically across epochs, here we simply treat the learning rate and momentum as ordinary grid search parameters for demonstration. In PyTorch, the learning rate and momentum are set when constructing the optimizer:

optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

In the skorch package, use the prefix optimizer__ to route parameters to the optimizer.
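For instance, here is a minimal sketch (reusing the PimaClassifier module and imports from the listing below) of fixing these optimizer arguments directly on the wrapper instead of searching over them:

model = NeuralNetClassifier(
    PimaClassifier,
    optimizer=optim.SGD,
    optimizer__lr=0.001,      # routed to optim.SGD(lr=...)
    optimizer__momentum=0.9,  # routed to optim.SGD(momentum=...)
    max_epochs=100,
    batch_size=10,
    verbose=False
)

The complete grid search over learning rate and momentum is below: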

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from skorch import NeuralNetClassifier
from sklearn.model_selection import GridSearchCV

# load the dataset, split into input (X) and output (y) variables
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
X = dataset[:, 0:8]
y = dataset[:, 8]
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32).reshape(-1, 1)

# PyTorch classifier
class PimaClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 12)
        self.act = nn.ReLU()
        self.output = nn.Linear(12, 1)
        self.prob = nn.Sigmoid()

    def forward(self, x):
        x = self.act(self.layer(x))
        x = self.prob(self.output(x))
        return x

# create model with skorch
model = NeuralNetClassifier(
    PimaClassifier,
    criterion=nn.BCELoss,
    optimizer=optim.SGD,
    max_epochs=100,
    batch_size=10,
    verbose=False
)

# define the grid search parameters
param_grid = {
    'optimizer__lr': [0.001, 0.01, 0.1, 0.2, 0.3],
    'optimizer__momentum': [0.0, 0.2, 0.4, 0.6, 0.8, 0.9],
}
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))

The results are as follows:

Best: 0.682292 using {'optimizer__lr': 0.001, 'optimizer__momentum': 0.9}
 0.648438 (0.016877) with: {'optimizer__lr': 0.001, 'optimizer__momentum': 0.0}
 0.671875 (0.017758) with: {'optimizer__lr': 0.001, 'optimizer__momentum': 0.2}
 0.674479 (0.022402) with: {'optimizer__lr': 0.001, 'optimizer__momentum': 0.4}
 0.677083 (0.011201) with: {'optimizer__lr': 0.001, 'optimizer__momentum': 0.6}
 0.679688 (0.027621) with: {'optimizer__lr': 0.001, 'optimizer__momentum': 0.8}
 0.682292 (0.026557) with: {'optimizer__lr': 0.001, 'optimizer__momentum': 0.9}
 0.671875 (0.019918) with: {'optimizer__lr': 0.01, 'optimizer__momentum': 0.0}
 0.648438 (0.024910) with: {'optimizer__lr': 0.01, 'optimizer__momentum': 0.2}
 0.546875 (0.143454) with: {'optimizer__lr': 0.01, 'optimizer__momentum': 0.4}
 0.567708 (0.153668) with: {'optimizer__lr': 0.01, 'optimizer__momentum': 0.6}
 0.552083 (0.141790) with: {'optimizer__lr': 0.01, 'optimizer__momentum': 0.8}
 0.451823 (0.144561) with: {'optimizer__lr': 0.01, 'optimizer__momentum': 0.9}
 0.348958 (0.001841) with: {'optimizer__lr': 0.1, 'optimizer__momentum': 0.0}
 0.450521 (0.142719) with: {'optimizer__lr': 0.1, 'optimizer__momentum': 0.2}
 0.450521 (0.142719) with: {'optimizer__lr': 0.1, 'optimizer__momentum': 0.4}
 0.450521 (0.142719) with: {'optimizer__lr': 0.1, 'optimizer__momentum': 0.6}
 0.348958 (0.001841) with: {'optimizer__lr': 0.1, 'optimizer__momentum': 0.8}
 0.348958 (0.001841) with: {'optimizer__lr': 0.1, 'optimizer__momentum': 0.9}
 0.444010 (0.136265) with: {'optimizer__lr': 0.2, 'optimizer__momentum': 0.0}
 0.450521 (0.142719) with: {'optimizer__lr': 0.2, 'optimizer__momentum': 0.2}
 0.348958 (0.001841) with: {'optimizer__lr': 0.2, 'optimizer__momentum': 0.4}
 0.552083 (0.141790) with: {'optimizer__lr': 0.2, 'optimizer__momentum': 0.6}
 0.549479 (0.142719) with: {'optimizer__lr': 0.2, 'optimizer__momentum': 0.8}
 0.651042 (0.001841) with: {'optimizer__lr': 0.2, 'optimizer__momentum': 0.9}
 0.552083 (0.141790) with: {'optimizer__lr': 0.3, 'optimizer__momentum': 0.0}
 0.348958 (0.001841) with: {'optimizer__lr': 0.3, 'optimizer__momentum': 0.2}
 0.450521 (0.142719) with: {'optimizer__lr': 0.3, 'optimizer__momentum': 0.4}
 0.552083 (0.141790) with: {'optimizer__lr': 0.3, 'optimizer__momentum': 0.6}
 0.450521 (0.142719) with: {'optimizer__lr': 0.3, 'optimizer__momentum': 0.8}
 0.450521 (0.142719) with: {'optimizer__lr': 0.3, 'optimizer__momentum': 0.9}

For SGD, the best results were obtained using a learning rate of 0.001 and a momentum of 0.9, with an accuracy of about 68%.

How to tune the activation function

The activation function controls the nonlinearity of a single neuron. We will demonstrate evaluating some of the activation functions available in PyTorch.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.optim as optim
from skorch import NeuralNetClassifier
from sklearn.model_selection import GridSearchCV

# load the dataset, split into input (X) and output (y) variables
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
X = dataset[:, 0:8]
y = dataset[:, 8]
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32).reshape(-1, 1)

# PyTorch classifier
class PimaClassifier(nn.Module):
    def __init__(self, activation=nn.ReLU):
        super().__init__()
        self.layer = nn.Linear(8, 12)
        self.act = activation()
        self.output = nn.Linear(12, 1)
        self.prob = nn.Sigmoid()
        # manually init weights
        init.kaiming_uniform_(self.layer.weight)
        init.kaiming_uniform_(self.output.weight)

    def forward(self, x):
        x = self.act(self.layer(x))
        x = self.prob(self.output(x))
        return x

# create model with skorch
model = NeuralNetClassifier(
    PimaClassifier,
    criterion=nn.BCELoss,
    optimizer=optim.Adamax,
    max_epochs=100,
    batch_size=10,
    verbose=False
)

# define the grid search parameters
param_grid = {
    'module__activation': [nn.Identity, nn.ReLU, nn.ELU, nn.ReLU6,
                           nn.GELU, nn.Softplus, nn.Softsign, nn.Tanh,
                           nn.Sigmoid, nn.Hardsigmoid]
}
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))

The results are as follows:

Best: 0.699219 using {'module__activation': <class 'torch.nn.modules.activation.ReLU'>}
 0.687500 (0.025315) with: {'module__activation': <class 'torch.nn.modules.linear.Identity'>}
 0.699219 (0.011049) with: {'module__activation': <class 'torch.nn.modules.activation.ReLU'>}
 0.674479 (0.035849) with: {'module__activation': <class 'torch.nn.modules.activation.ELU'>}
 0.621094 (0.063549) with: {'module__activation': <class 'torch.nn.modules.activation.ReLU6'>}
 0.674479 (0.017566) with: {'module__activation': <class 'torch.nn.modules.activation.GELU'>}
 0.558594 (0.149189) with: {'module__activation': <class 'torch.nn.modules.activation.Softplus'>}
 0.675781 (0.014616) with: {'module__activation': <class 'torch.nn.modules.activation.Softsign'>}
 0.619792 (0.018688) with: {'module__activation': <class 'torch.nn.modules.activation.Tanh'>}
 0.643229 (0.019225) with: {'module__activation': <class 'torch.nn.modules.activation.Sigmoid'>}
 0.636719 (0.022326) with: {'module__activation': <class 'torch.nn.modules.activation.Hardsigmoid'>}

The ReLU activation function obtained the best results, with an accuracy of about 70%.

How to tune dropout regularization

In this example, we will try dropout rates between 0.0 and 0.9 (1.0 makes no sense) together with MaxNorm weight constraint values between 0 and 5.
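Note that PyTorch has no built-in max-norm weight constraint, so the listing below applies it by hand at the start of forward(). A minimal standalone sketch of the same idea (the helper name apply_max_norm is hypothetical) looks like this:

import torch
import torch.nn as nn

def apply_max_norm(layer: nn.Linear, max_norm: float = 2.0):
    # a sketch: rescale each column of the weight matrix so its L2 norm
    # does not exceed max_norm (done in-place, outside of autograd)
    with torch.no_grad():
        norm = layer.weight.norm(2, dim=0, keepdim=True)
        desired = torch.clamp(norm, max=max_norm)
        layer.weight *= desired / norm.clamp(min=1e-12)

The full grid search over dropout rate and weight constraint follows: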

import numpy as np
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.optim as optim
from skorch import NeuralNetClassifier
from sklearn.model_selection import GridSearchCV

# load the dataset, split into input (X) and output (y) variables
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
X = dataset[:, 0:8]
y = dataset[:, 8]
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32).reshape(-1, 1)

# PyTorch classifier
class PimaClassifier(nn.Module):
    def __init__(self, dropout_rate=0.5, weight_constraint=1.0):
        super().__init__()
        self.layer = nn.Linear(8, 12)
        self.act = nn.ReLU()
        self.dropout = nn.Dropout(dropout_rate)
        self.output = nn.Linear(12, 1)
        self.prob = nn.Sigmoid()
        self.weight_constraint = weight_constraint
        # manually init weights
        init.kaiming_uniform_(self.layer.weight)
        init.kaiming_uniform_(self.output.weight)

    def forward(self, x):
        # maxnorm weight before actual forward pass
        with torch.no_grad():
            norm = self.layer.weight.norm(2, dim=0, keepdim=True).clamp(min=self.weight_constraint / 2)
            desired = torch.clamp(norm, max=self.weight_constraint)
            self.layer.weight *= (desired / norm)
        # actual forward pass
        x = self.act(self.layer(x))
        x = self.dropout(x)
        x = self.prob(self.output(x))
        return x

# create model with skorch
model = NeuralNetClassifier(
    PimaClassifier,
    criterion=nn.BCELoss,
    optimizer=optim.Adamax,
    max_epochs=100,
    batch_size=10,
    verbose=False
)

# define the grid search parameters
param_grid = {
    'module__weight_constraint': [1.0, 2.0, 3.0, 4.0, 5.0],
    'module__dropout_rate': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
}
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))

The results are as follows:

Best: 0.701823 using {'module__dropout_rate': 0.1, 'module__weight_constraint': 2.0}
 0.669271 (0.015073) with: {'module__dropout_rate': 0.0, 'module__weight_constraint': 1.0}
 0.692708 (0.035132) with: {'module__dropout_rate': 0.0, 'module__weight_constraint': 2.0}
 0.589844 (0.170180) with: {'module__dropout_rate': 0.0, 'module__weight_constraint': 3.0}
 0.561198 (0.151131) with: {'module__dropout_rate': 0.0, 'module__weight_constraint': 4.0}
 0.688802 (0.021710) with: {'module__dropout_rate': 0.0, 'module__weight_constraint': 5.0}
 0.697917 (0.009744) with: {'module__dropout_rate': 0.1, 'module__weight_constraint': 1.0}
 0.701823 (0.016367) with: {'module__dropout_rate': 0.1, 'module__weight_constraint': 2.0}
 0.694010 (0.010253) with: {'module__dropout_rate': 0.1, 'module__weight_constraint': 3.0}
 0.686198 (0.025976) with: {'module__dropout_rate': 0.1, 'module__weight_constraint': 4.0}
 0.679688 (0.026107) with: {'module__dropout_rate': 0.1, 'module__weight_constraint': 5.0}
 0.701823 (0.029635) with: {'module__dropout_rate': 0.2, 'module__weight_constraint': 1.0}
 0.682292 (0.014731) with: {'module__dropout_rate': 0.2, 'module__weight_constraint': 2.0}
 0.701823 (0.009744) with: {'module__dropout_rate': 0.2, 'module__weight_constraint': 3.0}
 0.701823 (0.026557) with: {'module__dropout_rate': 0.2, 'module__weight_constraint': 4.0}
 0.687500 (0.015947) with: {'module__dropout_rate': 0.2, 'module__weight_constraint': 5.0}
 0.686198 (0.006639) with: {'module__dropout_rate': 0.3, 'module__weight_constraint': 1.0}
 0.656250 (0.006379) with: {'module__dropout_rate': 0.3, 'module__weight_constraint': 2.0}
 0.565104 (0.155608) with: {'module__dropout_rate': 0.3, 'module__weight_constraint': 3.0}
 0.700521 (0.028940) with: {'module__dropout_rate': 0.3, 'module__weight_constraint': 4.0}
 0.669271 (0.012890) with: {'module__dropout_rate': 0.3, 'module__weight_constraint': 5.0}
 0.661458 (0.018688) with: {'module__dropout_rate': 0.4, 'module__weight_constraint': 1.0}
 0.669271 (0.017566) with: {'module__dropout_rate': 0.4, 'module__weight_constraint': 2.0}
 0.652344 (0.006379) with: {'module__dropout_rate': 0.4, 'module__weight_constraint': 3.0}
 0.680990 (0.037783) with: {'module__dropout_rate': 0.4, 'module__weight_constraint': 4.0}
 0.692708 (0.042112) with: {'module__dropout_rate': 0.4, 'module__weight_constraint': 5.0}
 0.666667 (0.006639) with: {'module__dropout_rate': 0.5, 'module__weight_constraint': 1.0}
 0.652344 (0.011500) with: {'module__dropout_rate': 0.5, 'module__weight_constraint': 2.0}
 0.662760 (0.007366) with: {'module__dropout_rate': 0.5, 'module__weight_constraint': 3.0}
 0.558594 (0.146610) with: {'module__dropout_rate': 0.5, 'module__weight_constraint': 4.0}
 0.552083 (0.141826) with: {'module__dropout_rate': 0.5, 'module__weight_constraint': 5.0}
 0.548177 (0.141826) with: {'module__dropout_rate': 0.6, 'module__weight_constraint': 1.0}
 0.653646 (0.013279) with: {'module__dropout_rate': 0.6, 'module__weight_constraint': 2.0}
 0.661458 (0.008027) with: {'module__dropout_rate': 0.6, 'module__weight_constraint': 3.0}
 0.553385 (0.142719) with: {'module__dropout_rate': 0.6, 'module__weight_constraint': 4.0}
 0.669271 (0.035132) with: {'module__dropout_rate': 0.6, 'module__weight_constraint': 5.0}
 0.662760 (0.015733) with: {'module__dropout_rate': 0.7, 'module__weight_constraint': 1.0}
 0.636719 (0.024910) with: {'module__dropout_rate': 0.7, 'module__weight_constraint': 2.0}
 0.550781 (0.146818) with: {'module__dropout_rate': 0.7, 'module__weight_constraint': 3.0}
 0.537760 (0.140094) with: {'module__dropout_rate': 0.7, 'module__weight_constraint': 4.0}
 0.542969 (0.138144) with: {'module__dropout_rate': 0.7, 'module__weight_constraint': 5.0}
 0.565104 (0.148654) with: {'module__dropout_rate': 0.8, 'module__weight_constraint': 1.0}
 0.657552 (0.008027) with: {'module__dropout_rate': 0.8, 'module__weight_constraint': 2.0}
 0.428385 (0.111418) with: {'module__dropout_rate': 0.8, 'module__weight_constraint': 3.0}
 0.549479 (0.142719) with: {'module__dropout_rate': 0.8, 'module__weight_constraint': 4.0}
 0.648438 (0.005524) with: {'module__dropout_rate': 0.8, 'module__weight_constraint': 5.0}
 0.540365 (0.136861) with: {'module__dropout_rate': 0.9, 'module__weight_constraint': 1.0}
 0.605469 (0.053083) with: {'module__dropout_rate': 0.9, 'module__weight_constraint': 2.0}
 0.553385 (0.139948) with: {'module__dropout_rate': 0.9, 'module__weight_constraint': 3.0}
 0.549479 (0.142719) with: {'module__dropout_rate': 0.9, 'module__weight_constraint': 4.0}
 0.595052 (0.075566) with: {'module__dropout_rate': 0.9, 'module__weight_constraint': 5.0}

You can see that a dropout rate of 10% combined with a weight constraint of 2.0 achieved the best accuracy of about 70%.

How to tune the number of neurons in the hidden layer

The number of neurons in a layer is an important parameter to tune. Generally speaking, the number of neurons in a layer controls the representational capacity of the network, at least at that point in the topology.

In theory, by the universal approximation theorem, a sufficiently large single-layer network can approximate any other neural network.

In this example, we will try values from 1 to 30 in steps of 5. A larger network requires more training, and at the very least the batch size and number of epochs should ideally be optimized together with the number of neurons.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.optim as optim
from skorch import NeuralNetClassifier
from sklearn.model_selection import GridSearchCV

# load the dataset, split into input (X) and output (y) variables
dataset = np.loadtxt('pima-indians-diabetes.csv', delimiter=',')
X = dataset[:, 0:8]
y = dataset[:, 8]
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32).reshape(-1, 1)

class PimaClassifier(nn.Module):
    def __init__(self, n_neurons=12):
        super().__init__()
        self.layer = nn.Linear(8, n_neurons)
        self.act = nn.ReLU()
        self.dropout = nn.Dropout(0.1)
        self.output = nn.Linear(n_neurons, 1)
        self.prob = nn.Sigmoid()
        self.weight_constraint = 2.0
        # manually init weights
        init.kaiming_uniform_(self.layer.weight)
        init.kaiming_uniform_(self.output.weight)

    def forward(self, x):
        # maxnorm weight before actual forward pass
        with torch.no_grad():
            norm = self.layer.weight.norm(2, dim=0, keepdim=True).clamp(min=self.weight_constraint / 2)
            desired = torch.clamp(norm, max=self.weight_constraint)
            self.layer.weight *= (desired / norm)
        # actual forward pass
        x = self.act(self.layer(x))
        x = self.dropout(x)
        x = self.prob(self.output(x))
        return x

# create model with skorch
model = NeuralNetClassifier(
    PimaClassifier,
    criterion=nn.BCELoss,
    optimizer=optim.Adamax,
    max_epochs=100,
    batch_size=10,
    verbose=False
)

# define the grid search parameters
param_grid = {
    'module__n_neurons': [1, 5, 10, 15, 20, 25, 30]
}
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=3)
grid_result = grid.fit(X, y)

# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))

The results are as follows:

Best: 0.708333 using {'module__n_neurons': 30}
 0.654948 (0.003683) with: {'module__n_neurons': 1}
 0.666667 (0.023073) with: {'module__n_neurons': 5}
 0.694010 (0.014382) with: {'module__n_neurons': 10}
 0.682292 (0.014382) with: {'module__n_neurons': 15}
 0.707031 (0.028705) with: {'module__n_neurons': 20}
 0.703125 (0.030758) with: {'module__n_neurons': 25}
 0.708333 (0.015733) with: {'module__n_neurons': 30}

You can see that the network with 30 neurons in the hidden layer achieved the best result, with an accuracy of about 71%.
