The class imbalance problem in image classification, with concrete code examples
Abstract: In image classification tasks, the classes in a dataset may be imbalanced, meaning that some classes have far more samples than others. This class imbalance can negatively affect model training and performance. This article describes the causes and effects of the class imbalance problem and provides concrete code examples for addressing it.
The class imbalance problem harms model training and performance in several ways. First, because some classes have few samples, the model may misclassify them. For example, in a binary classification problem where the two classes have 10 and 1000 samples respectively, a model that learns nothing and simply predicts the majority class for every sample achieves about 99% accuracy, yet it never correctly classifies a single minority sample. Second, because the sample distribution is unbalanced, the model tends to be biased toward predicting the classes with more samples, degrading classification performance on the others. Finally, the minority classes may have too few training samples, so the learned model generalizes poorly on them.
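To make the first point concrete, here is a minimal sketch (assuming the 10-vs-1000 split described above) showing that a baseline that always predicts the majority class reaches about 99% accuracy while never identifying a single minority sample:

import torch

# 1000 majority-class labels (0) and 10 minority-class labels (1)
labels = torch.cat([torch.zeros(1000, dtype=torch.long),
                    torch.ones(10, dtype=torch.long)])
predictions = torch.zeros_like(labels)  # always predict the majority class
accuracy = (predictions == labels).float().mean().item()
print('accuracy: %.3f' % accuracy)  # 0.990, yet every minority sample is missed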
Common remedies include undersampling, oversampling, and weight adjustment. Undersampling randomly deletes samples from the classes with more samples so that the class sizes become closer. The method is simple and direct, but it can lose information, because the deleted samples may carry important features.
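A minimal sketch of random undersampling follows; the helper name undersample is illustrative, and features and labels are assumed to be NumPy arrays:

import numpy as np

def undersample(features, labels, seed=0):
    # Randomly keep only as many samples per class as the smallest class has
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    n_min = counts.min()
    keep = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        keep.append(rng.choice(idx, size=n_min, replace=False))
    keep = np.concatenate(keep)
    rng.shuffle(keep)  # mix the classes back together
    return features[keep], labels[keep]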
Oversampling duplicates samples from the classes with fewer samples so that the class sizes become more balanced. This increases the number of minority samples, but it risks overfitting: the model may memorize the duplicated samples on the training set and generalize poorly.
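A matching sketch of random oversampling by duplication (again, oversample is an illustrative name, not a library function):

import numpy as np

def oversample(features, labels, seed=0):
    # Randomly duplicate samples per class up to the size of the largest class
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    n_max = counts.max()
    keep = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        keep.append(rng.choice(idx, size=n_max, replace=True))  # with replacement
    keep = np.concatenate(keep)
    rng.shuffle(keep)
    return features[keep], labels[keep]

In PyTorch, torch.utils.data.WeightedRandomSampler achieves a similar effect at batch-sampling time without physically duplicating the data.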
Weight adjustment assigns different weights to the samples of different classes in the loss function so that the model pays more attention to the classes with fewer samples. This method can mitigate class imbalance without adding or removing any samples. Concretely, a weight vector is passed to the loss function so that classes with fewer samples receive larger weights.
The following code example, using the PyTorch framework, demonstrates how to apply the weight adjustment method to the class imbalance problem (trainloader is assumed to be an existing DataLoader over the imbalanced training set):
import torch
import torch.nn as nn
import torch.optim as optim

# Define the classification network (a two-class MLP for 28x28 inputs)
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(784, 100)
        self.fc2 = nn.Linear(100, 2)  # two output classes, matching the weight vector below

    def forward(self, x):
        x = x.view(-1, 784)
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

net = Net()

# Define the loss function and optimizer; the weight vector gives the
# minority class (index 1) a larger weight than the majority class (index 0)
criterion = nn.CrossEntropyLoss(weight=torch.tensor([0.1, 0.9]))
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

# Train the model; trainloader is assumed to be a DataLoader over the imbalanced training set
for epoch in range(10):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if i % 2000 == 1999:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
In the code above, torch.tensor([0.1, 0.9]) specifies the weights of the two classes: the majority class (index 0) receives weight 0.1 and the minority class (index 1) receives weight 0.9. Losses from minority-class samples therefore count more, which makes the model pay more attention to the class with fewer samples.
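Rather than hand-picking values such as 0.1 and 0.9, the weights are often set inversely proportional to class frequency. A minimal sketch, assuming the 1000-vs-10 class counts from the earlier example:

import torch
import torch.nn as nn

class_counts = torch.tensor([1000.0, 10.0])  # hypothetical counts: majority, minority
weights = class_counts.sum() / (len(class_counts) * class_counts)  # inverse frequency
weights = weights / weights.sum()  # optional normalization; minority ends up near 0.99
criterion = nn.CrossEntropyLoss(weight=weights)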