Gradient boosting (GBM) algorithm example in Python
Gradient boosting (GBM) is a machine learning method that reduces a loss function step by step by training a sequence of weak learners, each new learner fitted to the errors of the ensemble built so far. It works well for both regression and classification problems and is a powerful ensemble learning algorithm. This article uses Python to show how to apply the GBM algorithm to a regression problem.
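To make the iterative idea concrete, the following is a minimal, simplified sketch of gradient boosting for squared loss: each new tree is fitted to the residuals of the current prediction and added with a small learning rate. The helper names simple_gbm_fit and simple_gbm_predict are illustrative only; this is not sklearn's actual implementation, which the rest of the article uses.

# Simplified illustration of gradient boosting for squared loss (illustration only)
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def simple_gbm_fit(X, y, n_estimators=100, learning_rate=0.1, max_depth=3):
    # Start from a constant prediction: the mean of the targets
    f0 = np.mean(y)
    prediction = np.full(len(y), f0)
    trees = []
    for _ in range(n_estimators):
        # For squared loss, the negative gradient is simply the residual
        residuals = y - prediction
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        # Shrink each tree's contribution by the learning rate
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def simple_gbm_predict(X, f0, trees, learning_rate=0.1):
    # Use the same learning rate that was used during fitting
    prediction = np.full(X.shape[0], f0)
    for tree in trees:
        prediction += learning_rate * tree.predict(X)
    return prediction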
First we need to import some commonly used Python libraries, as shown below:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
In this example, we will use the Car Evaluation data set for modeling. It contains 6 attribute columns and 1 target column, and we will use the attribute variables to predict the price of the vehicle. First, we read the CSV file into a Pandas DataFrame, as shown below:
data=pd.read_csv("car_data_1.csv")
Next, we need to divide the original data into a training set and a test set. We use 80% of the data as the training set and 20% of the data as the test set.
train_data, test_data, train_label, test_label = train_test_split(
    data.iloc[:, :-1], data.iloc[:, -1], test_size=0.2, random_state=1
)
Then we perform feature engineering: the categorical attributes are encoded into dummy variables using Pandas' get_dummies function, and the test set's columns are aligned with the training set so both use the same features, as shown below.
train_data = pd.get_dummies(train_data)
test_data = pd.get_dummies(test_data)
# Align the test set's dummy columns with the training set so both have the same features
test_data = test_data.reindex(columns=train_data.columns, fill_value=0)
Now we can build the GBM model. First we initialize the model and set its parameters: the number of boosting stages (n_estimators) is set to 100 and the learning rate (learning_rate) to 0.1.
model = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, random_state=1)
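These two parameters interact (a smaller learning rate usually needs more boosting stages), so in practice they are often tuned together with cross-validation. The sketch below uses GridSearchCV for this; the parameter grid is an arbitrary illustration, not a recommendation from the original example.

from sklearn.model_selection import GridSearchCV

# Hypothetical parameter grid for illustration; the values are not prescriptive
param_grid = {
    "n_estimators": [100, 200, 500],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 4],
}
search = GridSearchCV(
    GradientBoostingRegressor(random_state=1),
    param_grid,
    scoring="neg_mean_squared_error",
    cv=5,
)
search.fit(train_data, train_label)
print(search.best_params_)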
Next, we fit the model using the training set data:
model.fit(train_data,train_label)
Next, we evaluate the model on the test set. Here we use the mean squared error (MSE) as the evaluation metric. The code looks like this:
pred = model.predict(test_data)
mse = mean_squared_error(test_label, pred)
print("MSE:", mse)
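Because MSE is in squared units of the target, its square root (RMSE) is often easier to interpret, since it is on the same scale as the price. This small addition, using the numpy import from earlier, is optional and not part of the original example:

# RMSE is the square root of MSE, expressed in the same units as the target (price)
rmse = np.sqrt(mse)
print("RMSE:", rmse)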
Finally, we can further explore which variables matter most in the GBM model. sklearn exposes this through the model's feature_importances_ attribute.
feat_imp = pd.Series(model.feature_importances_, index=train_data.columns).sort_values(ascending=False)
print(feat_imp)
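If a visual summary is preferred, the importance scores can also be drawn as a bar chart. This uses matplotlib, which the original example does not import, so treat it as an optional sketch:

import matplotlib.pyplot as plt

# Plot the sorted feature importance scores as a horizontal bar chart
feat_imp.plot(kind="barh")
plt.xlabel("Importance score")
plt.title("GBM feature importances")
plt.tight_layout()
plt.show()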
In short, this article demonstrated how to apply the GBM algorithm using Python's sklearn library. We used the Car Evaluation dataset to predict vehicle prices, evaluated the model's performance, and obtained importance scores for the variables. GBM performs well across many machine learning tasks and is a powerful ensemble learning algorithm.