Locally Weighted Regression Example in Python
Locally Weighted Regression (LWR) is a non-parametric regression method. Unlike traditional regression methods, it does not fit one fixed set of parameters for the whole data set; instead, it builds a model adaptively around each query point from the sample data. This adaptive behavior makes locally weighted regression widely used in regression analysis and time-series forecasting.
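To make the "adaptive" idea concrete, here is a minimal sketch of locally weighted linear regression with a Gaussian kernel. The function name, the toy data, and the bandwidth value tau are illustrative choices, not part of the original article.

import numpy as np

def locally_weighted_prediction(x_query, X, y, tau=0.5):
    """Predict y at x_query by fitting a weighted linear model around it.

    X: (n, d) feature matrix, y: (n,) targets, tau: kernel bandwidth (illustrative).
    """
    # Add an intercept column to X and to the query point
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    xq = np.hstack([1.0, np.atleast_1d(x_query)])

    # Gaussian kernel weights: points near x_query get weight close to 1
    dists = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-dists / (2 * tau ** 2))

    # Solve the weighted least-squares problem (X^T W X) theta = X^T W y
    W = np.diag(w)
    theta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ y
    return xq @ theta

# Toy example: noisy sine curve, predict at x = 1.0
rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
print(locally_weighted_prediction(np.array([1.0]), X, y, tau=0.5))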
In Python, locally weighted scatterplot smoothing (LOWESS) is provided by the statsmodels package; scikit-learn does not ship a dedicated LOWESS implementation, but a distance-weighted nearest-neighbor regressor can serve as a simple locally weighted model. In this article, we will show how to perform a locally weighted regression analysis in Python and demonstrate it with an example.
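For reference, a minimal LOWESS call with statsmodels could look like the sketch below; the toy data and the frac smoothing span are illustrative assumptions.

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Noisy sine curve as toy data
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 6, size=200))
y = np.sin(x) + 0.2 * rng.normal(size=200)

# frac controls the fraction of points used for each local fit (the smoothing span)
smoothed = lowess(y, x, frac=0.3)  # returns (x, fitted y) pairs sorted by x
print(smoothed[:5])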
The following are the steps to implement locally weighted regression in Python:
First, import the required libraries, such as numpy, pandas, matplotlib.pyplot, and the relevant scikit-learn modules (for example sklearn.preprocessing and sklearn.neighbors).
Use the pandas library to read the data and prepare X and y, where X holds the independent variables and y the dependent variable.
Standardize the data in X and y. This removes scale differences between features, centers the data near zero before modeling, and reduces the influence of differing magnitudes.
scikit-learn does not provide a LocallyWeightedRegression class; in this example we approximate locally weighted regression with the KNeighborsRegressor class from sklearn.neighbors, using distance-based weights. Two hyperparameters matter here: the neighborhood size n_neighbors, which plays the role of the bandwidth, and the weight function weights. A smaller neighborhood follows the local structure of the data more closely but produces a noisier model, and every prediction still requires a neighbor search over the training data. A short sketch of choosing the neighborhood size by cross-validation follows these steps.
Use the predict method of the fitted regressor to predict the dependent variable for newly supplied values of the independent variables.
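Here is the cross-validation sketch mentioned above for picking the neighborhood size; the synthetic data and the candidate grid of n_neighbors values are illustrative assumptions, not part of the original article.

from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

# Toy regression data standing in for the standardized training data used later
X_demo, y_demo = make_regression(n_samples=400, n_features=13, noise=10.0, random_state=0)

# Candidate neighborhood sizes; smaller values follow local structure more closely but are noisier
param_grid = {'n_neighbors': [2, 5, 10, 20, 50]}
search = GridSearchCV(KNeighborsRegressor(weights='distance'), param_grid,
                      cv=5, scoring='neg_mean_squared_error')
search.fit(X_demo, y_demo)
print(search.best_params_)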
Next, let's look at an example of locally weighted regression.
Example 1: House price prediction with the Boston housing dataset
We use the Boston house price dataset from the UCI Machine Learning Repository for model fitting and prediction. The dataset contains 506 samples and 13 independent variables, including the per-capita crime rate by town (CRIM), the proportion of residential land zoned for large lots (ZN), nitric oxide concentration (NOX), the average number of rooms per dwelling (RM), and so on.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
# Read the Boston housing price dataset (whitespace-separated, no header row)
data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data',
                   header=None, sep=r'\s+')
# Specify X (features) and y (target)
X = data.iloc[:, :13].values
y = data.iloc[:, 13].values
# Standardize X and y
sc_X = StandardScaler()
sc_y = StandardScaler()
X = sc_X.fit_transform(X)
y = sc_y.fit_transform(y.reshape(-1, 1))
# Split the dataset into a training set and a test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# Fit a k-nearest-neighbors regressor; distance weighting makes closer neighbors
# count more, which is what gives the prediction its locally weighted character
lwr = KNeighborsRegressor(n_neighbors=2, weights='distance')
lwr.fit(X_train, y_train)
# Use the predict method of the lwr object to predict on the test set
y_pred = lwr.predict(X_test)
y_pred = sc_y.inverse_transform(y_pred)
y_test = sc_y.inverse_transform(y_test)
# Plot the predictions (blue) against the true test-set values (red),
# with the standardized RM feature (column 5) on the x-axis
plt.scatter(X_test[:,5], y_test, color='red')
plt.scatter(X_test[:, 5], y_pred, color='blue')
plt.xlabel('Number of rooms')
plt.ylabel('Price')
plt.title('The Price of Houses ')
plt.show()
From the plot, the predicted prices (blue) track the true test-set values (red) fairly closely for most samples, which suggests that this locally weighted model captures the main structure of the housing prices.
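If you want a numeric check in addition to the visual one, the fit can be quantified on the test set. This sketch simply continues the example above, reusing the inverse-transformed y_test and y_pred arrays.

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Quantify the fit on the test set (continues the example above)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
r2 = r2_score(y_test, y_pred)
print(f'RMSE: {rmse:.2f}  R^2: {r2:.2f}')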