BigQuery and XGBoost Integration: A Jupyter Notebook Tutorial for Binary Classification
In selecting a binary classification model for tabular data, I decided to quickly try out a fast, non-deep learning model: Gradient Boosting Decision Trees (GBDT). This article describes the process of creating a Jupyter Notebook script using BigQuery as the data source and the XGBoost algorithm for modeling.
For those who prefer to jump straight into the script without the explanation, here it is. Please adjust the project_name, dataset_name, and table_name to fit your project.
import xgboost as xgb
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import precision_score, recall_score, f1_score, log_loss
from google.cloud import bigquery


# Function to load data from BigQuery
def load_data_from_bigquery(query):
    client = bigquery.Client()
    query_job = client.query(query)
    df = query_job.to_dataframe()
    return df


def compute_metrics(labels, predictions, prediction_probs):
    precision = precision_score(labels, predictions, average='macro')
    recall = recall_score(labels, predictions, average='macro')
    f1 = f1_score(labels, predictions, average='macro')
    loss = log_loss(labels, prediction_probs)
    return {
        'precision': precision,
        'recall': recall,
        'f1': f1,
        'loss': loss
    }


# Query in BigQuery
query = """
SELECT *
FROM `<project_name>.<dataset_name>.<table_name>`
"""

# Loading data
df = load_data_from_bigquery(query)

# Target data
y = df["reaction"]

# Input data
X = df.drop(columns=["reaction"], axis=1)

# Splitting data into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=1)

# Training the XGBoost model
model = xgb.XGBClassifier(eval_metric='logloss')

# Setting the parameter grid
param_grid = {
    'max_depth': [3, 4, 5],
    'learning_rate': [0.01, 0.1, 0.2],
    'n_estimators': [100, 200, 300],
    'subsample': [0.8, 0.9, 1.0]
}

# Initializing GridSearchCV
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=3, scoring='accuracy', verbose=1, n_jobs=-1)

# Executing the grid search
grid_search.fit(X_train, y_train)

# Displaying the best parameters
print("Best parameters:", grid_search.best_params_)

# Model with the best parameters
best_model = grid_search.best_estimator_

# Predictions on validation data
val_predictions = best_model.predict(X_val)
val_prediction_probs = best_model.predict_proba(X_val)

# Predictions on training data
train_predictions = best_model.predict(X_train)
train_prediction_probs = best_model.predict_proba(X_train)

# Evaluating the model (validation data)
val_metrics = compute_metrics(y_val, val_predictions, val_prediction_probs)
print("Optimized Validation Metrics:", val_metrics)

# Evaluating the model (training data)
train_metrics = compute_metrics(y_train, train_predictions, train_prediction_probs)
print("Optimized Training Metrics:", train_metrics)
Previously, data was stored in Cloud Storage as CSV files, but the slow data loading was reducing the efficiency of our learning processes, prompting the shift to BigQuery for faster data handling.
from google.cloud import bigquery

client = bigquery.Client()
This code initializes a BigQuery client using Google Cloud credentials, which can be set up through environment variables or the Google Cloud SDK.
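If the client cannot pick up credentials automatically, one option is to point it at a service-account key file before instantiation. The sketch below illustrates this; the key file path and the project name are placeholders, not values from this tutorial.

# Sketch: supplying service-account credentials explicitly (placeholder path and project).
import os
from google.cloud import bigquery

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"
client = bigquery.Client(project="<project_name>")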
def load_data_from_bigquery(query):
    query_job = client.query(query)
    df = query_job.to_dataframe()
    return df
This function executes a SQL query and returns the results as a Pandas DataFrame, allowing for efficient downstream processing.
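Because the function accepts any SQL string, it can also be used to pull a quick sample before loading the full table. The query below is only an illustration of that idea and is not part of the original script.

# Illustrative only: preview a small sample of the table to check columns and dtypes.
preview_query = """
SELECT *
FROM `<project_name>.<dataset_name>.<table_name>`
LIMIT 1000
"""
preview_df = load_data_from_bigquery(preview_query)
print(preview_df.shape)
print(preview_df.dtypes)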
XGBoost is a high-performance machine learning algorithm based on gradient boosting and is widely used for classification and regression problems.
https://arxiv.org/pdf/1603.02754
import xgboost as xgb

model = xgb.XGBClassifier(eval_metric='logloss')
Here, the XGBClassifier class is instantiated, using log loss as the evaluation metric.
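Only the evaluation metric is set here; the remaining hyperparameters are left at their defaults and tuned later by the grid search. For reference, a few commonly adjusted arguments are sketched below with illustrative values; none of these settings come from the original script.

# Sketch with illustrative values; the tutorial itself tunes these via GridSearchCV.
model = xgb.XGBClassifier(
    eval_metric='logloss',
    objective='binary:logistic',  # explicit binary objective
    max_depth=5,
    learning_rate=0.1,
    n_estimators=200,
    subsample=0.9,
    random_state=1
)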
from sklearn.model_selection import train_test_split

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=1)
This function splits the data into training and validation sets; holding out a validation set is essential for measuring the model's performance on unseen data and detecting overfitting.
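If the two classes are imbalanced, a stratified split is a common variant that keeps the class ratio identical in both sets. This is an optional sketch, not something the original script does.

# Optional variant: stratify on the labels so both sets keep the same class ratio.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=1, stratify=y
)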
from sklearn.model_selection import GridSearchCV

param_grid = {
    'max_depth': [3, 4, 5],
    'learning_rate': [0.01, 0.1, 0.2],
    'n_estimators': [100, 200, 300],
    'subsample': [0.8, 0.9, 1.0]
}

grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=3, scoring='accuracy', verbose=1, n_jobs=-1)
grid_search.fit(X_train, y_train)
GridSearchCV exhaustively evaluates every combination in the parameter grid with 3-fold cross-validation (here 81 combinations, i.e. 243 fits) and keeps the combination that scores best.
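After fitting, the full cross-validation results can also be inspected, for example to see how close the runner-up parameter combinations were. A minimal sketch, assuming pandas is available in the notebook:

# Sketch: rank the parameter combinations by mean cross-validated accuracy.
import pandas as pd

cv_results = pd.DataFrame(grid_search.cv_results_)
print(cv_results[['params', 'mean_test_score', 'std_test_score']]
      .sort_values('mean_test_score', ascending=False)
      .head())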
The performance of the model is evaluated using precision, recall, F1 score, and log loss on the validation dataset.
def compute_metrics(labels, predictions, prediction_probs):
    from sklearn.metrics import precision_score, recall_score, f1_score, log_loss
    return {
        'precision': precision_score(labels, predictions, average='macro'),
        'recall': recall_score(labels, predictions, average='macro'),
        'f1': f1_score(labels, predictions, average='macro'),
        'loss': log_loss(labels, prediction_probs)
    }

val_metrics = compute_metrics(y_val, val_predictions, val_prediction_probs)
print("Optimized Validation Metrics:", val_metrics)
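The macro-averaged metrics can be supplemented with a confusion matrix and a per-class report. This sketch is an optional addition and is not part of the original script.

# Optional supplement: confusion matrix and per-class precision/recall on validation data.
from sklearn.metrics import confusion_matrix, classification_report

print(confusion_matrix(y_val, val_predictions))
print(classification_report(y_val, val_predictions, digits=3))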
When you run the notebook, you will get the following output showing the best parameters and the model evaluation metrics.
Best parameters: {'learning_rate': 0.2, 'max_depth': 5, 'n_estimators': 300, 'subsample': 0.9}
Optimized Validation Metrics: {'precision': 0.8919952583956949, 'recall': 0.753797304483842, 'f1': 0.8078981867164722, 'loss': 0.014006406471894417}
Optimized Training Metrics: {'precision': 0.8969556573175115, 'recall': 0.7681976753444204, 'f1': 0.8199353049298048, 'loss': 0.012475375680566196}
In some cases, it may be more appropriate to load data from Google Cloud Storage rather than BigQuery. The following function reads a CSV file from Cloud Storage, returns it as a Pandas DataFrame, and can be used interchangeably with load_data_from_bigquery.
import io

import pandas as pd
from google.cloud import storage


def load_data_from_gcs(bucket_name, file_path):
    client = storage.Client()
    bucket = client.get_bucket(bucket_name)
    blob = bucket.blob(file_path)
    data = blob.download_as_text()
    df = pd.read_csv(io.StringIO(data), encoding='utf-8')
    return df
Example of use:
bucket_name = '<bucket-name>'
file_path = '<file-path>'

df = load_data_from_gcs(bucket_name, file_path)
If you want to use LightGBM instead of XGBoost, you can simply replace the XGBClassifier with LGBMClassifier in the same setup.
import lightgbm as lgb

model = lgb.LGBMClassifier()
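The rest of the pipeline, including the grid search, carries over unchanged. The sketch below shows one way to reuse it with an illustrative LightGBM grid; num_leaves is LightGBM's main complexity parameter, and the specific values are assumptions rather than recommendations.

# Sketch: reusing the same grid-search setup with LightGBM (illustrative grid values).
import lightgbm as lgb
from sklearn.model_selection import GridSearchCV

lgb_model = lgb.LGBMClassifier()
lgb_param_grid = {
    'num_leaves': [15, 31, 63],
    'learning_rate': [0.01, 0.1, 0.2],
    'n_estimators': [100, 200, 300]
}
lgb_search = GridSearchCV(estimator=lgb_model, param_grid=lgb_param_grid,
                          cv=3, scoring='accuracy', verbose=1, n_jobs=-1)
lgb_search.fit(X_train, y_train)
print("Best parameters:", lgb_search.best_params_)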
Future articles will cover the use of BigQuery ML (BQML) for training.