Data preprocessing means carrying out a series of steps on a dataset before it is used for machine learning or other tasks. It involves cleaning, formatting, or transforming data to improve its quality and make sure it is suitable for its main purpose (in this case, training a model). A clean, high-quality dataset enhances a machine learning model's performance.
Common issues with low-quality data include:
- Missing values
- Inconsistent formats
- Duplicate values
- Irrelevant features
In this article, I will show you some of the common data preprocessing techniques to prepare datasets for use in training models. You will need basic knowledge of Python and how to use Python libraries and frameworks.
Requirements:
The following are required to get the best out of this guide:
- Python 3.12
- Jupyter Notebook or your favourite notebook
- NumPy
- pandas
- SciPy
- scikit-learn
- Melbourne Housing Dataset
You can also check out the output of each code snippet in these Jupyter notebooks on GitHub.
Setup
If you haven't installed Python already, you can download it from the Python website and follow the instructions to install it.
Once Python has been installed, install the required libraries
pip install numpy scipy pandas scikit-learn
Install Jupyter Notebook.
pip install notebook
After installation, start Jupyter Notebook with the following command
jupyter notebook
This will launch Jupyter Notebook in your default web browser. If it doesn't, check the terminal for a link you can manually paste into your browser.
Open a new notebook from the File menu, import the required libraries and run the cell
import numpy as np
import pandas as pd
import scipy
import sklearn
Go to the Melbourne Housing Dataset site and download the dataset. Load the dataset into the notebook using the following code. You can copy the file path on your computer and paste it into the read_csv function, or you can put the CSV file in the same folder as the notebook and load it as shown below.
data = pd.read_csv(r"melb_data.csv")

# View the first 5 rows of the dataset
data.head()
Split the data into training and validation sets
from sklearn.model_selection import train_test_split

# Set the target
y = data['Price']

# Drop the target column from the features
melb_features = data.drop(['Price'], axis=1)

# Keep only the non-categorical (numeric) feature columns
X = melb_features.select_dtypes(exclude=['object'])

# Divide data into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)
You have to split the data into training and validation sets before preprocessing in order to prevent data leakage. Whatever preprocessing you fit on the training feature set is then applied, unchanged, to the validation feature set.
Now the dataset is ready to be processed!
Data Cleaning
Handling missing values
Missing values in a dataset are like holes in the fabric meant for sewing a dress: they spoil the dress before it is even made.
There are 3 ways to handle missing values in a dataset.
- Drop the rows or columns with empty cells
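The original snippet for this step is not reproduced here, so below is a minimal sketch of what dropping could look like with pandas, assuming the X_train and X_valid frames created by the split above (the cols_with_missing and reduced_X_train names are only for illustration).
# Find the columns in the training set that contain at least one missing value
cols_with_missing = [col for col in X_train.columns if X_train[col].isnull().any()]

# Drop those columns from both the training and validation sets
reduced_X_train = X_train.drop(cols_with_missing, axis=1)
reduced_X_valid = X_valid.drop(cols_with_missing, axis=1)

# Alternatively, drop the rows that contain missing values instead
X_train_rows_dropped = X_train.dropna(axis=0)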
The issue with this method is that you may lose valuable information that your model could have learned from. Unless most of the values in a row or column are missing, there is usually no need to drop it.
- Impute values in the empty cells. You can impute, or fill in, the empty cells with the mean, median, or mode of the data in that particular column. SimpleImputer from scikit-learn will be used to impute values in the empty cells, as sketched below.
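The article's own snippet is not shown here; the following is a minimal sketch of mean imputation with SimpleImputer, assuming the X_train and X_valid frames from the split above. The imputer is fitted on the training set only and then applied to the validation set.
from sklearn.impute import SimpleImputer

# Fill empty cells with the mean of each column
my_imputer = SimpleImputer(strategy='mean')
imputed_X_train = pd.DataFrame(my_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(my_imputer.transform(X_valid))

# Imputation removes the column names, so put them back
imputed_X_train.columns = X_train.columns
imputed_X_valid.columns = X_valid.columns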
- Impute and notify. Here you impute values in the empty cells, but you also create a column that indicates that the cell was initially empty, as in the sketch below.
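A sketch of this approach, again assuming the X_train and X_valid frames from above: before imputing, add one boolean column per affected feature that records which rows were originally missing (the _was_missing suffix is just an illustrative naming choice).
# Make copies so the original frames are not modified
X_train_plus = X_train.copy()
X_valid_plus = X_valid.copy()

# Add an indicator column for each feature that has missing values in the training set
cols_with_missing = [col for col in X_train.columns if X_train[col].isnull().any()]
for col in cols_with_missing:
    X_train_plus[col + '_was_missing'] = X_train_plus[col].isnull()
    X_valid_plus[col + '_was_missing'] = X_valid_plus[col].isnull()

# Then impute as before
from sklearn.impute import SimpleImputer
my_imputer = SimpleImputer(strategy='mean')
imputed_X_train_plus = pd.DataFrame(my_imputer.fit_transform(X_train_plus))
imputed_X_valid_plus = pd.DataFrame(my_imputer.transform(X_valid_plus))
imputed_X_train_plus.columns = X_train_plus.columns
imputed_X_valid_plus.columns = X_valid_plus.columns
SimpleImputer also accepts add_indicator=True, which appends these indicator columns for you.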
Duplicate removal
Duplicate rows mean repeated data, which affects model accuracy. The way to deal with them is to drop them.
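The original snippet is not shown here; a minimal sketch with pandas' built-in duplicate handling, applied to the full data frame loaded earlier, could look like this.
# Count fully duplicated rows
print(data.duplicated().sum())

# Drop duplicated rows, keeping the first occurrence of each
data = data.drop_duplicates()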
Dealing with outliers
Outliers are values that are significantly different from the other values in the dataset. They can be unusually high or low compared to the other data values. They can arise from data entry errors, or they can be genuine extreme values.
It is important to deal with outliers or else they will lead to inaccurate data analysis or models. One method to detect outliers is by calculating z-scores.
For every data point, a z-score is calculated that measures how many standard deviations the point is from the mean. If the absolute z-score of a data point is 3 or higher, the data point is treated as an outlier.
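The article's snippet is not reproduced here, so below is a minimal sketch of z-score filtering with scipy.stats, applied to a single numeric column (Price) purely as an example; the threshold of 3 follows the rule described above.
from scipy import stats

# Compute the absolute z-score of every value in the Price column
z_scores = np.abs(stats.zscore(data['Price']))

# Keep only the rows whose Price lies within 3 standard deviations of the mean
data_no_outliers = data[z_scores < 3]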
Data Transformation
Normalization
You normalize features so that their values follow, or at least approximate, a normal distribution.
A normal distribution (also known as the Gaussian distribution) is a statistical distribution in which values are spread roughly symmetrically above and below the mean. A plot of normally distributed data forms a bell curve.
Normalizing data matters when the machine learning algorithm you want to use assumes that the data is normally distributed. An example is the Gaussian Naive Bayes model.
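The original code for this step is not shown here. One way to push skewed numeric features towards a normal shape is scikit-learn's PowerTransformer, sketched below; this is an assumption about the tooling, not necessarily what the original article used. The sketch reuses the imputed_X_train and imputed_X_valid frames from the imputation step so that there are no missing values, and the transformer is fitted on the training set only.
from sklearn.preprocessing import PowerTransformer

# PowerTransformer (Yeo-Johnson by default) reshapes each feature towards a Gaussian distribution
pt = PowerTransformer()
normalized_X_train = pd.DataFrame(pt.fit_transform(imputed_X_train), columns=imputed_X_train.columns)
normalized_X_valid = pd.DataFrame(pt.transform(imputed_X_valid), columns=imputed_X_valid.columns)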
Standardization
Standardization transforms the features of a dataset to have a mean of 0 and a standard deviation of 1. This process scales each feature so that it has similar ranges across the data. This ensures that each feature contributes equally to model training.
You use standardization when:
- The features in your data are on different scales or units.
- The machine learning model you want to use is based on distance or gradient-based optimizations (e.g., linear regression, logistic regression, K-means clustering).
You use StandardScaler() from the sklearn library to standardize features.
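A minimal sketch with StandardScaler, again fitted on the training set only and then applied to the validation set; it assumes the imputed_X_train and imputed_X_valid frames from the imputation step so that no missing values remain.
from sklearn.preprocessing import StandardScaler

# Standardize each feature to mean 0 and standard deviation 1
scaler = StandardScaler()
scaled_X_train = pd.DataFrame(scaler.fit_transform(imputed_X_train), columns=imputed_X_train.columns)
scaled_X_valid = pd.DataFrame(scaler.transform(imputed_X_valid), columns=imputed_X_valid.columns)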
Conclusion
Data preprocessing is not just a preliminary stage. It is part of the process of building accurate machine learning models. It can also be tweaked to fit the needs of the dataset you are working with.
Like with most activities, practice makes perfect. As you continue to preprocess data, your skills will improve as well as your models.
I would love to read your thoughts on this!