Data preprocessing is the set of steps carried out on a dataset before it is used for machine learning or other tasks. It involves cleaning, formatting or transforming data to improve its quality and make it suitable for its intended purpose (in this case, training a model). A clean, high-quality dataset enhances a machine learning model's performance.
Common issues with low-quality data include:
- Missing values
- Inconsistent formats
- Duplicate values
- Irrelevant features
In this article, I will show you some of the common data preprocessing techniques to prepare datasets for use in training models. You will need basic knowledge of Python and how to use Python libraries and frameworks.
Requirements:
The following are required to get the best out of this guide:
- Python 3.12
- Jupyter Notebook or your favourite notebook
- NumPy
- Pandas
- SciPy
- scikit-learn
- Melbourne Housing Dataset
You can also check out the output of each code snippet in these Jupyter notebooks on GitHub.
Setup
If you haven't installed Python already, you can download it from the Python website and follow the instructions to install it.
Once Python has been installed, install the required libraries:
pip install numpy scipy pandas scikit-learn
Install Jupyter Notebook.
pip install notebook
After installation, start Jupyter Notebook with the following command
jupyter notebook
This will launch Jupyter Notebook in your default web browser. If it doesn't, check the terminal for a link you can manually paste into your browser.
Open a new notebook from the File menu, import the required libraries and run the cell
import numpy as np
import pandas as pd
import scipy
import sklearn
Go to the Melbourne Housing Dataset site and download the dataset, then load it into the notebook with the code below. You can copy the file path on your computer and paste it into the read_csv function, or put the CSV file in the same folder as the notebook and reference it by name, as shown here.
data = pd.read_csv(r"melb_data.csv")

# View the first 5 rows of the dataset
data.head()
Split the data into training and validation sets
from sklearn.model_selection import train_test_split

# Set the target
y = data['Price']

# Drop the target column from the features
melb_features = data.drop(['Price'], axis=1)

# Keep only numerical features by excluding categorical (object) columns
X = melb_features.select_dtypes(exclude=['object'])

# Divide data into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)
You split the data into training and validation sets before preprocessing in order to prevent data leakage. Any preprocessing step you fit on the training features must then be applied, without refitting, to the validation features.
Now the dataset is ready to be processed!
Data Cleaning
Handling missing values
Missing values in a dataset are like holes in the fabric meant for sewing a dress: they spoil the dress before it is even made.
There are three common ways to handle missing values in a dataset.
- Drop the rows or columns with empty cells
The issue with this method is that you may lose valuable information your model needs to learn from. Unless most of the values in a row or column are missing, dropping it is usually not worth the loss.
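As a sketch of this first approach, using a tiny made-up DataFrame in place of the Melbourne data, pandas' dropna can drop either the rows or the columns that contain missing values:

```python
import pandas as pd

df = pd.DataFrame({
    "Rooms": [2, 3, None, 4],
    "Price": [850000, None, 720000, 990000],
})

# Drop every row that contains at least one missing value
rows_dropped = df.dropna(axis=0)

# Drop every column that contains at least one missing value
cols_dropped = df.dropna(axis=1)
```

Here both columns contain a missing value, so dropping columns discards everything, which illustrates why this method can be wasteful.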
- Impute values in the empty cells
You can fill the empty cells with the mean, median or mode of the data in that particular column. SimpleImputer from scikit-learn will be used to impute values in the empty cells.
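A minimal sketch of mean imputation with SimpleImputer, using small made-up frames in place of the real X_train and X_valid; note that the imputer is fit on the training set only and then reused on the validation set:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

X_train = pd.DataFrame({"Rooms": [2.0, np.nan, 4.0], "Landsize": [150.0, 200.0, np.nan]})
X_valid = pd.DataFrame({"Rooms": [np.nan, 3.0], "Landsize": [180.0, np.nan]})

# Fit the imputer on the training set only, then apply it to both sets
imputer = SimpleImputer(strategy="mean")
imputed_X_train = pd.DataFrame(imputer.fit_transform(X_train), columns=X_train.columns)
imputed_X_valid = pd.DataFrame(imputer.transform(X_valid), columns=X_valid.columns)
```

Missing cells in the validation set are filled with the training-set means (here 3.0 for Rooms and 175.0 for Landsize), which is what prevents leakage.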
- Impute and notify
Here you impute values in the empty cells as before, but you also add a column that flags each cell that was originally empty.
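A sketch of impute-and-notify on a small made-up frame: for each column, a boolean "_was_missing" column is added before imputing, so the model can still see which values were filled in:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

X_train = pd.DataFrame({"Rooms": [2.0, np.nan, 4.0], "Landsize": [150.0, 200.0, np.nan]})

# Record which cells were missing before imputing
X_train_plus = X_train.copy()
for col in X_train.columns:
    X_train_plus[col + "_was_missing"] = X_train_plus[col].isnull()

# Impute the missing values with the column means
imputer = SimpleImputer(strategy="mean")
X_train_plus = pd.DataFrame(imputer.fit_transform(X_train_plus), columns=X_train_plus.columns)
```

SimpleImputer also has an add_indicator=True parameter that produces similar indicator columns automatically.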
Duplicate removal
Duplicate rows mean repeated data, which can bias a model and hurt its accuracy. The standard way to deal with them is to drop them.
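A sketch with a tiny made-up frame: pandas' drop_duplicates removes rows that are exact repeats of an earlier row:

```python
import pandas as pd

df = pd.DataFrame({
    "Suburb": ["Abbotsford", "Abbotsford", "Airport West"],
    "Price": [1035000, 1035000, 840000],
})

# Drop rows that are exact duplicates of an earlier row
deduped = df.drop_duplicates().reset_index(drop=True)
```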
Dealing with outliers
Outliers are values that are significantly different from the other values in the dataset. They can be unusually high or low compared to other data values. They can arise due to entry errors or they could genuinely be outliers.
It is important to deal with outliers or else they will lead to inaccurate data analysis or models. One method to detect outliers is by calculating z-scores.
A z-score measures how many standard deviations a data point lies from the mean. This calculation is done for every data point; if the absolute z-score of a data point is 3 or higher, the point is considered an outlier.
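A sketch of z-score filtering on a synthetic price column (the specific numbers are made up for illustration), using scipy.stats.zscore:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# 500 typical prices plus one extreme entry
prices = np.concatenate([rng.normal(1_000_000, 100_000, 500), [9_000_000]])

# Keep only the points within 3 standard deviations of the mean
z = np.abs(stats.zscore(prices))
filtered = prices[z < 3]
```

Note that with very small samples a single extreme value can never reach a z-score of 3, so this check is only meaningful on reasonably sized datasets.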
Data Transformation
Normalization
You normalize features to reshape their values so that their distribution more closely resembles a normal distribution.
A normal distribution (also known as the Gaussian distribution) is a statistical distribution where there are roughly equal distances or distributions above and below the mean. The graph of the data points of a normally distributed data form a bell curve.
Normalize your data when the machine learning algorithm you want to use assumes that the data is normally distributed. An example is the Gaussian Naive Bayes model.
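One way to do this in scikit-learn (the original notebooks may use a different transformer) is PowerTransformer, which applies a Yeo-Johnson transform to push a skewed feature toward a normal shape; a sketch on synthetic right-skewed data:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(0)
# A right-skewed feature, similar in shape to house prices or land sizes
skewed = rng.lognormal(mean=13, sigma=0.5, size=(1000, 1))

# Yeo-Johnson reshapes the feature toward a normal distribution;
# standardize=True (the default) also centers it to mean 0, std 1
pt = PowerTransformer(method="yeo-johnson")
normalized = pt.fit_transform(skewed)
```

As with imputation, fit the transformer on the training set only and reuse it on the validation set.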
Standardization
Standardization transforms the features of a dataset to have a mean of 0 and a standard deviation of 1. This process scales each feature so that it has similar ranges across the data. This ensures that each feature contributes equally to model training.
You use standardization when:
- The features in your data are on different scales or units.
- The machine learning model you want to use is based on distance or gradient-based optimizations (e.g., linear regression, logistic regression, K-means clustering).
You use StandardScaler() from the sklearn library to standardize features.
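A minimal sketch with small made-up arrays standing in for the real training and validation features; the scaler is fit on the training set and then applied to both:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[2.0, 150.0], [3.0, 300.0], [4.0, 450.0]])
X_valid = np.array([[3.0, 200.0]])

# Fit the scaler on the training set, then apply it to both sets
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
```

After scaling, each training column has mean 0 and standard deviation 1, so features measured in rooms and square metres now contribute on the same scale.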
Conclusion
Data preprocessing is not just a preliminary stage. It is part of the process of building accurate machine learning models. It can also be tweaked to fit the needs of the dataset you are working with.
Like with most activities, practice makes perfect. As you continue to preprocess data, your skills will improve as well as your models.
I would love to read your thoughts on this!
The above is the detailed content of Data Preprocessing Techniques for ML Models.
