
Since its release, Apache Spark (an open-source framework for processing Big Data) has become one of the most widely used technologies for processing large amounts of data in parallel across multiple nodes, and it prides itself on efficiency and speed compared with similar software that came before it.

Working with this technology in Python is possible through PySpark, a Python API that allows you to interact with and tap into Apache Spark's potential using the Python programming language.

In this article, you will get started with using PySpark to build a machine-learning model with the Logistic Regression algorithm.

Note: Prior knowledge of Python, an IDE like VSCode, the ability to use a command prompt/terminal, and familiarity with Machine Learning concepts are essential for a proper understanding of the concepts covered in this article.

By going through this article, you should be able to:

  • Understand what Apache Spark is.
  • Learn about PySpark and how to use it for Machine Learning.

What’s PySpark all about?

According to the Apache Spark official website, PySpark lets you utilize the combined strengths of Apache Spark (simplicity, speed, scalability, versatility) and Python (rich ecosystem, mature libraries, simplicity) for “data engineering, data science, and machine learning on single-node machines or clusters.”


PySpark is the Python API for Apache Spark, which means it serves as an interface that lets code written in Python communicate with the Apache Spark engine, which is written in Scala. This way, professionals already familiar with the Python ecosystem can quickly adopt Apache Spark. It also ensures that the existing libraries used in Python remain relevant.
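To make that concrete, here is a minimal sketch of what driving Spark from Python looks like (the app name, column names, and values are made up for illustration):

from pyspark.sql import SparkSession

# Start a local Spark session (the entry point to Spark from Python)
spark = SparkSession.builder.appName("QuickTaste").getOrCreate()

# Build a small DataFrame from in-memory Python data
df = spark.createDataFrame(
    [("Ada", 36), ("Grace", 45), ("Alan", 41)],
    ["name", "age"],
)

# Filter and display rows; Spark executes this in its Scala engine
df.filter(df.age > 40).show()

spark.stop()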

Detailed Guide on How to Use PySpark for Machine Learning

In the ensuing steps, we will build a machine-learning model using the Logistic Regression algorithm:

  • Install project dependencies: I’m assuming that you already have Python installed on your machine. If not, install it before moving to the next step. Open your terminal or command prompt and enter the code below to install the PySpark library.
pip install pyspark

You can install these additional Python libraries if you do not have them:

pip install pandas numpy
  • Create a file and import the necessary libraries: Open VSCode, and in your chosen project directory, create a file for your project, e.g. pyspark_model.py. Open the file and import the necessary libraries for the project.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
import pandas as pd
  • Create a Spark session: Start a Spark session for the project by entering this code under the imports.
spark = SparkSession.builder.appName("LogisticRegressionExample").getOrCreate()
  • Read the CSV file (the dataset you will be working with): If you already have your dataset named data.csv in your project directory/folder, load it using the code below.
data = spark.read.csv("data.csv", header=True, inferSchema=True)
  • Exploratory data analysis: This step helps you understand the dataset you are working with. Check for null values and decide on the cleansing approach to use.

# Display the schema
data.printSchema()

# Show the first ten rows
data.show(10)

# Count null values in each column
from pyspark.sql.functions import count, when, isnull

missing_values = data.select(
    [count(when(isnull(c), c)).alias(c) for c in data.columns]
)

# Show the result
missing_values.show()

Optionally, if you are working with a small dataset, you can convert it to a pandas DataFrame and use pandas to check for missing values.

pandas_df = data.toPandas()
# Use pandas to check missing values
print(pandas_df.isna().sum())
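If these checks reveal missing values, a minimal cleaning sketch using PySpark's built-in na helpers (assuming you simply want to drop incomplete rows, or fill numeric gaps with zero) could look like this:

# Drop any row that contains a null value
data = data.na.drop()

# Alternatively, fill nulls in numeric columns with 0
# data = data.na.fill(0)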
  • Data preprocessing: This step involves converting the columns/features in the dataset into a format that PySpark’s machine-learning library can easily understand or is compatible with.

Use VectorAssembler to combine all the feature columns into a single vector column.
# Combine feature columns into a single vector column
feature_columns = [col for col in data.columns if col != "label"]
assembler = VectorAssembler(inputCols=feature_columns, outputCol="features")

# Transform the data
data = assembler.transform(data)

# Select only the 'features' and 'label' columns for training
final_data = data.select("features", "label")

# Show the transformed data
final_data.show(5)
  • Split the dataset: Split the dataset in a proportion that is convenient for you. Here, we are using 70% to 30%: 70% for training and 30% for testing the model.
train_data, test_data = final_data.randomSplit([0.7, 0.3], seed=42)
  • Train your model: We are using the Logistic Regression algorithm for training our model.

Create an instance of the LogisticRegression class and fit the model.

lr = LogisticRegression(featuresCol="features", labelCol="label")

# Train the model
lr_model = lr.fit(train_data)
  • Make predictions with your trained model: Use the model we have trained in the previous step to make predictions.
predictions = lr_model.transform(test_data)
# Show predictions
predictions.select("features", "label", "prediction", "probability").show(5)
  • Model evaluation: Here, the model is evaluated to determine its predictive performance, or its level of correctness. We achieve this by using a suitable evaluation metric.

Evaluate the model using the AUC (area under the ROC curve) metric.

evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction", labelCol="label", metricName="areaUnderROC")

# Compute the AUC
auc = evaluator.evaluate(predictions)
print(f"Area Under ROC: {auc}")

The end-to-end code used for this article, assembled from the snippets above, is shown below:

Next steps

We have reached the end of this article. By following the steps above, you have built your machine-learning model using PySpark.

Always ensure that your dataset is clean and free of null values before proceeding to the next steps. Lastly, make sure your features all contain numerical values before going ahead to train your model.
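If some of your feature columns contain strings rather than numbers, one common way to convert them is PySpark's StringIndexer. Here is a minimal sketch, assuming a hypothetical categorical column named "city":

from pyspark.ml.feature import StringIndexer

# Map the hypothetical string column "city" to a numeric index column
indexer = StringIndexer(inputCol="city", outputCol="city_index")
data = indexer.fit(data).transform(data)

The indexed column can then be passed to the VectorAssembler's input columns like any other numeric feature.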
