Implementing a Perceptron from Scratch in Python
Hi devs,
The Perceptron is one of the simplest and most fundamental concepts in machine learning. It’s a binary linear classifier that forms the basis of neural networks. In this post, I'll walk through the steps to understand and implement a Perceptron from scratch in Python.
Let's dive in!
A Perceptron is a basic algorithm for supervised learning of binary classifiers. Given input features, it learns weights that separate two classes using a simple threshold function. In simple terms: it computes a weighted sum of the inputs, adds a bias, and outputs class 1 if the result clears the threshold, class 0 otherwise.
Mathematically, it looks like this:
f(x) = w1*x1 + w2*x2 + ... + wn*xn + b
Where:

- w1, ..., wn are the learned weights
- x1, ..., xn are the input features
- b is the bias term
If f(x) is greater than or equal to a threshold, the output is class 1; otherwise, it’s class 0.
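To make the formula concrete, here is a tiny worked example. The weights, bias, and input below are made-up illustrative values, not learned ones:

```python
import numpy as np

# Illustrative (made-up) weights, bias, and input, just to evaluate f(x)
w = np.array([0.5, -0.2])   # w1, w2
b = 0.1                     # bias
x = np.array([1.0, 2.0])    # x1, x2

f_x = np.dot(w, x) + b      # 0.5*1.0 + (-0.2)*2.0 + 0.1 = 0.2
output = 1 if f_x >= 0 else 0
print("f(x) =", f_x, "-> class", output)
```

Since f(x) = 0.2 is above the threshold of 0, this input is assigned class 1.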
We’ll use only NumPy here for matrix operations to keep things lightweight.
```python
import numpy as np
```
We’ll build the Perceptron as a class to keep everything organized. The class will include methods for training and prediction.
```python
class Perceptron:
    def __init__(self, learning_rate=0.01, epochs=1000):
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.weights = None
        self.bias = None

    def fit(self, X, y):
        # Number of samples and features
        n_samples, n_features = X.shape

        # Initialize weights and bias
        self.weights = np.zeros(n_features)
        self.bias = 0

        # Training
        for _ in range(self.epochs):
            for idx, x_i in enumerate(X):
                # Calculate linear output
                linear_output = np.dot(x_i, self.weights) + self.bias
                # Apply step function
                y_predicted = self._step_function(linear_output)

                # Update weights and bias if there is a misclassification
                if y[idx] != y_predicted:
                    update = self.learning_rate * (y[idx] - y_predicted)
                    self.weights += update * x_i
                    self.bias += update

    def predict(self, X):
        # Calculate linear output and apply step function
        linear_output = np.dot(X, self.weights) + self.bias
        y_predicted = self._step_function(linear_output)
        return y_predicted

    def _step_function(self, x):
        return np.where(x >= 0, 1, 0)
```
In the code above:

- fit initializes the weights and bias to zero, then loops over the data for the given number of epochs, updating only when a sample is misclassified.
- predict computes the linear output for each sample and applies the step function.
- _step_function returns 1 when its input is greater than or equal to 0, and 0 otherwise.
We’ll use a small dataset to make it easy to visualize the output. Here’s a simple AND gate dataset:
```python
# AND gate dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # Labels for AND gate
```
Now, let’s train the Perceptron and test its predictions.
```python
# Initialize Perceptron
p = Perceptron(learning_rate=0.1, epochs=10)

# Train the model
p.fit(X, y)

# Test the model
print("Predictions:", p.predict(X))
```
Expected output for AND gate:

```
Predictions: [0 0 0 1]
```
The update rule in fit fires only for misclassified points, so each correction nudges the weights and bias toward the correct decision boundary.
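Here is a single application of that update rule in isolation, with made-up starting values chosen so the sample is misclassified and an update actually happens:

```python
import numpy as np

# One application of the perceptron update rule (illustrative values)
lr = 0.1
w = np.array([-0.5, -0.5])
b = 0.0
x_i = np.array([1.0, 1.0])
y_true = 1

y_pred = 1 if np.dot(x_i, w) + b >= 0 else 0   # -1.0 < 0, so it predicts 0
if y_pred != y_true:
    update = lr * (y_true - y_pred)            # 0.1 * (1 - 0) = 0.1
    w = w + update * x_i                       # weights move to [-0.4, -0.4]
    b = b + update                             # bias moves to 0.1
print("w =", w, "b =", b)
```

Because the true label was 1 and the prediction was 0, the update is positive and pushes the linear output upward for this input; had the sample been correctly classified, nothing would change.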
Visualize the decision boundary after training. This is especially helpful if you’re working with more complex datasets. For now, we’ll keep things simple with the AND gate.
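Even without a plotting library, you can read a rough sketch of the boundary straight off the learned parameters: the boundary is the line w1*x1 + w2*x2 + b = 0. The snippet below inlines the same training rule as the class (so it runs on its own) and solves for x2 at two x1 values; treat it as a sketch of the idea, not part of the Perceptron class:

```python
import numpy as np

# AND gate data, as above
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

# Minimal inline version of the same training rule
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(10):
    for x_i, y_i in zip(X, y):
        y_hat = 1 if np.dot(x_i, w) + b >= 0 else 0
        if y_hat != y_i:
            w += lr * (y_i - y_hat) * x_i
            b += lr * (y_i - y_hat)

# Decision boundary: w1*x1 + w2*x2 + b = 0  ->  x2 = -(w1*x1 + b) / w2
if w[1] != 0:
    for x1 in (0.0, 1.0):
        x2 = -(w[0] * x1 + b) / w[1]
        print(f"boundary passes through ({x1}, {x2:.2f})")
```

Plugging the four AND inputs back into the learned line confirms that only (1, 1) falls on the class-1 side.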
While the Perceptron is limited to linearly separable problems, it’s the foundation of more complex neural networks like Multi-Layer Perceptrons (MLPs). With MLPs, we add hidden layers and activation functions (like ReLU or Sigmoid) to solve non-linear problems.
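To see that limitation concretely, train the same rule on XOR: since no straight line separates its classes, the Perceptron never converges, and at least one point stays misclassified no matter how long you train. The training loop is inlined here so the snippet is self-contained:

```python
import numpy as np

# XOR dataset: not linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(100):  # more epochs will not help here
    for x_i, y_i in zip(X, y):
        y_hat = 1 if np.dot(x_i, w) + b >= 0 else 0
        if y_hat != y_i:
            w += lr * (y_i - y_hat) * x_i
            b += lr * (y_i - y_hat)

preds = np.array([1 if np.dot(x_i, w) + b >= 0 else 0 for x_i in X])
print("XOR predictions:", preds, "targets:", y)
```

Whatever weights the loop ends on, at least one of the four predictions disagrees with the targets, which is exactly the failure an MLP's hidden layer fixes.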
The Perceptron is a straightforward but foundational machine learning algorithm. By understanding how it works and implementing it from scratch, we gain insights into the basics of machine learning and neural networks. The beauty of the Perceptron lies in its simplicity, making it a perfect starting point for anyone interested in AI.