
A Guide to Unsupervised Image Segmentation using Normalized Cuts (NCut) in Python

Introduction

Image segmentation plays a vital role in understanding and analyzing visual data, and Normalized Cuts (NCut) is a widely used method for graph-based segmentation. In this article, we will explore how to apply NCut for unsupervised image segmentation in Python using a dataset from Microsoft Research, with a focus on improving segmentation quality using superpixels.
Dataset Overview
The dataset used for this task can be downloaded from the following link: MSRC Object Category Image Database. This dataset contains original images as well as their semantic segmentation into nine object classes (indicated by image files ending with "_GT"). These images are grouped into thematic subsets, where the first number in the file name refers to a class subset. This dataset is perfect for experimenting with segmentation tasks.
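As a small sketch of working with this naming convention (assuming the archive extracts to a folder named MSRC_ObjCategImageDatabase_v1, as used later in this article), we can pair every original image with its "_GT" ground-truth file:

import glob
import os

# Hypothetical helper: pair each original image with its "_GT" ground-truth file.
# The directory name is assumed from the extracted archive; adjust as needed.
data_dir = 'MSRC_ObjCategImageDatabase_v1'
originals = sorted(f for f in glob.glob(os.path.join(data_dir, '*.bmp'))
                   if not f.endswith('_GT.bmp'))
pairs = [(f, f.replace('.bmp', '_GT.bmp')) for f in originals]
print(f"Found {len(pairs)} image/ground-truth pairs")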

Problem Statement

We perform image segmentation on an image from the dataset using the NCut algorithm. Segmentation at the pixel level is computationally expensive and often noisy. To overcome this, we use SLIC (Simple Linear Iterative Clustering) to generate superpixels, which group similar pixels together and reduce the problem size. To evaluate the accuracy of the segmentation, different metrics (e.g., Intersection over Union, SSIM, Rand Index) can be used.

Implementation

1. Install Required Libraries
We use scikit-image for image processing, numpy for numerical computations, matplotlib for visualization, and scikit-learn for the evaluation metrics.

pip install numpy matplotlib scikit-learn
pip install scikit-image==0.24.0
2. Load and Preprocess the Dataset

After downloading and extracting the dataset, load the images and ground truth segmentation:

wget http://download.microsoft.com/download/A/1/1/A116CD80-5B79-407E-B5CE-3D5C6ED8B0D5/msrc_objcategimagedatabase_v1.zip -O msrc_objcategimagedatabase_v1.zip
unzip msrc_objcategimagedatabase_v1.zip
rm msrc_objcategimagedatabase_v1.zip

Now we are ready to start coding.

from skimage import io, segmentation, color, measure
from skimage import graph
import numpy as np
import matplotlib.pyplot as plt

# Load the image and its ground truth
image = io.imread('/content/MSRC_ObjCategImageDatabase_v1/1_16_s.bmp')
ground_truth = io.imread('/content/MSRC_ObjCategImageDatabase_v1/1_16_s_GT.bmp')

# show images side by side
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].imshow(image)
ax[0].set_title('Image')
ax[1].imshow(ground_truth)
ax[1].set_title('Ground Truth')
plt.show()

3. Generate Superpixels using SLIC and create a Region Adjacency Graph

We use the SLIC algorithm to compute superpixels before applying NCut. Using the generated superpixels, we construct a Region Adjacency Graph (RAG) based on mean color similarity:

from skimage.util import img_as_ubyte

compactness = 30
n_segments = 100
labels = segmentation.slic(image, compactness=compactness, n_segments=n_segments, enforce_connectivity=True)

# draw the superpixel boundaries on the image
image_with_boundaries = segmentation.mark_boundaries(image, labels, color=(0, 0, 0))
image_with_boundaries = img_as_ubyte(image_with_boundaries)

# color each superpixel with its mean color
pixel_labels = color.label2rgb(labels, image_with_boundaries, kind='avg', bg_label=0)

compactness controls the balance between the color similarity and spatial proximity of pixels when forming superpixels. It determines how much emphasis is placed on keeping the superpixels compact (closer in spatial terms) versus ensuring that they are more homogeneously grouped by color.
Higher Values: A higher compactness value causes the algorithm to prioritize creating superpixels that are spatially tight and uniform in size, with less attention to color similarity. This might result in superpixels that are less sensitive to edges or color gradients.
Lower Values: A lower compactness value allows the superpixels to vary more in spatial size in order to respect the color differences more accurately. This typically results in superpixels that follow the boundaries of objects in the image more closely.

n_segments controls the number of superpixels (or segments) that the SLIC algorithm attempts to generate in the image. Essentially, it sets the resolution of the segmentation.
Higher Values: A higher n_segments value creates more superpixels, which means each superpixel will be smaller and the segmentation will be more fine-grained. This can be useful when the image has complex textures or small objects.
Lower Values: A lower n_segments value produces fewer, larger superpixels. This is useful when you want a coarse segmentation of the image, grouping larger areas into single superpixels.
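To see the effect of these two parameters on a concrete image, a quick exploratory sketch (with illustrative, untuned values) is to run SLIC over a small grid and count the superpixels actually produced:

# Illustrative parameter sweep: count the superpixels SLIC actually produces.
# Note that SLIC treats n_segments as a target, not an exact count.
for c in (10, 30, 50):
    for n in (50, 100, 200):
        lbl = segmentation.slic(image, compactness=c, n_segments=n,
                                enforce_connectivity=True)
        print(f"compactness={c}, n_segments={n} -> {len(np.unique(lbl))} superpixels")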

4. Apply Normalized Cuts (NCut) and Visualize the Result

# use the superpixel labels to build a Region Adjacency Graph (RAG)
# whose edge weights are based on mean color similarity
g = graph.rag_mean_color(image, labels, mode='similarity')

# perform Normalized Graph cut on the Region Adjacency Graph
labels2 = graph.cut_normalized(labels, g)
segmented_image = color.label2rgb(labels2, image, kind='avg')
f, axarr = plt.subplots(nrows=1, ncols=4, figsize=(25, 20))

axarr[0].imshow(image)
axarr[0].set_title("Original")

#plot boundaries
axarr[1].imshow(image_with_boundaries)
axarr[1].set_title("Superpixels Boundaries")

#plot labels
axarr[2].imshow(pixel_labels)
axarr[2].set_title('Superpixel Labels')

#plot segmentation
axarr[3].imshow(segmented_image)
axarr[3].set_title('Segmented image (normalized cut)')
plt.show()
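cut_normalized also exposes tuning parameters: thresh, the threshold on the normalized cut value beyond which a subgraph is no longer subdivided, and num_cuts, the number of candidate cuts tried at each step. A minimal sketch, with illustrative (untuned) values:

# A smaller thresh stops the recursive partitioning earlier (fewer, larger
# segments); a larger thresh allows more splits. Values here are illustrative.
labels2_coarse = graph.cut_normalized(labels, g, thresh=1e-4, num_cuts=10, in_place=False)
labels2_fine = graph.cut_normalized(labels, g, thresh=1e-2, num_cuts=10, in_place=False)
print(len(np.unique(labels2_coarse)), 'vs', len(np.unique(labels2_fine)), 'segments')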

5. Evaluation Metrics
The key challenge in unsupervised segmentation is that NCut doesn't know the exact number of classes in the image. The number of segments found by NCut may exceed the actual number of ground truth regions. As a result, we need robust metrics to assess segmentation quality.

Intersection over Union (IoU) is a widely used metric for evaluating segmentation tasks, particularly in computer vision. It measures the overlap between the predicted segmented regions and the ground truth regions. Specifically, IoU calculates the ratio of the area of overlap between the predicted segmentation and the ground truth to the area of their union.
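To make the definition concrete, here is a toy example computing IoU for a single binary mask with NumPy:

# Toy example: IoU between a predicted and a ground-truth binary mask.
pred = np.array([[1, 1, 0],
                 [1, 0, 0],
                 [0, 0, 0]], dtype=bool)
gt = np.array([[1, 1, 0],
               [0, 0, 0],
               [1, 0, 0]], dtype=bool)

intersection = np.logical_and(pred, gt).sum()  # 2 pixels in both masks
union = np.logical_or(pred, gt).sum()          # 4 pixels in either mask
print(f"IoU = {intersection / union:.2f}")     # 2 / 4 = 0.50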

Structural Similarity Index (SSIM) is a metric used to assess the perceived quality of an image by comparing two images in terms of luminance, contrast, and structure.

To apply these metrics, the prediction and the ground-truth image must use the same set of labels. To compute the labels, we build a mask over both the ground truth and the prediction, assigning an integer ID to each distinct color found in the image.
Note, however, that NCut may find more regions than the ground truth contains, which will lower the accuracy.

def compute_mask(image):
  # Map each distinct color in the image to an integer label
  color_dict = {}

  # Get the shape of the image
  height, width, _ = image.shape

  # Create an empty array for labels
  labels = np.zeros((height, width), dtype=int)
  next_id = 0

  # Loop over each pixel
  for i in range(height):
      for j in range(width):
          # Get the color of the pixel
          color = tuple(image[i, j])
          # Assign the existing label, or create a new one
          if color in color_dict:
              labels[i, j] = color_dict[color]
          else:
              color_dict[color] = next_id
              labels[i, j] = next_id
              next_id += 1

  return labels
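The pixel-by-pixel loop is easy to follow but slow in pure Python. As an aside, an equivalent vectorized sketch using np.unique produces the same partition (the numeric IDs may differ, which is fine for the metrics below):

def compute_mask_vectorized(image):
  # Flatten to (H*W, C), find the distinct colors, and use the inverse
  # indices returned by np.unique as per-pixel labels.
  height, width = image.shape[:2]
  flat = image.reshape(-1, image.shape[-1])
  _, labels = np.unique(flat, axis=0, return_inverse=True)
  return labels.reshape(height, width)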
def show_img(prediction, groundtruth):
  f, axarr = plt.subplots(nrows=1, ncols=2, figsize=(15, 10))

  axarr[0].imshow(groundtruth)
  axarr[0].set_title("Ground truth")
  axarr[1].imshow(prediction)
  axarr[1].set_title("Prediction")
  plt.show()

prediction_mask = compute_mask(segmented_image)
groundtruth_mask = compute_mask(ground_truth)

# using the original image as baseline to convert from labels to colors
prediction_img = color.label2rgb(prediction_mask, image, kind='avg', bg_label=0)
groundtruth_img = color.label2rgb(groundtruth_mask, image, kind='avg', bg_label=0)

show_img(prediction_img, groundtruth_img)

Now we compute the accuracy scores:

from sklearn.metrics import jaccard_score
from skimage.metrics import structural_similarity as ssim

# data_range must be given explicitly for floating-point images
ssim_score = ssim(prediction_img, groundtruth_img, channel_axis=2,
                  data_range=groundtruth_img.max() - groundtruth_img.min())
print(f"SSIM SCORE: {ssim_score}")

# per-class IoU (Jaccard) scores
jac = jaccard_score(y_true=np.asarray(groundtruth_mask).flatten(),
                    y_pred=np.asarray(prediction_mask).flatten(),
                    average=None)

# compute mean IoU score across all classes
mean_iou = np.mean(jac)
print(f"Mean IoU: {mean_iou}")

Conclusion

Normalized Cuts is a powerful method for unsupervised image segmentation, but it comes with challenges such as over-segmentation and parameter tuning. By incorporating superpixels and evaluating the performance with appropriate metrics, NCut can effectively segment complex images. Metrics such as IoU, SSIM, and the Rand Index provide meaningful insights into the quality of the segmentation, though further refinement is needed to handle multi-class scenarios effectively.
Finally, a complete example is available in my notebook here.
