Nine clustering algorithms to explore unsupervised machine learning
Dec 01, 2023

Today, I would like to share with you the common unsupervised clustering methods used in machine learning.

In unsupervised learning, the data carries no labels, so instead of the label-driven fitting done in supervised learning, we feed a series of unlabeled data points to an algorithm and let it find structure hidden in the data. In the figure below, one structure the algorithm can find is that the points in the data set fall into two separate groups (clusters); an algorithm that can circle these clusters is called a clustering algorithm.

[Figure: an unlabeled data set whose points form two separable groups (clusters)]

Applications of clustering algorithms


  • Market segmentation: group the customers in a database into distinct market segments so that each segment can be marketed to, or served, separately.
  • Social network analysis: find closely connected groups of people based on who contacts whom most frequently by email.
  • Organizing computer clusters: in data centers, identify machines that tend to work together so that resources can be reorganized, networks rearranged, and the data center optimized.
  • Understanding the composition of the Milky Way: cluster astronomical data to learn about the structure of the galaxy.

The goal of cluster analysis is to divide observations into groups ("clusters") such that the pairwise dissimilarities between observations assigned to the same cluster tend to be smaller than those between observations in different clusters. Clustering algorithms fall into three broad types: combinatorial algorithms, mixture modeling, and mode seeking.

Several common clustering algorithms are:

  • K-Means Clustering
  • Hierarchical Clustering
  • Agglomerative Clustering
  • Affinity Propagation
  • Mean Shift Clustering
  • Bisecting K-Means
  • DBSCAN
  • OPTICS
  • BIRCH

K-means

The K-means algorithm is currently one of the most popular clustering methods.

K-means was proposed by Stuart Lloyd of Bell Labs in 1957, initially for pulse code modulation, but the work was not published until 1982. In 1965, Edward W. Forgy published essentially the same algorithm, so K-means is sometimes called the Lloyd-Forgy algorithm.

Clustering problems typically involve an unlabeled data set that an algorithm must automatically divide into closely related subsets, or clusters. The most popular and widely used algorithm for this is K-means.

Intuitive understanding of the K-means algorithm:

[Figure: left, an unlabeled two-cluster data set; right, two randomly initialized cluster centroids]

Suppose we have an unlabeled data set (left in the figure above) and want to divide it into two clusters. The K-means algorithm proceeds as follows:

  • First, randomly generate two points (two, because we want to cluster the data into two categories; right in the figure above). These points are called cluster centroids.
  • Second, run the inner loop of K-means. K-means is an iterative algorithm that repeats two steps: cluster assignment and centroid movement.

The first step of the inner loop is cluster assignment: traverse every sample and assign each point to whichever cluster centroid it is closest to. In this example, that means traversing the data set and coloring each point red or blue.

The second step of the inner loop is to move the centroids: the red and blue cluster centroids each move to the mean position of the points assigned to them.

All points are then reassigned to clusters according to their distance from the new centroids, and the process repeats until the centroid positions no longer change between iterations and the colors of the points stop changing. At that point K-means has converged. The algorithm does a good job of finding the two clusters in this data.

[Figure: the converged K-means result, showing the two clusters found in the data]
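
As a minimal sketch of the procedure described above, the following example runs scikit-learn's KMeans on a synthetic two-cluster data set; the data set and all parameter values are illustrative assumptions, not taken from the original walkthrough:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Toy data set: 300 points drawn from two well-separated blobs.
X, _ = make_blobs(n_samples=300, centers=2, random_state=42)

# n_clusters=2 mirrors the two-cluster walkthrough above; n_init=10 reruns
# the algorithm from 10 random centroid initializations and keeps the best
# result, reducing sensitivity to the random starting points.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print(kmeans.cluster_centers_)  # final centroid positions
print(labels[:10])              # cluster assignments of the first 10 points
```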

Advantages of the K-means algorithm:

Simple and easy to understand, fast to compute, and suitable for large-scale data sets.

Disadvantages:

  • It handles non-spherical clusters poorly, is sensitive to the choice of initial centroids, and requires the number of clusters K to be specified in advance.
  • In addition, when the data contains noise or outliers, K-means may assign them to the wrong clusters.

Hierarchical Clustering

Hierarchical clustering groups a sample set according to a hierarchy of levels, where "level" actually refers to a chosen definition of distance.

Since the ultimate purpose of clustering is to reduce the number of groups, the process behaves like a dendrogram being built gradually from the leaf nodes toward the root node; this behavior is called "bottom-up".

More plainly, bottom-up hierarchical clustering treats the initial clusters as tree nodes; each iteration merges the two most similar clusters into a new, larger cluster, and so on, until only one cluster (the root node) remains.

Hierarchical clustering strategies come in two basic paradigms: agglomerative (bottom-up) and divisive (top-down).

The opposite of agglomerative clustering is divisive clustering, also known as DIANA (DIvisive ANAlysis), whose process runs "top-down".

The results of K-means depend on the chosen number of clusters and on the starting configuration. In contrast, hierarchical clustering methods do not require such choices; instead, the user specifies a measure of dissimilarity between (disjoint) groups of observations, based on the pairwise dissimilarities between the observations in the two groups. As the name suggests, hierarchical methods produce a hierarchical representation in which the clusters at each level are created by merging clusters from the next lower level. At the lowest level, each cluster contains a single observation; at the highest level, a single cluster contains all the data.

Advantages:

  • Distance and similarity rules are easy to define, with few restrictions;
  • There is no need to predetermine the number of clusters;
  • The hierarchical relationship between classes can be discovered;
  • Clusters of arbitrary shape can be found.

Disadvantages:

  • The computational complexity is high;
  • Outliers (singular values) can have a large impact;
  • The algorithm tends to produce chain-shaped clusters.

Agglomerative Clustering

Agglomerative clustering is a bottom-up clustering algorithm. Each data point is initially treated as a separate cluster, and clusters are gradually merged into larger ones until a stopping condition is met (in the extreme case, until all data points have been merged into one large cluster).
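
As a hedged sketch, the example below runs scikit-learn's AgglomerativeClustering on synthetic data; the data set, linkage choice, and cluster count are illustrative assumptions:

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

# Toy data set: 150 points drawn from three blobs.
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# linkage controls how the dissimilarity between two clusters is measured;
# "ward" merges the pair of clusters whose union gives the smallest
# increase in within-cluster variance.
agg = AgglomerativeClustering(n_clusters=3, linkage="ward")
labels = agg.fit_predict(X)

print(labels[:10])  # cluster assignments of the first 10 points
```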

Advantages:

  • Applicable to clusters of different shapes and sizes, and does not require specifying the number of clusters in advance.
  • The algorithm can also output a clustering hierarchy for easy analysis and visualization.

Disadvantages:

  • The computational complexity is high; handling large-scale data sets in particular requires substantial computing resources and storage space.
  • The algorithm is also sensitive to the selection of initial clusters, which may lead to different clustering results.

Affinity Propagation

The Affinity Propagation (AP) algorithm is usually translated as "affinity propagation" or "proximity propagation".

Affinity Propagation is a clustering algorithm based on graph theory, designed to identify "exemplars" (representative points) and the clusters around them. Unlike traditional clustering algorithms such as K-means, it does not need the number of clusters specified in advance, nor does it need randomly initialized cluster centers; instead, it obtains the final clustering result by computing similarities between data points.
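
A minimal sketch with scikit-learn's AffinityPropagation; the data set and the damping value are illustrative assumptions:

```python
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

# Toy data set: 200 points drawn from three blobs.
X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# No number of clusters is given; damping stabilizes the message passing,
# and the optional preference parameter influences how many exemplars
# emerge (lower values tend to yield fewer clusters).
ap = AffinityPropagation(damping=0.9, random_state=0)
labels = ap.fit_predict(X)

print(len(ap.cluster_centers_indices_))  # number of exemplars found
```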

Advantages:

  • No need to specify the final number of clusters;
  • Existing data points serve as the final cluster centers (exemplars), rather than newly generated centers;
  • The model is insensitive to the initial values of the data;
  • The initial similarity matrix is not required to be symmetric;
  • Compared with the k-centers clustering method, the squared error of the result is smaller.

Disadvantages:

  • The algorithm has high computational complexity and requires a lot of storage space and computing resources;
  • It handles noise points and outliers poorly.

Mean Shift Clustering

Mean shift clustering is a density-based, non-parametric clustering algorithm. Its basic idea is to find the locations where the density of data points is highest (called "local maxima" or "peaks") in order to identify clusters in the data. The core of the algorithm is to estimate the local density around each data point and to use that density estimate to compute the direction and distance by which the point should shift.
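
A minimal sketch using scikit-learn's MeanShift; the bandwidth (the kernel radius used for density estimation) is estimated from the data, and all other values are illustrative assumptions:

```python
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

# Toy data set: 300 points drawn from three blobs.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# The bandwidth is the radius of the kernel used to estimate local density;
# estimate_bandwidth derives a value from the data itself.
bandwidth = estimate_bandwidth(X, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth)
labels = ms.fit_predict(X)

print(len(ms.cluster_centers_))  # number of density peaks (clusters) found
```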

Advantages:

  • There is no need to specify the number of clusters, and it works well even for clusters with complex shapes.
  • The algorithm can also effectively handle noisy data.

Disadvantages:

  • The computational complexity is high; processing large-scale data sets consumes substantial computing resources and storage space;
  • The algorithm is also sensitive to the choice of initial parameters (such as the bandwidth) and requires tuning and optimization.

Bisecting K-Means

Bisecting K-Means is a hierarchical clustering algorithm built on top of K-means. Its basic idea is to start with all data points in a single cluster, split that cluster into two sub-clusters by applying the K-means algorithm, and repeat the splitting on sub-clusters until the predetermined number of clusters is reached.

The algorithm first treats all data points as one initial cluster, then applies K-means to split it into two sub-clusters and computes each sub-cluster's sum of squared errors (SSE). It then selects the sub-cluster with the largest SSE and splits it in two again, repeating this process until the predetermined number of clusters is reached.
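
A minimal sketch assuming scikit-learn 1.1 or later, which ships a BisectingKMeans implementation; the data set and parameter values are illustrative:

```python
from sklearn.cluster import BisectingKMeans
from sklearn.datasets import make_blobs

# Toy data set: 300 points drawn from four blobs.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# bisecting_strategy="biggest_inertia" splits the sub-cluster with the
# largest sum of squared errors at each step, matching the description above.
bkm = BisectingKMeans(n_clusters=4, bisecting_strategy="biggest_inertia",
                      random_state=0)
labels = bkm.fit_predict(X)

print(bkm.cluster_centers_)  # centroids of the final four clusters
```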

Advantages:

  • High accuracy and stability; it can effectively handle large-scale data sets and is less sensitive to the choice of initial centroids than standard K-means.
  • The algorithm is also able to output a clustering hierarchy for easy analysis and visualization.

Disadvantages:

  • The computational complexity is high; handling large-scale data sets in particular requires substantial computing resources and storage space.
  • In addition, this algorithm is sensitive to the selection of initial clusters, which may lead to different clustering results.

DBSCAN

DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a classic density-based clustering method that is robust to noise.

Density-based methods have the characteristic that they depend not on distance but on density, so they can overcome the shortcoming of distance-based algorithms, which can only find "spherical" clusters.

The core idea of the DBSCAN algorithm is: for a given data point, if the density around it reaches a certain threshold, the point belongs to a cluster; otherwise, it is treated as a noise point.
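
A minimal sketch with scikit-learn's DBSCAN on a data set that K-means handles poorly (two interleaved half-moons); the eps and min_samples values are illustrative assumptions:

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two interleaved half-moons: a non-convex shape that distance-based
# algorithms such as K-means cannot separate.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# eps is the scanning radius; min_samples is the minimum number of points
# within eps for a point to count as a core point.
db = DBSCAN(eps=0.2, min_samples=5)
labels = db.fit_predict(X)

print(set(labels))  # cluster ids; -1 marks noise points
```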

Advantages:

  • It overcomes the drawback of distance-based algorithms, which can only find "circular" (convex) clusters;
  • It can find clusters of arbitrary shape and is insensitive to noisy data;
  • It does not require the number of clusters to be specified;
  • The algorithm has only two parameters: the scanning radius (eps) and the minimum number of included points (min_samples).

Disadvantages:

  • Computational complexity: without any optimization, the time complexity of the algorithm is O(N^2); spatial indexes such as R-trees, k-d trees, or ball trees can usually be used to speed up the computation, reducing the time complexity to O(N log N);
  • It is greatly affected by eps. When the density of the data distribution within a class is uneven, a small eps splits a low-density cluster into multiple clusters with similar properties, while a large eps merges clusters that are close together and dense into a single cluster. For high-dimensional data, the curse of dimensionality makes choosing eps even harder;
  • It relies on the choice of distance formula; due to the curse of dimensionality, distance metrics become less meaningful in high dimensions;
  • It is not suitable for data sets with large density differences, because choosing eps and the distance metric is then difficult.

OPTICS

OPTICS (Ordering Points To Identify the Clustering Structure) is a density-based clustering algorithm that can automatically determine the number of clusters, discover clusters of arbitrary shape, and handle noisy data.

The core idea of the OPTICS algorithm is to compute, for each data point, its density-based reachability with respect to other points, and to construct a density-based reachability plot from this ordering. By scanning this plot, the number of clusters can be determined automatically and each cluster can be extracted.
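
A minimal sketch with scikit-learn's OPTICS; the min_samples value and the data set are illustrative assumptions:

```python
from sklearn.cluster import OPTICS
from sklearn.datasets import make_blobs

# Toy data set: 300 points drawn from three blobs.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# min_samples plays the same role as in DBSCAN; no eps is required,
# because OPTICS computes a reachability ordering across all scales.
opt = OPTICS(min_samples=10)
labels = opt.fit_predict(X)

# reachability_ and ordering_ together encode the reachability plot
# from which the clusters are extracted.
print(opt.reachability_[opt.ordering_][:10])
print(set(labels))  # cluster ids; -1 marks noise points
```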

Advantages:

  • It can automatically determine the number of clusters, handle clusters of arbitrary shapes, and effectively handle noisy data.
  • The algorithm is also able to output a clustering hierarchy for easy analysis and visualization.

Disadvantages:

  • The computational complexity is high; handling large-scale data sets in particular requires substantial computing resources and storage space.
  • For data sets with large density differences, the clustering results may be poor.

BIRCH

BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies) is a clustering algorithm based on hierarchical clustering. It can efficiently handle large-scale data sets and achieves good results for clusters of arbitrary shape.

The core idea of the BIRCH algorithm is to reduce the size of the data gradually by clustering the data set hierarchically, until a cluster structure is finally obtained. BIRCH uses a B-tree-like structure called a CF-tree (clustering feature tree), which can insert and delete sub-clusters quickly and balances itself automatically, ensuring both the quality and the efficiency of the clustering.
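
A minimal sketch with scikit-learn's Birch; the threshold value and the data set are illustrative assumptions:

```python
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

# A larger toy data set, since BIRCH targets large-scale data.
X, _ = make_blobs(n_samples=10_000, centers=5, random_state=0)

# threshold is the maximum radius of a CF-tree leaf subcluster; smaller
# values build a finer tree. n_clusters applies a final global clustering
# step to the CF-tree leaves.
birch = Birch(threshold=0.5, n_clusters=5)
labels = birch.fit_predict(X)

print(len(set(labels)))  # number of clusters in the final result
```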

Advantages:

  • It can quickly process large-scale data sets and works well for clusters of any shape.
  • The algorithm is also tolerant of noisy data and outliers.

Disadvantages:

  • For data sets with large density differences, the clustering results may be poor;
  • It is also less effective than other algorithms on high-dimensional data sets.
