Six clustering algorithms that data scientists must know

Many applications, such as Google News, rely on clustering algorithms as a core technique: they can use large amounts of unlabeled data to build powerful topic clusters. This article introduces six mainstream methods, from the most basic K-Means clustering to powerful density-based approaches. Each has its own strengths and suitable scenarios, and the underlying ideas are not necessarily limited to clustering.


This article starts with simple, efficient K-Means clustering, then covers mean shift clustering, density-based clustering (DBSCAN), expectation-maximization (EM) clustering with Gaussian mixture models, agglomerative hierarchical clustering, and graph community detection for structured data. We not only explain the basic idea behind each algorithm, but also discuss its advantages and disadvantages to clarify where it applies in practice.

Clustering is a machine learning technique that involves grouping data points. Given a set of data points, we can use a clustering algorithm to classify each data point into a specific group. In theory, data points belonging to the same group should have similar properties and/or characteristics, while data points belonging to different groups should have very different properties and/or characteristics. Clustering is an unsupervised learning method and a statistical data analysis technique commonly used in many fields.

K-Means Clustering

K-Means is probably the most well-known clustering algorithm. It appears in many introductory data science and machine learning courses, and it is very easy to understand and implement in code. See the figure below.


K-Means Clustering

First, we select the number of classes/groups to use and randomly initialize their respective center points. To work out how many classes to use, it is a good idea to take a quick look at the data and try to identify any distinct groups. Each center point is a vector of the same length as the data point vectors; these are the "X" marks in the figure above.

Next, each point is classified by computing the distance between that point and each group center, and assigning the point to the group whose center is closest.

Based on these assignments, we recompute each group center as the mean of all the vectors in that group.

Repeat these steps for a fixed number of iterations, or until the group centers change very little between iterations. You can also randomly initialize the group centers several times and keep the run that gives the best results.
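
To make these steps concrete, here is a minimal NumPy sketch of the procedure (the function name, parameters, and stopping rule are illustrative choices, not a reference implementation):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Minimal K-Means on an (n_samples, n_features) array X."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    # Randomly pick k data points as the initial group centers
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each point to the nearest center (Euclidean distance)
        distances = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Recompute each center as the mean of the points assigned to it
        new_centers = centers.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members) > 0:
                new_centers[j] = members.mean(axis=0)
        # Stop early once the centers barely move
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

In practice, library implementations such as scikit-learn's KMeans do the same thing and additionally support running several random initializations (the n_init parameter) and keeping the best result.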

The advantage of K-Means is that it is fast, since all we are really doing is calculating the distance between the point and the center of the group: very few calculations! So it has linear complexity O(n).

On the other hand, K-Means has some disadvantages. First, you have to choose how many groups/classes there are. This is not always trivial, and ideally we would like the clustering algorithm to figure out how many classes to use, since its whole purpose is to gain insight from the data. K-Means also starts from randomly chosen cluster centers, so different runs of the algorithm may produce different clustering results. The results may therefore not be repeatable and can lack consistency, whereas other clustering methods are more consistent.

K-Medians is another clustering algorithm related to K-Means; instead of recomputing the group centers with the mean, it uses the median vector of the group. This method is less sensitive to outliers (because of the median), but it is much slower on larger data sets because a sort is required in every iteration to compute the median vectors.

Mean Shift Clustering

Mean shift clustering is a sliding-window-based algorithm that tries to find dense areas of data points. It is a centroid-based algorithm: its goal is to locate the center point of each group/class, which it does by updating candidate center points to the mean of the points within the sliding window. These candidate windows are then filtered in a post-processing stage to eliminate near-duplicates, forming the final set of center points and their corresponding groups. See the figure below.


Mean shift clustering for a single sliding window

  1. To explain mean shift, consider a set of points in two-dimensional space, as in the figure above. We start with a circular sliding window centered at a point C (chosen at random) with a kernel of radius r. Mean shift is a hill-climbing algorithm that iteratively moves the kernel toward regions of higher density until convergence.
  2. In each iteration, the sliding window moves toward higher density areas by moving the center point toward the mean of the points within the window (hence the name). The density within a sliding window is proportional to the number of points inside it. Naturally, by moving toward the mean of the points within the window, it gradually moves toward areas of higher point density.
  3. We keep shifting the sliding window according to the mean until no shift can bring more points inside the kernel. Looking at the figure above, we keep moving the circle until the density inside the window (i.e. the number of points in it) no longer increases.
  4. Steps 1 to 3 are carried out with many sliding windows until every point lies within a window. When multiple sliding windows overlap, the window containing the most points is retained. The data points are then clustered according to the window in which they reside.

The entire process from start to finish for all sliding windows is shown below. Each black dot represents the centroid of the sliding window, and each gray dot represents a data point.


The whole process of mean shift clustering

Compared with K-Means clustering, this method does not require us to choose the number of clusters, because mean shift discovers it automatically. That is a huge advantage. The fact that the cluster centers converge toward the points of maximum density is also very satisfying, since it is intuitive and fits a naturally data-driven notion of clustering. The drawback is that choosing the window size/radius "r" can be non-trivial.
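
As a brief illustration, scikit-learn ships a mean shift implementation in which the bandwidth parameter plays the role of the window radius "r"; the toy data and parameter values below are arbitrary:

```python
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

# Toy data with three dense regions
X, _ = make_blobs(n_samples=500, centers=3, cluster_std=0.8, random_state=0)

# The bandwidth acts as the window radius "r"; estimate_bandwidth guesses it from the data
bandwidth = estimate_bandwidth(X, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth)
labels = ms.fit_predict(X)

print("number of clusters found:", len(ms.cluster_centers_))
```

Note that the number of clusters is never passed in: it falls out of the bandwidth choice and the data.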

Density-based clustering method (DBSCAN)

DBSCAN is a density-based clustering algorithm that is similar to mean shift but has some significant advantages. Check out another fun graphic below and let’s get started!


DBSCAN Clustering

  1. DBSCAN starts from an arbitrary starting data point that has not been visited. The neighborhood of this point is extracted with distance ε (all points within ε distance are neighbor points).
  2. If there are a sufficient number of points within this neighborhood (according to minPoints), the clustering process starts and the current data point becomes the first point of the new cluster. Otherwise, the point will be marked as noise (later this noise point may still become part of the cluster). In both cases, the point is marked as "Visited".
  3. For the first point in a new cluster, points within its ε distance neighborhood also become part of the cluster. This process of making all points in the ε neighborhood belong to the same cluster is repeated for all new points just added to the cluster.
  4. Repeat steps 2 and 3 until all points in the cluster are determined, that is, all points in the ε neighborhood of the cluster have been visited and marked.
  5. Once we are done with the current cluster, a new unvisited point will be retrieved and processed, resulting in the discovery of another cluster or noise. This process is repeated until all points are marked as visited. Since all points have been visited, each point belongs to some cluster or noise.

DBSCAN has many advantages over other clustering algorithms. First, it does not require a fixed number of clusters at all. It also identifies outliers as noise, unlike mean shift, which simply groups data points into clusters even if they are very different. Additionally, it is capable of finding clusters of any size and shape very well.

The main disadvantage of DBSCAN is that it does not perform as well as other clustering algorithms when clusters have different densities: a single setting of the distance threshold ε and minPoints cannot identify neighborhood points correctly for every cluster when densities vary. This drawback also shows up with very high-dimensional data, where the distance threshold ε again becomes difficult to estimate.
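
A short scikit-learn sketch shows the two parameters discussed above, where eps corresponds to ε and min_samples to minPoints (the data set and values are purely illustrative):

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two interleaved half-moons: a shape K-Means handles poorly but DBSCAN handles well
X, _ = make_moons(n_samples=400, noise=0.08, random_state=0)

db = DBSCAN(eps=0.2, min_samples=5).fit(X)
labels = db.labels_  # points labeled -1 were classified as noise

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters:", n_clusters, "noise points:", list(labels).count(-1))
```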

Expectation-Maximization (EM) Clustering with Gaussian Mixture Models (GMMs)

A major drawback of K-Means is its simple use of cluster center means. From the diagram below we can see why this is not the best approach. On the left, you can see very clearly that there are two circular clusters with different radii, centered on the same mean. K-Means cannot handle this situation because the means of these clusters are very close. K-Means also fails in cases where the clusters are not circular, again due to using the mean as the cluster center.


Two Failure Cases of K-Means

Gaussian Mixture Models (GMMs) give us more flexibility than K-Means. With GMMs, we assume that the data points are Gaussian distributed; this is a less restrictive assumption than saying they are circular by using only the mean. We then have two parameters to describe the shape of each cluster: the mean and the standard deviation. Taking 2D as an example, this means the clusters can take any elliptical shape (since we have standard deviations in both the x and y directions). Each Gaussian distribution is therefore assigned to a single cluster.

To find the Gaussian parameters (the mean and standard deviation) of each cluster, we use an optimization algorithm called Expectation-Maximization (EM). The figure below shows Gaussians being fitted to the clusters. We can then walk through the process of EM clustering using GMMs.


EM Clustering using GMMs

  1. We first choose the number of clusters (as with K-Means) and randomly initialize the Gaussian distribution parameters of each cluster. You can also try to provide a good guess for the initial parameters by taking a quick look at the data, although, as the figure above shows, this is not strictly necessary: the Gaussians start off very poorly but are quickly optimized.
  2. Given the Gaussian distribution of each cluster, calculate the probability that each data point belongs to a specific cluster. The closer a point is to the center of the Gaussian, the more likely it is to belong to that cluster. This should be intuitive since with a Gaussian distribution we assume that most of the data is closer to the center of the cluster.
  3. Based on these probabilities, we compute a new set of Gaussian distribution parameters that maximizes the probability of the data points within each cluster. We compute these new parameters using a weighted sum of the data point locations, where the weights are the probabilities that each data point belongs to that particular cluster. To explain this visually, look at the figure above, in particular the yellow cluster. The distribution starts off essentially at random on the first iteration, but we can see that most of the yellow points lie to the right of that distribution. When we compute the probability-weighted sum, even though there are some points near the center, most of the weight is on the right, so the mean of the distribution naturally moves toward those points. We can also see that most of the points run "from upper right to lower left", so the standard deviations change to produce an ellipse that fits the points better, in order to maximize the probability-weighted sum.
  4. Repeat steps 2 and 3 until convergence, where the distribution changes little between iterations.

There are two key advantages to using GMMs. First, GMMs are more flexible than K-Means in terms of cluster covariance: thanks to the standard deviation parameters, clusters can take on any elliptical shape instead of being restricted to circles. K-Means is actually a special case of GMM in which the covariance of each cluster approaches 0 in all dimensions. Second, because GMMs work with probabilities, each data point can belong to multiple clusters. So if a data point sits in the middle of two overlapping clusters, we can simply say that X percent of it belongs to class 1 and Y percent to class 2; that is, GMMs support mixed membership.
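
The following scikit-learn sketch illustrates both points: GaussianMixture fits full covariance matrices with EM, and predict_proba exposes the soft (mixed) membership. The data and parameter values are illustrative:

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Three blobs with very different spreads
X, _ = make_blobs(n_samples=600, centers=3, cluster_std=[0.5, 1.5, 3.0], random_state=0)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(X)                         # runs EM until the parameters converge

hard_labels = gmm.predict(X)       # most likely cluster for each point
soft_probs = gmm.predict_proba(X)  # one probability per cluster, summing to 1
print(soft_probs[0])
```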

Agglomerative hierarchical clustering

Hierarchical clustering algorithms fall into two categories: top-down and bottom-up. Bottom-up algorithms treat each data point as a single cluster at the outset and then successively merge (or agglomerate) pairs of clusters until all clusters have been merged into a single cluster containing all data points. Bottom-up hierarchical clustering is therefore called hierarchical agglomerative clustering, or HAC. The hierarchy of clusters is represented as a tree (or dendrogram): the root of the tree is the single cluster that gathers all samples, and the leaves are clusters containing only one sample. Before going into the algorithm steps, see the figure below.


Agglomerative hierarchical clustering

  1. We first treat each data point as a single cluster, i.e. if we have X data points in our data set, then we have X clusters. We then choose a distance metric that measures the distance between two clusters. As an example, we will use average linkage, which defines the distance between two clusters as the average distance between data points in the first cluster and data points in the second cluster.
  2. In each iteration, we merge the two clusters into one. The two clusters to be merged should have the smallest average linkage. That is, according to the distance metric we choose, the two clusters have the smallest distance between them and are therefore the most similar and should be merged together.
  3. Repeat step 2 until we reach the root of the tree, i.e. we have only one cluster containing all data points. In this way, we only need to choose when to stop merging clusters, that is, when to stop building the tree, to choose how many clusters we need in the end!

Hierarchical clustering does not require us to specify the number of clusters; because we are building a tree, we can even choose which number of clusters looks best afterwards. Additionally, the algorithm is not sensitive to the choice of distance metric: all of them tend to work equally well, whereas for other clustering algorithms the choice of distance metric is critical. A particularly good use case for hierarchical clustering is when the underlying data has a hierarchical structure and you want to recover that hierarchy; other clustering algorithms cannot do this. These advantages come at the cost of lower efficiency: unlike the linear complexity of K-Means and GMM, hierarchical clustering has a time complexity of O(n³).
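
As a sketch, scikit-learn's AgglomerativeClustering performs this bottom-up merging; linkage="average" matches the average-linkage criterion used in the steps above (the data and the choice of four clusters are illustrative):

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)

# Bottom-up merging with average linkage; here the tree is cut at 4 clusters
hac = AgglomerativeClustering(n_clusters=4, linkage="average")
labels = hac.fit_predict(X)
print("cluster sizes:", [list(labels).count(c) for c in range(4)])
```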

Graph Community Detection

When our data can be represented as a network, or graph, we can use graph community detection methods to perform clustering. In these algorithms, a graph community is usually defined as a subset of vertices that are more densely connected to each other than to the rest of the network.

Perhaps the most intuitive case is a social network, where the vertices represent people and the edges connect users who are friends or followers. To model a system as a network, however, we must find a way to connect its components meaningfully. Some innovative applications of graph theory to clustering include feature extraction from image data and the analysis of gene regulatory networks.

Below is a simple diagram showing 8 recently viewed websites, connected based on links from their Wikipedia pages.


The color of each vertex indicates its group membership, and its size reflects its centrality. These clusters also make sense in real life: the yellow vertices are typically reference/search sites, while the blue vertices are all online publishing sites (for articles, tweets, or code).

Suppose we have clustered the network into groups. We can then use a modularity score to evaluate the quality of the clustering: a higher score means we have segmented the network into "accurate" groups, while a low score means our clustering is closer to random.


Modularity can be calculated using the following formula:

M = (1 / 2L) Σ_{i,j} [ A_ij − (k_i k_j) / 2L ] δ(c_i, c_j)

where L is the number of edges in the network, A_ij is the entry of the adjacency matrix for vertices i and j, and k_i and k_j are the degrees of those vertices, obtained by summing the entries of the corresponding row and column of the adjacency matrix. Multiplying k_i and k_j and dividing by 2L gives the expected number of edges between vertices i and j if the network were wired at random.

Overall, the term in brackets represents the difference between the network's true structure and the structure we would expect under random wiring. Examining its value shows that it is highest when A_ij = 1 and (k_i k_j) / 2L is small; in other words, the value is larger when there is an "unexpected" edge between vertices i and j.

The last factor, δ(c_i, c_j), is the well-known Kronecker delta function: it equals 1 when vertices i and j are assigned to the same group and 0 otherwise.

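A minimal Python sketch of this function (the name is illustrative):

```python
def kronecker_delta(c_i, c_j):
    # 1 when the two vertices are assigned to the same group, 0 otherwise
    return 1 if c_i == c_j else 0
```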

The modularity of the whole graph can be calculated with the formula above, and the higher the modularity, the better the network has been clustered into distinct groups. The best clustering of the network can therefore be found by searching for the maximum modularity with an optimization method.
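
To make the formula concrete, here is a minimal, deliberately unoptimized sketch that computes M from an adjacency matrix and one group label per vertex (the function name and the double loop are illustrative only):

```python
import numpy as np

def modularity_score(A, groups):
    """M for an undirected, unweighted network with adjacency matrix A."""
    A = np.asarray(A, dtype=float)
    L = A.sum() / 2.0  # number of edges (each edge appears twice in A)
    k = A.sum(axis=1)  # degree of each vertex
    M = 0.0
    n = len(groups)
    for i in range(n):
        for j in range(n):
            if groups[i] == groups[j]:  # the Kronecker delta term
                M += A[i, j] - k[i] * k[j] / (2 * L)
    return M / (2 * L)
```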

Combinatorics tells us that a network with only 8 vertices can be partitioned in 4,140 different ways. A network of 16 vertices can be partitioned in more than 10 billion ways, and one with 32 vertices in more than 128 septillion (roughly 10^26) ways; if your network has 80 vertices, the number of possible partitions already exceeds the number of atoms in the observable universe.

We must therefore resort to a heuristic method that finds clusterings with high modularity scores without trying every possibility. One such algorithm is Fast-Greedy Modularity Maximization, which is somewhat similar to the agglomerative hierarchical clustering described above, except that Mod-Max merges groups based on changes in modularity rather than on distance.

Here's how it works:

  • First, assign each vertex to its own community, then calculate the modularity M of the entire network.
  • Step 1 considers every pair of communities linked by at least one edge: for each such pair, the algorithm calculates the modularity change ΔM that would result from merging them.
  • Step 2 selects the pair of groups whose merge gives the largest increase in ΔM and merges them. The new modularity M of this clustering is then calculated and recorded.
  • Repeat steps 1 and 2, each time merging the pair of groups that yields the largest gain in ΔM, and record each new clustering pattern along with its modularity score M.
  • Stop when all vertices have been merged into one giant cluster. The algorithm then looks back over the recorded clusterings and returns the one with the highest modularity M as the final group structure.
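
For a concrete example, the networkx library ships an implementation of this greedy modularity maximization; the karate-club graph below is just a standard small test network:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()  # a classic small social network

# Greedy modularity maximization, following the merge procedure described above
communities = greedy_modularity_communities(G)

print("communities found:", len(communities))
print("modularity M:", modularity(G, communities))
```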

Community detection is a popular research area in graph theory. Its main limitations are that it can overlook small clusters and that it only applies to data that can be modeled as a graph, but this class of algorithms performs very well on typical structured data and real network data.

Conclusion

Those are the six clustering algorithms every data scientist should know!

