Machine learning essentials: How to prevent overfitting?
Apr 13, 2023
machine learning, algorithm, deep learning

The essence of regularization is actually simple: it is a means of imposing prior restrictions or constraints on a problem in order to achieve a specific goal. In machine learning algorithms, regularization is used to prevent the model from overfitting. When regularization comes up, many people immediately think of the commonly used L1 and L2 norms. Before summarizing those, let's first look at what the LP norm is.

LP norm

A norm can be understood as a way of measuring distance in a vector space, and the definition of distance itself is abstract: any function that satisfies non-negativity, definiteness, and the triangle inequality can be called a distance.

The LP norm is not a single norm but a family of norms, defined as follows:

$$\|x\|_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}$$

The range of p is [1, ∞). For p in (0, 1), the function is not a norm because it violates the triangle inequality.
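As a quick sketch (the article itself contains no code; the vector and the chosen values of p are purely illustrative), NumPy's `norm` shows how the value changes with p:

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

# For vectors, np.linalg.norm(x, ord=p) computes (sum |x_i|^p)^(1/p).
for p in [1, 2, 4, np.inf]:
    print(p, np.linalg.norm(x, ord=p))
# 1 -> 7.0, 2 -> 5.0, 4 -> ~4.28, inf -> 4.0 (the max absolute value)

# ord=0 counts the non-zero entries: the "L0 norm" discussed below,
# which is not a true norm.
print(0, np.linalg.norm(x, ord=0))  # -> 2.0
```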

As p changes, the norm changes as well. Borrowing a classic diagram of the unit ball under different values of p:

[Figure: unit balls of the Lp "norm" as p varies from 0 to ∞]

The figure shows how the unit ball changes as p goes from 0 to positive infinity. For p ≥ 1 the unit ball defined by the Lp norm is a convex set, but for 0 < p < 1 it is not convex.

That raises a question: what is the L0 norm? The L0 norm counts the number of non-zero elements in a vector, expressed as follows:

$$\|x\|_0 = \#\{\, i \mid x_i \neq 0 \,\}$$

By minimizing the L0 norm, we can search for the sparsest set of features. Unfortunately, L0-norm minimization is an NP-hard problem (the L0 norm is also non-convex). Therefore, in practical applications we usually apply a convex relaxation of L0. It has been proven theoretically that the L1 norm is the optimal convex approximation of the L0 norm, so the L1 norm is commonly optimized in its place.

L1 norm

From the definition of the LP norm, the mathematical form of the L1 norm follows directly:

$$\|x\|_1 = \sum_{i=1}^{n} |x_i|$$

As the formula shows, the L1 norm is the sum of the absolute values of the vector's elements; it is also known as the "sparsity regularizer" (Lasso regularization). So why do we want sparsity? Sparsity has many benefits, the two most direct being:

  • Feature selection
  • Interpretability
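To make feature selection concrete, here is a minimal sketch using scikit-learn's `Lasso` (the synthetic data is purely illustrative): with an L1 penalty, most fitted coefficients come out exactly zero.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))      # 10 candidate features
w_true = np.zeros(10)
w_true[[0, 3]] = [3.0, -2.0]        # only 2 features are informative
y = X @ w_true + 0.1 * rng.normal(size=200)

model = Lasso(alpha=0.1)            # alpha controls the L1 strength
model.fit(X, y)
print(model.coef_)  # most entries are exactly 0.0 -> built-in feature selection
```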
L2 norm

The L2 norm is the most familiar one: it is simply the Euclidean distance. The formula is as follows:

$$\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}$$

The L2 norm goes by many names: some call the regression that uses it "ridge regression" (Ridge Regression), and others call it "weight decay". Using the L2 norm as a regularization term yields a dense solution, that is, the parameter w for every feature is very small, close to 0 but not exactly 0. In addition, the L2 regularization term prevents the model from becoming overly complex just to fit the training set, thereby reducing overfitting and improving the model's generalization ability.
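For contrast, the same synthetic setup with scikit-learn's `Ridge` (again, just a sketch) yields a dense solution: every coefficient shrinks toward zero, but none is exactly zero.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[[0, 3]] = [3.0, -2.0]
y = X @ w_true + 0.1 * rng.normal(size=200)

model = Ridge(alpha=1.0)            # alpha controls the L2 strength ("weight decay")
model.fit(X, y)
print(model.coef_)  # all entries small but non-zero -> a dense solution
```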

The difference between L1 norm and L2 norm

A classic diagram from PRML illustrates the difference between the L1 and L2 norms, as shown below:

[Figure: contours of the original objective meeting the L1 (diamond) and L2 (circular) constraint regions, after PRML]

As shown in the figure, the blue region represents the feasible solutions of the original problem, and the orange region represents the feasible solutions of the regularization term. The overall objective (original problem plus regularization term) attains its optimum where the two regions touch. Because the L2 constraint region is a circle, the point of tangency is unlikely to lie on a coordinate axis; because the L1 constraint region is a diamond with protruding vertices, the point of tangency is far more likely to lie on an axis. A point on a coordinate axis has exactly one non-zero coordinate component, with all others zero; in other words, it is sparse. Hence the conclusion: the L1 norm tends to produce sparse solutions, while the L2 norm tends to produce dense solutions.

From the perspective of Bayesian priors, relying solely on the current training set is not enough when training a model; to achieve better generalization, it often helps to add prior information, and a regularization term is equivalent to imposing a prior.

  • The L1 norm is equivalent to adding a Laplacian prior;
  • The L2 norm is equivalent to adding a Gaussian prior.

As shown in the figure below:

[Figure: the Laplace prior (sharply peaked at zero) and the Gaussian prior]
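This equivalence follows from one line of MAP estimation (a standard derivation, sketched here): maximizing $\log p(\mathcal{D} \mid w) + \log p(w)$ with a Gaussian prior $p(w) \propto \exp(-\lambda \|w\|_2^2)$ adds the penalty term $\lambda \|w\|_2^2$ to the training loss, while a Laplace prior $p(w) \propto \exp(-\lambda \|w\|_1)$ adds $\lambda \|w\|_1$.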

Dropout

Dropout is a regularization method often used in deep learning. Its mechanism is simple: during the training of a DNN, each neuron is discarded with probability p, i.e., the output of a discarded neuron is set to 0. Dropout can be illustrated as in the figure below:

[Figure: a fully connected network before and after applying Dropout]

We can intuitively understand the regularization effect of Dropout from two aspects (a code sketch follows this list):

  • Randomly dropping neurons in each round of Dropout training is equivalent to averaging over many different DNNs, so at prediction time it has the effect of an ensemble vote.
  • It reduces complex co-adaptation between neurons. When hidden-layer neurons are randomly removed, the fully connected network becomes sparse to a certain extent, which effectively weakens the joint effects among different features. In other words, some features may rely on the joint action of hidden nodes in fixed relationships; Dropout effectively breaks the situation where some features are useful only in the presence of others, thereby increasing the robustness of the neural network.
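As a minimal sketch of the mechanics (the function and its interface are illustrative, not from the original article), "inverted" dropout zeros activations with probability p at training time and rescales the survivors so the expected output is unchanged:

```python
import numpy as np

def dropout(x, p=0.5, training=True):
    """Inverted dropout: drop each activation with probability p during
    training and scale the survivors by 1/(1-p); do nothing at test time."""
    if not training or p == 0.0:
        return x
    mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
    return x * mask

h = np.ones((2, 4))
print(dropout(h, p=0.5))           # roughly half the entries are 0, the rest 2.0
print(dropout(h, training=False))  # unchanged at inference
```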
Batch Normalization

Strictly speaking, Batch Normalization is a normalization method whose main purpose is to accelerate the convergence of the network, but it also has a certain regularization effect.

Here we borrow the explanation of covariate shift from Dr. Wei Xiushen's Zhihu answer.

Note: the following content is quoted from Dr. Wei Xiushen's Zhihu answer. A classic assumption in statistical machine learning is that "the data distributions of the source domain and the target domain are consistent". If they are inconsistent, new machine learning problems arise, such as transfer learning and domain adaptation. Covariate shift is a sub-problem under the inconsistent-distribution assumption: the conditional probabilities of the source and target domains are the same, but their marginal probabilities differ. If you think about it, this is exactly what happens to the output of each layer in a neural network: because it has undergone in-layer operations, its distribution clearly differs from the distribution of that layer's input, and the difference grows as the network gets deeper, yet the sample labels they can "indicate" remain unchanged, which fits the definition of covariate shift.

The basic idea of BN is quite intuitive. As the network deepens, the distribution of the pre-activation input (X = WU + B, where U is the layer input) gradually shifts or changes (the covariate shift described above). Training generally converges slowly because the overall distribution drifts toward the saturating ends of the nonlinearity's input range (for the Sigmoid function, this means the pre-activation X = WU + B takes large negative or positive values), which causes the gradients of the lower layers to vanish during backpropagation; this is the essential reason why training deep networks converges more and more slowly. Through a normalization step, BN forces the distribution of every neuron's input in each layer back to a standard normal distribution with mean 0 and variance 1, avoiding the vanishing gradients caused by the activation function's saturation. So rather than saying BN's role is to alleviate covariate shift, it is more accurate to say that BN alleviates the vanishing-gradient problem.
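A minimal sketch of the training-time forward pass (ignoring the running statistics used at inference; the names are illustrative):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature across the batch to mean 0 / variance 1,
    then apply the learnable scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 8) * 5 + 3   # batch of 32, 8 features, shifted and scaled
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~0 and ~1 per feature
```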

Normalization, Standardization & Regularization

We have already discussed regularization; here we briefly cover normalization and standardization. Normalization: the goal of normalization is to find a mapping that takes the original data into an interval [a, b], where [a, b] is typically [−1, 1] or [0, 1]. There are generally two application scenarios:

  • Converting a number into a decimal in (0, 1)
  • Converting a dimensional quantity into a dimensionless one

The commonly used min-max normalization is:

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$$

Standardization: rescale the data to zero mean and unit variance (the z-score; the result is often loosely described as a standard normal distribution). The standardization formula is:

$$x' = \frac{x - \mu}{\sigma}$$
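Both transforms are one-liners; here is a small sketch with NumPy (the data is illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 5.0, 10.0])

# Min-max normalization: depends only on the extreme values.
x_norm = (x - x.min()) / (x.max() - x.min())
print(x_norm)                     # [0.  0.111...  0.444...  1.]

# Standardization (z-score): depends on every point via mean and std.
x_std = (x - x.mean()) / x.std()
print(x_std.mean(), x_std.std())  # ~0.0, 1.0
```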

The difference between normalization and standardization:

We can put it simply like this: normalized scaling "flattens" the data into a fixed interval and is determined only by the extreme values, whereas standardized scaling is more "elastic" and "dynamic" and depends heavily on the distribution of the whole sample. Note:

  • Normalization: the scaling is determined only by the maximum and minimum values.
  • Standardization: the scaling is related to every data point, reflected through the mean and standard deviation. Contrast this with normalization, where only the extreme values contribute.

Why standardization and normalization?

  • Improve model accuracy: after normalization, features of different dimensions become numerically comparable, which can greatly improve the accuracy of the classifier.
  • Accelerate model convergence: after standardization, the optimization landscape becomes noticeably smoother, making it easier to converge correctly to the optimal solution, as shown below:

[Figure: gradient-descent contours and trajectories before and after feature scaling]
