
Optimizing Your Neural Networks

Oct 13, 2024, 06:15 AM

Last week I published an article about how to build a simple neural network, specifically a multilayer perceptron. This article will dive deeper into the details of neural networks and discuss how to maximize their performance by adjusting their configuration.

How Long Should You Train Your Model?

When training a model, you might assume that if you train it long enough, it will become flawless. That may be true, but only for the dataset it was trained on. In fact, if you give it another dataset with different values, the model could output completely incorrect predictions.

To understand this better, suppose you practice for your driving test every day by driving in a straight line without ever turning the steering wheel. (Please don't do this.) While you might excel at drag racing, if you're told to turn left during the actual test, you might end up driving into a stop sign.

This phenomenon is called overfitting. Your model can learn every aspect and pattern of its training data, but if it fits those patterns too tightly, it will perform poorly when given a new dataset. At the same time, if you don't train your model enough, it won't be able to correctly recognize patterns in other datasets either. In that case, you are underfitting.

[Figure] An example of overfitting: the validation loss (orange line) gradually increases while the training loss (blue line) gradually decreases.

In the example above, the best place to stop training the model is right when the validation loss reaches its minimum. You can do this with early stopping, which halts training once the validation loss fails to improve after an arbitrary number of training cycles (epochs).

Training a model is all about finding the balance between overfitting and underfitting while leveraging early stopping when necessary. That's why your training dataset should be as representative of the overall population as possible, so that your model can make more accurate predictions on data it hasn't seen.
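As a sketch of how early stopping works, here is a minimal, framework-agnostic version. The `train_step` and `validate` callables are hypothetical stand-ins for your real training and validation loops, and the simulated loss values exist only for illustration.

```python
def train_with_early_stopping(train_step, validate, max_epochs=100, patience=5):
    """Stop training once validation loss hasn't improved for `patience` epochs."""
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step()                       # run one epoch of training
        val_loss = validate()              # measure loss on the validation set
        if val_loss < best_loss:
            best_loss = val_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                      # validation loss stopped improving
    return best_loss

# Simulated validation losses: they improve for three epochs, then worsen.
losses = iter([5.0, 4.0, 3.0, 3.5, 3.6, 3.7, 3.8])
best = train_with_early_stopping(lambda: None, lambda: next(losses),
                                 max_epochs=7, patience=3)
print(best)  # 3.0
```

Training stops three epochs after the minimum, and the best validation loss seen is what you keep (along with a checkpoint of the model at that point, in practice).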

Loss Functions

Perhaps one of the most important training configurations you can tune is the loss function, which measures the "inaccuracy" between the model's predictions and the actual values. This "inaccuracy" can be expressed in many different mathematical ways, the most common of which is the mean squared error (MSE):

$$\text{MSE} = \frac{\sum_{i=1}^n (\bar{y_i} - y_i)^2}{n}$$

where $\bar{y_i}$ is the model's prediction and $y_i$ is the true value. There is a similar variant called the mean absolute error (MAE):

$$\text{MAE} = \frac{\sum_{i=1}^n |\bar{y_i} - y_i|}{n}$$
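Both metrics are easy to compute directly. As a quick sketch (the values below are made up for illustration):

```python
def mse(y_true, y_pred):
    """Mean squared error: squaring makes large errors dominate."""
    return sum((p - t) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error: every unit of error counts the same."""
    return sum(abs(p - t) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.5, 2.0, 5.0]  # the last prediction is off by 2

print(mse(y_true, y_pred))  # (0.25 + 0 + 4) / 3 ≈ 1.417
print(mae(y_true, y_pred))  # (0.5 + 0 + 2) / 3 ≈ 0.833
```

Notice how the single error of 2 contributes 4 to the MSE sum but only 2 to the MAE sum; that asymmetry is the whole story of the next section.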

What's the difference between these two and which one is better? The real answer is that it depends on a variety of factors. Let's consider a simple 2-dimensional linear regression example.

In many cases, there can be data points that act as outliers, points that are far away from other data points. In terms of linear regression, this means that there are a few points on the $xy$-plane that are far away from the rest of them. If you remember from your statistics classes, points like these can significantly affect the calculated linear regression line.

[Figure] A simple graph with points at (1, 1), (2, 2), (3, 3), and (4, 4).

If you wanted to think of a line that could cross all four points, then $y = x$ would be a great choice because this line would go through all of them.

[Figure] The same graph with the line $y = x$ passing through all four points.

However, let's say I decide to add another point at $(5, 1)$. Now what should the regression line be? Well, it turns out that it's completely different: $y = 0.2x + 1.6$.

[Figure] A simple graph with points at (1, 1), (2, 2), (3, 3), (4, 4), and (5, 1) with a linear regression line going through it.

Given the previous data points, the line would predict that the value of $y$ at $x = 5$ is 5, but because of the outlier and the squared error it contributes, the regression line is "pulled downwards" significantly.
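You can verify both lines with an ordinary least-squares fit, which is exactly the fit that minimizes the MSE. This is a small pure-Python check, not code from the article:

```python
def least_squares(xs, ys):
    """Fit y = slope * x + intercept by minimizing the mean squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Without the outlier, the best fit is exactly y = x.
print(least_squares([1, 2, 3, 4], [1, 2, 3, 4]))        # (1.0, 0.0)
# Adding (5, 1) drags the line down to y = 0.2x + 1.6.
print(least_squares([1, 2, 3, 4, 5], [1, 2, 3, 4, 1]))  # ≈ (0.2, 1.6)
```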

This is just a simple example, but this poses a question that you, as a machine learning developer, need to stop and think about: How sensitive should my model be to outliers? If you want your model to be more sensitive to outliers, then you would choose a metric like MSE, because in that case, errors involving outliers are more pronounced due to the squaring and your model will adjust itself to minimize that. Otherwise, you would choose a metric like MAE, which doesn't care as much about outliers.

Optimizers

In my previous post, I also discussed the concept of backpropagation, gradient descent, and how they work to minimize the loss of the model. The gradient is a vector that points towards the direction of greatest change. A gradient descent algorithm will calculate this vector and move in the exact opposite direction so that it eventually reaches a minimum.

Most optimizers have a specific learning rate, commonly denoted as $\alpha$, that they adhere to. Essentially, this represents how far the algorithm moves towards the minimum each time it calculates the gradient. Be careful not to set your learning rate too large! Your algorithm may never reach the minimum because its large steps could repeatedly skip over it.

[Figure] [Tensorflow's neural network playground](https://playground.tensorflow.org) showing what can happen if you set the learning rate too large. Notice how the testing and training loss are both `NaN`.
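You can see the same divergence without a full network. Here is a hypothetical one-dimensional example: gradient descent on $f(x) = x^2$, whose gradient is $2x$. With a small learning rate, $x$ decays toward the minimum at 0; with too large a rate, every step overshoots the minimum and $|x|$ grows.

```python
def gradient_descent(lr, steps=20, x=1.0):
    """Minimize f(x) = x^2 by repeatedly stepping against the gradient 2x."""
    for _ in range(steps):
        x -= lr * 2 * x
    return x

print(gradient_descent(lr=0.1))  # shrinks toward 0 (x is multiplied by 0.8 each step)
print(gradient_descent(lr=1.5))  # diverges (x is multiplied by -2 each step)
```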

Going back to gradient descent: while it is effective at minimizing loss, it can significantly slow down the training process because the loss function is calculated over the entire dataset. There are several alternatives to gradient descent that are more efficient but have their respective downsides.

Stochastic Gradient Descent

One of the most popular alternatives to standard gradient descent is a variant called stochastic gradient descent (SGD). As with gradient descent, SGD has a fixed learning rate. But rather than running through the entire dataset like gradient descent, SGD randomly selects a small sample and updates the weights of your neural network based on that sample instead. Eventually, the parameter values converge to a point that approximately (but not exactly) minimizes the loss function. This is one of the downsides of SGD, as it doesn't always reach the exact minimum. Additionally, similar to gradient descent, it remains sensitive to the learning rate that you set.
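Here is a minimal sketch of SGD on a linear model, written in plain Python so the mini-batch sampling is explicit; the data and hyperparameters are made up for illustration.

```python
import random

def sgd_fit(data, lr=0.02, batch_size=2, steps=5000, seed=0):
    """Fit y = w*x + b with SGD: each update uses a random mini-batch, not the full dataset."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(steps):
        batch = rng.sample(data, batch_size)   # the random sample SGD updates on
        grad_w = sum(2 * (w * x + b - y) * x for x, y in batch) / batch_size
        grad_b = sum(2 * (w * x + b - y) for x, y in batch) / batch_size
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

data = [(x, 2 * x + 1) for x in range(5)]  # noiseless y = 2x + 1
print(sgd_fit(data))  # ≈ (2.0, 1.0)
```

Each step only looks at two points, so an individual update is noisy, but across many steps the parameters still settle close to the true line.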

The Adam Optimizer

The name, Adam, is derived from adaptive moment estimation. It essentially combines two variants of SGD to adjust the learning rate for each input parameter based on how often it gets updated during each training iteration (adaptive learning rate). At the same time, it also keeps track of past gradient calculations as a moving average to smooth out updates (momentum). However, because of its momentum characteristic, it can sometimes take longer to converge than other algorithms.
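The update rule itself is compact enough to sketch in a few lines. This is a minimal, single-parameter version of Adam (the real optimizer applies the same update to every weight independently); the quadratic objective is just for illustration.

```python
import math

def adam_minimize(grad_fn, x=5.0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=300):
    """Minimize a 1-D function given its gradient, using the Adam update rule."""
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g        # momentum: moving average of gradients
        v = beta2 * v + (1 - beta2) * g * g    # adaptive rate: moving average of g^2
        m_hat = m / (1 - beta1 ** t)           # bias correction for the zero init
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

x_min = adam_minimize(lambda x: 2 * x)  # gradient of f(x) = x^2
print(x_min)  # close to the minimum at 0
```

The `m` term is the momentum (moving average of gradients) and the `v` term is what makes the learning rate adaptive: parameters with consistently large gradients get smaller effective steps.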

Putting it All Together

Now for an example!

I've created an example walkthrough on Google Colab that uses PyTorch to create a neural network that learns a simple linear relationship.

If you're a bit new to Python, don't worry! I've included some explanations that discuss what's going on in each section.
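If you'd rather see the idea inline, here is a minimal PyTorch sketch in the same spirit as the walkthrough (this is not the Colab notebook itself): a single linear layer trained with the MSE loss and the Adam optimizer to learn the made-up relationship y = 3x + 2.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data for the linear relationship y = 3x + 2.
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 3 * x + 2

model = nn.Linear(1, 1)                      # one weight, one bias: y = wx + b
loss_fn = nn.MSELoss()                       # the MSE loss from earlier
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)

for epoch in range(500):
    optimizer.zero_grad()                    # clear gradients from the last step
    loss = loss_fn(model(x), y)              # how wrong is the model right now?
    loss.backward()                          # backpropagation computes gradients
    optimizer.step()                         # Adam nudges w and b to reduce loss

w, b = model.weight.item(), model.bias.item()
print(w, b)  # ≈ 3.0 and 2.0
```

Swapping `torch.optim.Adam` for `torch.optim.SGD` (and tuning the learning rate) is a good way to compare the optimizers discussed above on the same problem.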

Reflection

While this obviously doesn't cover everything about optimizing neural networks, I wanted to at least cover a few of the most important concepts that you can take advantage of while training your own models. Hopefully you've learned something this week and thanks for reading!
