


After years of development, OpenAI's DALL-E and GPT-3 generative AI systems have become popular worldwide and are demonstrating remarkable application potential. This explosion of generative AI comes with a problem, however: every time DALL-E creates an image or GPT-3 predicts the next word, multiple inference calculations are required, consuming substantial compute resources and electricity. Current GPU and CPU architectures cannot operate efficiently enough to meet this imminent computing demand, creating huge challenges for hyperscale data center operators.
Research institutions predict that data centers, already among the world's largest energy consumers, accounted for 3% of total electricity consumption in 2017 and will rise to 4.5% by 2025. Taking China as an example, the electricity consumed by data centers operating nationwide is expected to exceed 400 billion kWh in 2030, about 4% of the country's total electricity consumption.
Cloud computing providers also recognize that their data centers use large amounts of electricity and have taken steps to improve efficiency, such as building and operating data centers in the Arctic to take advantage of renewable energy and natural cooling conditions. However, this is not enough to meet the explosive growth of AI applications.
Research at Lawrence Berkeley National Laboratory in the United States found that improvements in data center efficiency have kept energy-consumption growth in check over the past 20 years, but current energy-efficiency measures may not be enough to meet the needs of future data centers, so a better approach is needed.
Data transmission is a fatal bottleneck
The root of the efficiency problem lies in the way GPUs and CPUs work, especially when running AI inference and training models. Many people understand the idea of going "beyond Moore's Law" and the physical limits on packing more transistors into larger chip sizes. More advanced chips are helping to address these challenges, but current solutions have a critical weakness for AI inference: the limited speed at which data can be moved to and from random-access memory.
Traditionally, it has been cheaper to separate the processor and memory chips, and for years processor clock speed has been a key limiting factor in computer performance. Today, what's holding back progress is the interconnect between chips.
Jeff Shainline, a researcher at the National Institute of Standards and Technology (NIST), explained: "When memory and processor are separated, the communication link connecting the two domains becomes the main bottleneck of the system." Professor Jack Dongarra, a researcher at Oak Ridge National Laboratory in the United States, said succinctly: "When we look at the performance of today's computers, we find that data transmission is the fatal bottleneck."
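The intuition that data movement, not arithmetic, is the bottleneck can be made concrete with a back-of-the-envelope roofline-style estimate. This is a sketch with invented, illustrative hardware numbers, not measurements of any specific chip:

```python
# Roofline-style estimate: is a matrix-vector product (the typical
# operation in AI inference) limited by compute or by memory bandwidth?
# Both hardware numbers below are illustrative assumptions.

PEAK_FLOPS = 100e12      # 100 TFLOP/s peak compute (assumed)
MEM_BANDWIDTH = 2e12     # 2 TB/s DRAM bandwidth (assumed)

def matvec_estimate(n: int, bytes_per_weight: int = 2) -> dict:
    """Estimate compute time vs. memory time for an n x n matrix-vector product."""
    flops = 2 * n * n                       # one multiply + one add per weight
    bytes_moved = n * n * bytes_per_weight  # every weight read once from DRAM
    compute_time = flops / PEAK_FLOPS
    memory_time = bytes_moved / MEM_BANDWIDTH
    return {
        "compute_time": compute_time,
        "memory_time": memory_time,
        "memory_bound": memory_time > compute_time,
    }

est = matvec_estimate(8192)
print(est)  # memory_time dwarfs compute_time: the interconnect is the bottleneck
```

Under these assumed numbers, moving the weights takes roughly fifty times longer than the arithmetic itself, which is exactly the "fatal bottleneck" Dongarra describes.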
AI inference vs. AI training
AI systems use different types of computation when training a model than when using that model to make predictions. During training, tens of thousands of image or text samples are fed into a Transformer-based model for it to learn from. The thousands of cores in a GPU process large, rich datasets such as images or video very efficiently, and if results are needed faster, more cloud-based GPUs can be rented.
Although a single AI inference requires less energy than training, when hundreds of millions of users rely on auto-completion, the enormous number of calculations needed to predict each next word can, in aggregate, consume more energy than the training run itself.
For example, Facebook's AI systems perform trillions of inferences in its data centers every day, a number that has more than doubled in the past three years. Research has found that running language-translation inference on a large language model (LLM) can consume two to three times more energy than the initial training.
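A rough aggregate-energy comparison shows why cumulative inference can overtake one-time training. Every figure below is a hypothetical placeholder chosen for illustration, not a measurement of any real system:

```python
# Back-of-the-envelope comparison of one-time training energy vs.
# cumulative inference energy. All numbers are assumed placeholders.

TRAINING_ENERGY_KWH = 1_000_000       # one-time cost to train (assumed)
ENERGY_PER_INFERENCE_KWH = 0.0001     # per-request cost (assumed)
INFERENCES_PER_DAY = 1_000_000_000    # a widely used service (assumed)

def days_until_inference_exceeds_training() -> float:
    """Days of serving after which inference energy surpasses training energy."""
    daily_inference_kwh = ENERGY_PER_INFERENCE_KWH * INFERENCES_PER_DAY
    return TRAINING_ENERGY_KWH / daily_inference_kwh

print(days_until_inference_exceeds_training())  # 10.0 days
```

Even with a tiny per-request cost, sheer request volume means the break-even point arrives in days, not years, under these assumptions.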
Surge in demand tests computing efficiency
ChatGPT became popular around the world at the end of last year, and GPT-4 is even more impressive. If more energy-efficient methods can be adopted, AI inference can be extended to a wider range of devices and create new ways of computing.
For example, Microsoft's Hybrid Loop is designed to build AI experiences that dynamically span cloud computing and edge devices. It lets developers make late-binding decisions about whether to run AI inference on the Azure cloud platform, a local client computer, or a mobile device, so as to maximize efficiency. Similarly, Facebook introduced AutoScale to help decide efficiently, at runtime, where each inference should be computed.
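A runtime placement policy of this kind can be sketched in a few lines. This is a toy heuristic: the device names, latency and energy costs, and decision rule are all invented for illustration and are not Hybrid Loop's or AutoScale's actual logic:

```python
# Toy runtime scheduler: pick where to run an inference request based on
# estimated latency and energy. All per-target costs are invented placeholders.

TARGETS = {
    #          (latency_ms, energy_mj)  -- assumed per-request costs
    "device": (40.0, 5.0),    # on-device: slower, but cheapest in energy
    "edge":   (25.0, 8.0),
    "cloud":  (15.0, 20.0),   # fastest, but the most data-center energy
}

def choose_target(latency_budget_ms: float) -> str:
    """Among targets meeting the latency budget, pick the lowest energy;
    if none qualifies, fall back to the fastest target."""
    feasible = {t: c for t, c in TARGETS.items() if c[0] <= latency_budget_ms}
    if feasible:
        return min(feasible, key=lambda t: feasible[t][1])
    return min(TARGETS, key=lambda t: TARGETS[t][0])

print(choose_target(50.0))  # "device" -- cheapest option that meets the budget
print(choose_target(20.0))  # "cloud"  -- only the cloud is fast enough
print(choose_target(5.0))   # "cloud"  -- nothing qualifies; fastest wins
```

The design choice here is to treat latency as a hard constraint and energy as the optimization objective, which mirrors the efficiency goal the article describes.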
To improve efficiency, the obstacles hindering AI's development must be overcome and effective methods found.
Sampling and pipelining can speed up deep learning by reducing the amount of data processed. SALIENT (for SAmpling, sLIcing, and data movemeNT) is a new approach developed by researchers at MIT and IBM to address these critical bottlenecks; it can significantly reduce the requirements for running neural networks on large datasets containing 100 million nodes and 1 billion edges. But it can also affect accuracy and precision, which is acceptable when choosing the next social post to display, but not when trying to identify unsafe conditions on a worksite in near real time.
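The core idea behind neighborhood sampling on large graphs can be illustrated with a minimal sketch. This is a generic illustration of fixed-fanout neighbor sampling, not SALIENT's actual implementation:

```python
import random

# Minimal sketch of graph neighborhood sampling: instead of aggregating
# over every neighbor of a node (prohibitive on billion-edge graphs),
# process only a fixed-size random sample of neighbors per node.

def sample_neighbors(adj: dict, node: int, fanout: int,
                     rng: random.Random) -> list:
    """Return at most `fanout` randomly chosen neighbors of `node`."""
    neighbors = adj.get(node, [])
    if len(neighbors) <= fanout:
        return list(neighbors)
    return rng.sample(neighbors, fanout)

# A toy graph: node 0 has 100 neighbors, node 3 has only two.
adj = {0: list(range(1, 101)), 3: [0, 1]}
rng = random.Random(42)

batch = sample_neighbors(adj, 0, fanout=10, rng=rng)
print(len(batch))                                     # 10 instead of 100
print(sample_neighbors(adj, 3, fanout=10, rng=rng))   # [0, 1] -- all kept
```

The trade-off the article mentions is visible here: the sampled node aggregates over only 10 of its 100 neighbors, cutting data movement tenfold at the cost of some accuracy.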
Tech companies such as Apple, Nvidia, Intel, and AMD have announced the integration of dedicated AI engines into processors, and AWS is even developing a new Inferentia 2 processor. But these solutions still use traditional von Neumann processor architecture, integrated SRAM and external DRAM memory - all of which require more power to move data in and out of memory.
In-memory computing may be the solution
In addition, researchers have discovered another way to break through the "memory wall": bringing computing closer to memory.
The memory wall refers to the physical barrier that limits the speed of data entering and exiting the memory. This is a basic limitation of traditional architecture. In-memory computing (IMC) solves this challenge by running AI matrix calculations directly in the memory module, avoiding the overhead of sending data over the memory bus.
IMC is well suited to AI inference because inference involves a relatively static but large set of weights that is accessed repeatedly. While some data must always move in and out, IMC eliminates much of the energy expense and latency of data movement by keeping the weights in the same physical unit, where they can be efficiently used and reused across many calculations.
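The benefit can be seen by counting weight traffic in a weights-stationary scheme versus refetching the weights for every request. This is a deliberately simplified counting model; real IMC hardware differs considerably:

```python
# Simplified data-movement count for serving many inference requests
# against a single, static weight matrix. Illustrative model only.

def bytes_moved(n_weights: int, n_requests: int,
                weights_stationary: bool, bytes_per_weight: int = 2) -> int:
    """Count total weight bytes fetched across all requests."""
    if weights_stationary:
        # In-memory computing: weights are loaded once and reused forever.
        return n_weights * bytes_per_weight
    # Conventional architecture: weights streamed from DRAM per request.
    return n_weights * bytes_per_weight * n_requests

N_WEIGHTS = 10_000_000
N_REQUESTS = 1_000

conventional = bytes_moved(N_WEIGHTS, N_REQUESTS, weights_stationary=False)
imc = bytes_moved(N_WEIGHTS, N_REQUESTS, weights_stationary=True)
print(conventional // imc)  # 1000x less weight traffic in this model
```

In this toy model the saving grows linearly with the number of requests served, which is why the approach pays off most for high-volume inference.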
This approach also improves scalability because it works well with chip designs: AI inference can be tested on a developer's computer and then deployed to production through the data center, where large fleets of machines with many such processors can efficiently run enterprise-scale AI models.
Over time, IMC is expected to become the dominant architecture for AI inference use cases. This makes sense when users are dealing with massive datasets and trillions of calculations: resources are no longer wasted pushing data back and forth across the memory wall, and the approach scales easily to meet long-term needs.
Summary:
The AI industry is at an exciting turning point. Technological advances in generative AI, image recognition, and data analytics are revealing unique connections and uses for machine learning, but a technology solution that can meet this demand must be built first. According to Gartner's predictions, unless more sustainable options become available, AI will consume more energy than human activities by 2025. A better way needs to be found before that happens.
The above is the detailed content of Cold thoughts under the ChatGPT craze: AI energy consumption in 2025 may exceed that of humans, and AI computing needs to improve quality and efficiency. For more information, please follow other related articles on the PHP Chinese website!


