
The cost and sustainability of generative AI

Forwarded by 青灯夜游 · 2023-03-31

Everyone using DALL-E to create images or letting ChatGPT write a term paper is consuming a lot of cloud resources. Who will pay for all this?

Translator: Bugatti

Reviewer: Sun Shujuan

Artificial intelligence (AI) is a resource-intensive technology on any platform, including the public cloud. Most AI workloads require a large amount of inference computation, driving up demand for processor, network, and storage resources and, ultimately, electricity bills, infrastructure costs, and carbon emissions.
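To make the electricity-bill claim concrete, here is a back-of-envelope sketch of the monthly power cost of an inference fleet. Every figure below (GPU count, wattage, utilization, price per kWh, PUE) is a hypothetical assumption for illustration, not a measured value:

```python
# Back-of-envelope estimate of electricity cost for an inference cluster.
# All figures are illustrative assumptions, not measured values.

def monthly_energy_cost(num_gpus, watts_per_gpu, utilization, price_per_kwh, pue=1.5):
    """Estimate monthly electricity cost (USD) for a GPU fleet.

    pue: Power Usage Effectiveness -- the data-center overhead multiplier
         for cooling, networking, etc.; 1.5 is a commonly cited ballpark.
    """
    hours = 24 * 30  # one month of continuous operation
    kwh = num_gpus * watts_per_gpu / 1000 * utilization * hours * pue
    return kwh * price_per_kwh

# Hypothetical fleet: 1,000 GPUs at 400 W each, 60% utilized, $0.12/kWh.
cost = monthly_energy_cost(1000, 400, 0.60, 0.12)
print(f"${cost:,.0f} per month")  # → $31,104 per month
```

Even this modest hypothetical fleet burns roughly 260 MWh a month, which is why demand growth translates directly into both cost and carbon.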


The rise of generative AI systems such as ChatGPT has once again brought this issue to the forefront. Given the technology's popularity and its likely widespread use by companies, governments, and the public, we can expect the power-consumption growth curve to take a worrying turn.

AI has been feasible since the 1970s, but it initially had little commercial impact, since mature AI systems required significant resources to work properly. I remember an AI-based system I designed in my 20s that needed more than $40 million in hardware, software, and data-center space just to get running. Like many other AI projects of the era, it never shipped: a commercial solution was simply not viable.

Cloud computing changed everything. With the public cloud, tasks that were once out of reach can now be handled cost-effectively. In fact, as you might have guessed, the rise of cloud computing has coincided with the rise of AI over the past 10 to 15 years, and I would argue the two are now closely linked.

Sustainability and Cost of Cloud Resources

It doesn't take much research to predict what will happen in this field. Market demand for AI services will soar, led by generative AI systems and the other AI and machine learning systems that are now so popular. Leading the charge will be companies seeking advantage through innovation (such as smart supply chains), or even the thousands of college students asking generative AI systems to write their term papers.

Increased demand for AI means increased demand for the resources these AI systems use, such as public clouds and the services they provide. That demand is likely to be met by more data centers housing power-hungry servers and networking equipment.

Public cloud providers, like any other utility provider, will raise prices as demand increases, just as we see seasonal increases in residential electricity bills (again driven by demand). In response, we usually curb our consumption, say, by setting the air conditioner a few degrees warmer in summer.

Higher cloud computing costs, however, may not have the same moderating effect on businesses. Enterprises may find that these AI systems are not optional but necessary to drive certain key business processes. In many cases they may look for savings elsewhere, perhaps reducing headcount to offset the cost of the AI systems. It's no secret that generative AI systems will soon replace many information workers.

What can we do?

If the demand for resources to run AI systems drives up computing costs and carbon emissions, what can we do about it? The answer may lie in finding more efficient ways for AI to use resources such as processors, networks, and storage.

For example, sampling the pipeline can speed up deep learning by reducing the amount of data processed. Research from the Massachusetts Institute of Technology (MIT) and IBM shows that this approach can reduce the resources required to run a neural network on large data sets. However, it also limits accuracy, which is acceptable for some business use cases but not for all.
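As a minimal sketch of the sampling idea (not the specific MIT/IBM method), here is how one might randomly subsample a training set so each epoch processes only a fraction of the data, trading some accuracy for proportionally less compute:

```python
import random

def sample_batches(dataset, fraction, batch_size, seed=0):
    """Yield training batches drawn from a random subset of the dataset.

    Processing only `fraction` of the data cuts compute roughly
    proportionally, at some use-case-dependent cost in accuracy.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    k = max(1, int(len(dataset) * fraction))
    subset = rng.sample(dataset, k)
    for i in range(0, len(subset), batch_size):
        yield subset[i:i + batch_size]

# Toy dataset of 10,000 examples; train on a 10% sample.
data = list(range(10_000))
batches = list(sample_batches(data, fraction=0.10, batch_size=64))
print(len(batches))  # → 16 (ceil(1000 / 64) batches instead of 157)
```

A real training loop would plug these batches into the model's optimizer step; the point is simply that the data volume, and hence the processor, network, and storage load, shrinks with the sampling fraction.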

Another approach, already used in other areas of technology, is in-memory computing. This architecture speeds up AI processing by avoiding the movement of data in and out of memory; instead, the calculations run directly in the memory module, which accelerates things significantly.

Other approaches are in development, such as changes to the physical processor (using coprocessors to handle AI calculations at higher speed) or next-generation computing models such as quantum computing. Expect the large public cloud providers to announce technologies that address many of these problems in the near future.

What should you do?

This article is not about avoiding AI to reduce cloud computing costs or save the planet. AI is a fundamental computing approach that most businesses can use to create tremendous value.

My recommendation: when you take on an AI-based development project or build a new AI system, make sure you clearly understand the impact on both cost and sustainability, because the two are closely related. You have to make a cost/benefit choice, which comes back to the old question of what value you can bring to the business for the cost and risk you take on. Nothing new here.
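That cost/benefit choice can be sketched as simple arithmetic. The numbers below are entirely hypothetical; the point is only that cloud spend belongs on the cost side of the ledger alongside development cost:

```python
def ai_project_roi(annual_value, annual_cloud_cost, dev_cost, years=3):
    """Simple ROI: value delivered vs. total cost over the horizon.

    All inputs are in the same currency units; returns a fraction
    (0.5 means a 50% return on total cost).
    """
    total_cost = dev_cost + annual_cloud_cost * years
    total_value = annual_value * years
    return (total_value - total_cost) / total_cost

# Hypothetical project: $500k/yr of value, $120k/yr cloud spend,
# $300k to build, evaluated over three years.
roi = ai_project_roi(500_000, 120_000, 300_000)
print(f"{roi:.0%}")  # → 127%
```

The same sketch makes the sustainability trade-off visible: if cloud prices (or a carbon charge) push the annual cloud cost up, the ROI falls, which is exactly the pressure that should drive the efficiency techniques discussed above.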

I believe this problem will largely be solved through innovation, whether that innovation is in-memory computing, quantum computing, or technologies that have not yet emerged. The good news is that AI technology providers and cloud providers alike are keen to make AI more cost-effective, energy-efficient, and environmentally friendly.

Original title: The cost and sustainability of generative AI, by David S. Linthicum


Statement: This article is reproduced from 51cto.com.