
Using a distributed network to connect idle GPUs in series, this startup claims to reduce AI model training costs by 90%

WBOY · 2023-06-15 14:18:36



Monster API uses the computing power of idle GPUs, such as those in crypto mining rigs, to train AI models

GPUs are often used to mine cryptocurrencies such as Bitcoin, and mining is a resource-intensive process that demands powerful computing hardware.

The cryptocurrency hype once caused a GPU shortage on the market: as prices soared, businesses and individuals bought up Nvidia's gaming GPUs and repurposed them into dedicated crypto mining rigs.

However, as the cryptocurrency craze subsided, many crypto mining rigs were shut down or even abandoned. This made Monster API founder Gaurav Vij realize that these devices could be repurposed for the latest compute-intensive trend: training and running foundation AI models.

While these GPUs lack the power of the dedicated AI hardware deployed by the likes of AWS or Google Cloud, Gaurav Vij said they are capable of training optimized open-source models at a small fraction of the cost of cloud providers' hyperscale computing infrastructure.

Monster API co-founder Saurabh Vij said: "The machine learning field is really struggling with computing power because demand has outstripped supply. Today, many machine learning developers spend heavily on AWS, Google Cloud, Microsoft Azure and other cloud service providers to obtain resources."

A distributed computing network can significantly reduce the cost of training foundation AI models

In fact, besides crypto mining equipment, unused GPUs can also be found in gaming systems like the PlayStation 5 and in smaller data centers. Saurabh Vij said: "Crypto mining rigs use GPUs, gaming systems use GPUs, and GPUs become more powerful every year."

Joining the distributed network involves several steps, including data security checks, and pools the computing power of both enterprises and individuals. The demand side adds or removes devices as needed, scaling the network up or down, while the supply side earns a share of the revenue from selling its idle computing power.
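The join-and-settle flow described above can be sketched as a toy model. Everything here — the class names, the single security-check flag, and the 70% revenue share — is an illustrative assumption, not Monster API's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class GPUNode:
    """A hypothetical supply-side device offering idle GPU time."""
    owner: str
    tflops: float          # advertised compute capacity (illustrative unit)
    passed_security: bool  # outcome of the data-security check

@dataclass
class ComputeNetwork:
    """Toy model of a distributed compute marketplace."""
    nodes: list = field(default_factory=list)
    revenue_share: float = 0.7  # assumed fraction of fees paid to suppliers

    def join(self, node: GPUNode) -> bool:
        # Devices must pass the security check before admission.
        if not node.passed_security:
            return False
        self.nodes.append(node)
        return True

    def leave(self, owner: str) -> None:
        # Demand-driven shrinking: drop all of an owner's devices.
        self.nodes = [n for n in self.nodes if n.owner != owner]

    def capacity(self) -> float:
        return sum(n.tflops for n in self.nodes)

    def payout(self, owner: str, job_fee: float) -> float:
        """Supplier's share of a job fee, weighted by contributed capacity."""
        owned = sum(n.tflops for n in self.nodes if n.owner == owner)
        total = self.capacity()
        return job_fee * self.revenue_share * (owned / total) if total else 0.0
```

In this sketch a node that fails the security check is simply rejected; a real network would also need scheduling, verification of results, and payment rails.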

Saurabh Vij emphasized that distributed computing reduces the cost of training foundation AI models to the point where, in the future, they could be trained by open-source and non-profit organizations, not just large technology companies with deep pockets: "Compared to building a foundation AI model that costs $1 million, a decentralized network like ours only costs $100,000."

Geek.com learned that Monster API now also provides "no-code" tools to fine-tune models, open to users without technical expertise or resources so they can train models from scratch, further "democratizing" computing power and foundation AI models.

Fine-tuning matters because many developers lack the data and funds to retrain a model from scratch. Saurabh Vij said that thanks to Monster API's optimizations, the cost of fine-tuning has been reduced by 90%, to a fee of roughly $30 per model.
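The article's two cost claims reduce to simple arithmetic; a minimal sketch, assuming the $30 fine-tuning fee reflects the same 90% reduction (which would imply a baseline of roughly $300 per model — an inference, not a figure from the article):

```python
def cost_after_reduction(original_cost: float, reduction: float) -> float:
    """Cost remaining after a fractional reduction (0.9 means 90% off)."""
    return original_cost * (1.0 - reduction)

# Claimed in the article: $1M foundation-model training drops to $100k.
training_cost = cost_after_reduction(1_000_000, 0.9)

# Implied baseline: a 90% cut landing at ~$30/model suggests ~$300/model before.
fine_tune_cost = cost_after_reduction(300, 0.9)
```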

Open source model training can help developers innovate with AI

While AI developers face looming regulation that may directly affect model training and open source, Saurabh Vij believes open-source model training has its upsides. Monster API recognizes the need to manage potential risks in its decentralized network and to ensure "traceability, transparency and accountability".

"Although regulators may win in the short term, I have great confidence in the open source community and its incredibly rapid development. There are 25 million registered developers on Postman (an API development platform), and a large portion of them are building generative AI, which opens up new businesses and new opportunities for everyone," he said.

Geek.com learned that by making AI model training cheap, Monster API aims to let developers make the most of machine learning for innovation. Well-known AI models such as Stable Diffusion and Whisper can already be fine-tuned on the platform, and users can also use the pooled GPU computing power to train their own foundation models from scratch.

Saurabh Vij said: "We have run text and image generation experiments on MacBooks and can output at least 10 images per minute. We hope to connect millions of MacBooks to the network so that users can earn money even while they sleep by letting their MacBooks run Stable Diffusion, Whisper or other AI models."

"Eventually, PlayStations, Xboxes and MacBooks will become powerful computing resources, and even Tesla cars, because Tesla cars also carry powerful GPUs and spend most of their time parked in garages," Saurabh Vij added.


Statement:
This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.