
OpenAI stops serving China, but domestic large models are free to use! "Token freedom" for developers is here

王林 | Original | 2024-06-25 20:56:11
Early this morning, OpenAI abruptly announced that it will terminate its API service in China, further restricting domestic developers' access to top-tier large models such as GPT. Times are genuinely hard for domestic developers.


Fortunately, open-source large models keep getting better, and developers already have plenty of strong "replacements" such as Qwen2 and DeepSeek V2. To give developers open-source large model APIs that are faster, cheaper, more comprehensive, and smoother to use, SiliconFlow, a professional player in the AI Infra field, has launched SiliconCloud, a one-stop large model API platform.

Just now, SiliconFlow delivered an unprecedented gift to domestic developers: top open-source large models such as Qwen2 (7B), GLM4 (9B), and Yi1.5 (9B) are now permanently free.

From now on, "Token freedom" is no longer a dream for developers, who can build super apps without worrying about the bill.

"Token Free" portal, one-click direct access:

cloud.siliconflow.cn/s/free

OpenAI has stopped serving, and large domestic models are available for free! Developer Token is freely implemented

Just as mechanized factories drove efficient mass production of goods in the industrial era, the boom in generative AI applications in the large model era urgently needs cost-effective token production factories.

Accessing large model APIs through cloud services has become the best option for developers. However, many platforms only offer their own models' APIs without the other top models, and in terms of response speed, user experience, and cost they fall far short of what developers need.

Now SiliconCloud, SiliconFlow's super token factory, frees developers from deploying large models themselves and drastically lowers the barrier and cost of building AI-native applications.
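To make the workflow concrete, here is a minimal sketch of what calling a hosted open-source model through such a platform typically looks like, assuming an OpenAI-compatible chat-completions endpoint. The base URL, API key placeholder, and model identifier below are illustrative assumptions rather than values confirmed by this article; consult the platform's documentation for the real ones.

```python
# Minimal sketch: calling a hosted open-source model through an
# OpenAI-compatible API. The base_url, api_key, and model name are
# illustrative assumptions -- consult the platform's docs for real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-cloud.cn/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",                      # key issued by the platform
)

response = client.chat.completions.create(
    model="Qwen/Qwen2-7B-Instruct",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize what a large model API platform does, in one sentence."},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```

Because the interface style is the same as other hosted APIs, switching an existing application over is usually a matter of changing the base URL, key, and model name rather than rewriting code.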

Large Model Token Factory

Instant updates, blazing-fast output, affordable prices

Since it calls itself a token factory, all the models users love can be found directly on SiliconCloud.

The large model community has been lively lately, with open-source models constantly setting new SOTA marks and taking turns topping the leaderboards.

SiliconFlow brought these models onto SiliconCloud as soon as they appeared, including the strongest open-source code generation model DeepSeek-Coder-V2, large language models that surpass Llama 3 such as Qwen2 and GLM-4-9B-Chat, and the DeepSeek V2 series. It also supports text-to-image models such as Stable Diffusion 3 Medium and InstantID.

It is worth noting that for models that are extremely difficult to deploy, such as DeepSeek V2, SiliconCloud is the only cloud service platform other than the official one that offers these large open-source models.

Since different application scenarios call for different models, developers can switch between them freely on SiliconCloud.
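As a hypothetical illustration of how lightweight that switch can be with an OpenAI-compatible client, picking a model per scenario usually comes down to changing a single model string. The identifiers below are assumptions for illustration, not an official model list:

```python
# Hypothetical scenario-to-model mapping; the identifiers are illustrative
# assumptions, not an official SiliconCloud model list.
MODELS = {
    "code": "deepseek-ai/DeepSeek-Coder-V2-Instruct",
    "chat": "Qwen/Qwen2-72B-Instruct",
    "lightweight": "THUDM/glm-4-9b-chat",
}

def pick_model(scenario: str) -> str:
    """Return the model identifier for a given application scenario."""
    return MODELS.get(scenario, MODELS["chat"])

# Reusing the `client` from the earlier sketch, only the model string changes:
# client.chat.completions.create(model=pick_model("code"), messages=[...])
```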


An aggregation platform for open-source large models like this is already a great convenience for developers, but it is far from enough. As a world-class AI Infra team, SiliconFlow is committed to cutting the cost of large model deployment by a factor of 10,000.

To get there, the core challenge is to dramatically improve large model inference speed. How far has SiliconCloud gotten?

The demo below shows the response speed of Qwen2-72B-Instruct on SiliconCloud.

[Demo: response speed of Qwen2-72B-Instruct on SiliconCloud]
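For readers who want to measure response speed themselves rather than take the demo's word for it, here is a rough sketch that times time-to-first-token and streaming throughput with an OpenAI-compatible client; the endpoint and model identifier are, again, illustrative assumptions.

```python
# Rough sketch: measure time-to-first-token and streaming throughput for a
# chat completion. Endpoint, key, and model identifier are assumptions.
import time
from openai import OpenAI

client = OpenAI(base_url="https://api.example-cloud.cn/v1", api_key="YOUR_API_KEY")

start = time.perf_counter()
first_token_at = None
chars = 0

stream = client.chat.completions.create(
    model="Qwen/Qwen2-72B-Instruct",  # assumed identifier
    messages=[{"role": "user", "content": "Explain what a KV cache is in two sentences."}],
    stream=True,
)

for chunk in stream:
    if not chunk.choices:          # some providers send a trailing usage-only chunk
        continue
    delta = chunk.choices[0].delta.content or ""
    if delta and first_token_at is None:
        first_token_at = time.perf_counter()
    chars += len(delta)            # characters as a rough proxy for tokens

elapsed = time.perf_counter() - start
print(f"time to first token: {first_token_at - start:.2f}s")
print(f"~{chars / elapsed:.0f} chars/s over {elapsed:.2f}s total")
```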

The just open-sourced SD3 Medium generates an image in about 1 second.

[Demo: SD3 Medium image generation]

With these open-source models responding faster, the same compute yields more output, and prices naturally come down.

The large model APIs on SiliconCloud are also very affordable: even Qwen2-72B costs only 4.13 yuan per 1M tokens according to the official site, and new users get 20 million tokens for free.
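As a quick back-of-the-envelope check based on the figures quoted above (4.13 yuan per 1M tokens for Qwen2-72B and a 20-million-token free quota for new users), the sketch below works out what those numbers translate to; the sample workload is hypothetical, and it assumes input and output tokens are priced the same.

```python
# Back-of-the-envelope pricing math using the figures quoted in the article.
PRICE_PER_M_TOKENS = 4.13        # yuan per 1,000,000 tokens (Qwen2-72B)
FREE_QUOTA_TOKENS = 20_000_000   # free tokens for new users

free_quota_value = FREE_QUOTA_TOKENS / 1_000_000 * PRICE_PER_M_TOKENS
print(f"The free quota is worth about {free_quota_value:.2f} yuan at Qwen2-72B pricing.")

# A hypothetical workload: 5,000 requests averaging 1,200 tokens each.
tokens_used = 5_000 * 1_200
cost = tokens_used / 1_000_000 * PRICE_PER_M_TOKENS
print(f"{tokens_used:,} tokens would cost about {cost:.2f} yuan.")
```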

Developer comments: "So fast you can never go back"

As soon as SiliconCloud launched, many developers shared their experiences on major social platforms. Some spontaneous, unpaid fans commented like this:

On Zhihu, machine learning systems expert @方佳瑞 praised SiliconCloud's output speed: "After using it for a while, I can no longer stand the web-side response speed of other large model vendors."


Weibo user @Zhu William II said that several other platforms do not dare to host the large-parameter Qwen2 models, while SiliconCloud has put them all online; it is very fast and very cheap, and he will definitely pay for it.

He also noted that the final product of a large model is the token. In the future, token production will be handled either by token factories such as SiliconFlow, or by large model companies and cloud vendors such as OpenAI and Alibaba Cloud.


Users on X also strongly recommended SiliconCloud: the experience is smooth, and the attentive, first-class after-sales team deserves special mention.


A WeChat official account blogger concluded that SiliconCloud offers the best experience among comparable products in China.


These reviews share one obvious thing in common: they all mention the speed of the SiliconCloud platform. Why is it so fast?

The answer is simple: the SiliconFlow team has done a great deal of performance optimization work.

As early as 2016, the OneFlow team, SiliconFlow's predecessor, was already working on large model infrastructure and was the only startup team in the world building a general-purpose deep learning framework. Starting over as entrepreneurs, they drew on their deep experience in AI infrastructure and acceleration to build a high-performance large model inference engine first; in some scenarios it boosts large model throughput by up to 10x, and the engine has been integrated into the SiliconCloud platform.

In other words, delivering large model services with faster output at affordable prices is exactly what the SiliconFlow team does best.

Now that tokens are free, are phenomenal applications still far off?

Previously, a major obstacle keeping domestic developers from building AI applications was inconvenient access to top-tier large models. Even when they built high-quality applications, they dared not promote them at scale, because the bills would mount faster than they could afford.

With the continuous iteration of domestic open-source large models, models such as Qwen2 and DeepSeek V2 are already good enough to power super applications. More importantly, the arrival of the token factory SiliconCloud removes the worries of "super individuals": they no longer need to fret over the compute costs of application development and large-scale promotion, and can focus entirely on turning product ideas into the generative AI applications users need.

It is fair to say that now is the best gold-rush moment for super-individual developers and product managers, and SiliconCloud, a handy gold-digging tool, is already prepared for you.

One more reminder: top open-source large models such as Qwen2 (7B) and GLM4 (9B) are permanently free.

Welcome to Token Factory SiliconCloud:

cloud.siliconflow.cn/s/free

