Beaten by PyTorch! Google dumps TensorFlow, bets on JAX
May 04, 2023

One netizen's quip sums it up nicely:

"This kid didn't turn out well. Let's have another one."

Google did exactly that.

After seven years of development, TensorFlow has, to a certain extent, finally been defeated by Meta's PyTorch.

Sensing that something was wrong, Google quickly "had another one": JAX, a brand-new machine learning framework.

Take the recently viral DALL·E Mini: its model is written in JAX, which lets it take full advantage of Google's TPUs.

The Twilight of TensorFlow and the Rise of PyTorch

In 2015, Google released TensorFlow, its machine learning framework.

At the time, TensorFlow was just a small project inside Google Brain.

No one expected it to become wildly popular almost as soon as it launched.

Big companies like Uber and Airbnb adopted it, as did government agencies like NASA, all for their most complex projects.

As of November 2020, TensorFlow had been downloaded 160 million times.

However, Google did not seem to care much about how all these users felt.

The odd interface and frequent updates made TensorFlow increasingly unfriendly and increasingly difficult to use.

Even inside Google, people felt the framework was going downhill.

In fairness, the frequent updates were hard to avoid: they were the only way to keep up with the rapid iteration in machine learning.

As a result, more and more people joined the project, and the team slowly lost focus.

The qualities that originally made TensorFlow the tool of choice were buried under all these competing concerns and no longer prioritized.

Insider describes this phenomenon as a "cat-and-mouse game": the company is the cat, and the new needs that keep emerging through constant iteration are the mice. The cat must stay alert, always ready to pounce.

This dilemma is hard to avoid for any company that is first to enter a market.

With search engines, for example, Google was not first, so it could learn from the failures of predecessors such as AltaVista and Yahoo and apply those lessons to its own product.

Unfortunately, when it comes to TensorFlow, Google is the one that is trapped.

For precisely these reasons, developers who had once worked with Google's framework gradually lost confidence in it.

The once-ubiquitous TensorFlow gradually declined, losing ground to Meta's rising star, PyTorch.

In 2017, the beta version of PyTorch was open-sourced.

In 2018, Facebook's AI Research lab released the full version of PyTorch.

Notably, both PyTorch and TensorFlow are built on Python, but Meta paid far more attention to maintaining the open source community, even investing heavily in it.

Moreover, Meta studied Google's problems and resolved not to repeat the same mistakes: it focused on a small set of features and made them the best they could be.

Meta did not follow in Google's footsteps. The framework, first developed at Facebook, slowly became an industry benchmark.

As a research engineer at a machine learning startup put it: "We basically use PyTorch. Its community and open source support are the best. Not only are questions actually answered, the examples given are also very practical."

Faced with this situation, Google developers, hardware experts, cloud providers, and others connected to Google's machine learning work all said the same thing in interviews: they believe TensorFlow has lost the hearts of developers.

After a series of open and covert struggles, Meta finally gained the upper hand.

Some experts say Google's chance to keep leading machine learning is slowly slipping away.

PyTorch has gradually become the tool of choice for everyday developers and researchers.

Interaction data from Stack Overflow shows that questions about PyTorch keep growing on developer forums, while questions about TensorFlow have stagnated in recent years.

Even companies like Uber, mentioned at the start of this article, have turned to PyTorch.

In fact, every subsequent PyTorch update seems like another slap in TensorFlow's face.

The Future of Google Machine Learning: JAX

Just as TensorFlow and PyTorch were fighting at full tilt, a small "dark horse" research team inside Google began working on a brand-new framework that would make TPUs easier to use.

In 2018, a paper titled "Compiling machine learning programs via high-level tracing" brought the JAX project to the surface. Its authors were Roy Frostig, Matthew James Johnson, and Chris Leary.

Then, in early 2020, Adam Paszke, one of PyTorch's original authors, joined the JAX team full-time.

JAX offers a more direct way to tackle one of the hardest problems in machine learning: scheduling work across many accelerator cores.

Depending on the workload, JAX automatically combines several chips into a small group rather than dispatching to one chip at a time.

The advantage is that as many TPUs as possible can respond at any moment, keeping the model-training "alchemy" burning.

Ultimately, compared with the bloated TensorFlow, JAX solved a major problem inside Google: how to access TPUs quickly.

Below is a brief introduction to Autograd and XLA, the two components that make up JAX.

Autograd is mainly used for gradient-based optimization and can automatically differentiate Python and NumPy code.

It handles a large subset of Python, including loops, recursion, and closures, and it can even take derivatives of derivatives.

In addition, Autograd supports reverse-mode differentiation (backpropagation), which efficiently computes the gradient of a scalar-valued function with respect to array-valued arguments, as well as forward-mode differentiation, and the two modes can be composed arbitrarily.
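As a minimal sketch of this idea in JAX, whose jax.grad exposes Autograd-style differentiation (the tanh function here is an illustrative choice, not taken from the article):

```python
# Sketch: Autograd-style differentiation via JAX's jax.grad.
import jax
import jax.numpy as jnp

def tanh(x):
    return (jnp.exp(x) - jnp.exp(-x)) / (jnp.exp(x) + jnp.exp(-x))

grad_tanh = jax.grad(tanh)        # first derivative
grad2_tanh = jax.grad(grad_tanh)  # derivative of a derivative

print(grad_tanh(0.0))  # tanh'(0) = 1 - tanh(0)^2 = 1.0
```

Because grad returns an ordinary function, it can be applied again, which is exactly the "derivatives of derivatives" capability described above.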

XLA (Accelerated Linear Algebra) can accelerate TensorFlow models with no changes to their source code.

Without XLA, when a program runs, every operation is executed individually by the runtime, each dispatching to its own precompiled GPU kernel implementation.

For example:

```python
def model_fn(x, y, z):
    return tf.reduce_sum(x + y * z)
```

Run without XLA, this snippet launches three kernels: one for the multiplication, one for the addition, and one for the reduction.

XLA optimizes this by "fusing" the multiplication, addition, and reduction into a single GPU kernel.

The fused operation does not write the intermediate values produced by y*z and x+y*z out to memory; instead, it "streams" these intermediate results directly to their consumers while keeping them entirely on the GPU.

In practice, XLA can deliver roughly a 7x performance improvement and support roughly 5x larger batch sizes.
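In JAX, the same XLA compilation path is reached through jax.jit, which traces a function and hands it to XLA for fusion. A hedged sketch (this model_fn mirrors the TensorFlow example, rewritten with jax.numpy; the input values are made up for illustration):

```python
# Sketch: jax.jit compiles this function with XLA, which can fuse the
# multiply, add, and reduction instead of launching separate kernels.
import jax
import jax.numpy as jnp

@jax.jit
def model_fn(x, y, z):
    return jnp.sum(x + y * z)

x = jnp.ones(4)
y = jnp.full(4, 2.0)
z = jnp.full(4, 3.0)
print(model_fn(x, y, z))  # each element is 1 + 2*3 = 7, summed over 4 -> 28.0
```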

What's more, XLA and Autograd compose freely, and with the pmap transformation you can even program multiple GPU or TPU cores at once.
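A hedged sketch of pmap, assuming at least one device is visible (on a CPU-only machine jax.device_count() is typically 1, so the same code still runs):

```python
# Sketch: jax.pmap replicates a function across all visible devices;
# the leading axis of the input must equal the device count.
import jax
import jax.numpy as jnp

n = jax.device_count()
x = jnp.arange(n * 4.0).reshape(n, 4)

doubled = jax.pmap(lambda v: v * 2.0)(x)
print(doubled.shape)  # (n, 4): one row computed per device
```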

By bringing Autograd and XLA together behind a NumPy-style API, JAX delivers an easy-to-program, high-performance machine learning system for CPU, GPU, and TPU.
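Putting the two pieces together, here is a hedged sketch of a compiled gradient computation (the linear model, and names like loss and w, are illustrative, not from the article):

```python
# Sketch: the Autograd piece (jax.grad) composed with the XLA piece (jax.jit).
import jax
import jax.numpy as jnp

def loss(w, x, y):
    pred = x @ w                      # toy linear model
    return jnp.mean((pred - y) ** 2)  # mean squared error

step = jax.jit(jax.grad(loss))        # compiled gradient w.r.t. w

w = jnp.zeros(3)
x = jnp.ones((5, 3))
y = jnp.ones(5)
g = step(w, x, y)
print(g.shape)  # (3,): one gradient entry per weight
```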

Clearly, Google has learned its lesson this time. Beyond rolling out its own products, it has been especially active in fostering an open source ecosystem.

In 2020, DeepMind officially embraced JAX, signaling endorsement beyond Google's own teams. Since then, open source JAX libraries have emerged one after another.

Looking back on this "infighting," Jia Yangqing observed that while criticizing TensorFlow, the AI systems community came to believe that Pythonic research frameworks were all you need.

But pure Python cannot deliver efficient software-hardware co-design, and higher-level distributed systems still need efficient abstractions.

JAX is searching for a better balance, and Google's pragmatism in being willing to disrupt itself is worth learning from.

The author of the causact R package and an accompanying Bayesian analysis textbook said he was pleased to see Google transition from TensorFlow to JAX, a cleaner solution.

Google’s Challenge

As a newcomer, JAX can draw on the strengths of its two predecessors, PyTorch and TensorFlow, but arriving late also brings disadvantages.

First, JAX is still too "young." Billed as an experimental framework, it is far from the standard of a polished Google product.

Beyond assorted hidden bugs, JAX still depends on other frameworks for certain tasks.

For loading and preprocessing data, for example, you need TensorFlow or PyTorch to handle most of the setup.

Clearly, this is still far from the ideal "one-stop" framework.

Second, JAX is heavily optimized for TPUs, but its GPU and CPU support lags far behind.

On the one hand, Google's organizational and strategic chaos from 2018 to 2021 left GPU work underfunded and GPU-related issues a low priority.

At the same time, Google was probably too focused on winning its own TPUs a bigger share of the AI acceleration pie, so cooperation with NVIDIA naturally languished, let alone work on details like GPU support.

On the other hand, Google's internal research is, unsurprisingly, focused almost entirely on TPUs, which deprives Google of a good feedback loop on real-world GPU usage.

Beyond that, longer debugging times, the lack of Windows support, and the risk of untracked side effects (JAX's transformations assume pure functions) all raise JAX's barrier to entry.
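The "untracked side effects" risk can be sketched as follows (a hypothetical snippet, not from the article): under jax.jit, Python side effects run only while the function is being traced, not on every call.

```python
# Sketch: side effects inside a jitted function fire only during tracing.
import jax
import jax.numpy as jnp

calls = []

@jax.jit
def double(x):
    calls.append(1)   # side effect: JAX does not track or replay this
    return x * 2.0

double(1.0)
double(2.0)
double(3.0)
print(len(calls))  # typically 1: traced once, then the compiled code is reused
```

The compiled result is still correct, but anyone relying on the append for logging or counting is silently surprised, which is exactly the kind of pitfall that lengthens debugging.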

PyTorch is now almost six years old, with none of the decline TensorFlow was showing at the same age.

It seems the latecomer JAX still has a long way to go to catch up.


Statement
This article is reproduced from 51CTO.COM. If there is any infringement, please contact admin@php.cn for removal.