


In October 2021, Jeff Dean personally wrote an article introducing a new machine learning architecture: Pathways.
Its goal is simple: enable a single AI to span tens of thousands of tasks, understand different types of data, and do all of this with extremely high efficiency.
More than half a year later, in March 2022, Jeff Dean finally released the Pathways paper.
Paper link: https://arxiv.org/abs/2203.12533
The paper fills in many of the technical details, such as the underlying system architecture.
In April 2022, Google unveiled PaLM, a language model built on Pathways that broke the SOTA on a string of natural language processing tasks in quick succession. This 540-billion-parameter Transformer once again proved that "brute force can work miracles".
Besides the powerful Pathways system, the paper reports that PaLM was trained on 6,144 TPU v4 chips with a high-quality dataset of 780 billion tokens, a certain proportion of which is non-English multilingual corpus.
Paper address: https://arxiv.org/abs/2204.02311
Recently, a new paper by Jeff Dean has once again stirred up speculation about Pathways.
Has another piece of the Pathways puzzle fallen into place?
There are only two authors of this paper: the famous Jeff Dean and the Italian engineer Andrea Gesmundo.
Interestingly, not only has Gesmundo kept a low profile, but Jeff Dean, who had been enthusiastically promoting Imagen on Twitter just two days earlier, did not mention this paper there at all.
After reading it, some netizens speculated that it may be a component of Pathways, the next-generation AI architecture.
Paper address: https://arxiv.org/abs/2205.12755
The idea of the paper is as follows:
New tasks are dynamically incorporated into a large running system, and fragments of a sparse multi-task machine learning model are reused to improve the quality of new tasks; these model fragments are automatically shared among related tasks.
This approach improves the quality of each task and makes the model more efficient in terms of convergence time, number of training instances, energy consumption, and so on. The machine learning problem framework proposed in the paper can be regarded as a generalization and synthesis of the standard multi-task and continual learning formalizations.
Under this framework, an arbitrarily large set of tasks can be solved jointly.
Moreover, the task set can be expanded over time with a continuous stream of new tasks, and the distinction between pre-training tasks and downstream tasks disappears.
As each new task is added, the system searches for how to combine existing knowledge and representations with new model capacity to reach a high quality level on the new task. The knowledge and representations learned while solving a new task can in turn be used for any future task, or to continue learning on existing tasks.
The method is called the "mutation multitask network", or µ2Net (µ = mutation).
The two types of mutation used in the large-scale continual learning experiments
Put simply, the system generates a large-scale multi-task network that solves multiple tasks jointly. Not only are the quality and efficiency of each task improved, but the model can also be expanded by dynamically adding new tasks.
The more knowledge the system accumulates from previous tasks, the higher the quality of its solutions to subsequent tasks.
In addition, the number of new parameters added per task keeps shrinking, so the efficiency of solving new tasks keeps improving. The generated multi-task model is sparsely activated and integrates a task-based routing mechanism: as the model grows, the increase in per-task computational cost is guaranteed to stay bounded.
Parameters activated and added per task, as a percentage of the total parameters of the multi-task system
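To illustrate what task-based routing with sparse activation looks like, here is a minimal Python sketch. It is an assumption made for exposition, not the paper's implementation, and the class and field names are invented:

```python
from typing import Any, Callable, Dict, List

class SparseMultitaskSystem:
    """Toy model of task-based routing: each task activates only the layers
    on its own route, so per-task compute depends on the route length,
    not on the total size of the shared layer pool."""

    def __init__(self) -> None:
        self.layers: Dict[str, Callable[[Any], Any]] = {}  # shared pool: layer id -> layer
        self.routes: Dict[str, List[str]] = {}             # task name -> ordered layer ids

    def forward(self, task: str, x: Any) -> Any:
        # Only the layers on this task's route are executed; the rest of the
        # (possibly huge) multi-task system is never touched for this task.
        for layer_id in self.routes[task]:
            x = self.layers[layer_id](x)
        return x
```

Because a route has bounded length, adding more tasks (and therefore more layers to the shared pool) does not increase the cost of running any individual task.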
The knowledge learned from each task is partitioned into pieces that can be reused by multiple tasks. Experiments show that this partitioning avoids the common problems of multi-task and continual learning models, such as catastrophic forgetting, gradient interference, and negative transfer.
Exploration of the task-routing space and identification of the most relevant subset of prior knowledge for each task are guided by an evolutionary algorithm designed to dynamically adjust the exploration/exploitation balance without manual tuning of meta-parameters. The same evolutionary logic is used to dynamically tune the hyperparameters of the multi-task model components.
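The per-task search can be pictured as a small evolutionary loop. The sketch below is only a schematic reading of the description above; the population structure, the generation and child counts, and the greedy selection rule are illustrative assumptions, not the authors' actual settings:

```python
import random
from typing import Any, Callable, List

def evolve_for_new_task(
    population: List[Any],                      # models already in the multi-task system
    mutate: Callable[[Any], Any],               # applies layer-cloning and hyperparameter mutations
    train_and_score: Callable[[Any], float],    # trains a child on the new task, returns a validation score
    generations: int = 4,
    children_per_generation: int = 8,
) -> Any:
    """Sample a parent as prior knowledge, mutate it into a child, train the
    child on the new task, and keep the best-scoring child."""
    best_child, best_score = None, float("-inf")
    for _ in range(generations):
        for _ in range(children_per_generation):
            parent = random.choice(population)   # reuse an existing model as the starting point
            child = mutate(parent)               # only cloned layers will be trainable
            score = train_and_score(child)
            if score > best_score:
                best_child, best_score = child, score
    population.append(best_child)                # the winner becomes part of the system
    return best_child
```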
Since it is called a "mutation network", what exactly does "mutation" mean here?
Deep neural networks are commonly defined by an architecture and a set of hyperparameters. Here, the architecture consists of a sequence of neural network layers, each mapping an input vector to an output vector of variable dimension; details of the network instantiation, such as the optimizer configuration or data preprocessing, are determined by the hyperparameters.
Accordingly, the mutations fall into two categories: layer-cloning mutations and hyperparameter mutations.
A layer-cloning mutation creates a copy of a parent-model layer that the child model can then train. Layers of the parent model that are not selected for cloning are frozen in their current state and shared with the child model, which guarantees that pre-existing models remain unchanged.
A hyperparameter mutation modifies the configuration that a child layer inherits from its parent layer. The new value of each hyperparameter is drawn from a set of valid values; for numeric hyperparameters, the valid values are sorted into a list and sampling is restricted to adjacent values, imposing an incremental-change constraint.
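To make the two mutation types concrete, here is a hedged Python sketch; the hyperparameter grids, probabilities, and the `trainable` flag on layer objects are assumptions made for illustration, not values taken from the paper:

```python
import copy
import random
from typing import Any, Dict, List

# Illustrative hyperparameter search spaces (assumed, not from the paper).
HPARAM_VALUES: Dict[str, List[Any]] = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "dropout_rate": [0.0, 0.1, 0.2, 0.3],
}

def hyperparameter_mutation(config: Dict[str, Any], prob: float = 0.2) -> Dict[str, Any]:
    """Resample some inherited hyperparameters, restricted to values adjacent
    to the current one in the sorted list (the incremental-change constraint)."""
    new_config = dict(config)
    for name, values in HPARAM_VALUES.items():
        if name in config and random.random() < prob:
            idx = values.index(config[name])
            neighbours = [i for i in (idx - 1, idx + 1) if 0 <= i < len(values)]
            new_config[name] = values[random.choice(neighbours)]
    return new_config

def layer_cloning_mutation(parent_layers: List[Any], clone_prob: float = 0.5) -> List[Any]:
    """Clone some parent layers so the child can train them; layers that are
    not cloned stay frozen and are shared, leaving pre-existing models intact.
    Assumes each layer object exposes a `trainable` attribute."""
    child_layers = []
    for layer in parent_layers:
        if random.random() < clone_prob:
            cloned = copy.deepcopy(layer)   # trainable copy owned by the child
            cloned.trainable = True
            child_layers.append(cloned)
        else:
            layer.trainable = False         # frozen state, shared with the child
            child_layers.append(layer)
    return child_layers
```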
Let's look at the actual results.
On ImageNet 2012, CIFAR-100, and CIFAR-10, µ2Net's performance after five and ten task iterations exceeds that of the most general and best-performing ViT pre-train-then-fine-tune models available today.
In terms of task expansion, adding the VTAB-full and VDD continual learning tasks improves µ2Net's performance further; on the CIFAR-10 dataset, the VDD continual learning task reaches a best result of 99.43% accuracy.
On the multi-task character classification benchmark, µ2Net sets a new SOTA on most of the datasets after two task iterations, with dataset sizes ranging from 2.5k to 240k samples.
Simply put, under this architecture, the more tasks the model learns, the more knowledge the system accumulates, and the easier it becomes to solve new tasks.
For example, a ViT-L architecture (307 million parameters) can evolve into a multi-task system with 130.87 billion parameters and solve 69 tasks.
Additionally, as the system grows, the sparsity of parameter activation keeps the computation and memory usage of each task constant. Experiments show that the average number of parameters added per task decreases by 38%, while the multi-task system activates only 2.3% of its total parameters for each task.
Of course, at this point, it's just an architecture and preliminary experiment.
Netizen: The paper is very good, but...
Although the paper is great, some people don’t seem to buy it.
Some netizens who love to point out the emperor's new clothes posted on Reddit, saying that they no longer believe in love... no, wait, in AI papers produced by "top labs/research institutions".
The user, posting under the ID "Mr. Acurite", said that he does, of course, believe the data and the model results reported in these papers.
But take this paper by Jeff Dean as an example: its 18 pages describe a particularly complex evolutionary, multi-task learning algorithm. Powerful, eye-catching, two thumbs up.
However, two points have to be made.
First, the benchmark result Jeff Dean offers to show he beats the competition is 99.43% accuracy on CIFAR-10, against the current SOTA of 99.40%; it is hard to find the words for an improvement that small.
Second, at the end of the paper there is a table of the TPU time used to obtain the final results: 17,810 hours in total. If someone outside Google wanted to reproduce the paper and rented TPUs at the market price of US$3.22 per hour, rerunning it would cost 17,810 × 3.22 ≈ US$57,348.
What does that mean? Do you now need deep pockets just to publish an ordinary paper?
Of course, this approach has become an industry trend, at Google, OpenAI, and other big players alike: pour into the model a few ideas for improving on the status quo, plus a mountain of preprocessed data and benchmark runs.
Then, as long as the resulting number is higher than the competitor's by even a couple of decimal places past the percentage point, the researcher can confidently add another paper title to the résumé!
What real impetus does this give academia and industry? Ordinary graduate students cannot afford to verify the conclusions, and ordinary companies cannot use such marginal benchmark wins in their projects.
Again: what is the point?
Is this the comfort zone the AI world is willing to accept? A small group of big companies, and the occasional top school, showing off every day that they have the money to do whatever they want, while everyone without money simply has to fall behind?
If things continue like this, we might as well start a separate computer science journal that only accepts papers whose results can be reproduced within eight hours on a single consumer-grade graphics card.
In the thread, graduate students with papers to write chimed in with complaint after complaint.
A user with the ID "Support Vector Machine" said that they work in a small lab, and that this trend has almost completely killed their motivation to keep working on deep learning.
With their lab's budget there is no way to compete with these giants, nor to produce benchmark results that can hold their own.
Even with a new theoretical idea, it is hard to get a paper past review, because today's reviewers, accustomed to what the big players can produce, have developed a "pretty-picture bias": if the test images in your paper don't look good, everything else is in vain.
This is not to say the giants' work is useless; projects like GPT and DALL-E really are groundbreaking. But if my own machine can't run them, why should I get excited?
Another user, a doctoral student, showed up in the comments to back up "Support Vector Machine".
Two years ago, the doctoral student had submitted a paper on flow models that focused mainly on discovering a latent space of the data that can be sampled from, and made no claims about the quality of the model's image generation.
The reviewer's critical comment was: "The generated images do not look as good as those generated with GANs."
Another graduate student, with the ID "Uday", said that the reviewer comment on a paper he submitted to a conference in 2021 was: "The data is not fancy enough."
It seems that human effort counts for less than money, and in this the East and the West think exactly alike. But fortunes turn, and perhaps grassroots implementations of algorithms and broader access to computing will one day bring about a second miracle of the garage startup beating IBM.