
After foundation models with tens of billions and hundreds of billions of parameters, are we entering a data-centric era?

王林 · 2023-05-08

In recent years, the emergence of foundation models such as GPT-3, CLIP, DALL-E, Imagen, and Stable Diffusion has been remarkable. The generative and in-context learning capabilities these models demonstrate were unimaginable just a few years ago. This article explores the commoditization of these large-scale technologies: they are no longer the exclusive domain of industry giants, and their value increasingly lies in how a domain and its key problems are described, and at the core of that description is data. The full impact of the rapid development of foundation models is still unfolding, so much of what follows is speculation.


Prompt: "taco cat" (don't take it too seriously)

From a machine learning perspective, the concept of a task is absolutely fundamental: we create training data to specify a task, and we generalize by training on it. For decades, two main views have therefore dominated the field:

  • "Garbage in, garbage out": the data and features fed into a model determine its success or failure.
  • "Too many parameters will lead to overfitting." In the past 20 years, the development of general and sparse models has become popular. The common belief is that sparse models have fewer parameters, which helps reduce overfitting and thus generalizes better.

These views are generally reasonable, but they are also somewhat misleading.

Foundation models are changing our understanding of tasks because they can be trained on broad data and applied to many different tasks. Even users who do not yet have a clear target task can apply these models easily, with no task-specific training required. These models can be steered through natural language or an interface, which lets domain experts drive their use and experience the magic immediately in new settings. In this exploration process, a user's first step is not to curate a specific training dataset, but to play with the model, ideate, and iterate on ideas quickly. With a foundation model in hand, we want to learn more about how it transfers to a range of tasks, including many we have not yet envisioned.

To benefit from the next wave of AI development, we may need to re-examine the limitations (and the wisdom) of the previous mainstream views. In this article we start from there, explore what has changed with foundation models, and end with a discussion of how we see foundation models fitting in with traditional approaches.

Garbage in, garbage out: is that the whole story?

Task-agnostic foundation models are exploding. So far, much of the progress has been about model architecture and engineering, but signs that these models are converging are starting to show. Is there precedent for data becoming the foundation and the key point of differentiation? In supervised machine learning, we have already seen the pendulum swing between model-centric and data-centric approaches.

In a series of projects in the second half of the 2010s, feature quality was key. In the old paradigm, features were the tools that encoded domain knowledge. Features were relatively brittle, and practitioners had to master low-level details of how to encode this information to obtain stable and reliable predictions.

Deep learning succeeded because people are bad at this. The deep learning revolution was in full swing, with new models appearing on arXiv one after another, which was genuinely striking. These models took previously manual steps such as feature engineering and automated them entirely: given raw data such as text and images, the models learn good representations on their own, a huge productivity gain. But these models are not perfect, and continued domain understanding remains important. So how do you incorporate that understanding into the model?

We saw that users use training data as the carrier to efficiently input information, explain their applications, and interact with the model. All of this happened "in the dark", without tools, theory, or abstractions. We thought users should be able to program basic abstractions over their own data, and so the Snorkel project (and then the company) was born. At the level of knowledge, this is what took us into the era of data-centric AI and weak supervision. Two important lessons come out of this:

  • Once a technology stabilizes, its value shifts back to data. Here, with the emergence of frameworks such as TensorFlow, PyTorch, MXNet, and Theano, deep learning technology became commoditized, but describing a specific problem still requires specifying the data distribution, the task, and so on. Success therefore depends on how relevant information is brought into the model;
  • We can (and need to) handle noise. Basic mathematics and engineering can, in principle, help with noise. It is hard for users to express their knowledge perfectly in training data, and different data sources vary in quality. While studying the theoretical foundations of weak supervision, we found that models can learn a great deal from noisy data (not all noisy data is useless). In short, avoid feeding in garbage, but do not be too picky about the data either (a small sketch of this weak-supervision idea follows the list).
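To make the weak-supervision idea concrete, here is a minimal sketch in the spirit of programmatic labeling (it is not the Snorkel API): a few noisy, hand-written labeling functions vote on each example, and a simple majority vote turns their votes into (noisy) training labels. The labeling functions, label names, and data below are hypothetical.

```python
# Minimal weak-supervision sketch (illustrative only, not the Snorkel API).
# Noisy heuristics ("labeling functions") vote on each example; the majority
# vote becomes a training label, and examples with no votes are skipped.
from collections import Counter

ABSTAIN, NEG, POS = -1, 0, 1

def lf_refund(text):                 # hypothetical heuristic: refund requests
    return POS if "refund" in text.lower() else ABSTAIN

def lf_angry(text):                  # hypothetical heuristic: angry tone
    return POS if "terrible" in text.lower() else ABSTAIN

def lf_thanks(text):                 # hypothetical heuristic: polite messages
    return NEG if "thanks" in text.lower() else ABSTAIN

LABELING_FUNCTIONS = [lf_refund, lf_angry, lf_thanks]

def weak_label(text):
    """Majority vote over non-abstaining labeling functions (None if all abstain)."""
    votes = [v for v in (lf(text) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    if not votes:
        return None
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    docs = ["I want a refund, this is terrible", "Thanks, all good!"]
    print([(d, weak_label(d)) for d in docs])  # noisy labels for a downstream model
```

In Snorkel proper, the naive majority vote is replaced by a label model that estimates each source's accuracy from the noise, but the workflow, writing noisy sources and combining them statistically, is the same idea.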

prompt: "noisy image". Have you seen anything interesting from the noisy image?

Simply put, data encodes your problem and your analysis; even as the technology is commoditized, the value of data remains. So the point is not that garbage data is fine, but that the distinction should not be drawn too absolutely: data is useful or useless depending on whether it is exploited in the most effective way.

Foundation models are trained on vast amounts of data and applied across a wide variety of tasks, which brings new challenges to data management. As models and architectures continue to be commoditized, we need to understand how to manage large amounts of data efficiently to ensure that models generalize.

Will too many parameters lead to overfitting?

Why do we see these magical in-context capabilities? How do modeling choices (architectures and algorithms) contribute to them? Do the magical properties of large language models come from some mysterious model configuration?

About a decade ago, the rough generalization theory in machine learning held that if a model is parsimonious (i.e., cannot fit too many spurious features), then it will generalize. More precise statements of this exist and are major achievements of theoretical work on VC dimension, Rademacher complexity, and the like. Along the way, it started to seem that a small number of parameters is also necessary for generalization. But this is not the case: overparameterization was supposed to be a major problem, yet now we have large models as counterexamples. These large models (with more parameters than data points) can fit all kinds of mind-bogglingly complex functions, they can even fit random labels, and yet they still generalize.

The idea that overparameterization hurts misled us, and recent insights have opened up new directions. We see magical capabilities emerge in these large models, but the prevailing belief is that they are unlocked only by particular architectures trained on machines few people have access to. One direction of our research (and others') is to try to realize these magical capabilities in simple, classical models. Our recent state-space models build on decades of signal processing work (and therefore fit the mold of classical models) and exhibit some in-context capabilities.
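For readers unfamiliar with state-space models, the sketch below shows the discrete linear recurrence that this line of work builds on; it is an illustration only, not the actual S4 implementation, and the matrices are random placeholders rather than the structured matrices used in practice.

```python
# Minimal sketch of the discrete linear state-space recurrence underlying
# state-space sequence models (illustrative; not the actual S4 code).
#   x_k = A @ x_{k-1} + B * u_k      (hidden state update)
#   y_k = C @ x_k     + D * u_k      (output)
import numpy as np

def ssm_scan(A, B, C, D, u):
    """Run a single-input single-output SSM over a 1-D input sequence u."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B * u_k            # B has shape (state_dim,), u_k is a scalar
        ys.append(C @ x + D * u_k)     # C has shape (state_dim,), D is a scalar
    return np.array(ys)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 8                                          # hidden state size (placeholder)
    A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
    B, C, D = rng.standard_normal(n), rng.standard_normal(n), 0.0
    u = np.sin(np.linspace(0, 4 * np.pi, 64))      # toy input signal
    print(ssm_scan(A, B, C, D, u)[:5])
```

The real models stack many such layers, parameterize A carefully for stability and long-range memory, and compute the recurrence efficiently as a convolution, but the core object is this decades-old signal-processing recurrence.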

Even more surprising, even the classic bidirectional BERT model has in-context capabilities! We suspect many people are writing papers on this; if you are, please send them to us, we will read them carefully and cite them. We believe the magical capabilities of in-context learning are all around us, and that the universe is more magical than we understand. Or, viewed more soberly, perhaps humans are simply not very good at reasoning about conditional probability.

Within the large-model framework, things all seem to work. The magical capabilities of foundation models look stable and commoditizable, and data is seen as the point of differentiation within them.

Maybe now is the era of data-centric foundation models?

Are we repeating the data-centric supervised learning shift? In other words, are models and engineering becoming commoditized?

The rise of commoditized models and open-source releases. We are seeing foundation models become commoditized and put into use, and it feels a lot like deep learning did. For us, the strongest evidence of a model's commoditization is its availability. Two forces are at work: people have a need (stability, etc.), and big companies can take advantage of it. Open source took off not because of hobbyist interest, but because large corporations and others decided they needed something like it (see the rise of Python).

Waiting for the latest super company to launch a new super model?

Where does the biggest differentiation come from? Data! These tools are increasingly available, but a foundation model is not necessarily usable out of the box. How do you handle deployment? Wait for the newest super company to launch a new super model? That is one way, but we call it nihilism! Whether such a model will be open source is hard to say, and what about foundation-model applications built on private data that cannot be sent to an API? Will the model have 100 trillion parameters, and how many users would be able to access and use it? What was the model trained on? The model is trained mainly on public data...

...so there is almost no guarantee it knows anything about what you care about. How do you preserve the magical properties of a foundation model and make them work for you? We need to manage foundation-model data effectively (data is critical!) and take full advantage of great open-source models at test time (adapting the input and contextual data at test time is critical!):

Data management and data-centric scaling laws? Prediction: smarter methods of collecting datasets will lead to small, beautiful models. The scaling-law papers that opened our eyes deserve attention, such as OpenAI's original scaling-law work and DeepMind's Chinchilla. Although we have a default reference architecture (the Transformer), the number of tokens only partly captures the information content of the data. Experience tells us that data varies widely in subject matter and quality. We have a hunch that what really matters is the actual bits of information, accounting for overlap and ordering; information-theoretic concepts such as entropy may drive the evolution of both large and small foundation models.
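As a rough illustration of what a Chinchilla-style scaling rule implies for "smaller, data-smarter models", the sketch below uses two commonly cited approximations, training compute C ≈ 6·N·D FLOPs and roughly 20 training tokens per parameter; treat these as simplifying assumptions, not the paper's exact fitted constants.

```python
# Rough, illustrative Chinchilla-style compute-optimal sizing.
# Assumptions (not the paper's exact fitted constants):
#   training compute  C ≈ 6 * N * D  FLOPs   (N params, D tokens)
#   compute-optimal   D ≈ 20 * N             (~20 tokens per parameter)
def compute_optimal(C_flops, tokens_per_param=20.0):
    """Return (params N, tokens D) that roughly balance a compute budget C."""
    N = (C_flops / (6.0 * tokens_per_param)) ** 0.5
    D = tokens_per_param * N
    return N, D

if __name__ == "__main__":
    for C in (1e21, 1e23, 1e25):        # toy compute budgets in FLOPs
        N, D = compute_optimal(C)
        print(f"C={C:.0e} FLOPs -> ~{N/1e9:.1f}B params, ~{D/1e9:.0f}B tokens")
```

The takeaway matches the prose above: under a fixed compute budget, the optimal model is much smaller and much hungrier for data than earlier scaling practice assumed, which is exactly why how you collect and curate tokens starts to dominate.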

Information input and computation at test time. A foundation model is not necessarily usable out of the box, but computing in new ways at test time can make a big difference. Given the cost and privacy limitations of closed-source model APIs, we recently released an approach, Ask Me Anything (AMA) Prompting, in which an open-source model with 30x fewer parameters, used efficiently at test time, beats OpenAI's closed-source model on the reported benchmarks. At test time, users steer the foundation model through prompts, natural-language descriptions of the tasks they care about, and prompt design can have a huge impact on performance. Getting a prompt exactly right is complex and laborious, so AMA instead proposes using a collection of noisy prompts of varying quality and applying statistical theory to handle the noise. AMA has many sources of inspiration: Maieutic Prompting, Reframing GPT-k, AI Chains, and more! The key point is that we can compute at test time in new ways; there is no need to prompt the model only once! This is not only about data management at training time, but also about adjusting the input and contextual data at test time.
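A minimal sketch of the test-time idea behind AMA-style prompting (this is not the released AMA code): rephrase the same task into several noisy prompt formats, query a small open-source model with each, and aggregate the answers by majority vote. The `ask_model` callable and the prompt templates are stand-ins for whatever model and formats you actually use; AMA itself aggregates with weak-supervision machinery rather than a plain vote.

```python
# Sketch of aggregating multiple noisy prompts at test time (AMA-flavored).
from collections import Counter
from typing import Callable, Iterable

def prompt_variants(question: str, context: str) -> Iterable[str]:
    """Hypothetical reformattings of the same task; each acts as a noisy 'voter'."""
    yield f"{context}\nQuestion: {question}\nAnswer yes or no:"
    yield f"Read the passage and answer.\n{context}\nIs it true that {question}?"
    yield f"{context}\nBased only on the text above, {question} (yes/no):"

def ama_style_answer(ask_model: Callable[[str], str],
                     question: str, context: str) -> str:
    """Majority vote over the answers produced by several noisy prompt formats."""
    answers = [ask_model(p).strip().lower() for p in prompt_variants(question, context)]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    toy_model = lambda prompt: "yes"   # placeholder for a small open-source LM client
    print(ama_style_answer(toy_model, "does the passage mention data?", "Data is key."))
```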


prompt: "really small AI model"

From AMA we see that small models already have reasoning capabilities that suffice for a wide variety of tasks, while the key value of large models seems to lie in memorizing factual data. Small models do poorly on facts, so how do we bring in data and information to fix that? Oddly enough, we use SGD to store facts in a neural network, encoding them as fuzzy floating-point values... which seems a far less efficient abstraction than a DRAM-backed key-value store. Looking at the AMA results, however, the gap between small and large models is much smaller on facts that change over time or are domain-specific... When building self-supervised models at Apple, we needed to be able to edit the facts we returned (for business reasons) and to integrate with the other software tools needed to run the service, so letting the model call an index is very important. Time will tell whether the above is sufficient reason to use this type of model.
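The sketch below illustrates the "let the model call an index" idea with hypothetical interfaces throughout: facts live in an editable key-value store outside the model, are retrieved at query time, and are placed into the prompt's context rather than baked into the weights via SGD.

```python
# Sketch of keeping facts in an editable key-value store that a (hypothetical)
# model calls at query time, instead of storing facts in the model weights.
from typing import Callable, Dict

class FactIndex:
    """Editable fact store; in production this could be a database or search index."""
    def __init__(self) -> None:
        self._facts: Dict[str, str] = {}

    def upsert(self, key: str, fact: str) -> None:
        self._facts[key] = fact          # facts can be edited or corrected at any time

    def lookup(self, query: str) -> str:
        # Toy retrieval: return any fact whose key appears in the query.
        hits = [f for k, f in self._facts.items() if k in query.lower()]
        return " ".join(hits) if hits else "no stored fact found"

def answer_with_index(ask_model: Callable[[str], str],
                      index: FactIndex, question: str) -> str:
    """Retrieve facts, put them in the context, and let the model reason over them."""
    context = index.lookup(question)
    return ask_model(f"Facts: {context}\nQuestion: {question}\nAnswer:")

if __name__ == "__main__":
    index = FactIndex()
    index.upsert("chinchilla", "Chinchilla was trained by DeepMind.")
    toy_model = lambda p: p.split("Facts: ")[1].split("\n")[0]  # echoes the retrieved fact
    print(answer_with_index(toy_model, index, "Who trained Chinchilla?"))
```

Editing a fact is then a single `upsert`, no retraining required, which is the operational advantage the paragraph above is pointing at.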

Where will this lead us? Foundation models will sit alongside traditional methods. Assuming data-centric progress happens at both the exploration and the deployment end: in the exploration phase, for fast iteration and task-agnostic workflows, we make off-the-shelf general foundation models more useful and efficient through data management and test-time strategies. Users then leave the exploration phase with a clearer task definition and use data-centric AI to manage their training data (your own data matters), in the Snorkel style, leveraging and combining multiple prompts and/or foundation models to train smaller, faster "proprietary" models. These models can be deployed in real production environments and are more accurate on specific tasks and specific data! Foundation models can also be used to improve weak-supervision techniques, work for which some lab and Snorkel members won awards at UAI.

In the end, data is what determines the final production model, and data is the one thing that is not commoditized. We still believe Snorkel's view of data is the way forward: you need programming abstractions, a way to express, combine, and iteratively correct disparate data sources and supervision signals, in order to train deployable models for the final task.

