For the first time, 'Teaching Director' is introduced into model distillation, and large-scale compression is better than 24 SOTA methods.

Faced with increasingly sophisticated deep learning models and massive video big data, artificial intelligence algorithms depend ever more heavily on computing resources. To effectively improve the performance and efficiency of deep models, this paper explores the distillability and sparsability of models and proposes a unified model compression technique based on a 'Dean-Teacher-Student' framework.

This result was completed by a joint research team from People's Science and Technology and the Institute of Automation, Chinese Academy of Sciences. The paper was published in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), a top international journal on artificial intelligence. The work is the first to introduce the role of "teaching director" (dean) into model distillation, unifying the distillation and pruning of deep models.


Paper address: https://ieeexplore.ieee.org/abstract/document/9804342

At present, this achievement has been applied to the cross-modal intelligent search engine "Baize", independently developed by People's Science and Technology. "Baize" breaks down the barriers between modalities such as images, text, audio and video: it maps text, pictures, speech and video into a unified feature representation space and, with video at its core, learns a unified distance metric across modalities. This bridges the semantic gap between multi-modal content such as text, speech, and video and enables unified search.

However, in the face of massive Internet data, especially video big data, cross-modal deep models consume ever more computing resources. Based on this research result, "Baize" can compress models at large scale while maintaining algorithm performance, thereby achieving high-throughput, low-power cross-modal understanding and search. In preliminary practical applications, the technology compresses the parameter count of large models by more than four times on average. On the one hand, this greatly reduces the models' consumption of high-performance computing resources such as GPU servers; on the other hand, large models that previously could not be deployed at the edge can be distilled and compressed for low-power edge deployment.

A joint learning framework for model compression

Compression and acceleration of deep models can be achieved through distillation learning or through structured sparse pruning, but both approaches have limitations. Distillation learning aims to train a lightweight model (the student network) to mimic a complex, large model (the teacher network); under the teacher's guidance, the student network can achieve better performance than it would by training alone.

However, distillation learning algorithms only focus on improving the performance of student networks and often ignore the importance of network structure. The structure of the student network is generally predefined and fixed during the training process.

Structured sparse pruning (or filter pruning) methods, by contrast, aim to prune a redundant, complex network into a sparse, compact one. However, pruning is used only to obtain a compact structure; none of the existing methods makes full use of the "knowledge" contained in the original complex model. Recent research combines distillation learning with structured sparse pruning to balance model performance and size, but these methods are limited to simple combinations of loss functions, as illustrated by the sketch below.
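For reference, such a naive combination simply adds a knowledge-distillation term and a sparsity regularizer to the task loss with fixed coefficients. The sketch below is a minimal PyTorch-style illustration of that baseline, not the paper's method; the coefficients alpha and beta, the temperature T, and the group-lasso regularizer over convolutional filters are assumptions chosen only for illustration.

```python
import torch
import torch.nn.functional as F

def naive_joint_loss(student_logits, teacher_logits, labels, student,
                     alpha=0.5, beta=1e-4, T=4.0):
    """Static combination of task, distillation, and sparsity losses (fixed alpha/beta)."""
    # Standard cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Temperature-scaled KL distillation loss (Hinton-style knowledge distillation).
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Group lasso over convolutional filters: drives whole filters toward zero
    # so that they can later be pruned as structural units.
    group_lasso = sum(
        m.weight.flatten(1).norm(dim=1).sum()
        for m in student.modules()
        if isinstance(m, torch.nn.Conv2d)
    )
    # Fixed alpha and beta cannot adapt to how distillable or prunable
    # the student actually is at the current stage of training.
    return ce + alpha * kd + beta * group_lasso
```

Because alpha and beta are constants, the relative strength of distillation and sparsification never changes during training, which is exactly the limitation the dean network described later is meant to remove.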

To analyze these issues in depth, this study first trained models under compression and, by analyzing model performance and structure, found that deep models have two important attributes: distillability and sparsability.

Specifically, distillability refers to the density of effective knowledge that can be distilled from the teacher network. It can be measured by the performance gain the student network achieves under the teacher's guidance: a student network with higher distillability can reach higher performance. Distillability can also be analyzed quantitatively at the level of individual network layers.

As shown in Figure 1-(a), the bar graph shows the cosine similarity between the gradient of the distillation loss and the gradient of the ground-truth classification loss. A larger cosine similarity indicates that the currently distilled knowledge is more helpful to model performance, so cosine similarity can serve as a measure of distillability. Figure 1-(a) shows that distillability gradually increases in deeper layers of the model, which also explains why the supervision used in distillation learning is commonly applied to the last few layers. Moreover, the student model's distillability differs across training epochs, since the cosine similarity changes over the course of training. It is therefore necessary to analyze the distillability of different layers dynamically during training.
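This measurement is straightforward to reproduce. The sketch below is a hedged illustration rather than the paper's exact procedure: it assumes a standard PyTorch classifier, a Hinton-style distillation loss, and a user-chosen layer whose weights serve as the probe point, and it returns the cosine similarity between the two loss gradients at that layer.

```python
import torch
import torch.nn.functional as F

def layer_distillability(student, teacher, layer, images, labels, T=4.0):
    """Cosine similarity between the KD-loss and CE-loss gradients at one layer."""
    student_logits = student(images)
    with torch.no_grad():                      # teacher is fixed; no gradients needed
        teacher_logits = teacher(images)

    # Ground-truth classification loss and distillation loss.
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    # Gradients of each loss with respect to the chosen layer's weights.
    g_ce = torch.autograd.grad(ce, layer.weight, retain_graph=True)[0].flatten()
    g_kd = torch.autograd.grad(kd, layer.weight)[0].flatten()

    # Large positive similarity: the distilled knowledge currently pushes the
    # weights in a direction that also reduces the classification loss.
    return F.cosine_similarity(g_ce, g_kd, dim=0).item()
```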

On the other hand, sparsability refers to the pruning rate (or compression rate) a model can achieve under a limited loss of accuracy; higher sparsability corresponds to the potential for a higher pruning rate. As shown in Figure 1-(b), different layers or modules of the network exhibit different sparsability. Like distillability, sparsability can be analyzed both at the level of network layers and along the time dimension. However, no existing method explores and analyzes distillability and sparsability; existing methods rely on a fixed training mechanism, which makes it difficult to reach an optimal result.
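The definition of sparsability suggests an equally direct probe: prune one layer at increasing rates and keep the largest rate whose accuracy drop stays within a budget. The sketch below is a rough illustration under assumptions, not the paper's estimator; the magnitude-based filter pruning, the candidate rates, and the user-supplied evaluate(model, loader) accuracy function are all hypothetical.

```python
import copy
import torch

@torch.no_grad()
def layer_sparsability(model, layer_name, loader, evaluate, max_drop=0.01,
                       rates=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Largest magnitude-pruning rate for one layer within an accuracy-drop budget."""
    baseline = evaluate(model, loader)      # evaluate() is assumed to return accuracy
    best_rate = 0.0
    for rate in rates:
        pruned = copy.deepcopy(model)
        weight = dict(pruned.named_modules())[layer_name].weight
        # Zero out the filters with the smallest L2 norms (structured pruning proxy).
        norms = weight.flatten(1).norm(dim=1)
        k = int(rate * norms.numel())
        if k > 0:
            weight[norms.argsort()[:k]] = 0.0
        if baseline - evaluate(pruned, loader) <= max_drop:
            best_rate = rate
    return best_rate
```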


Figure 1 Schematic diagram of the distillability and sparsability of deep neural networks

To address the above problems, this study analyzes the training process of model compression and draws findings about distillability and sparsability. Inspired by these findings, it proposes a model compression method based on joint learning of dynamic distillability and sparsability, which dynamically combines distillation learning with structured sparse pruning and adaptively adjusts the joint training mechanism by learning distillability and sparsability.

Unlike the conventional "Teacher-Student" framework, the method proposed in this article can be described as a "Learning-in-School" framework, because it contains three modules: the teacher network, the student network, and the dean (teaching director) network.

Specifically, as before, the teacher network teaches the student network, while the dean network controls the intensity and manner of the student network's learning. By observing the current state of the teacher and student networks, the dean network evaluates the student network's current distillability and sparsability, and then dynamically balances and controls the strength of the distillation-learning supervision and the structured sparse-pruning supervision.

To optimize the student network, this research also proposes a joint distillation-and-pruning optimization algorithm based on the alternating direction method of multipliers (ADMM). To optimize and update the dean network, the paper proposes a dean optimization algorithm based on meta-learning. Distillability can in turn be influenced by dynamically adjusting the supervision signals: as shown in Figure 1-(a), by making rational use of the distilled knowledge, the proposed method delays the downward trend of distillability and improves it overall.
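The ADMM part can be pictured as variable splitting: the student's weights are trained against the joint loss plus a quadratic penalty that ties them to an auxiliary sparse copy Z, while Z is periodically re-projected onto the sparse set and a dual variable U accumulates the disagreement. The fragment below is a schematic of that generic ADMM pruning recipe, not the paper's exact algorithm; the top-k filter projection and the hyperparameters rho and keep_ratio are assumptions.

```python
import torch

def admm_penalty(weight, Z, U, rho):
    """Quadratic ADMM term (rho/2)||W - Z + U||^2 added to the joint training loss."""
    return 0.5 * rho * (weight - Z + U).pow(2).sum()

@torch.no_grad()
def admm_update(weight, Z, U, keep_ratio=0.5):
    """Z-step: project W + U onto the sparse set (keep top-k filters); then dual update."""
    V = weight + U
    norms = V.flatten(1).norm(dim=1)                 # per-filter L2 norms
    k = max(1, int(keep_ratio * norms.numel()))
    keep = norms.argsort(descending=True)[:k]
    Z.zero_()
    Z[keep] = V[keep]                                # keep only the strongest filters
    U += weight - Z                                  # dual ascent on the constraint W = Z

# Typical usage: initialize Z = weight.detach().clone() and U = torch.zeros_like(weight),
# add admm_penalty(...) to the loss every iteration, and call admm_update(...) every few
# epochs; after convergence, the near-zero filters can be removed to obtain the compact model.
```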

The overall framework and flow of the method are shown in the figure below. The framework contains three modules: the teacher network, the student network, and the dean network. The initial complex, redundant network to be compressed and pruned serves as the teacher network, while the gradually sparsified version of that network during training serves as the student network. The dean network is a meta-network that takes information about the teacher and student networks as input, measures the current distillability and sparsability, and accordingly controls the supervision strength of distillation learning and sparsification.

In this way, at every moment the student network can be guided and sparsified by dynamically distilled knowledge. For example, when the student network has higher distillability, the dean lets a stronger distillation supervision signal guide the student network (the pink arrow in Figure 2); conversely, when the student network has higher sparsability, the dean exerts a stronger sparsity supervision signal on it (the orange arrow in Figure 2).


Figure 2 Schematic diagram of the model compression algorithm based on joint learning of distillability and sparsability
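Putting these pieces together, the dean can be viewed as a small meta-network that reads the current training state (for example, the distillability and sparsability signals above) and outputs the two supervision strengths. The sketch below is a speculative rendering of that idea rather than the published architecture: the choice of input features, the two-layer MLP, and the softplus used to keep both strengths positive are all assumptions.

```python
import torch
import torch.nn as nn

class DeanNetwork(nn.Module):
    """Meta-network mapping training-state features to distillation/sparsity strengths."""
    def __init__(self, num_features=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),
            nn.Softplus(),            # keep both supervision strengths positive
        )

    def forward(self, state):
        # state: e.g. [distillability, sparsability, teacher-student loss gap, epoch fraction]
        alpha, beta = self.net(state).unbind(-1)
        return alpha, beta            # distillation weight, sparsity weight

# Schematic joint objective at each step, with the dean setting the balance dynamically:
#   loss = ce + alpha * kd + beta * sparsity_regularizer
```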

Experimental results

The experiments compare the proposed method with 24 mainstream model compression methods (including sparse pruning methods and distillation learning methods) on the small-scale CIFAR dataset and the large-scale ImageNet dataset. The results, shown in the tables below, demonstrate the superiority of the proposed method.

Table 1 Performance comparison of model pruning results on CIFAR-10:


Table 2 Performance comparison of model pruning results on ImageNet:


For more research details, please refer to the original paper.
