Microsoft proposes OTO, an automated neural network training and pruning framework, to obtain high-performance lightweight models in one stop

OTO is the industry’s first automated, one-stop, user-friendly and versatile neural network training and structure compression framework.

In the era of artificial intelligence, how to deploy and maintain neural networks is a key issue for productization. To save computing costs while minimizing the loss of model performance, compressing neural networks has become one of the keys to productizing DNNs.

DNN compression generally relies on three methods: pruning, knowledge distillation, and quantization. Pruning aims to identify and remove redundant structures, slimming down the DNN while preserving model performance as much as possible; it is the most versatile and effective of the three. In general, the three methods complement each other and can work together to achieve the best compression.

However, most existing pruning methods target only specific models and specific tasks, and they require strong domain expertise, so AI developers usually have to spend considerable effort to apply these methods to their own scenarios, consuming substantial manpower and resources.

OTO Overview

To solve the problems of existing pruning methods and make life easier for AI developers, the Microsoft team proposed the Only-Train-Once (OTO) framework. OTO is the industry's first automated, one-stop, user-friendly, and versatile neural network training and structural compression framework; the series of works has been published at ICLR 2023 and NeurIPS 2021.

With OTO, AI engineers can train target neural networks and obtain high-performance, lightweight models in one stop. OTO minimizes the engineering time and effort developers must invest, and it does not require the time-consuming pre-training or additional model fine-tuning that existing methods usually demand. A minimal usage sketch follows the links below.

  • Paper links:
  • OTOv2 (ICLR 2023): https://openreview.net/pdf?id=7ynoX1ojPMt
  • OTOv1 (NeurIPS 2021): https://proceedings.neurips.cc/paper_files/paper/2021/file/a376033f78e144f494bfc743c0be3330-Paper.pdf
  • Code link: https://github.com/tianyic/only_train_once
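
For a sense of the developer workflow, the repository's README sketches usage along the following lines. This is a hedged sketch rather than a verified API reference: the OTO constructor arguments and the dhspg / construct_subnet method names are recalled from the repository and may differ across versions.

```python
# Hedged sketch of the OTO workflow. The OTO class, dhspg() and
# construct_subnet() follow the pattern in the project README, but their
# exact names and signatures may vary between versions -- treat as assumptions.
import torch
import torchvision.models as models
from only_train_once import OTO

model = models.resnet50()
dummy_input = torch.rand(1, 3, 224, 224)         # used to trace the network graph

oto = OTO(model=model, dummy_input=dummy_input)  # automated ZIG partitioning
optimizer = oto.dhspg(lr=0.1, target_group_sparsity=0.7)  # DHSPG optimizer

# ... run the usual training loop with this optimizer in place of SGD/Adam ...

oto.construct_subnet(out_dir='./')               # build the compressed sub-network
```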

Core Algorithms of the Framework

An ideal structural pruning algorithm should be able to train a general neural network from scratch, automatically and in one stop, and produce a high-performance lightweight model without subsequent fine-tuning. Because of the complexity of neural networks, achieving this goal is extremely challenging. Reaching it requires systematically addressing three core questions:

  • How to find out which network structures can be removed?
  • How to remove those structures while sacrificing as little model performance as possible?
  • How can we accomplish the above two points automatically?

The Microsoft team designed and implemented three sets of core algorithms, systematically and comprehensively solving these three core problems for the first time.

Automated Zero-Invariant Group (ZIG) partitioning

Because of the complexity of and the correlations within a network's structure, removing an arbitrary structure can leave the remaining network invalid. One of the biggest obstacles to automated structural compression is therefore finding the model parameters that must be pruned together so that the remaining network still functions. To solve this problem, the Microsoft team proposed Zero-Invariant Groups (ZIGs) in OTOv1. A zero-invariant group can be understood as a minimal removable unit: after the network structures corresponding to the group are removed, the remaining network remains valid. Zero-invariant groups have another elegant property: if a group's parameters all equal zero, then its output is always zero, no matter what the input is.

In OTOv2, the researchers further proposed and implemented a set of automated algorithms that solve the ZIG partitioning problem for general networks. The automated partitioning algorithm is a carefully designed combination of graph algorithms; the whole procedure is very efficient, with linear time and space complexity.
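
To make the zero-invariance property concrete, below is a minimal PyTorch sketch (the layer sizes and channel index are illustrative, not taken from the paper). Zeroing one output channel's entire group, namely the convolution filter and bias together with the matching BatchNorm scale and shift, forces that channel's output to zero for any input. Zeroing the convolution alone would not suffice, because BatchNorm's mean subtraction and shift could still produce a nonzero value, which is precisely why these parameters must be grouped together.

```python
# Minimal sketch of the zero-invariance property for a Conv2d + BatchNorm2d
# pair; layer sizes are illustrative. Zeroing every parameter in channel k's
# group makes that channel's output zero for any input.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(8)
with torch.no_grad():
    bn.running_mean.copy_(torch.randn(8))   # non-trivial BN statistics
    bn.running_var.copy_(torch.rand(8) + 0.5)
bn.eval()                                   # use the running statistics

k = 2  # channel whose zero-invariant group we zero out
with torch.no_grad():
    conv.weight[k].zero_()  # the filter producing channel k
    conv.bias[k].zero_()    # its bias
    bn.weight[k].zero_()    # the matching BN scale (gamma)
    bn.bias[k].zero_()      # the matching BN shift (beta)

x = torch.randn(4, 3, 32, 32)  # arbitrary input
y = bn(conv(x))
print(torch.allclose(y[:, k], torch.zeros_like(y[:, k])))  # True
```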

Dual Half-Space Projected Gradient optimization algorithm (DHSPG)

Once all zero-invariant groups of the target network have been identified, the subsequent training and pruning task is to determine which zero-invariant groups are redundant and which are important. The network structures corresponding to redundant groups must be deleted, while the important groups must be retained to preserve the performance of the compressed model. The researchers formulated this as a structured sparsification problem and proposed a novel Dual Half-Space Projected Gradient (DHSPG) optimization algorithm to solve it.

DHSPG very effectively finds the redundant zero-invariant groups and projects them to zero, while continuing to train the important groups so that the model achieves performance comparable to the original.

Compared with traditional sparse optimization algorithms, DHSPG explores sparse structures more powerfully and more stably, and it enlarges the training search space, so it usually achieves better actual performance.
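
The sketch below conveys the flavor of such a half-space projection test; it is a deliberately simplified toy, not the authors' actual DHSPG update, and the learning rate, threshold eps, and toy data are assumptions for illustration. After an ordinary gradient step, a group whose trial iterate no longer points in roughly the same direction as its current value is projected exactly to zero, while other groups are updated normally.

```python
# Simplified, hypothetical half-space projection step for group sparsity
# (NOT the authors' exact DHSPG algorithm; lr and eps are illustrative).
import torch

def halfspace_project_groups(params, grads, lr=0.1, eps=0.05):
    """params/grads: lists of 1-D tensors, one per zero-invariant group."""
    for x, g in zip(params, grads):
        x_trial = x - lr * g  # plain gradient step on this group
        # Half-space test: keep the group only if the trial iterate stays on
        # the same side as the current iterate; otherwise zero the whole group.
        if torch.dot(x_trial, x) < eps * torch.dot(x, x):
            x.zero_()         # group deemed redundant: project to zero
        else:
            x.copy_(x_trial)  # group deemed important: train it normally

# Toy usage: a near-zero group being pushed around vs. a well-trained group.
groups = [torch.tensor([0.01, -0.02]), torch.tensor([1.0, 2.0])]
grads  = [torch.tensor([0.50, -0.80]), torch.tensor([0.01, 0.02])]
halfspace_project_groups(groups, grads)
print(groups)  # first group snapped to zero, second updated normally
```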

Automatically constructing the lightweight compressed model

Training the model with DHSPG yields a solution with high group sparsity, that is, one in which many zero-invariant groups are projected exactly to zero, while retaining high model performance. Next, the researchers delete all structures corresponding to the redundant zero-invariant groups to automatically construct the compressed network. By the defining property of zero-invariant groups (a group equal to zero produces zero output no matter what the input is), removing the redundant groups has no effect on the network's output. The compressed network obtained through OTO therefore produces the same output as the full network and needs none of the further fine-tuning that traditional methods require.
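
The toy example below shows what this construction step amounts to for a small MLP (the layer sizes and the set of zeroed units are hypothetical, and this is not OTO's actual implementation). Once some hidden units' groups are exactly zero, slicing them out of both the layer that produces them and the layer that consumes them yields a smaller network whose outputs match the full network's.

```python
# Hedged sketch: removing exactly-zero groups yields a smaller network with
# matching outputs. Sizes and the zeroed-unit set are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)
full = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Pretend the optimizer drove hidden units 5..19 to zero: each unit's group
# is its row of the first weight matrix plus the matching bias entry.
zeroed = list(range(5, 20))
keep = [i for i in range(32) if i not in zeroed]
with torch.no_grad():
    full[0].weight[zeroed] = 0.0
    full[0].bias[zeroed] = 0.0

# Construct the compressed network by slicing out the zero groups.
small = nn.Sequential(nn.Linear(16, len(keep)), nn.ReLU(), nn.Linear(len(keep), 4))
with torch.no_grad():
    small[0].weight.copy_(full[0].weight[keep])
    small[0].bias.copy_(full[0].bias[keep])
    small[2].weight.copy_(full[2].weight[:, keep])  # drop the matching columns
    small[2].bias.copy_(full[2].bias)

x = torch.randn(8, 16)
print(torch.allclose(full(x), small(x), atol=1e-6))  # True: identical outputs
```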

Numerical experiments

Classification task

Table 1: Performance of the VGG16 and VGG16-BN models on CIFAR10

In the VGG16 experiment on CIFAR10, OTO reduces FLOPs by 86.6% and the parameter count by 97.5%, an impressive result.

Table 2: ResNet50 experiment on CIFAR10

In the ResNet50 experiment on CIFAR10, OTO outperforms the SOTA neural network compression frameworks AMC and ANNC without resorting to quantization, while using only 7.8% of the FLOPs and 4.1% of the parameters.

Table 3: ResNet50 experiment on ImageNet

In the ResNet50 experiment on ImageNet, OTOv2 shows performance comparable to or better than existing SOTA methods across different structural sparsity targets.

Table 4: More architectures and datasets

OTO also delivers solid performance across additional datasets and model architectures.

Low-Level Vision Task

Table 5: CARNx2 experiment

In the super-resolution task, OTO trains and compresses the CARNx2 network in one stop, achieving performance competitive with the original model while reducing computation and model size by more than 75%.

Language model task

In addition, the researchers ran comparative experiments on BERT for one of the core algorithms, the DHSPG optimization algorithm, verifying its performance against other sparse optimization algorithms. On SQuAD, training with DHSPG yields parameter reduction and model performance far superior to the other sparse optimizers.

Conclusion

The Microsoft team proposed OTO (Only-Train-Once), an automated, one-stop neural network training and structural pruning framework that automatically compresses a full neural network into a lightweight one while maintaining high performance. OTO greatly simplifies the complex multi-stage pipelines of existing structural pruning methods, suits various network architectures and applications, and minimizes users' additional engineering effort, making it versatile, effective, and easy to use.
