Google and MIT's "Iterative Co-Tokenization" Video Question Answering Model: SOTA Performance Using 80% Less Computing Power

Video is a ubiquitous source of media content that touches many aspects of people’s daily lives. An increasing number of real-world video applications, such as video subtitling, content analysis, and video question answering (VideoQA), rely on models that can connect video content to text or natural language.

Among these, video question answering is particularly challenging because it requires grasping both semantic information, such as the objects in a scene, and temporal information, such as how things move and interact. Both types of information must be placed in the context of a natural-language question with a specific intent. In addition, because videos contain many frames, processing all of them to learn spatiotemporal information can be computationally prohibitive.


Paper link: https://arxiv.org/pdf/2208.00934.pdf

To address this problem, in the paper "Video Question Answering with Iterative Video-Text Co-Tokenization", researchers from Google and MIT introduced a new video-text learning method called iterative co-tokenization, which can effectively fuse spatial, temporal, and linguistic information for video question answering.


This method is multi-stream: independent backbone models process the video at different scales, producing video representations that capture different properties, such as high spatial resolution or long temporal duration. The model applies a co-tokenization module to learn efficient representations from the fusion of the video streams with the text. The model is highly computationally efficient, requiring only 67 GFLOPs, at least 50% less than previous methods, while outperforming other SOTA models.

Video-Text Iterative Co-Tokenization

The main goal of this model is to generate features from the video and the text (i.e., the user's question) that allow their corresponding inputs to interact. A second goal is to do this efficiently, which is especially important for video, since a clip contains tens to hundreds of input frames.

The model learns to tokenize the joint video-language input into a smaller set of tokens that jointly and efficiently represent both modalities. During tokenization, both modalities are used to produce a joint compact representation, which is fed into a transformer layer to produce the next-level representation.
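To make the tokenization step concrete, here is a minimal sketch of a question-conditioned tokenizer in PyTorch, using TokenLearner-style weighted pooling. The class name `CoTokenizer`, the shapes, and the pooling scheme are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class CoTokenizer(nn.Module):
    """Sketch of one co-tokenization step: pool many video features
    into a small set of tokens, conditioned on the text features."""

    def __init__(self, dim: int, num_tokens: int = 16):
        super().__init__()
        # One attention map per output token, scored from each video
        # feature concatenated with a pooled text summary.
        self.score = nn.Linear(2 * dim, num_tokens)

    def forward(self, video_feats, text_feats):
        # video_feats: (B, N, D) frame/patch features
        # text_feats:  (B, M, D) question-word features
        text_summary = text_feats.mean(dim=1, keepdim=True)        # (B, 1, D)
        conditioned = torch.cat(
            [video_feats, text_summary.expand_as(video_feats)], dim=-1
        )                                                          # (B, N, 2D)
        weights = self.score(conditioned).softmax(dim=1)           # (B, N, K)
        # Weighted pooling yields K compact, question-dependent tokens.
        return torch.einsum("bnk,bnd->bkd", weights, video_feats)  # (B, K, D)

tokens = CoTokenizer(dim=512)(torch.randn(2, 1024, 512), torch.randn(2, 12, 512))
print(tokens.shape)  # torch.Size([2, 16, 512])
```

The compression from N video features down to K tokens is what keeps the subsequent transformer layers cheap: attention runs over a few dozen tokens rather than thousands of frame patches.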

A challenge here, which is also a typical problem in cross-modal learning, is that video frames often do not correspond directly to the associated text. The researchers address this by adding two learnable linear layers that unify the visual and textual feature dimensions before tokenization. This lets both the video and the text condition how the video tokens are learned.
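A hedged sketch of what those two projection layers could look like; the dimensions below are placeholders for this example, not values reported in the paper.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumed for this example only).
video_dim, text_dim, shared_dim = 2048, 768, 512

video_proj = nn.Linear(video_dim, shared_dim)  # unify visual feature dimension
text_proj = nn.Linear(text_dim, shared_dim)    # unify textual feature dimension

video_feats = video_proj(torch.randn(2, 64, video_dim))  # (B, N, shared_dim)
text_feats = text_proj(torch.randn(2, 12, text_dim))     # (B, M, shared_dim)
```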

Furthermore, a single tokenization step does not allow any further interaction between the two modalities. To address this, the researchers use the new feature representation to interact with the video input features again and produce another set of tokenized features, which are then fed into the next transformer layer. This iterative process creates new features, or tokens, that represent a progressively refined joint representation of the two modalities. Finally, these features are fed into a decoder that generates the text output.
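Putting the pieces together, the loop below reuses the `CoTokenizer` sketched earlier to illustrate the iterative refinement; the number of iterations, the per-iteration layer granularity, and the initialization from the question alone are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class IterativeCoTokenization(nn.Module):
    """Sketch of the iterative refinement loop described above,
    reusing the CoTokenizer from the earlier snippet."""

    def __init__(self, dim: int = 512, num_tokens: int = 16, num_iters: int = 3):
        super().__init__()
        self.tokenizers = nn.ModuleList(
            CoTokenizer(dim, num_tokens) for _ in range(num_iters)
        )
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
            for _ in range(num_iters)
        )

    def forward(self, video_feats, text_feats):
        joint = text_feats  # start conditioning from the question alone
        for tokenize, layer in zip(self.tokenizers, self.layers):
            # Re-tokenize the raw video features against the current joint
            # representation, then refine it with a transformer layer.
            tokens = tokenize(video_feats, joint)
            joint = layer(torch.cat([tokens, text_feats], dim=1))
        return joint  # fed to a text decoder (not shown) to produce the answer

out = IterativeCoTokenization()(torch.randn(2, 1024, 512), torch.randn(2, 12, 512))
print(out.shape)  # torch.Size([2, 28, 512]): 16 video tokens + 12 text tokens
```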


As is common practice in VideoQA, the researchers pre-trained the model before fine-tuning it on individual VideoQA datasets. In this work, instead of pre-training on a large VideoQA dataset, they used the HowTo100M dataset, whose videos are automatically annotated with text derived from speech recognition. This weaker pre-training data still enabled the model to learn video-text features.
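For illustration, such weakly supervised pre-training pairs each clip with the speech-recognition transcript that overlaps it in time; the structure and field names below are hypothetical, not HowTo100M's actual schema.

```python
from dataclasses import dataclass

@dataclass
class WeakVideoTextPair:
    """One weakly supervised pre-training example: a clip and the
    speech-recognition transcript overlapping it in time."""
    video_id: str
    start_sec: float
    end_sec: float
    asr_text: str  # noisy text target, in place of curated QA pairs

example = WeakVideoTextPair("vid_001", 12.0, 17.5, "now whisk the eggs until fluffy")
```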

Efficient Video Question Answering

The researchers applied the video-language iterative co-tokenization algorithm to three major VideoQA benchmarks, MSRVTT-QA, MSVD-QA, and IVQA, and demonstrated that this approach achieves better results than other state-of-the-art models without making the model excessively large. In addition, iterative co-tokenization learning also requires less computing power for video-text learning tasks.


The model uses only 67 GFLOPs of computing power, roughly one fifth of the 360 GFLOPs required by a 3D-ResNet video model combined with text (the source of the roughly 80% savings), and is more than twice as efficient as the X3D model, while generating highly accurate results that exceed state-of-the-art methods.

Multi-stream video input

For VideoQA and some other tasks involving video input, the researchers found that multi-stream input is very important for accurately answering questions about spatial and temporal relationships.

The researchers utilized three video streams of different resolutions and frame rates: a low-resolution, high-frame-rate input video stream (32 frames per second, spatial resolution 64x64, denoted as 32x64x64); a high-resolution, low-frame-rate video (8x224x224); and one in between (16x112x112).
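The snippet below sketches how one clip might be resampled into the three streams above; the uniform frame sampling and bilinear resizing are assumptions for illustration, not the paper's actual preprocessing pipeline.

```python
import torch
import torch.nn.functional as F

def make_streams(video: torch.Tensor):
    """Build three input streams from one clip of shape (B, T, C, H, W)."""
    def sample(num_frames: int, size: int):
        idx = torch.linspace(0, video.shape[1] - 1, num_frames).long()
        frames = video[:, idx]                               # (B, t, C, H, W)
        b, t, c, h, w = frames.shape
        frames = F.interpolate(
            frames.reshape(b * t, c, h, w), size=(size, size),
            mode="bilinear", align_corners=False,
        )
        return frames.reshape(b, t, c, size, size)

    return (
        sample(32, 64),   # 32x64x64: many frames, low resolution
        sample(16, 112),  # 16x112x112: in between
        sample(8, 224),   # 8x224x224: few frames, high resolution
    )

streams = make_streams(torch.randn(1, 64, 3, 224, 224))
print([tuple(s.shape) for s in streams])
```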

Although three streams obviously mean more information to process, the iterative co-tokenization method still yields a very efficient model. At the same time, the additional streams allow the most relevant information to be extracted.

For example, questions related to a specific activity produce higher activations in the lower-resolution but higher-frame-rate video input, whereas questions related to a general activity can be answered from the high-resolution input with few frames.


Another benefit of this algorithm is that the tokenization changes depending on the question asked.

Conclusion

The researchers proposed a new video-language learning method that focuses on joint learning across the video and text modalities, tackling the important and challenging task of video question answering. Their approach is both accurate and efficient, outperforming current state-of-the-art models at a fraction of their compute cost.

The Google researchers' approach has a modest model size and could gain further performance improvements with larger models and data. The researchers hope this work will spark more research in visual language learning to enable more seamless interactions with visual-based media.
