


TRIBE achieves robust test-time domain adaptation and reaches SOTA in multiple real-world scenarios (AAAI 2024).
The real-world test data stream has three characteristics:
- Its distribution is time-varying (rather than the fixed distribution assumed in traditional domain adaptation);
- It may exhibit local class correlations (rather than fully independent and identically distributed sampling);
- It may remain globally class-imbalanced over long periods.
The success of deep neural networks relies on the assumption that test data are drawn i.i.d. from the training distribution. In practical applications, however, robustness to out-of-distribution test data, such as visual corruptions caused by changing lighting conditions or severe weather, is a real concern. Recent research shows that such corruptions can severely degrade the performance of pre-trained models. Importantly, the corruption (distribution) of the test data is often unknown, and sometimes unpredictable, before deployment.
Therefore, adjusting the pre-trained model to the test data distribution during inference is a worthwhile new topic, namely test-time adaptation (TTA). Previously, TTA was mainly implemented through distribution alignment (TTAC, TTT), self-supervised training (AdaContrast), and self-training (Conjugate PL), which have brought significant and robust improvements on a variety of corrupted test data.
Existing test-time adaptation (TTA) methods are usually built on strict assumptions about the test data, such as a stable class distribution, i.i.d. sampling, and a fixed domain shift. These assumptions have inspired many researchers to explore real-world test data streams, leading to methods such as CoTTA, NOTE, SAR, and RoTTA.
Recently, research on real-world TTA, such as SAR (ICLR 2023) and RoTTA (CVPR 2023), has mainly focused on the challenges that local class imbalance and continual domain shift pose to TTA. Local class imbalance usually arises because the test data is not sampled i.i.d.; applying adaptation indiscriminately under such streams leads to biased distribution estimates.
Recent research has addressed this challenge with exponentially updated batch normalization statistics (RoTTA) or instance-level discriminative updates of batch normalization statistics (NOTE). This work aims to go beyond local class imbalance, considering that the global distribution of test data may also be severely imbalanced and that the class distribution may change over time. A diagram of this more challenging scenario is shown in Figure 1 below.
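The exponential-moving-average update of batch normalization statistics used by methods like RoTTA can be sketched as follows. This is a minimal illustration under assumed variable names and momentum value, not the paper's exact implementation:

```python
import numpy as np

def ema_update_bn_stats(running_mean, running_var, batch, momentum=0.05):
    """Exponentially update batch-norm statistics with a test batch.

    batch: array of shape (N, C) -- N samples, C channels/features.
    A small momentum keeps the estimate stable under non-i.i.d. streams.
    """
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)
    new_mean = (1 - momentum) * running_mean + momentum * batch_mean
    new_var = (1 - momentum) * running_var + momentum * batch_var
    return new_mean, new_var

# Usage: start from the source model's statistics, then update per test batch.
mean, var = np.zeros(4), np.ones(4)
stream_batch = np.random.randn(32, 4)
mean, var = ema_update_bn_stats(mean, var, stream_batch)
```

Because the momentum is small, a single unrepresentative (e.g. single-class) batch only nudges the statistics, which is why such updates are more stable than recomputing statistics from each batch alone.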
Domain shifts also occur frequently in real-world test data over time, for example through gradual changes in lighting or weather conditions. This brings another challenge to existing TTA methods: the model may perform poorly after switching from domain A to domain B because it has over-adapted to domain A.
To alleviate over-adaptation to a short-term domain, CoTTA randomly restores parameters, and EATA uses Fisher information to regularize them. Nonetheless, these methods still do not explicitly address the challenge of evolving test domains.
This article introduces an anchor network on top of the two-branch (teacher-student) self-training architecture, forming a tri-net self-training model. The anchor network is a frozen copy of the source model, but its batch normalization layers are allowed to update their statistics (not their parameters) from the test samples. An anchoring loss is proposed that uses the anchor network's output to regularize the teacher's output, preventing the network from over-adapting to the local distribution.
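One way to realize such an anchoring loss is a divergence between the teacher's and the frozen anchor's predictive distributions. The sketch below uses a batch-averaged KL divergence as an illustration; the paper's exact loss form may differ, and all names here are assumptions:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def anchored_loss(teacher_logits, anchor_logits, eps=1e-12):
    """KL(anchor || teacher), averaged over the batch (illustrative sketch).

    Penalizes the teacher's predictions for drifting away from the
    frozen anchor network, discouraging over-adaptation to the
    local test distribution.
    """
    p = softmax(anchor_logits)   # frozen anchor distribution
    q = softmax(teacher_logits)  # teacher distribution being regularized
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)))
```

The loss is zero when the teacher agrees with the anchor and grows as the teacher drifts, so adding it to the self-training objective acts as a pull back toward source-model behavior.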
The final model, TRI-net self-training with BalancEd normalization (TRIBE), combines the tri-net self-training model with a balanced batch normalization layer and performs consistently well across a wide range of learning rates. It shows substantial performance improvements on four datasets and multiple real-world data streams, demonstrating unique stability and robustness.
The contributions of this article are threefold:
- A real-world TTA protocol;
- Balanced batch normalization;
- A tri-net self-training model.
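Balanced batch normalization can be loosely sketched as estimating per-class statistics (using pseudo-labels) and averaging them with equal weight per class, so that a class-imbalanced stream does not bias the normalization statistics. This is an assumption-laden illustration, not the paper's exact formulation:

```python
import numpy as np

def balanced_bn_stats(features, pseudo_labels, num_classes):
    """Estimate class-balanced normalization statistics.

    features: (N, C) array; pseudo_labels: (N,) integer array.
    Each class contributes equally to the global mean/variance,
    regardless of how often it appears in the stream.
    (Illustrative sketch; the paper's formulation may differ.)
    """
    class_means, class_vars = [], []
    for c in range(num_classes):
        mask = pseudo_labels == c
        if mask.sum() == 0:
            continue  # skip classes absent from this batch
        class_means.append(features[mask].mean(axis=0))
        class_vars.append(features[mask].var(axis=0))
    balanced_mean = np.mean(class_means, axis=0)
    balanced_var = np.mean(class_vars, axis=0)
    return balanced_mean, balanced_var
```

For example, if 90% of a batch belongs to one class, naive batch statistics are dominated by that class, while the balanced estimate weights each observed class equally.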
The following figure shows the framework diagram of the TRIBE network:
