
Meta releases FACET dataset for evaluating AI fairness

WBOY
2023-09-10


On September 4, Meta released an open-source dataset called FACET, designed to help researchers audit bias in computer vision models.

In a blog post, Meta explained that AI fairness is difficult to assess with existing benchmarking methods. According to the company, FACET simplifies the task by providing a large evaluation dataset that researchers can use to audit several different types of computer vision models.

Meta researchers wrote in the blog post: "The dataset consists of 32,000 images containing 50,000 people, labeled by expert human annotators for demographic attributes (such as perceived gender presentation and perceived age group), additional physical attributes (such as perceived skin tone and hairstyle), and person-related classes (such as basketball player and doctor). FACET also contains person, hair, and clothing labels for 69,000 masks from SA-1B."
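For illustration only, a single annotation record in a dataset of this kind might look something like the sketch below. The field names and values are hypothetical assumptions for readability, not the actual FACET schema.

```python
# Hypothetical sketch of a FACET-style annotation record.
# Field names and values are illustrative assumptions, not the dataset's real schema.
example_annotation = {
    "image_id": "facet_000001",
    "person_boxes": [
        {
            "bbox": [120, 45, 310, 480],            # x1, y1, x2, y2 in pixels
            "perceived_gender_presentation": "feminine",
            "perceived_age_group": "25-40",
            "perceived_skin_tone": 4,                # bucket on a skin-tone scale
            "hairstyle": "curly",
            "person_class": "doctor",                # person-related class
        }
    ],
    "mask_labels": ["person", "hair", "clothing"],   # SA-1B-style mask tags
}
```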

Researchers can examine fairness issues by having a computer vision model process the photos in FACET and then analyzing whether the model's accuracy varies across groups of photos. Such variation in accuracy can be a sign that the model is biased.
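As a minimal sketch of that kind of analysis, the snippet below compares a classifier's accuracy across groups defined by one perceived demographic attribute. The record fields, `model.predict` call, and attribute names are assumed placeholders, not the actual FACET interface.

```python
from collections import defaultdict

def disaggregated_accuracy(records, model, attribute="perceived_age_group"):
    """Compare a classifier's accuracy across groups defined by one perceived
    demographic attribute. A large gap between groups is a possible sign of bias.
    The record fields and `model.predict` are illustrative placeholders."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        group = rec[attribute]                      # e.g. "25-40"
        prediction = model.predict(rec["image"])    # placeholder inference call
        total[group] += 1
        if prediction == rec["person_class"]:       # e.g. "doctor"
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Example outcome: a gap such as {"18-24": 0.91, "65+": 0.74}
# would warrant a closer look at how the model handles the lower-accuracy group.
```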

Researchers can use the dataset to detect bias in neural networks optimized for classification, the task of grouping similar images together. It also makes it easier to evaluate object detection models, which automatically locate items of interest in photos.

FACET can also be used to audit AI applications that perform instance segmentation and visual grounding, two specialized object detection tasks. Instance segmentation is the task of highlighting the items of interest in a photo by marking the exact pixels they occupy, rather than just drawing a box around them. Visual grounding, in turn, is the task of locating in a photo the objects that a user describes in natural language.

Meta researchers stated: "While FACET is for research evaluation purposes only and cannot be used for training, we are releasing the dataset and a dataset explorer with the intention that FACET can become a standard fairness evaluation benchmark for computer vision models."

