
ICLR2024 | Harvard FairSeg: The first large-scale medical segmentation dataset to study the fairness of segmentation algorithms



Author | Tian Yu

Editor | Cabbage Leaf

In recent years, the fairness of artificial intelligence models has drawn increasing attention, especially in the medical field, where unfairness in medical models can directly harm people's health and lives. High-quality medical fairness datasets are essential for advancing fairness learning research.

Existing medical fairness datasets all target classification tasks, and no fairness dataset is available for medical segmentation. Yet medical segmentation is just as important a medical AI task as classification, and in some scenarios it is even more valuable, because it provides the detailed spatial information about organ abnormalities that clinicians need to evaluate.

In their latest research, the Harvard-Ophthalmology-AI-Lab team at Harvard University proposed the first fairness dataset for medical segmentation, called Harvard-FairSeg, containing samples from 10,000 patients. In addition, they proposed a fair error-bound scaling method that, building on the latest Segment Anything Model (SAM), reweights the loss function based on the upper error bound of each identity group.

To facilitate fair comparisons, the team introduced a novel criterion for assessing fairness in segmentation tasks, called equity-scaled segmentation performance. Through comprehensive experiments, the researchers demonstrate that their approach matches or surpasses state-of-the-art fairness learning models in fairness performance.

Here, the researchers from Harvard University share their paper accepted at ICLR 2024, "Harvard FairSeg: A Large-Scale Medical Image Segmentation Dataset for Fairness Learning Using Segment Anything Model with Fair Error-Bound Scaling".

Article address: https://arxiv.org/pdf/2311.02189.pdf

Code address: https://github.com/Harvard-Ophthalmology-AI-Lab/Harvard-FairSeg

Dataset website: https://ophai.hms.harvard.edu/datasets/harvard-fairseg10k/

Dataset download link: https://drive.google.com/drive/u/1/folders/1tyhEhYHR88gFkVzLkJI4gE1BoOHoHdWZ

Harvard-Ophthalmology-AI-Lab is committed to providing high-quality fairness datasets; its other datasets include fairness classification datasets for three ophthalmic diseases.

Dataset webpage of Harvard-Ophthalmology-AI-Lab: https://ophai.hms.harvard.edu/datasets/

Background

With the increasing application of artificial intelligence to medical imaging diagnosis, it becomes critical to ensure that these deep learning models are fair and to probe the hidden biases that may arise in complex real-world situations. Unfortunately, machine learning models may inadvertently encode sensitive attributes of medical images (such as race and gender), which can affect the model's ability to distinguish abnormalities. This challenge has spurred numerous efforts in machine learning and computer vision to investigate bias, advocate for fairness, and introduce new datasets.


To date, only a handful of public fairness datasets have been proposed for studying fairness in classification. Most of them contain only tabular data, so they are unsuitable for developing fair computer vision models, which require imaging data. This scarcity is of particular concern given the growing influence of deep learning models that rely on such data. In the field of medical imaging, only a few datasets have been used for fairness learning.

Most of these datasets were not specifically designed for fairness modeling (the only medical image datasets currently available are listed in Table 1). They typically contain only a limited range of sensitive attributes, such as age, gender, and race, which limits the scope for examining fairness across different populations, and they also lack a comprehensive benchmarking framework. More importantly, although these earlier datasets and methods provide solutions for medical classification, they ignore the more critical area of medical segmentation.

However, creating such a large new dataset for fairness learning faces multiple challenges. First, large-scale, high-quality medical data with manual pixel-level annotations are scarce, and collecting and annotating them demands substantial labor and time. Second, existing methods for improving fairness are designed mainly for medical classification, and their performance remains questionable when adapted to segmentation tasks; it is also uncertain whether the unfairness present in segmentation tasks can be effectively mitigated algorithmically. Finally, evaluation metrics for assessing the fairness of medical segmentation models remain elusive, and adapting existing fairness metrics designed for classification to segmentation tasks poses additional challenges.

[Figure 1: fair cup-disc segmentation on SLO fundus images from the Harvard-FairSeg dataset]

To address these challenges, we propose Harvard-FairSeg, the first large-scale fairness dataset in the field of medical segmentation. The dataset is designed for studying fair cup-disc segmentation for glaucoma diagnosis from SLO fundus images, as shown in Figure 1.

Glaucoma is one of the leading causes of irreversible blindness worldwide, with a prevalence of 3.54% in the 40-80 age group, affecting approximately 80 million people. Early glaucoma is often asymptomatic, which underscores the need for timely professional examination. Accurate cup-disc segmentation is critical for the early diagnosis of glaucoma by medical professionals.

Notably, Black individuals have twice the risk of developing glaucoma compared with other groups, yet this group generally receives the lowest segmentation accuracy. This motivated us to compile a dataset for studying the problem of segmentation fairness. The highlights of our proposed Harvard-FairSeg dataset are as follows:

(1) It is the first fairness learning dataset in the field of medical segmentation, providing cup-disc segmentation of SLO fundus imaging data;

(2) the dataset is equipped with six sensitive attributes collected from real-world hospital clinical scenarios for studying fairness learning problems;

(3) we evaluate multiple SOTA fairness learning algorithms on the proposed dataset, using multiple segmentation performance metrics including Dice and IoU.

How to obtain a large number of high-quality segmentation annotations

The subjects in this study came from a large academic eye hospital, spanning the years 2010 to 2021. The study releases three types of data: (1) SLO fundus scan images; (2) patient demographic information covering six different attributes; (3) pixel-level annotations automatically generated by OCT machines and manually graded by professional medical practitioners. Obtaining large numbers of high-quality pixel-level segmentation annotations has always been a crucial part of medical segmentation.

Our novel approach first obtains pixel annotations of the cup and disc regions from the OCT machine: the disc boundary is segmented as the Bruch's membrane opening in 3D OCT, as implemented by the OCT manufacturer's software, while the cup boundary is detected as the intersection between the inner limiting membrane (ILM) and the plane that yields the minimum surface area between that intersection and the disc boundary on the plane. Roughly speaking, the cup border can be thought of as the location on the ILM closest to the disc border, which is defined by the Bruch's membrane opening.

The Bruch's membrane opening and the inner limiting membrane are easy to segment because of their high contrast against the background. Moreover, because the OCT manufacturer's software exploits 3D information, cup and disc segmentation from OCT machines is generally reliable.

In contrast, 2D cup and disc segmentation on fundus photographs can be challenging due to various factors, including attenuated imaging signals and vascular occlusion. However, since OCT machines are expensive and less common in primary care, we propose migrating these annotations from 3D OCT to 2D SLO fundus images, which could have a broader impact on early glaucoma screening in primary care.

Specifically, we first use the NiftyReg tool to register the SLO fundus image to the fundus image derived from OCT (the OCT fundus image). We then apply the affine transformation estimated by NiftyReg to the cup-disc mask of the OCT fundus image to align it with the SLO fundus image. This process efficiently produces a large number of high-quality SLO fundus mask annotations and avoids the labor-intensive process of manual pixel-level annotation.
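For illustration, the registration step might look like the following sketch, assuming NiftyReg's command-line tools reg_aladin (affine registration) and reg_resample are installed. The file names are hypothetical, and this is a sketch of the workflow, not the authors' exact pipeline:

```python
import subprocess

# Estimate an affine transform from the moving OCT fundus image
# to the fixed SLO fundus image (hypothetical file names).
subprocess.run([
    "reg_aladin",
    "-ref", "slo_fundus.nii.gz",       # fixed: SLO fundus image
    "-flo", "oct_fundus.nii.gz",       # moving: OCT-derived fundus image
    "-aff", "oct_to_slo_affine.txt",   # output: estimated affine transform
    "-res", "oct_fundus_in_slo.nii.gz",
], check=True)

# Apply the same affine to the cup-disc mask; nearest-neighbour
# interpolation (-inter 0) keeps the mask labels discrete.
subprocess.run([
    "reg_resample",
    "-ref", "slo_fundus.nii.gz",
    "-flo", "cup_disc_mask.nii.gz",
    "-trans", "oct_to_slo_affine.txt",
    "-res", "cup_disc_mask_in_slo.nii.gz",
    "-inter", "0",
], check=True)
```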

Notably, this medical registration operation achieves quite high accuracy in real-world scenarios; our empirical observations indicate a registration success rate of approximately 80%. Following the automated process, the generated masks are rigorously reviewed and manually graded by a panel of five medical professionals to ensure precise annotation of the cup-disc regions and to exclude misplaced cup or disc masks and failed registrations.

Data features: Our Harvard-FairSeg dataset contains 10,000 samples from 10,000 subjects, split into a training set of 8,000 samples and a test set of 2,000 samples. The mean age in the dataset is 60.3 ± 16.5 years. The dataset includes six sensitive attributes for in-depth fairness learning research: age, gender, race, ethnicity, preferred language, and marital status.

In terms of racial demographics, the dataset includes samples from three main groups: Asian, with 919 samples; Black, with 1,473 samples; and White, with 7,608 samples. In terms of gender, women comprise 58.5% of the subjects, and the remainder are men. The ethnic distribution is 90.6% non-Hispanic, 3.7% Hispanic, and 5.7% unspecified. In terms of preferred language, 92.4% of the subjects prefer English, 1.5% prefer Spanish, 1% prefer other languages, and 5.1% are unspecified. In terms of marital status, 57.7% are married or partnered, 27.1% are single, 6.8% are divorced, 0.8% are legally separated, 5.2% are widowed, and 2.4% did not specify.
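As a quick sanity check on the racial composition, the three group counts account for the entire cohort of 10,000 subjects:

```python
counts = {"Asian": 919, "Black": 1473, "White": 7608}
assert sum(counts.values()) == 10_000  # every subject carries a race label
for race, n in counts.items():
    print(f"{race}: {n} samples ({n / 10_000:.1%})")  # 9.2%, 14.7%, 76.1%
```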

Our approach to improving fairness: Fair Error-Bound Scaling

We hypothesize that sample groups with a smaller overall Dice loss are groups the model has already learned well, and therefore they should receive smaller weights. Conversely, sample groups with a larger overall Dice loss (i.e., hard cases) may generalize worse and induce more algorithmic bias, so they should be assigned larger learning weights.

We therefore propose a novel fair error-bound scaling method that scales the Dice loss across different population groups during training. We first define the standard Dice loss between the predicted pixel scores and the ground-truth targets as:

$$\mathcal{L}_{\mathrm{Dice}}(\hat{y}, y) = 1 - \frac{2\sum_{i}\hat{y}_i\, y_i + \epsilon}{\sum_{i}\hat{y}_i + \sum_{i} y_i + \epsilon}$$

where $\hat{y}_i$ and $y_i$ denote the predicted score and ground-truth label of pixel $i$, and $\epsilon$ is a small smoothing constant.

To ensure fairness across different attribute groups, we enhance the above Dice loss with a novel fair error-bound scaling mechanism, yielding the following loss function:

[Equations: the fair error-bound scaled Dice loss and the per-attribute-group weights used to rescale the predicted pixel scores]

By adjusting the predicted pixel scores with these attribute weights, this loss ensures that different attribute groups contribute to the loss function in a balanced manner during model training, thereby promoting fairness.
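To make the mechanism concrete, here is a minimal PyTorch sketch of the idea. It is not the paper's official implementation: the paper rescales the predicted pixel scores with per-group weights, whereas for brevity this sketch scales each sample's Dice loss by a weight derived from its group's recent mean loss, so that harder groups (larger Dice loss) are upweighted. All names and numbers are illustrative.

```python
import torch

def dice_loss(pred, target, eps=1.0):
    # Standard soft Dice loss per sample; pred holds pixel scores in [0, 1].
    inter = (pred * target).sum(dim=(-2, -1))
    denom = pred.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1))
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

def febs_weights(group_losses):
    # Weight each group by its share of the total loss, normalized so the
    # weights average to one; harder groups receive larger weights.
    total = sum(group_losses.values())
    n = len(group_losses)
    return {g: n * loss / total for g, loss in group_losses.items()}

def febs_dice_loss(pred, target, group_ids, weights):
    # Fairness-weighted Dice loss: each sample's loss is scaled by the
    # weight of its identity group before averaging over the batch.
    per_sample = dice_loss(pred, target)  # shape: (batch,)
    w = torch.tensor([weights[int(g)] for g in group_ids],
                     dtype=per_sample.dtype, device=per_sample.device)
    return (w * per_sample).mean()

# Usage sketch: weights computed from each group's mean Dice loss on the
# previous epoch (hypothetical numbers; group 1 is hardest and upweighted).
weights = febs_weights({0: 0.10, 1: 0.25, 2: 0.15})
pred = torch.rand(4, 256, 256)                    # predicted pixel scores
target = (torch.rand(4, 256, 256) > 0.5).float()  # binary ground truth
loss = febs_dice_loss(pred, target, torch.tensor([0, 1, 2, 1]), weights)
```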

Metrics for evaluating fair segmentation accuracy: Traditional segmentation metrics such as Dice and IoU provide insight into segmentation performance, but they may not effectively capture fairness across different groups. With this in mind, we propose a new metric that encompasses both segmentation accuracy and fairness across groups, offering a comprehensive perspective that ensures the model is both accurate and fair.

To incorporate group fairness, we need to evaluate the accuracy of each group individually. We first define the segmentation accuracy difference Δ as follows:

$$\Delta = \sum_{a \in \mathcal{A}} \left| I - I_a \right|$$

where $I$ is the overall segmentation accuracy (e.g., Dice or IoU), $I_a$ is the accuracy on the samples of demographic group $a$, and $\mathcal{A}$ is the set of groups.

Here, Δ measures the overall deviation of each population’s accuracy from the overall accuracy. It approaches zero when all groups achieve similar segmentation accuracy.

To account for fairness across different groups, we need to calculate the relative difference between the overall segmentation accuracy and the accuracy within each demographic group. Based on this, we define the equity-scaled segmentation performance (ESSP) metric as follows:

$$\mathrm{ESSP} = \frac{I}{1 + \Delta}$$

This formula ensures that ESSP is always less than or equal to I. As Δ decreases (indicating equal segmentation performance among groups), ESSP tends to the traditional segmentation metric. In contrast, a higher Δ indicates greater differences in segmentation performance between groups, resulting in lower ESSP scores.

This approach allows us to evaluate segmentation models not only on accuracy (via metrics such as Dice and IoU) but also on fairness across different groups, making the ESSP scoring function a key metric for ensuring both segmentation accuracy and fairness in medical imaging tasks. Combining it with the traditional Dice and IoU metrics yields ES-Dice and ES-IoU.
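As a worked illustration of ES-Dice, here is a minimal sketch following the definitions above (using the reconstructed forms of Δ and ESSP; the scores are made up):

```python
def equity_scaled_score(overall, group_scores):
    # ESSP = I / (1 + Delta), where Delta is the summed absolute deviation
    # of each group's score from the overall score I (Dice, IoU, ...).
    delta = sum(abs(overall - s) for s in group_scores.values())
    return overall / (1.0 + delta)

# Hypothetical per-race Dice scores with an overall Dice of 0.85:
# Delta = |0.85-0.83| + |0.85-0.80| + |0.85-0.86| = 0.08
# ES-Dice = 0.85 / 1.08 ≈ 0.787
es_dice = equity_scaled_score(0.85, {"Asian": 0.83, "Black": 0.80, "White": 0.86})
print(round(es_dice, 3))  # 0.787
```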

Experiment

We chose two segmentation networks as backbones: the recently released large segmentation model Segment Anything Model (SAM), to test state-of-the-art segmentation accuracy, and TransUNet.

[Tables: Dice, ES-Dice, IoU, and ES-IoU results for the two backbones across the sensitive attributes]

We also evaluated with other segmentation metrics, including HD95, ASD, and NSD. The following are the results on the race attribute:

[Table: HD95, ASD, and NSD results on the race attribute]
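For readers reproducing these evaluations, HD95 and ASD can be computed with off-the-shelf tools. Below is a minimal sketch using the medpy library on hypothetical binary masks (NSD requires a surface-distance implementation, such as DeepMind's surface-distance package, and is omitted here):

```python
import numpy as np
from medpy.metric.binary import hd95, asd

# Hypothetical binary masks for one structure (e.g., the optic cup).
pred = np.zeros((512, 512), dtype=bool)
gt = np.zeros((512, 512), dtype=bool)
pred[100:200, 100:200] = True
gt[105:205, 102:202] = True

print("HD95:", hd95(pred, gt))  # 95th-percentile Hausdorff distance (pixels)
print("ASD:", asd(pred, gt))    # average surface distance (pixels)
```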

