Selective risk could improve AI fairness and accuracy
Researchers at MIT have published a new paper cautioning against the use of selective regression in certain scenarios, because the technique can reduce a model's overall performance for groups that are underrepresented in a dataset.
These underrepresented groups tend to be women and people of color, and neglecting them has fueled concerns about racism and sexism in artificial intelligence. In one account, an AI system used for risk assessment incorrectly flagged Black prisoners as twice as likely to reoffend as white prisoners. In another case, photos of men with no contextual background were labeled as doctors at higher rates than photos of women, who were more often labeled as housewives.
With selective regression, the AI model makes one of two choices for each input: predict or abstain. The model only makes a prediction when it is confident in its decision, and across several benchmarks this improves measured performance, because the model skips the inputs it would most likely get wrong.
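To make the mechanism concrete, here is a minimal sketch of predict-or-abstain selective regression. It is an illustration under our own assumptions, not the paper's implementation: the spread of individual tree predictions in a random forest serves as a stand-in uncertainty signal, and the model abstains on the least confident 20% of inputs.

```python
# Minimal selective-regression sketch (illustrative, not the paper's method).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X[:, 0] + 0.1 * rng.normal(size=1000)   # synthetic regression target

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[:800], y[:800])
X_test, y_test = X[800:], y[800:]

# Per-sample uncertainty: disagreement (std) among the individual trees.
tree_preds = np.stack([t.predict(X_test) for t in model.estimators_])
uncertainty = tree_preds.std(axis=0)

# Keep the most confident 80% of inputs, abstain on the rest.
coverage = 0.8
accepted = uncertainty <= np.quantile(uncertainty, coverage)

preds = model.predict(X_test)
mse_all = np.mean((preds - y_test) ** 2)
mse_accepted = np.mean((preds[accepted] - y_test[accepted]) ** 2)
print(f"MSE on all inputs: {mse_all:.4f}; on the accepted {coverage:.0%}: {mse_accepted:.4f}")
```

Lowering the coverage trades how many inputs get answered for how accurate those answers are; the question the MIT work raises is who ends up in the rejected fraction.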
However, rejecting inputs can amplify biases that are already present in the dataset, because abstention tends to fall disproportionately on underrepresented groups. The resulting inaccuracies surface once the AI model is deployed in the real world, where it can no longer simply reject members of underrepresented groups the way it effectively did during development. Ultimately, you want to make sure you consider error rates across groups in a sensible way, rather than just minimizing some broad error rate for your model.
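That suggests auditing error per group rather than in aggregate. The sketch below (the function name and coverage grid are ours, purely for illustration) reports mean squared error for each subgroup at several coverage levels:

```python
import numpy as np

def per_group_risk(y_true, y_pred, uncertainty, groups,
                   coverages=(1.0, 0.8, 0.6, 0.4)):
    """Mean squared error per subgroup at each coverage level."""
    report = {}
    for c in coverages:
        threshold = np.quantile(uncertainty, c)
        accepted = uncertainty <= threshold
        for g in np.unique(groups):
            mask = accepted & (groups == g)
            # NaN marks a subgroup with no accepted samples at this coverage.
            report[(g, c)] = (np.mean((y_pred[mask] - y_true[mask]) ** 2)
                              if mask.any() else float("nan"))
    return report

# Usage with the earlier sketch: per_group_risk(y_test, preds, uncertainty, group_labels)
```

If a subgroup's error grows as coverage shrinks while the aggregate error falls, the abstention mechanism is hiding that group's mistakes rather than fixing them.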
The MIT researchers also introduced a new technique, called monotonic selective risk, designed to improve model performance for every subgroup. It trains two models: one that includes sensitive attributes such as race and gender, and one that excludes them. Both models make predictions, and the model without the sensitive data is used to calibrate for bias in the dataset.
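A rough sketch of that two-model setup, on synthetic data and with a plain ridge regressor standing in for the real models (this illustrates the idea, not the authors' training procedure):

```python
# Two-model setup sketch: one model sees the sensitive attribute, one does not.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n = 1000
sensitive = rng.integers(0, 2, size=n)        # e.g., a binary group label
X = rng.normal(size=(n, 4))
y = X[:, 0] + 0.5 * sensitive + 0.1 * rng.normal(size=n)

X_with = np.column_stack([X, sensitive])      # features + sensitive attribute
model_with = Ridge().fit(X_with, y)
model_without = Ridge().fit(X, y)             # sensitive attribute withheld

# The gap between the two models' predictions signals how much the
# dataset's outcomes depend on the sensitive attribute.
gap = model_with.predict(X_with) - model_without.predict(X)
for g in (0, 1):
    print(f"group {g}: mean prediction gap {gap[sensitive == g].mean():+.3f}")
```

The per-group gap between the two models' predictions indicates how strongly outcomes in the data depend on the sensitive attribute, which is the kind of signal a calibration step can exploit.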
Coming up with the right notion of fairness for this particular problem is a challenge. But by enforcing this criterion, monotonic selective risk, we can ensure that model performance actually improves across all subgroups as coverage is reduced.
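One way to state that criterion formally, in notation introduced here for illustration (the article itself gives no formula): let $R_g(c)$ be the model's risk, such as mean squared error, on subgroup $g$ when it predicts on a fraction $c$ of inputs.

```latex
% Monotonic selective risk, informally: abstaining more (lower coverage)
% must never increase the risk of any subgroup, not just the average risk.
\[
  c_1 \le c_2 \;\Longrightarrow\; R_g(c_1) \le R_g(c_2)
  \qquad \text{for every subgroup } g.
\]
```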
When tested using the Medicare dataset and the crime dataset, the new technique was able to reduce error rates for underrepresented groups without significantly affecting the model's overall performance. The researchers plan to apply the technique to new applications, such as housing prices, student grade point averages, and loan interest rates, to see whether it can be used for other tasks.