Can Monotonic Risk Selection Make AI Predictions Accurate?
Artificial intelligence (AI) is a significant technological advancement, and every organization is buzzing about its potential. However, algorithmic bias remains the most heavily scrutinized issue in the AI industry. Experts warn that AI systems can become biased and produce harmful outcomes in multiple ways. So, how can organizations improve AI’s accuracy? In this article on RT Insights, David Curry explains how the monotonic risk selection technique can help you reduce the error rate for underrepresented groups in AI models.
Techniques That Helped Leaders Make Accurate Decisions
People who use machine-learning models to make judgments often find it difficult to trust a model’s predictions, especially when the models are complex. Several reports have described AI systems behaving in racist and sexist ways, particularly toward underrepresented groups such as women and people of color. Citing an example, Curry explains, “In one account, an AI used for risk assessment wrongly flagged black prisoners at twice the rate of white prisoners. In another, pictures of men without any context were identified as doctors and homemakers more than women.” Can risk selection methods reduce such errors?
To address this, professionals have used a technique known as selective regression. The method estimates how confident the model is in each prediction and rejects those for which the confidence falls below a threshold. Humans can then examine the rejected cases, gather additional information, and decide each one manually. However, researchers found that selective regression had the opposite effect for underrepresented groups in the dataset: it amplified biases that already existed in the data, leading to even higher error rates for those groups.
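To make the abstention idea concrete, here is a minimal Python sketch of selective regression: the model pairs every prediction with a confidence score and defers low-confidence cases to a human. The model, the confidence heuristic, and the threshold are invented for illustration and are not taken from the article.

```python
# Minimal selective regression sketch: accept a prediction only when the
# model's confidence clears a threshold; otherwise route the case to a human.
import numpy as np

rng = np.random.default_rng(0)

def predict_with_confidence(x):
    """Hypothetical model: returns (prediction, confidence in [0, 1])."""
    prediction = 2.0 * x + rng.normal(scale=0.1)
    confidence = 1.0 / (1.0 + abs(x))  # toy heuristic: less certain far from 0
    return prediction, confidence

THRESHOLD = 0.5  # coverage knob: raise it to reject (abstain on) more cases

for x in [0.2, 1.5, 4.0]:
    y_hat, conf = predict_with_confidence(x)
    if conf >= THRESHOLD:
        print(f"x={x}: accept prediction {y_hat:.2f} (confidence {conf:.2f})")
    else:
        print(f"x={x}: abstain, route to human review (confidence {conf:.2f})")
```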
After analyzing the issue, researchers at MIT and the MIT-IBM Watson AI Lab created two neural network algorithms to address it. The resulting technique is called monotonic selective risk.
Role of Monotonic Risk Selection in Improving AI Accuracy
Under this technique:
- One algorithm ensures that the features the model uses to make its predictions contain information about the sensitive attributes in the dataset, such as race and gender.
- The second algorithm uses a calibration method to ensure that the model makes the same prediction for each input, whether or not that input includes those sensitive attributes.
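As a rough illustration of how these two ideas might be wired together, the PyTorch sketch below adds an auxiliary head that recovers the sensitive attributes from the learned features (the first idea) and a consistency term that penalizes the model when its prediction changes after the sensitive attributes are removed from the input (the second idea). The architecture, the masking scheme, and the loss weights are assumptions made for illustration, not the MIT-IBM implementation.

```python
# Illustrative sketch only: one plausible way to encode the two ideas above.
import torch
import torch.nn as nn

class SelectiveRegressor(nn.Module):
    def __init__(self, n_features, n_sensitive):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.regressor = nn.Linear(32, 1)             # main prediction head
        self.attr_head = nn.Linear(32, n_sensitive)   # recovers sensitive attrs

    def forward(self, x):
        z = self.encoder(x)
        return self.regressor(z), self.attr_head(z)

# Toy batch: 8 ordinary features plus 2 binary sensitive attributes.
x_ns = torch.randn(64, 8)
s = torch.randint(0, 2, (64, 2)).float()
x = torch.cat([x_ns, s], dim=1)                         # full input
x_masked = torch.cat([x_ns, torch.zeros_like(s)], dim=1)  # attrs removed
y = torch.randn(64, 1)

model = SelectiveRegressor(n_features=10, n_sensitive=2)
y_hat, s_hat = model(x)
y_hat_masked, _ = model(x_masked)

# Idea 1: keep sensitive-attribute information in the learned features,
# enforced here by asking an auxiliary head to recover those attributes.
attr_loss = nn.functional.binary_cross_entropy_with_logits(s_hat, s)

# Idea 2 (calibration-style consistency): the prediction should not change
# when the sensitive attributes are removed from the input.
consistency_loss = nn.functional.mse_loss(y_hat, y_hat_masked)

task_loss = nn.functional.mse_loss(y_hat, y)
loss = task_loss + 0.1 * attr_loss + 0.1 * consistency_loss
loss.backward()
print(float(loss))
```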
The two neural network algorithms helped users reduce errors and disparities. Furthermore, the new technique did not significantly impact the model’s overall performance.
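One way to sanity-check this property on a model of your own is to measure each group’s error rate as coverage shrinks, that is, as the model abstains on more of its least confident predictions. The sketch below uses synthetic errors, confidence scores, and group labels purely to show the bookkeeping; under monotonic selective risk, every group’s error should fall as coverage drops, not just the overall average.

```python
# Check per-group error rates at decreasing coverage levels (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
errors = rng.gamma(shape=2.0, scale=1.0, size=1000)   # toy absolute errors
confidence = 1.0 / (1.0 + errors + rng.normal(scale=0.2, size=1000))
group = rng.integers(0, 2, size=1000)                 # toy subgroup label

for coverage in (1.0, 0.8, 0.6, 0.4):
    cutoff = np.quantile(confidence, 1.0 - coverage)
    kept = confidence >= cutoff                       # most confident cases only
    overall = errors[kept].mean()
    by_group = [errors[kept & (group == g)].mean() for g in (0, 1)]
    print(f"coverage {coverage:.0%}: overall {overall:.2f}, "
          f"group 0 {by_group[0]:.2f}, group 1 {by_group[1]:.2f}")
```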
To learn more about risk selection techniques, read the full article at https://www.rtinsights.com/ai-model-technique-monotonic-selective-risk/.