
How can machine learning be both fair and accurate?

Researchers at Carnegie Mellon University are challenging the long-held assumption that using machine learning to make public policy decisions forces a trade-off between accuracy and fairness.

Concerns have mounted as machine learning has spread into areas such as criminal justice, hiring, health care delivery, and social service interventions, raising questions about whether such applications introduce new inequities or amplify existing ones, particularly among racial minorities and people with low income. To guard against this bias, practitioners adjust the data, labels, model training, scoring systems, and other parts of the machine learning pipeline. The underlying theoretical assumption is that these adjustments make the system less accurate.

In new research just published in Nature Machine Intelligence, a CMU team hopes to refute that belief. Rayid Ghani, a professor in the School of Computer Science's Machine Learning Department and the Heinz College of Information Systems and Public Policy; Kit Rodolfa, a research scientist in ML; and Hemank Lamba, a post-doctoral researcher in SCS, tested that assumption in real-world applications and discovered that the trade-off was negligible in practice across a range of policy domains.

"You can truly obtain both. You don't have to forgo accuracy to create fair and equitable systems," Ghani said. "However, it does require consciously designing fair and equitable processes. Off-the-shelf solutions aren't going to cut it."

Ghani and Rodolfa concentrated on circumstances in which in-demand resources are restricted and machine learning techniques are utilised to assist in resource allocation. The researchers looked at four systems: prioritising limited mental health care outreach based on a person's risk of returning to jail to reduce reincarceration; predicting serious safety violations to better deploy a city's limited housing inspectors; modelling the risk of students not graduating from high school on time to identify those who need additional support; and assisting teachers in reaching crowdfunding goals for classroom needs.

In each case, the researchers discovered that models tuned for accuracy—a common strategy in machine learning—could accurately predict the desired results, but there were significant differences in intervention recommendations. When the researchers made tweaks to the models' outputs aimed at increasing fairness, they observed that discrepancies based on race, age, or income—depending on the situation—could be addressed without sacrificing accuracy.
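The article does not spell out the exact output adjustments the CMU team used, so the following is a hypothetical sketch of one common post-hoc approach on synthetic data: under a fixed intervention capacity, slots are allocated to each demographic group in proportion to its share of true need, rather than by a single global score cutoff. All names, data, and parameters here are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000
group = rng.integers(0, 2, n)        # two demographic groups, 0 and 1
label = rng.random(n) < 0.3          # true need for the intervention
# Group 1's risk scores are systematically shifted lower, so a single
# global cutoff under-serves its true positives.
score = label * 0.5 + rng.random(n) * 0.5 - group * 0.15

def recall(selected, positives):
    """Fraction of the positive cases that the policy reaches."""
    return selected[positives].mean()

# Accuracy-only policy: intervene on the top 20% of scores overall.
k = int(0.2 * n)
global_sel = np.zeros(n, dtype=bool)
global_sel[np.argsort(score)[-k:]] = True

# Fairness adjustment: give each group a share of the k slots
# proportional to its share of true need, then select the highest
# scorers within each group.
adj_sel = np.zeros(n, dtype=bool)
for g in (0, 1):
    idx = np.where(group == g)[0]
    k_g = int(round(float(k * label[idx].sum() / label.sum())))
    adj_sel[idx[np.argsort(score[idx])[-k_g:]]] = True

for name, sel in (("global", global_sel), ("adjusted", adj_sel)):
    gap = abs(recall(sel, label & (group == 0))
              - recall(sel, label & (group == 1)))
    print(f"{name}: recall gap between groups = {gap:.2f}")
```

On this synthetic data the adjusted policy closes most of the recall gap between the two groups while intervening on the same number of people, mirroring the article's point that fairness corrections need not cost much accuracy.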

Ghani and Rodolfa believe their findings will persuade other researchers and policymakers to reconsider how machine learning is used in decision-making.

"We urge the artificial intelligence, computer science, and machine learning communities to stop assuming that accuracy and fairness are mutually exclusive and instead start creating systems that optimise both," Rodolfa said. "We hope that policymakers will use machine learning as a decision-making tool to help them attain more equitable outcomes."


Reference: TechXplore

