Enhancing Fairness In Supervised Machine Learning



Enhancing Fairness in Supervised Machine Learning



Author: Bita Omidi

language: en

Publisher:

Release Date: 2021







The growing influence of machine learning algorithms and artificial intelligence on high-impact decision-making domains has raised ethical and legal concerns about sensitive data-driven systems. Machine learning identifies statistical patterns in historically collected big data generated by a huge number of instances, data that may carry human and structural biases, and ML algorithms have the potential to amplify these inequities. Recently, several attempts have been made to reduce bias in artificial intelligence in order to maintain fairness in machine learning projects; these methods fall into three categories: pre-processing, in-processing, and post-processing techniques. There are at least 21 notions of fairness in the recent literature, which not only provide different ways of measuring fairness but also lead to entirely different concepts. Notably, it is impossible to satisfy all of these definitions at the same time, and some of them are mutually incompatible. As a result, the fairness definition to be satisfied must be chosen according to the context at hand. The current study investigates some of the most common fairness definitions and metrics introduced by researchers, and compares three proposed de-biasing techniques with respect to their effects on performance and fairness measures through empirical experiments on four different datasets. The de-biasing methods are the "Reweighing Algorithm", the "Adversarial De-biasing Method", and the "Reject Option Classification Method", applied to the classification tasks "Survival of patients with heart failure" (Heart Failure Dataset), "Prediction of hospital readmission among diabetes patients" (Diabetes Dataset), "Credit classification of bank account holders" (German Credit Dataset), and "COVID-19-related anxiety level classification of Canadians" (CAMH Dataset).
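The abstract notes that different fairness definitions imply different metrics. As an illustration only (not taken from the book), two widely used group-fairness metrics, statistical parity difference and equal opportunity difference, can be sketched as follows; the function names and toy data are hypothetical:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(yhat=1 | unprivileged) - P(yhat=1 | privileged); 0 means parity."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between groups (recall on y=1)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Toy data: group 1 plays the role of the privileged group.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(y_pred, group))            # 0.5 - 0.75 = -0.25
print(equal_opportunity_difference(y_true, y_pred, group))     # 2/3 - 1
```

Because the two metrics condition on different events (none vs. the true label), a classifier can satisfy one while violating the other, which is the incompatibility the abstract refers to.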
Findings show that the adversarial de-biasing in-processing method can be the best technique for mitigating bias with deep learning classifiers when the classification process itself can be modified; it did not lead to a considerable reduction in accuracy except on the CAMH dataset. The "Reject Option Classification" post-processing method causes the largest deterioration in prediction accuracy on all datasets; on the other hand, it performs best at alleviating the bias generated by the classification process. The "Reweighing Algorithm", a pre-processing technique, does not cause a considerable reduction in accuracy and is capable of reducing bias in classification tasks, although its performance is not as strong as that of the Reject Option classifier.
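The "Reweighing Algorithm" mentioned above assigns each (group, label) cell the weight w = P(group) · P(label) / P(group, label), so that the sensitive attribute and the label are statistically independent under the reweighted distribution. A minimal sketch of that weighting rule, with illustrative variable names and toy data not drawn from the thesis:

```python
import numpy as np

def reweighing_weights(y, group):
    """For each (group, label) cell, weight = P(group) * P(label) / P(group, label).
    Under these weights, the weighted positive rate is equal across groups."""
    y, group = np.asarray(y), np.asarray(group)
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for lbl in np.unique(y):
            cell = (group == g) & (y == lbl)
            p_cell = cell.mean()          # empirical P(group=g, label=lbl)
            if p_cell > 0:
                w[cell] = (group == g).mean() * (y == lbl).mean() / p_cell
    return w

# Toy data: the privileged group (1) receives positive labels more often.
y     = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0])
w = reweighing_weights(y, group)
```

Passing such weights as per-sample weights to any classifier that supports them is what makes this a pre-processing method: the model and training loop stay unchanged, which is consistent with the abstract's finding that it costs little accuracy.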

Enhancing fairness in machine learning through reweighting



Author: Xuan Zhao

language: de

Publisher:

Release Date: 2024







Mitigating Bias in Machine Learning



Author: Carlotta A. Berry

language: en

Publisher: McGraw Hill Professional

Release Date: 2024-10-18







This practical guide shows, step by step, how to use machine learning to carry out actionable decisions that do not discriminate based on numerous human factors, including ethnicity and gender. The authors examine the many kinds of bias that occur in the field today and provide mitigation strategies that are ready to deploy across a wide range of technologies, applications, and industries. Edited by engineering and computing experts, Mitigating Bias in Machine Learning includes contributions from recognized scholars and professionals working across different artificial intelligence sectors. Each chapter addresses a different topic, and real-world case studies featured throughout highlight discriminatory machine learning practices and clearly show how they were reduced.

Mitigating Bias in Machine Learning addresses:
- Ethical and Societal Implications of Machine Learning
- Social Media and Health Information Dissemination
- Comparative Case Study of Fairness Toolkits
- Bias Mitigation in Hate Speech Detection
- Unintended Systematic Biases in Natural Language Processing
- Combating Bias in Large Language Models
- Recognizing Bias in Medical Machine Learning and AI Models
- Machine Learning Bias in Healthcare
- Achieving Systemic Equity in Socioecological Systems
- Community Engagement for Machine Learning