
bias-in-ai

WebUI

Mirrored at: https://github.com/aimms/bias-in-ai

How-to: https://how-to.aimms.com/Articles/623/623-bias-in-ai.html

Story

At the end of 2017, the Civil Comments platform (https://medium.com/@aja_15265/saying-goodbye-to-civil-comments-41859d3a2b1d) shut down and released its ~2 million public comments in a lasting open archive. Jigsaw sponsored this effort and helped annotate the data comprehensively. In 2019, Kaggle hosted the Jigsaw Unintended Bias in Toxicity Classification competition (https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/overview) so that data scientists worldwide could work together to investigate ways to mitigate bias.

This project loads a subset of the competition data and connects an AIMMS front-end to a Python model. After you type a query, the code runs for approximately 30 seconds; when it finishes, the output is a message saying whether the query you typed is "toxic" or "not toxic".
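As a rough illustration of that query-to-verdict flow, the sketch below trains a simple text classifier on the competition's train.csv, which documents a "comment_text" column and a "target" column (the fraction of annotators who marked a comment toxic, with values >= 0.5 counted as toxic). This is only a minimal stand-in, not the project's actual model; the file path, sample size, and TF-IDF + logistic regression setup are assumptions for the example.

```python
# Minimal sketch of a toxicity classifier over the Jigsaw competition data.
# Assumptions: "train.csv" is the competition training file, and TF-IDF +
# logistic regression stands in for whatever model the project really uses.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Load a slice of the data to keep the example fast.
df = pd.read_csv("train.csv", usecols=["comment_text", "target"]).head(100_000)
# The competition treats target >= 0.5 as toxic.
labels = (df["target"] >= 0.5).astype(int)

# Turn comments into TF-IDF features and fit a linear classifier.
vectorizer = TfidfVectorizer(max_features=50_000, stop_words="english")
features = vectorizer.fit_transform(df["comment_text"])
model = LogisticRegression(max_iter=1000)
model.fit(features, labels)

def classify(query: str) -> str:
    """Return the verdict the front-end would display for a typed query."""
    prediction = model.predict(vectorizer.transform([query]))[0]
    return "toxic" if prediction == 1 else "not toxic"

print(classify("Have a wonderful day!"))  # expected: not toxic
```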

About

Illustrates machine learning bias through an AIMMS front-end for a Python application that teaches about ethics-related bias, using data from Kaggle.
