Fighting Fraud Fairly: Upgrade Your AI Toolkit
A practical approach to address bias in AI systems
As sophisticated AI systems are increasingly used in decision-making, ensuring fairness has become a priority, with a growing need to prevent algorithms from disproportionately affecting vulnerable groups in sensitive areas such as the justice and education systems.
One famous example is the COMPAS algorithm, which was designed to assist the U.S. criminal justice system in making the judicial process less biased. However, evidence suggests that the algorithm unfairly predicted a higher risk of recidivism for Black defendants, as detailed in a 2019 MIT Technology Review article [1].
In the education system, we are also starting to see how AI detectors falsely accuse students of cheating, with big consequences [2]. For example, tools like GPTZero, Copyleaks, and Turnitin's AI-powered plagiarism detectors have been shown to unfairly target English as a Second Language (ESL) students and other vulnerable student populations (e.g., neurodivergent students). Although designed to protect academic integrity, these detectors are more likely to flag non-native English speakers for plagiarism because they tend to overfit to linguistic patterns typical of ESL writing. As a non-native writer myself, I suspect this very article would fail to pass any of these AI-powered plagiarism detectors.