Machine learning removes bias from algorithms and the hiring process
Arena | November 09, 2020
At the premier Machine Learning Conference (MLConf) on November 6, Arena Analytics' Chief Data Scientist Patrick Hagerty will unveil a cutting-edge technique that removes 92%-99% of latent bias from algorithmic models. If undetected and unchecked, algorithms can learn, automate, and scale existing human and systemic biases. These models then perpetuate discrimination as they guide decision-makers in selecting people for loans, jobs, criminal investigations, healthcare services, and much more.

To date, the primary methods of reducing the impact of bias on models have been limited to adjusting input data or adjusting models after the fact to ensure there is no disparate impact. Recent reporting from the Wall Street Journal described these as the most recent advances, concluding, "It's really up to the software engineers and leaders of the company to figure out how to fix it… [or] go into the algorithm and tweak some of the main factors it considers in making its decisions."

For several years, Arena Analytics was also limited to these approaches, but that changed nine months ago. Up until then, Arena removed all data from the models that could correlate to protected classifications and then measured demographic parity.
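For readers unfamiliar with the demographic-parity check mentioned above, the sketch below shows one common way to measure it: compare a model's selection rates across groups. This is a minimal illustration with hypothetical data and function names, not Arena's implementation.

```python
# Minimal sketch of a demographic-parity check (illustrative only,
# not Arena's code): compare selection rates across groups.
import pandas as pd

def demographic_parity_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return min(selection rate) / max(selection rate) across groups.

    A ratio of 1.0 means perfect demographic parity; ratios below 0.8
    are often flagged under the "four-fifths rule" used in U.S.
    employment-discrimination analysis.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical hiring data: 'group' stands in for a protected
# classification, 'hired' is the model's binary recommendation.
candidates = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "hired": [1, 0, 1, 1, 0, 0],
})

# Group A is selected at 2/3, group B at 1/3, so the ratio is 0.5,
# which would indicate a disparate-impact concern.
print(demographic_parity_ratio(candidates, "group", "hired"))
```

A check like this operates on model outputs after the fact, which is exactly why the article characterizes it, along with removing correlated input features, as a limited approach: it detects disparate impact but does not by itself remove the bias the model has learned.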