AI Should be Reducing Bias, Not Introducing it in Recruiting

It’s easy to celebrate the accelerating ability of AI and machine learning to solve problems. It can be more difficult, however, to admit that this technology might be causing them in the first place.

Tech companies that have implemented algorithms meant to be an objective, bias-free way to recruit more female talent have learned this the hard way. [And yet, promising a “bias-free” tool for “recruiting more female talent” in the same breath is itself not exactly bias-free.]

Amazon became perhaps the loudest example when it was revealed that the company’s AI-driven recruiting tool was not sorting candidates for developer and other technical positions in a gender-neutral way. While the company has since abandoned the technology, that hasn’t stopped other tech giants like LinkedIn and Goldman Sachs from tinkering with AI as a way to better vet candidates.

It’s no surprise that Big Tech is looking for a silver bullet to make good on its commitment to diversity and inclusion; so far, its efforts have been ineffective. Statistics reveal that women hold only 25 percent of all computing jobs and that the quit rate for women is twice as high as it is for men. At the educational level, women also fall behind their male counterparts: only 18 percent of American computer science degrees go to women.

But leaning on AI technology to close the gender gap is misguided. The problem is very much human.

Machines are fed massive amounts of data and are instructed to identify and analyze patterns. In an ideal world, these patterns produce an output of the very best candidates, regardless of gender, race, age or any other identifying factor aside from the ability to meet job requirements. But AI systems do precisely what they are trained to do, and they are usually trained on real-life historical data; when they begin to make decisions, the prejudices and stereotypes already present in that data become amplified.
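
To make that mechanism concrete, here is a minimal, hypothetical sketch in Python with scikit-learn, using entirely synthetic data. A model trained on historically biased hiring decisions learns to penalize a gender-correlated proxy feature, even though gender itself is never given to it as an input; nothing in the pipeline is malicious, and the bias arrives entirely through the labels.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    skill = rng.normal(size=n)                      # the only thing that should matter
    gender = rng.integers(0, 2, size=n)             # 1 = female (synthetic attribute)
    proxy = gender + rng.normal(scale=0.3, size=n)  # e.g., gendered wording on a resume

    # Historical decisions: driven by skill, but with a penalty applied to one
    # group, the kind of biased past outcome that becomes the training label.
    hired = (skill - 0.8 * gender + rng.normal(scale=0.5, size=n)) > 0

    # Train only on "neutral-looking" features; gender itself is excluded.
    X = np.column_stack([skill, proxy])
    model = LogisticRegression().fit(X, hired)

    # The proxy picks up a strongly negative weight: the model has learned the
    # historical penalty even though gender was never a feature.
    print(dict(zip(["skill", "proxy"], model.coef_[0].round(2))))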

Thinking outside the (black) box about AI bias.

Not every company that uses algorithmic decision-making in its recruiting efforts is receiving biased outputs. However, every organization that employs this technology needs to be hyper-vigilant about how these systems are trained, and must take proactive measures to ensure bias is identified and reduced, not exacerbated, in hiring decisions.

  • Transparency is key.

    In most cases, machine learning algorithms work in a “black box,” with little to no visibility into what happens between the input and the resulting output. Without in-depth knowledge of how an individual AI system is built, understanding how its algorithm makes decisions is all but impossible.

    If companies want candidates to trust their decision making, they need to be transparent about their AI systems and their inner workings. Companies looking for an example of how this looks in practice can take a page from the U.S. military’s Explainable Artificial Intelligence project.

    The project, an initiative of the Defense Advanced Research Projects Agency (DARPA), seeks to teach continually evolving machine learning programs to explain and justify their decision making so that it can be easily understood by the end user, thus building trust and increasing transparency in the technology. A minimal sketch of one everyday transparency practice appears after this list.

  • Algorithms should be continuously re-examined.

    AI and machine learning are not tools you can “set and forget.” Companies need to implement regular audits of these systems and the data they are fed in order to mitigate the effects of inherent or unconscious biases. These audits should also incorporate feedback from a user group with diverse backgrounds and perspectives to counter potential biases in the data. One simple audit check is sketched after this list.

Companies should also consider being open about the results of these audits. Audit findings are not only critical to their understanding of AI, but can also be valuable to the broader tech community.

By sharing what they have learned, the AI and machine learning communities can contribute to broader data science initiatives, such as open source tools for bias testing. Companies that are leveraging AI and machine learning ultimately benefit from contributing to such efforts, as larger and better data sets will inevitably lead to better and fairer AI decision making.

  • Let AI influence decisions, not make them.

    Ultimately, AI outputs are predictions based on the best available data, and they should be only one part of the decision-making process. A company would be foolish to assume an algorithm produces its output with total confidence, and the results should never be treated as absolutes. One such human-in-the-loop routing pattern is sketched after this list.

    This should be made abundantly clear to candidates. They should feel confident that AI is helping them in the recruiting process, not hurting them.
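
As promised above, here is a minimal sketch of one transparency practice: reporting which inputs actually drive a screening model’s scores. The feature names and data below are hypothetical, and permutation importance, a standard model-agnostic probe available in scikit-learn, stands in for whatever explainability tooling a given system actually uses.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    feature_names = ["years_experience", "skills_match", "wording_style"]  # hypothetical
    X = rng.normal(size=(2_000, 3))
    y = (X[:, 0] + X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=2_000)) > 0

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    # Publishing a per-feature breakdown like this is one concrete way for a
    # company to open the black box to candidates and auditors.
    for name, importance in zip(feature_names, result.importances_mean):
        print(f"{name}: {importance:.3f}")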
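
For the audits described above, one simple check is to compare selection rates across groups. The sketch below computes the “four-fifths” adverse-impact ratio drawn from US employment guidelines; the data and threshold are illustrative only, not a substitute for a real audit.

    import numpy as np

    def adverse_impact_ratio(selected: np.ndarray, group: np.ndarray) -> float:
        """Lowest group selection rate divided by the highest."""
        rates = [selected[group == g].mean() for g in np.unique(group)]
        return min(rates) / max(rates)

    # Hypothetical audit snapshot: 1 = advanced past the screening model.
    selected = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 0])
    group = np.array(["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"])

    ratio = adverse_impact_ratio(selected, group)
    print(f"adverse impact ratio: {ratio:.2f}")  # 0.50 in this toy sample
    if ratio < 0.8:  # the common four-fifths rule of thumb
        print("flag for human review: selection rates diverge across groups")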
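
Finally, a minimal sketch of letting AI influence decisions rather than make them: the model’s score only prioritizes the order of review, and no candidate is rejected without a human looking. The threshold and names here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        model_score: float  # the screening model's calibrated probability

    def route(candidate: Candidate) -> str:
        # The score routes candidates between review queues; the model never
        # issues a final "no," and every candidate still reaches a human.
        if candidate.model_score >= 0.75:  # hypothetical threshold
            return "priority human review"
        return "standard human review"

    print(route(Candidate("A. Candidate", 0.91)))  # priority human review
    print(route(Candidate("B. Candidate", 0.42)))  # standard human review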

AI and machine learning tools are advancing at a rapid clip. But for the foreseeable future, humans are still required to help them learn.

Companies currently using AI algorithms to reduce bias, or those considering using them in the future, need to think critically about how these tools will be implemented and maintained. Biased data will always produce biased results, no matter how intelligent the system may be.

Technology should only be seen as part of the solution, especially for problems as important as addressing tech’s diversity gap. An evolved AI solution may one day be able to sort candidates confidently and without any sort of bias. Until then, the best solution to the problem is looking inward.

Lin Classon

Director of Public Cloud Product Strategy at Ensono

Lin Classon is the director of public cloud product strategy at Ensono. Passionate about the opportunity for innovation in the public cloud, Lin is responsible for leading purpose-driven, evidence-based product strategy and ensuring a world-class public cloud solution for clients. Prior to joining Ensono, Lin led global product marketing for Google travel products, supporting global product strategy and driving partner growth and user adoption. Lin began her career as a management consultant at McKinsey and Company after receiving an Interdisciplinary Ph.D. from Northwestern University.
