Employers large and small are increasingly turning to AI systems to support talent acquisition. In a 2020 report, Sage (the UK software company – not the emergency scientific advisory group – not everything is related to Covid-19!) noted that 24 percent of companies currently use AI for recruiting, and that figure is likely to more than double over the next 12 months, as 56 percent plan to adopt it within the coming year. The Covid-19 pandemic (OK, most things are related to Covid-19 right now…) only seems to be speeding up that process, as companies accelerate digital transformations, lockdown rules require candidates to interview remotely, and more people lose their jobs, which means more applicants competing for a limited number of vacancies.

AI in recruiting is not new. Back in 2014, Amazon began using a resume-screening algorithm to identify top talent (it later stopped using that particular AI system when it turned out to be biased against female applicants). Today, dedicated companies offer a variety of bespoke video-interviewing software solutions and platforms that use AI to help select the “best” candidates. This brief episode of Moving Upstream from the Wall Street Journal provides a great look at how one company, HireVue, works. Candidates are interviewed by an AI system, and their behavior – tone of voice, phrasing and micro-expressions (smiles, frowns, etc.) – is assessed over the course of the interview and then scored against a list of desired attributes.
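For a sense of the mechanics, here is a minimal, purely illustrative sketch of how such a pipeline might turn extracted interview features into a single score against a list of desired attributes. Every feature name, weight and formula below is a hypothetical stand-in; HireVue's actual models are proprietary and far more complex.

```python
# Hypothetical sketch of attribute scoring in an AI video interview.
# All names, weights and the formula are invented for illustration;
# this is NOT HireVue's actual method.

# Features an interview pipeline might extract per candidate,
# each normalized to a 0-1 scale by some upstream model.
candidate_features = {
    "speech_pace":      0.72,  # tone-of-voice signal
    "positive_wording": 0.64,  # phrasing signal
    "smile_frequency":  0.55,  # micro-expression signal
}

# The employer's "desired attribute" profile: a weight per feature.
desired_profile = {
    "speech_pace":      0.3,
    "positive_wording": 0.5,
    "smile_frequency":  0.2,
}

def score(features: dict[str, float], profile: dict[str, float]) -> float:
    """Weighted sum of features: higher means closer to the desired profile."""
    return sum(profile[name] * value for name, value in features.items())

print(f"candidate score: {score(candidate_features, desired_profile):.2f}")
# -> candidate score: 0.65
```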

How to do it right

With AI being used to make these life-changing decisions, there is a significant risk that algorithms will exacerbate issues of fairness and inequality. The UK Information Commissioner’s Office has explored the use of algorithms and automated decision-making, and the risks and opportunities they pose in the employment context. It highlighted six key points to consider when using AI in hiring. We summarize each point below and extract the main ICO recommendations.

1. Bias and discrimination are a problem in human decision-making, so they are a problem in AI decision-making. An AI model is only as good as the data it is fed – as an early IBM programmer put it succinctly: “garbage in, garbage out”. Programmers need to be aware of the biases that may be reflected in past data, as using that data to train an AI system will ultimately project that past injustice into the future. According to the ICO, AI is not currently at a stage where it can effectively predict social outcomes or eliminate discrimination in datasets or decisions. ICO recommendation: Employers should assess whether AI is a necessary and appropriate solution to the problem. (The UK Centre for Data Ethics and Innovation (CDEI) has published a report reviewing bias in algorithmic decision-making. A summary of the CDEI’s recommendations can be found on our previous blog.)
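“Garbage in, garbage out” is easy to demonstrate. In the toy sketch below (entirely synthetic data and hypothetical field names), a trivial model “trained” on historical hiring decisions that favoured one group faithfully reproduces that bias for new applicants:

```python
# Toy demonstration of "garbage in, garbage out": a trivial model
# learns its hiring rule from biased historical decisions.
# All data is synthetic and the field names are hypothetical.

# Historical records: (attended_university_x, was_hired).
# Suppose enrolment at university_x itself correlates with gender,
# so past hiring that favoured it encodes a gender bias.
history = [
    (True, True), (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True), (False, False),
]

def train(records):
    """Learn P(hired | attended_university_x) from past decisions."""
    rate = {}
    for group in (True, False):
        outcomes = [hired for attended, hired in records if attended == group]
        rate[group] = sum(outcomes) / len(outcomes)
    return rate

model = train(history)
print(model)  # {True: 0.75, False: 0.25}

# A new applicant is scored purely on the feature the biased history
# rewarded: the past injustice is projected into the future.
new_applicant_attended_x = False
print(f"predicted hire probability: {model[new_applicant_attended_x]:.2f}")
```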

2. It is difficult to build fairness into an algorithm. Every AI system must comply with the data protection principle of fairness, yet UK law does not define what fairness means. ICO recommendation: At the start of the AI life cycle, determine and document in your data protection impact assessment (DPIA) how you will adequately mitigate bias and discrimination in your data. Then apply appropriate safeguards and technical measures during the design and build phase. For international employers: consider whether an algorithm trained to meet the fairness requirements of one jurisdiction meets the standards of another.
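As one concrete example of a technical measure you might document in a DPIA, the sketch below computes an adverse-impact ratio: the ratio of selection rates between two groups. In US practice this is often assessed against the “four-fifths” rule, while UK and EU standards differ, which is exactly the cross-jurisdiction point above. The data and the 0.8 threshold are illustrative assumptions, not a standard the ICO prescribes.

```python
# Illustrative adverse-impact check: compare selection rates across
# groups in the model's shortlisting decisions. The 0.8 threshold
# mirrors the US "four-fifths" rule and is an assumption here;
# thresholds and tests differ by jurisdiction.

def selection_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher one (<= 1.0)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical shortlisting outcomes by group.
group_a = [True, True, True, False, True]    # 80% selected
group_b = [True, False, False, False, True]  # 40% selected

ratio = adverse_impact_ratio(group_a, group_b)
print(f"adverse-impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("Potential adverse impact: investigate and document in the DPIA.")
```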

3. Advances in big data and machine-learning algorithms make bias and discrimination harder to detect. Machine-learning AI systems that use big data can find patterns that are non-intuitive and difficult to spot. Some of these patterns can produce correlations that discriminate against groups of people. ICO recommendation: Monitor developments, and invest time and resources to ensure that you continue to follow best practice in this area and that your employees remain appropriately trained.
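Non-intuitive proxies can sometimes be surfaced before training by checking how strongly each input feature correlates with a protected attribute. The sketch below uses a simple Pearson correlation on synthetic data with hypothetical feature names; real audits would use richer statistical tests, but the idea is the same:

```python
# Sketch of a proxy-feature audit: flag inputs that correlate with a
# protected attribute, since a model could use them as a stand-in for
# it. Synthetic data; the feature names are hypothetical.
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 1 = protected-group member, 0 = not (illustrative only).
protected = [1, 1, 1, 0, 0, 0, 1, 0]

features = {
    "years_experience": [3, 7, 4, 6, 2, 5, 6, 4],
    "postcode_band":    [1, 1, 2, 5, 6, 5, 1, 6],  # a plausible proxy
}

for name, values in features.items():
    r = pearson(values, protected)
    flag = "  <-- possible proxy, review before training" if abs(r) > 0.5 else ""
    print(f"{name}: r = {r:+.2f}{flag}")
# -> years_experience: r = +0.24
# -> postcode_band: r = -0.98  <-- possible proxy, review before training
```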

4. When developing AI systems, you must take into account both data protection AND equality law. There are a variety of laws that could make AI decision-making unlawful. While there is some overlap between the obligations under the various pieces of legislation, there are differences. In particular, the Data Protection Act prescribes various measures that every employer must take to combat unfair discrimination, including applying suitable technical and organizational measures to prevent discrimination when processing personal data for profiling and automated decision-making. ICO recommendation: Organizations must examine their obligations under each area of law separately. Compliance with one does not guarantee compliance with the other.

5. Relying solely on automated decisions to hire private-sector employees is likely unlawful under the GDPR. The GDPR prohibits solely automated decisions that have legal or similarly significant effects, and a hiring decision clearly qualifies. There are exceptions, but they are unlikely to apply to private-sector recruitment. ICO recommendation: AI in recruiting is better used as a complementary tool to improve human decisions, not to make decisions on its own.

6. Algorithms and automation can also be used to address the problems of bias and discrimination. With AI-led video interviews, decision-makers can give every candidate the same interview with the same questions, which helps to reduce bias. In addition, algorithms can be developed in-house to detect bias and discrimination in the early stages of the system life cycle. ICO recommendation: While we may never be able to dispel deeply ingrained human prejudice entirely, automation can improve our decision-making.
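One way to operationalise that in-house detection is to run a fairness check automatically at every stage of the life cycle, for example as a test that fails a build or release candidate when the ratio drops below a chosen threshold. A minimal sketch, repeating the hypothetical adverse_impact_ratio helper from the earlier example so it stands alone:

```python
# Sketch: wiring the bias check into automated tests so it runs at
# every stage of the system life cycle. adverse_impact_ratio is the
# hypothetical helper from the earlier sketch, repeated here so the
# test is self-contained; the 0.8 threshold is an illustrative
# assumption, not a legal standard.

def adverse_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    rate_a = sum(group_a) / len(group_a)
    rate_b = sum(group_b) / len(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

FAIRNESS_THRESHOLD = 0.8

def test_shortlisting_has_no_adverse_impact():
    # In a real pipeline these would be the model's decisions on a
    # representative evaluation set, split by protected attribute.
    group_a = [True, True, True, False, True]  # 80% selected
    group_b = [False, True, True, True, True]  # 80% selected
    ratio = adverse_impact_ratio(group_a, group_b)
    assert ratio >= FAIRNESS_THRESHOLD, (
        f"Adverse-impact ratio {ratio:.2f} is below {FAIRNESS_THRESHOLD}; "
        "review the model before release."
    )
```

Run under pytest or any similar test runner, this turns the fairness requirement into a standing regression check rather than a one-off review.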