AI in recruiting: Let's try it out!
For years, public debate about algorithms has centred on how much responsibility we can hand over to them. HR is no exception: the opportunities of new technologies are discussed and, rightly, viewed critically. Yet these debates often lack practical grounding, leaving uncertainty both about how algorithms actually work and about how they can actually help solve a problem.
In recruiting, the question should no longer be whether algorithms should help, but whether they can solve specific problems
How automating the pre-selection of candidates can be useful has been answered in many ways in recent years, and is familiar to most decision-makers in human resources: the process becomes significantly faster if, for example, CVs can be evaluated automatically and brought into a comparable, pre-structured form. However, AI should not merely replicate existing processes automatically; it should also aim to improve them. It is here that the digitisation of the pre-selection process offers an opportunity to reduce discrimination in recruiting.
However, the question of what exactly such algorithms should look like has not been clearly answered. The capabilities and impact of computer-aided selection processes have not yet been fully investigated, either in research or in companies. While some CV information, such as academic qualifications, can already be compared scientifically and fairly, there is still a risk that fully automated selection algorithms will perpetuate existing discrimination and thus contribute to the problem. Our cooperation partner in the FAIR project, Prof. Pia Pinger from the University of Cologne, also states: "The state of research in this regard must be expanded in order to develop ethically justifiable algorithms and to use validation studies to present the effectiveness of the technology transparently." Only then can companies be expected to dedicate themselves to implementing such technologies. FAIR, a joint project of the HR tech company candidate select and the University of Cologne, has set itself this task and is researching methods to measure the degree of discrimination in an algorithm.
What exactly should such an algorithm look like?
Algorithms used to preselect candidates must be able to analyse information such as education, work experience, social engagement and special skills, and make valid predictions without discriminating against certain groups. The danger is therefore not only that algorithms discriminate, but also that they make automated, yet still bad, decisions.
An example: academic qualifications are typically evaluated based primarily on the degree achieved and the final grade. In its 2012 annual report, the German Science Council noted that comparisons at this level do not provide sufficient information for a meaningful prediction (page 9, paragraph 1, Drs. 2627-12, Hamburg, November 9, 2012). Grades are only meaningful in their precise context, which is why degrees should be considered at the level of the individual school or programme. Fundamentally, a model based on contextual information must reduce complexity so that less complex factors can be optimised using training data sets. Or, in the words of my co-founder Dr. Jan Bergerhoff: "In Germany there are over 5,000 different schools where the Abitur can be obtained. Since centrally set final exams make up only a small part of the final Abitur grade, each of these schools in practice has a different grading standard. There are also deviations of more than one grade step in the average grade within the individual federal states of Germany. In addition, the performance density differs between schools. Given the limited training data and the high complexity of the educational system, “classic” machine learning applications are simply not able to make useful predictions - regardless of whether such algorithms discriminate against individual groups."
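The idea of reading a grade "in context" can be illustrated with a minimal sketch. The school names, grade distributions and the simple z-score approach below are invented for illustration; they are not the actual method used by CASE or Project FAIR:

```python
# Hypothetical sketch: putting final grades into their school context.
# The schools, grades and z-score approach are illustrative assumptions,
# not the method used by CASE or Project FAIR.
from statistics import mean, stdev

# German Abitur grades: 1.0 (best) to 4.0 (pass).
grades_by_school = {
    "School A": [1.3, 1.7, 2.0, 2.3, 2.7, 3.0],  # stricter grading
    "School B": [1.0, 1.0, 1.3, 1.7, 2.0, 2.3],  # more lenient grading
}

def contextual_score(grade, school):
    """Express a grade relative to its school's own grade distribution."""
    peers = grades_by_school[school]
    # Positive = better than the school average, negative = worse.
    return (mean(peers) - grade) / stdev(peers)

# The same raw grade of 1.7 means different things in different contexts:
print(round(contextual_score(1.7, "School A"), 2))  # above average at A
print(round(contextual_score(1.7, "School B"), 2))  # below average at B
```

A real system would of course need far more data per school and a validated model, but the sketch shows why the raw grade alone is a poor basis for comparison.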
When technology is explained transparently and scientifically validated, scepticism disappears
In order to advance both digitisation and the optimisation of existing processes through new technologies, providers and HR departments must be equally open to them. HR staff should be ready to engage with new technology and with educational discussions, and providers should explain their algorithms more transparently. Above all, even the best expert cannot audit an algorithm without data; HR and other departments should therefore be ready to collect and analyse data responsibly. Data-driven HR work is still far too rare, yet it is the basis for answering whether an algorithm makes valid predictions in a given context and works without discrimination. Larissa Fuchs from the University of Cologne explains how algorithms can be checked for discrimination in the blog post "Fair Play – The Interaction Between Humans and Machines in Recruiting". Incidentally, these analyses are also the prerequisite for the long-term use of such methods. And, of course, the evaluation should continue during live use - this, too, can be automated. All of this is more work than simply forming one's own opinion about the advantages or dangers of AI. But if AI not only maps current processes digitally, but actually improves them through better hires and less discrimination, then it opens up economic potential that justifies the use of AI in recruiting across the board.
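One common way to check a selection process for discrimination, in research and practice alike, is the "four-fifths rule": comparing selection rates across groups and flagging a ratio below 0.8. The sketch below uses invented candidate data and is not the FAIR project's actual methodology, just an illustration of the kind of analysis that requires collected data in the first place:

```python
# Hypothetical sketch of an adverse impact check (the "four-fifths rule").
# The candidate outcomes are invented for illustration; this is not the
# FAIR project's actual methodology.

def selection_rate(outcomes):
    """Share of candidates in a group who passed the pre-selection."""
    return sum(outcomes) / len(outcomes)

# 1 = advanced to interview, 0 = rejected, per demographic group.
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5 of 8 selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 selected
}

rates = {group: selection_rate(o) for group, o in outcomes.items()}
impact_ratio = min(rates.values()) / max(rates.values())

# A ratio below 0.8 is commonly read as evidence of adverse impact.
print(f"adverse impact ratio: {impact_ratio:.2f}")
```

Running the same check continuously on live selection data is exactly the kind of automated, ongoing evaluation described above.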
About the author:
Dr. Philipp Karl Seegers is a labour economist whose work focuses on the transition from education to the labour market. Together with Dr. Jan Bergerhoff and Dr. Max Hoyer, Philipp founded the HR tech company candidate select GmbH (CASE), which uses large data sets and scientific methods to make educational qualifications comparable. Philipp is the project manager of Project FAIR ("Fair Artificial Intelligence Recruiting"), which is funded by the state of North Rhine-Westphalia and the EU. In addition, Philipp is a research fellow at Maastricht University and the initiator of the study series "Fachkraft 2030", actively researching questions in the areas of educational economics, psychological diagnostics and the labour market.