AI robot removes bias from job interviews

A robot called Tengai has been created by Furhat Robotics, an AI company based at Stockholm's KTH Royal Institute of Technology, to conduct job interviews without the unconscious bias that human interviewers often display unwittingly.

Tengai measures 16 inches tall and sits at eye level on a table directly across from the candidate 'she' is interviewing.

Furhat Robotics has spent four years building Tengai. Since October 2018 it has been collaborating with TNG, one of Sweden's largest recruitment firms, to offer candidates job interviews free from the unconscious biases that managers and recruiters often bring to the hiring process, while still making the experience 'seem human'.

The robot conducts the interview as a human recruiter would, using competency-based questions such as “tell me about a work situation when you found it difficult to work with colleagues in a team or project, and why did you find it difficult?”. It gives feedback (nodding, smiling, saying “hmm”) to encourage the candidate to elaborate, and if an answer is too vague it might, for example, ask the candidate to give more concrete examples. After the interview, the robot produces a summary and some objective recommendations, leaving the final decision about the candidate to a human.
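The interview flow described above — ask a fixed competency question, probe when an answer is too vague, then hand a summary to a human — can be sketched as a simple loop. This is only an illustration under assumed names (`needs_followup`, `conduct_interview`, a word-count vagueness heuristic); it is not Tengai's actual implementation.

```python
# Hypothetical sketch of the interview protocol described in the article.
# The vagueness heuristic (answer length) and all names are illustrative.

QUESTIONS = [
    "Tell me about a work situation when you found it difficult "
    "to work with colleagues in a team or project, and why?",
]

def needs_followup(answer: str, min_words: int = 20) -> bool:
    """Treat very short answers as too vague to assess."""
    return len(answer.split()) < min_words

def conduct_interview(answers):
    """Pair each question with its answer; probe vague answers.

    Returns a transcript for a human decision-maker, mirroring the
    article's point that the robot recommends but does not decide.
    """
    transcript = []
    for question, answer in zip(QUESTIONS, answers):
        entry = {"question": question, "answer": answer}
        if needs_followup(answer):
            entry["followup"] = "Could you give a more concrete example?"
        transcript.append(entry)
    return transcript
```

A vague reply such as `conduct_interview(["It was hard."])` yields an entry containing a follow-up probe, while an elaborated answer passes through without one.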

Experienced recruiters, who were also trained in discrimination law, supplied the data used to train Tengai on the different dimensions of unconscious bias.

In a piece on the TNG blog, the developers add: 'Even if we reduce human bias as much as possible in the process, there is also a potential risk of introducing so-called algorithmic bias. For example, there might be a risk that the speech recogniser (which translates speech into words) could perform worse for speakers with a foreign accent or specific gender. This could potentially affect the outcome of the interview.

'To mitigate this, we will perform thorough analyses of how these components perform on the data that we have recorded, to see if certain groups are affected. One should be careful here, though, to not throw out the baby with the bathwater, since it is not certain that a slightly worse performance at an early stage of processing will affect the final outcome, in terms of robot behaviour and analysis of the interviews. In many cases, there might be ways of compensating for these shortcomings.'
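The per-group analysis the developers describe amounts to measuring how well each component performs for each speaker group and flagging large gaps. A common metric for speech recognisers is word error rate (WER). The sketch below is a hedged illustration: the group labels and data are invented, and nothing here is from the Tengai project itself.

```python
# Illustrative per-group bias check: compute word error rate (WER)
# for each speaker group and compare averages. Large gaps between
# groups would flag potential algorithmic bias. Data is hypothetical.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance, normalised by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

def wer_by_group(samples):
    """Average WER per speaker group from (group, reference, hypothesis) triples."""
    totals = {}
    for group, ref, hyp in samples:
        totals.setdefault(group, []).append(word_error_rate(ref, hyp))
    return {group: sum(vals) / len(vals) for group, vals in totals.items()}
```

As the blog notes, a somewhat higher WER for one group does not automatically skew the final recommendation; the point of computing these figures is to decide whether downstream compensation is needed.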