It’s been predicted for a long time, but the adoption of Artificial Intelligence has really come into its own in recent months, particularly in response to Covid-19. The job market was hit hard but has bounced back quickly, and recruiters are hard-pressed to fill vast numbers of roles as quickly as needed. Major companies have been using automation to recruit for a while, but that technology is fast filtering down to anyone who has a role to fill.

Each year, Vodafone receives more than 10,000 graduate applications for just 1,000 roles. With such a high volume of CVs to read through, it’s understandable that recruiters simply don’t have time to look at them all. AI offers a solution to this problem, and Vodafone recently began working with HireVue. The model works by extracting tens of thousands of data points from video interviews. Facial recognition can analyse key factors such as eye contact, body language and tone to suggest the best candidates. Applicants are sorted into three categories – highly recommended, recommended and not suitable – and Vodafone confirmed that the AI system correlated strongly with its human assessors, scoring around 70% accuracy in the highly recommended category.


AI will alter the role of recruiters, allowing recruitment companies to make their hiring processes more proactive. AI-powered software can recommend candidates who haven’t yet approached a company for a role. By analysing the full online presence of potential applicants, platforms like Arya (used by Dyson and Home Depot) can predict not just who has the necessary skills for the job, but who might be ready to transition to a new role and who is likely to be a good fit. Facial recognition, although often hotly contested, would also be a quick and simple way of verifying that the correct candidate is taking an online exam or aptitude test. Even in moderated tests, what is to stop a candidate from having someone else sit the exam for them?

AI, however, has famously not developed without flaws. AI tools are a reflection of the people who build and train them.

Built-in bias is a huge issue with AI, and one that data analysts spend thousands of hours trying to counteract. In 2020 there was uproar when the British Government attempted to predict the grades of school leavers who had been unable to take their exams due to the pandemic. The system got it badly wrong – not because the software itself was flawed, but because the data it was trained on was full of human bias. The software had inadvertently been taught that students from deprived areas were less likely to receive top grades, and so the algorithm marked many exceptional students down (sometimes by several grades) based on their postcode rather than their aptitude.
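The mechanism is easy to see in miniature. A toy sketch (with entirely hypothetical grade data, not the actual government algorithm) shows how a model that predicts from historical averages per postcode simply reproduces whatever disparity the history contains, regardless of individual aptitude:

```python
from statistics import mean

# Hypothetical historical grades per postcode area, for illustration only.
history = {
    "AFFLUENT_1": [72, 75, 78, 70],
    "DEPRIVED_1": [48, 52, 45, 50],
}

def predict(postcode):
    """Predict a student's grade as the historical mean for their postcode."""
    return mean(history[postcode])

# An exceptional student from DEPRIVED_1 is dragged down to the area's
# average, while an average student from AFFLUENT_1 is pulled up.
print(predict("DEPRIVED_1"))   # 48.75
print(predict("AFFLUENT_1"))   # 73.75
```

The model never sees the student at all – only the postcode – so no amount of individual excellence can overcome the historical pattern.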

It might be dangerous to rely too heavily on tools such as AI analysis of video interviews. While they may seem a great way of giving candidates the opportunity to prove themselves, software will always lack the human touch. Candidates who are nervous, have speech impediments, have a disability that affects their face or shoulders, or even just have a poor internet connection could all be penalised by the software.

AI software that trawls social media looking for prospective candidates will naturally be biased against those who choose to limit their social media output. Any technology that chooses candidates based on an existing office culture runs the risk of generating a progressively smaller group of ‘intellectual clones’: people from similar backgrounds who are likely to think and act in similar ways. One CV-scanning tool accidentally learnt that being called Jared and having played lacrosse in high school were top predictors of job performance.

Similarly, Reuters reported that a recruitment algorithm used by Amazon was unintentionally favouring male applicants over female ones. The system had been trained on years of data that unfortunately consisted mostly of applications from men. The training data was flawed, and so the output was biased.

AI could also pose a problem for ethnic minorities. In the UK it has been well documented that job applications submitted under traditional Asian or African names are accepted at a lower rate than those under traditional ‘White British’ names. Facial recognition is equally criticised for bias against People of Colour, frequently misidentifying and confusing Black and Asian faces at a much higher rate than Caucasian faces. Given these issues, would an ethnic minority applicant trust a hiring process that utilises artificial intelligence?


Ultimately, the software is only as smart as the rules you set and the processes you then follow. Somen Mondal is the co-founder and CEO of Ideal, an AI-enabled recruitment platform used to screen around five million candidates a month. Mondal argues that extensive auditing is the only way to combat bias. One example he gives: although you can teach a system not to discriminate against women, it might still inadvertently learn bias against other factors associated with women – for example, having attended an all-girls’ school. Regular sweeps of the system can detect emerging biases, which can then be corrected by a human.
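One common form such a sweep takes is an adverse-impact check: comparing selection rates across groups and flagging any group whose rate falls well below the best-performing group’s (the informal ‘four-fifths rule’ used in US employment practice). The sketch below is a minimal, hypothetical version of such an audit – the group labels and data are illustrative, not drawn from Ideal or any real system:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool) pairs -> selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the 'four-fifths rule' of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Illustrative data: group A selected at 50%, group B at 30%.
sample = ([("A", True)] * 50 + [("A", False)] * 50
          + [("B", True)] * 30 + [("B", False)] * 70)
print(adverse_impact(sample))  # group B flagged with an impact ratio of 0.6
```

A check like this says nothing about *why* a disparity exists – that still requires a human to investigate – but run regularly it surfaces exactly the kind of learned proxy bias Mondal describes.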

Hiring and retaining excellent staff is the holy grail for all profitable companies, and AI offers a drastic change in the way recruiters can connect and engage with top candidates. It is not a silver bullet, but by improving both the speed and quality of hiring, companies and employees alike will benefit from promptly filled and well-matched roles.

AI has many uses in our modern world, and we will all interact differently with a plethora of AI-enabled services in our day-to-day lives. Perhaps in recruitment we will all benefit in the same way.



BSI is a specialist solution supplier of hardware and expertise across a range of sectors. We are the leading provider of AI solutions.

Get in touch to discover how we could optimise your business with AI.



To learn more...

Our AI technology solutions can be viewed here and our AI inception programme here.