
Promise and Perils of Using AI for Hiring: Guard Against Data Bias

By AI Trends Staff

While AI in hiring is now widely used for writing job descriptions, screening candidates, and automating interviews, it poses a risk of widespread discrimination if not implemented carefully.

Keith Sonderling, Commissioner, US Equal Employment Opportunity Commission

That was the message from Keith Sonderling, Commissioner of the US Equal Employment Opportunity Commission (EEOC), speaking at the AI World Government event held live and virtually in Alexandria, Va., last week. Sonderling is responsible for enforcing federal laws that prohibit discrimination against job applicants because of race, color, religion, sex, national origin, age, or disability.

"The thought that AI would become mainstream in HR departments was closer to science fiction two years ago, but the pandemic has accelerated the rate at which AI is being used by employers," he said. "Virtual recruiting is now here to stay."

It's a busy time for HR professionals. "The great resignation is leading to the great rehiring, and AI will play a role in that like we have not seen before," Sonderling said.

AI has been employed for years in hiring ("It did not happen overnight," he noted) for tasks including chatting with applicants, predicting whether a candidate would take the job, projecting what type of employee they would be, and mapping out upskilling and reskilling opportunities. "In short, AI is now making all the decisions once made by HR personnel," a development he did not characterize as good or bad.

"Carefully designed and properly used, AI has the potential to make the workplace more fair," Sonderling said. "But carelessly implemented, AI could discriminate on a scale we have never seen before by an HR professional."

Training Datasets for AI Models Used for Hiring Need to Reflect Diversity

This is because AI models rely on training data. If the company's current workforce is used as the basis for training, "it will replicate the status quo. If it's one gender or one race primarily, it will replicate that," he said. Conversely, AI can help mitigate risks of hiring bias by race, ethnic background, or disability status. "I want to see AI improve on workplace discrimination," he said.
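The kind of disparity Sonderling describes is conventionally screened with the EEOC's four-fifths (80%) rule: if a protected group's selection rate falls below 80% of the most-favored group's rate, the process may show adverse impact. A minimal sketch, with hypothetical group labels and counts chosen only for illustration:

```python
# Four-fifths (80%) rule from the EEOC Uniform Guidelines: flag adverse
# impact when a group's selection rate is below 80% of the most-favored
# group's rate. All counts below are hypothetical.

def selection_rates(outcomes):
    """outcomes maps group -> (hired, applicants)."""
    return {g: hired / applicants for g, (hired, applicants) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Return {group: (impact_ratio, flagged)} for each group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Ratio of each group's rate to the most-favored group's rate.
    return {g: (r / best, r / best < threshold) for g, r in rates.items()}

outcomes = {"group_a": (60, 100), "group_b": (30, 100)}
print(adverse_impact(outcomes))
# group_b's selection rate (30%) is half of group_a's (60%), so its
# impact ratio of 0.5 falls below 0.8 and the group is flagged.
```

A screen like this is only a first check; a flagged ratio signals the need for closer review of the selection procedure, not proof of discrimination by itself.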

Amazon began building a hiring application in 2014, and found over time that it discriminated against women in its recommendations, because the AI model was trained on the company's own hiring records from the previous 10 years, which came primarily from male applicants. Amazon developers tried to correct it but ultimately scrapped the system in 2017.

Facebook has recently agreed to pay $14.25 million to settle civil claims by the US government that the social media company discriminated against American workers and violated federal recruitment rules, according to an account from Reuters. The case centered on Facebook's use of what it called its PERM program for labor certification. The government found that Facebook refused to hire American workers for jobs that had been reserved for temporary visa holders under the PERM program.

"Excluding people from the hiring pool is a violation," Sonderling said. If the AI program "withholds the existence of the job opportunity to that class, so they cannot exercise their rights, or if it downgrades a protected class, it is within our domain," he said.

Employment assessments, which became more common after World War II, have provided high value to HR managers, and with help from AI they have the potential to minimize bias in hiring. "At the same time, they are vulnerable to claims of discrimination, so employers need to be careful and cannot take a hands-off approach," Sonderling said. "Inaccurate data will amplify bias in decision-making. Employers must be vigilant against discriminatory outcomes."

He recommended researching solutions from vendors who vet data for risks of bias on the basis of race, sex, and other factors.

One example is from HireVue of South Jordan, Utah, which has built a hiring platform predicated on the US Equal Employment Opportunity Commission's Uniform Guidelines, designed specifically to mitigate unfair hiring practices, according to an account from allWork.

A post on AI ethical principles on its website states in part, "Because HireVue uses AI technology in our products, we actively work to prevent the introduction or propagation of bias against any group or individual. We will continue to carefully review the datasets we use in our work and ensure that they are as accurate and diverse as possible. We also continue to advance our abilities to monitor, detect, and mitigate bias. We strive to build teams from diverse backgrounds with diverse knowledge, experiences, and perspectives to best represent the people our systems serve."

Also, "Our data scientists and IO psychologists build HireVue Assessment algorithms in a way that removes data from consideration by the algorithm that contributes to adverse impact without significantly impacting the assessment's predictive accuracy. The result is a highly valid, bias-mitigated assessment that helps to enhance human decision making while actively promoting diversity and equal opportunity regardless of gender, ethnicity, age, or disability status."

Dr. Ed Ikeguchi, CEO, AiCure

The issue of bias in datasets used to train AI models is not confined to hiring. Dr. Ed Ikeguchi, CEO of AiCure, an AI analytics company working in the life sciences industry, stated in a recent account in HealthcareITNews, "AI is only as strong as the data it's fed, and lately that data backbone's credibility is being increasingly called into question. Today's AI developers lack access to large, diverse data sets on which to train and validate new tools."

He added, "They often need to leverage open-source datasets, but many of these were trained using computer programmer volunteers, which is a predominantly white population. Because algorithms are often trained on single-origin data samples with limited diversity, when applied in real-world scenarios to a broader population of different races, genders, ages, and more, tech that appeared highly accurate in research may prove unreliable."

Also, "There needs to be an element of governance and peer review for all algorithms, as even the most solid and tested algorithm is bound to have unexpected results arise. An algorithm is never done learning; it must be constantly developed and fed more data to improve."

And, "As an industry, we need to become more skeptical of AI's conclusions and encourage transparency in the industry. Companies should readily answer basic questions, such as 'How was the algorithm trained? On what basis did it draw this conclusion?'"

Read the source articles and information at AI World Government, from Reuters and from HealthcareITNews.
