
Fairness in AI – Lessons from Practice

Wharton Annual Analytics Conference

Alex Vaughan is chief science officer for talent matching platform Pymetrics, which focuses on deploying fair and equitable solutions in the hiring space. He is also co-founder of Cajal Neuroscience, a drug discovery company focused on neurodegenerative disease. Vaughan holds a Ph.D. in neuroscience from Stanford University.

The following is an edited transcript of his presentation at the 2021 Wharton Annual Analytics Conference, a virtual event focused on sharing industry best practices and the latest research insights from data science. The event was hosted by Wharton Customer Analytics in partnership with Analytics at Wharton and Wharton AI for Business.


I’m here to talk to you about some of the lessons we’ve learned at Pymetrics about deploying AI in a regulated landscape. The problem Pymetrics focuses on is how do we find the right person for the right job. Everybody who goes through hiring has to solve this problem, and there are a variety of ways to solve it, but most of them are pretty poor.

Just to give one example, to call out a major tech firm that posted an ad recently, they tried to define the core feature of the job and really wanted somebody who was excellent at a skill called Kubernetes administration. They asked for 12 years of experience in this skill. But they were asking for expertise that doesn’t exist because this technology is only six years old. This sort of thing seems like a typo or a one-off but is actually pretty pervasive in the hiring realm.

Oftentimes, the qualifications you ask for when you look for the right person for the right job are either impossible to find or impossible to define. At Pymetrics, we’re trying to solve that problem at scale using a data-driven approach.

Three Things You Need for Any AI Solution

High-quality data
Machine learning at the core of your company
A mechanism for measuring and demonstrating business value

Like many folks in the analytics business, we’re using a data-driven, machine-learning-driven approach to hiring that I’ll tell you about through this talk. But I just want to unpack the three main pieces that go into any AI or analytics solution. The first thing you need, of course, is data. It has to be high quality, and it has to be at scale. The second thing you need is machine learning. You need some sort of algorithm at the core of your company that makes decisions in a way that supports your business value. And the third thing you need is some way of measuring and demonstrating that business value so that you can capture and provide value to clients. Pymetrics has done all three of these in a really unique way.


Building a Better Recruitment Tool

The first thing Pymetrics did was come at the problem of hiring, how do you find the right person for the right job, in a way that’s different from what a lot of other folks have done. Instead of looking at resumes or skill sets, we took the hypothesis that many people could be a good fit for one job or another based on measurable attributes of their personality or behavior, and that from those we could make decisions. We developed a set of core assessments that measure behavioral aptitudes such as the effort or emotional style that somebody brings to a job, skills such as numerical reasoning, and communication, which is key to our day-to-day life.


We didn’t just make these up out of thin air. All of the assessments that Pymetrics builds as part of our approach to data collection and data ingestion in this analytics life cycle are drawn from cognitive neuroscience. We looked through the literature and developed a series of assessments based on this literature that we could then gamify and adapt into the hiring landscape. To do this, we need to take these assessments, which were originally used in a lab context, and convert them into data that we can use in an algorithm to predict hiring.

We do this with a variety of ML solutions. The key is that as we build out these solutions and as we build out our machine-learning platform, we have to have a tool that is predictive of success but also fair. Because it’s really important in the hiring landscape that we focus on ensuring that everybody has equal access to opportunity in the job context. This is the core machine-learning part of what we do.

We have to do two things at once: We have to make predictions, and those predictions have to be fair. And it’s not just prediction that we have to focus on. It’s making sure that we maximize the value of the people that we recommend to be hired. And it’s not just fairness, either. We have to add on another layer: the explainability component.

When you’re making decisions about people’s lives as we do in the hiring space, it’s really important that those decisions feel like they are accessible to someone. It’s not just a black box. It’s not just an arbitrary decision. For the candidate applying to a job, they need to understand why we’re making the decision we’re making. For the recruiter assessing a candidate, they need to understand where the decision is coming from. These seem like features of a machine-learning solution, but they’re actually the key features of business value as we go through our client life cycle.


Measuring the Metrics

These three metrics of prediction, fairness, and explainability we can then quantify in various ways, which we do for all of our clients. We quantify things like the efficiency that we bring to a client’s hiring life cycle: if they need to hire 1,000 or 10,000 people, how much time can a solution like Pymetrics save? We can quantify things like diversity: how much diversity do we bring into the company that can help offset some of the historical marginalization of underrepresented groups? We can also quantify things like the performance of people on the job, how effective they are, and what the candidate and client experience is like over time.

Over the last five or six years, we’ve developed a whole suite of metrics that we measure for everything. We have a proprietary, scalable, and really valuable data set that we use to drive our hiring decisions. We have a core of machine-learning algorithms that we use to make predictions and recommend who should be hired for a particular job. And lastly, we have really well-validated ways of demonstrating business value. So, that’s Pymetrics in a nutshell.

Three Lessons from Building Equitable Hiring Practices

Fairness is a performance criterion
Transparency is a collaborative process
Integration into society is the most important step

Fairness is Paramount

I want to bridge out from there to give three short lessons from this journey that we’ve learned so far. The first lesson is that, in addition to simply providing value, the fairness of the platform as a whole across many clients is a performance criterion for any analytics company. This is going to be true for everybody working in the analytics or AI space going forward. If it hasn’t hit your industry yet, it will hit your industry soon.

But a really important thing for us is that, unlike many of the analytics-driven companies you see today, Pymetrics actually started out with regulations surrounding the decisions that we were recommending. The Equal Employment Opportunity Commission has developed standards for hiring assessments that say that a hiring assessment can only be deployed if it meets certain fairness criteria. If, for instance, a hiring solution is recommending that a certain fraction of white men like me get hired, then it must also recommend candidates from other demographic groups at a rate that is at least about 80% of that rate, the so-called four-fifths rule.

Many of the tests that have been used historically, such as cognitive testing, which is used broadly across a variety of industries in the hiring context, or even human resume reviews, actually fail this test. Cognitive testing has been reported in the literature to pass Black men at about 40% of the rate of white men, which is just unacceptable in the hiring landscape and in the shape of the society we’d like to live in together.
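As a concrete illustration, the four-fifths criterion described above can be checked in a few lines of code. This is a minimal sketch, not Pymetrics’ actual audit tooling; the function names are mine, and the group labels and selection rates are hypothetical, chosen so that the second group passes at 40% of the rate of the first, echoing the cognitive-testing figure mentioned above.

```python
def adverse_impact_ratios(selection_rates):
    """Compute each group's selection rate relative to the highest-rate group.

    selection_rates: dict mapping group label -> fraction of that group selected.
    A ratio below 0.8 for any group is the usual red flag under the
    EEOC four-fifths rule.
    """
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}


def passes_four_fifths(selection_rates, threshold=0.8):
    """True if every group's ratio meets the four-fifths threshold."""
    ratios = adverse_impact_ratios(selection_rates)
    return all(r >= threshold for r in ratios.values())


# Hypothetical rates: group_b is selected at 40% of group_a's rate.
rates = {"group_a": 0.50, "group_b": 0.20}
print(adverse_impact_ratios(rates))  # {'group_a': 1.0, 'group_b': 0.4}
print(passes_four_fifths(rates))     # False
```

A real audit would, of course, work from observed selection counts with appropriate statistical tests rather than raw rates, but the threshold logic is this simple at its core.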


The second thing is that it’s not enough for a company like Pymetrics to simply report its internal numbers. There is a lot of missing trust in this field, and with good reason. One of the approaches Pymetrics has taken is to improve the transparency of its process through external audits. This really bridges the gap between what companies like Pymetrics say they do and what regulators enforce by law that they must do. It also ensures that the gap is closed in a way the public has access to, so that people can understand how a company like Pymetrics is meeting its regulatory obligations.


The Future of AI

I talked about Pymetrics’ approach to data collection, with our specific cognitive assessments; our approach to machine learning, the particular special sauce at the core of Pymetrics’ analytics engine; and the way we quantify business value. One really important thing to note is that machine learning can be meaningfully commoditized. As the analytics landscape evolves throughout the broader business context, the real value comes from developing new data sets and new ways to demonstrate business value to a client.

But there is a fourth step that is coming, one we’ve learned a lot about at Pymetrics by starting in the regulatory context, and it is the most important step in the analytics life cycle: integration into society. This is not just about a legal framework of government regulations, but also about broader metrics that build trust as we deploy tools like these. Honestly, this phase is just beginning, and I think it’s going to be one of the most interesting and fruitful areas for folks to engage with. With that, I will give my big thanks to the organizers of this conference. It’s been a pleasure to be here. Thank you very much.

About the Wharton Annual Analytics Conference

In partnership with Analytics at Wharton and Wharton AI for Business, Wharton Customer Analytics hosted the Annual Analytics Conference virtually on May 3-7. This week-long virtual event featured talks from business analytics leaders and showcased cutting-edge research from top academics.

