
Deployed AI Putting Companies at Significant Risk, says FICO Report

By John P. Desmond, AI Trends Editor

A new report on responsible AI from the Fair Isaac Corp. (FICO), the company that brings you credit ratings, finds that most companies are deploying AI at significant risk.

The report, The State of Responsible AI: 2021, assesses how well companies are doing in adopting responsible AI, making sure they are using AI ethically, transparently, securely and in their customers’ best interest.

Scott Zoldi, Chief Analytics Officer, FICO

“The short answer: not great,” states Scott Zoldi, Chief Analytics Officer at FICO, in a recent account on the blog of Fair Isaac. Working with market intelligence firm Corinium for the second edition of the report, the analysts surveyed 100 AI-focused leaders from financial services, insurance, retail, healthcare and pharma, manufacturing, public and utilities sectors in February and March 2021.

Among the highlights:

65% of respondents’ companies cannot explain how specific AI model decisions or predictions are made;
73% have struggled to get executive support for prioritizing AI ethics and Responsible AI practices; and
Only 20% actively monitor their models in production for fairness and ethics.

With worldwide revenues for the AI market, including software, hardware and services, forecast by IDC market researchers to grow 16.4% in 2021 to $327.5 billion, reliance on AI technology is increasing. Along with this, the report’s authors cite “an urgent need” to elevate the importance of AI governance and Responsible AI to the boardroom level.

Defining Responsible AI

Zoldi, who holds more than 100 authored patents in areas including fraud analytics, cybersecurity, collections and credit risk, studies unpredictable behavior. He defines Responsible AI here and has given many talks on the subject around the world.

“Organizations are increasingly leveraging AI to automate key processes that, in some cases, are making life-altering decisions for their customers,” he stated. “Not understanding how these decisions are made, and whether they are ethical and safe, creates enormous legal vulnerabilities and business risk.”

The FICO study found executives have no consensus about what a company’s responsibilities should be when it comes to AI. Almost half (45%) said they had no responsibility beyond regulatory compliance to ethically manage AI systems that make decisions which could directly affect people’s livelihoods. “In my view, this speaks to the need for more regulation,” he stated.

AI model governance frameworks are needed to monitor AI models to ensure the decisions they make are accountable, fair, transparent and responsible. Only 20% of respondents are actively monitoring the AI in production today, the report found. “Executive teams and Boards of Directors cannot succeed with a ‘do no evil’ mantra without a model governance enforcement guidebook and corporate processes to monitor AI in production,” Zoldi stated. “AI leaders need to establish standards for their firms where none exist today, and promote active monitoring.”
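The kind of active production monitoring Zoldi calls for can start very simply. As an illustrative sketch (not drawn from the FICO report), a periodic check might compare a deployed model’s approval rates across demographic groups and flag a gap; the group names, sample data, and the 0.8 “four-fifths rule” threshold below are assumptions for illustration only:

```python
# Illustrative sketch: flag possible disparate impact in a deployed
# model's decisions. Groups, data, and threshold are hypothetical.

def disparate_impact(decisions):
    """decisions: dict mapping group name -> list of 0/1 approvals.
    Returns (lowest approval rate / highest approval rate, per-group rates)."""
    rates = {g: sum(d) / len(d) for g, d in decisions.items()}
    return min(rates.values()) / max(rates.values()), rates

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 approved
}
ratio, rates = disparate_impact(decisions)
if ratio < 0.8:  # the common "four-fifths" screening threshold
    print(f"Fairness alert: disparate impact ratio {ratio:.2f}")
```

A real monitoring pipeline would run such checks continuously on production traffic and route alerts into the governance process the report describes.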

Business is recognizing that things need to change. Some 63% believe that AI ethics and Responsible AI will become core to their organization’s strategy within two years.

Cortnie Abercrombie, Founder and CEO, AI Truth

“I think there’s now much more awareness that things are going wrong,” stated Cortnie Abercrombie, Founder and CEO of responsible AI advocacy group AI Truth, and a contributor to the FICO report. “But I don’t know that there is necessarily any more knowledge about how that happens.”

Some companies are experiencing tension between management leaders who may want to get models into production quickly, and data scientists who want to take the time to get things right. “I’ve seen a lot of what I call abused data scientists,” Abercrombie stated.

Little Consensus on Ethical Responsibilities Around AI

Ganna Pogrebna, Lead for Behavioral Data Science, The Alan Turing Institute

Regarding the lack of consensus about the ethical responsibilities around AI, companies need to work on that, the report suggested. “At the moment, companies decide for themselves whatever they think is ethical and unethical, which is extremely dangerous. Self-regulation does not work,” stated Ganna Pogrebna, Lead for Behavioral Data Science at the Alan Turing Institute, also a contributor to the FICO report. “I recommend that every company assess the level of harm that could potentially come with deploying an AI system, versus the level of good that could potentially come,” she stated.

To combat AI model bias, the FICO report found that more companies are bringing the process in-house, with only 10% of the executives surveyed relying on a third-party firm to evaluate models for them.

The research shows that enterprises are using a range of approaches to root out causes of AI bias during model development, and that few organizations have a comprehensive suite of checks and balances in place.

Only 22% of respondents said their organization has an AI ethics board to consider questions on AI ethics and fairness. One in three report having a model validation team to assess newly developed models, and 38% report having data bias mitigation steps built into model development.

This year’s research shows a surprising shift in business priorities away from explainability and toward model accuracy. “Companies must be able to explain to people why whatever resource was denied to them by an AI was denied,” stated Abercrombie of AI Truth.
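The kind of explanation Abercrombie describes is often delivered as “reason codes”: the factors that pushed an applicant’s score below the cutoff. As a hedged sketch, and not a description of FICO’s actual scoring method, a simple linear model makes per-feature contributions easy to rank; the features, weights, and cutoff below are hypothetical:

```python
# Illustrative sketch of reason codes for a denial under a toy
# linear scoring model. Features, weights, and cutoff are hypothetical.

weights = {"utilization": -2.0, "late_payments": -1.5, "account_age_yrs": 0.3}
applicant = {"utilization": 0.9, "late_payments": 2, "account_age_yrs": 1}

# Each feature's contribution to the overall score.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

reasons = []
if score < 0:  # hypothetical approval cutoff
    # Report the features that pulled the score down the most.
    reasons = sorted(contributions, key=contributions.get)[:2]
    print("Denied. Top reasons:", reasons)
```

For more opaque models, post-hoc attribution techniques play the same role, which is why the reported shift away from explainability is notable.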

Adversarial AI Attacks Reported to be On the Rise

Adversarial AI attacks, in which inputs to machine learning models are hacked in an effort to thwart the correct operation of the model, are on the increase, the report found, with 30% of organizations reporting an increase, compared to 12% in last year’s survey. Zoldi stated that the result surprised him, and suggested that the survey needs a set of definitions around adversarial AI.

Data poisoning and other adversarial AI technologies border on cybersecurity. “This may be an area where cybersecurity is not where it needs to be,” Zoldi stated.

Organizational politics was cited as the number one barrier to establishing Responsible AI practices. “What we’re missing today is honest and straight talk about which algorithms are more responsible and safe,” stated Zoldi.

Respondents from companies that must comply with regulations have little confidence they are doing a good job: only 31% report that the processes they use to ensure projects comply with regulations are effective, and 68% report their model compliance processes are ineffective.

As for audit trails, four percent of respondents admit to not maintaining standardized model development audit trails, which means some AI models being used in business today are understood only by the data scientists who originally coded them.

This falls short of what could be described as Responsible AI, in the view of Melissa Koide, CEO of the AI research organization FinRegLab, and a contributor to the FICO report. “I deal primarily with compliance risk and the fair lending sides of banks and fintechs,” she stated. “I think they’re all quite attuned to, and quite anxious about, how they do governance around using more opaque models successfully.”

More organizations are coalescing around the move to Responsible AI, including the Partnership on AI, formed in 2016 and including Amazon, Facebook, Google, Microsoft, and IBM. The European Commission in 2019 published a set of non-binding ethical guidelines for developing trustworthy AI, with input from 52 independent experts, according to a recent report in VentureBeat. In addition, the Organization for Economic Cooperation and Development (OECD) has created a global framework for AI around common values.

Also, the World Economic Forum is developing a toolkit for corporate officers for operationalizing AI in a responsible way. Leaders from around the world are participating.

“We launched the platform to create a framework to accelerate the benefits and mitigate the risks of AI and ML,” stated Kay Firth-Butterfield, Head of AI and Machine Learning and Member of the Executive Committee at the World Economic Forum. “The first place for every company to start when deploying responsible AI is with an ethics statement. This sets up your AI roadmap to be successful and responsible.”

Wilson Pang, the CTO of Appen, a machine learning development company, who authored the VentureBeat article, cited three focus areas for a move to Responsible AI: risk management, governance, and ethics.

“Companies that integrate pipelines and embed controls throughout building, deploying, and beyond are more likely to experience success,” he stated.

Read the source articles and information on the blog of Fair Isaac, in the Fair Isaac report, The State of Responsible AI: 2021, on the definition in Responsible AI and in VentureBeat.
