
Unethical Use of AI Being Mainstreamed by Some Business Execs, Survey Finds

By John P. Desmond, AI Trends Editor

In a recent survey, senior business executives admitted to their sometimes unethical use of AI.

The admission came from respondents to a survey on data privacy that KPMG conducted among 250 director-level or higher executives at companies with more than 1,000 employees.

Some 29% of the respondents admitted that their own companies collect personal information in ways that are "sometimes unethical," and 33% said consumers should be concerned about how their company uses personal data, according to a recent report in The New Yorker.

Orson Lucas, principal, US privacy services team, KPMG

The result surprised the surveyors. "For some companies, there may be a misalignment between what they say they are doing on data privacy and what they are actually doing," stated Orson Lucas, the principal in KPMG's US privacy services team.

One growing practice is to "collect everything" about a person, then figure out later how to use it. This approach is seen as an opportunity to better understand what customers want from the business, which can later lead to a transparent negotiation about what information customers are willing to provide, and for how long.

Most of these companies have not yet reached the transparent-negotiation stage. Some 70% of the executives interviewed said their companies had increased the amount of personal information they collected in the past year. And 62% said their company should be doing more to strengthen data protection measures.

KPMG also surveyed 2,000 adults in the general population on data privacy, finding that 40% did not trust companies to behave ethically with their personal information. In Lucas' view, consumers will want to punish a business that demonstrates unfair practices around the use of personal data.

AI Conferences Considering Wider Ethical Reviews of Submitted Papers

Meanwhile, at AI conferences, AI technology is sometimes on display with little sensitivity to its potentially unethical uses, and at times this technology finds its way into commercial products. The IEEE Conference on Computer Vision and Pattern Recognition in 2019, for example, accepted a paper from researchers with MIT's Computer Science and AI Laboratory on inferring a person's face from audio recordings of that person speaking.

The goal of the project, called Speech2Face, was to research how much information about a person's looks could be inferred from the way they speak. The researchers proposed a neural network architecture designed specifically to perform the task of facial reconstruction from audio.
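To make the setup concrete, here is a minimal, hypothetical sketch in PyTorch of the general approach the paper describes: a voice encoder maps a speech spectrogram to a face-embedding vector, trained to match the embedding a pretrained face-recognition network produces for a photo of the same speaker, after which a separate, fixed decoder renders a canonical face. The layer sizes, names, and loss below are illustrative assumptions, not the authors' published architecture.

```python
# Illustrative sketch only; layer sizes and the training loss are
# assumptions, not the Speech2Face authors' published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoiceEncoder(nn.Module):
    """Maps a (batch, 1, freq, time) speech spectrogram to a face embedding."""
    def __init__(self, embed_dim: int = 4096):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse freq/time into one feature vector
        )
        self.fc = nn.Linear(256, embed_dim)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(spectrogram).flatten(1))

# Training pairs each voice clip with a photo of the same speaker:
# the target is the embedding a pretrained, frozen face-recognition
# network computes from the photo.
#   predicted = voice_encoder(spectrogram)
#   target = face_recognition_net(photo).detach()
#   loss = F.mse_loss(predicted, target)
# A separate pretrained decoder then renders a canonical face image
# from the predicted embedding.
```

Matching embeddings from a face-recognition network, rather than raw pixels, is what lets such a model produce only a generic, canonical face rather than a photographic likeness.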

Criticism quickly followed. Alex Hanna, a trans woman and sociologist at Google who studies AI ethics, asked via tweet for the research to stop, calling it "transphobic." Hanna objected to the way the research sought to tie identity to biology. Debate ensued, and some questioned whether papers submitted to academically oriented conferences need further ethical review.

Michael Kearns, a computer scientist at the University of Pennsylvania and a coauthor of the book "The Ethical Algorithm," stated to The New Yorker that we are in "a little bit of a Manhattan Project moment" for AI and machine learning. "The academic research in the field has been deployed at a massive scale on society," he stated. "With that comes this higher responsibility."

Katherine Heller, computer scientist, Duke University

A paper on Speech2Face was accepted at the 2019 Neural Information Processing Systems (NeurIPS) conference, held in Vancouver, Canada. Katherine Heller, a computer scientist at Duke University and a NeurIPS co-chair for diversity and inclusion, told The New Yorker that the conference had accepted some 1,400 papers that year, and she could not recall facing comparable pushback on the subject of ethics. "It's new territory," she stated.

For NeurIPS 2020, held remotely in December 2020, papers faced rejection if the research was found to pose a threat to society. Iason Gabriel, a research scientist at Google DeepMind in London, who is among the leadership of the conference's ethics review process, said the change was needed to help AI "make progress as a field."

Ethics is somewhat new territory for computer science. Whereas biologists, psychologists, and anthropologists are used to reviews that query the ethics of their research, computer scientists have not been trained that way; review has focused more on methodological issues, such as plagiarism and conflicts of interest.

That said, a number of groups interested in the ethical use of AI have emerged in the last several years. The Association for Computing Machinery's Special Interest Group on Computer-Human Interaction, for example, launched a working group in 2016 that is now an ethics research committee, which offers to review papers at the request of conference program chairs. In 2019, the group received 10 inquiries, primarily around research methods.

"Increasingly, we do see, especially in the AI space, more and more questions of, Should this kind of research even be a thing?" stated Katie Shilton, an information scientist at the University of Maryland and the chair of the committee, to The New Yorker.

Shilton identified four categories of potentially unethical impact. First, AI that can be "weaponized" against populations, such as facial recognition, location tracking, and surveillance. Second, technologies such as Speech2Face that may "harden people into categories that don't fit well," such as gender or sexual orientation. Third, automated weapons research. Fourth, tools used to create alternate sets of reality, such as fake news, voices, or images.

This greenfield territory is a venture into the unknown. Computer scientists usually have strong technical knowledge. "But lots and lots of folks in computer science have not been trained in research ethics," Shilton stated, noting that it is not easy to say that a line of research should not exist.

Location Data Weaponized Against a Catholic Priest

The weaponization of location-tracking technology was amply demonstrated in the recent experience of a Catholic priest who was outed as a user of the Grindr dating app and subsequently resigned. Catholic priests take a vow of celibacy, which would conflict with membership in any dating-app community.

The incident raised a panoply of ethical issues. The story was broken by a Catholic news outlet called the Pillar, which had somehow obtained "app data signals from the location-based hookup app Grindr," stated an account in recode from Vox. It was not clear how the publication obtained the location data, other than that it came from a "data vendor."

"The harms caused by location tracking are real and can have a lasting impact far into the future," stated Sean O'Brien, principal researcher at ExpressVPN's Digital Security Lab, to recode. "There is no meaningful oversight of smartphone surveillance, and the privacy abuse we saw in this case is enabled by a profitable and booming industry."

One data vendor in this business is X-Mode, which collects data from millions of users across hundreds of apps. The company was kicked off the Apple and Google platforms last year over its national security work with the US government, according to an account in The Wall Street Journal. However, the company is being acquired by Digital Envoy, Inc. of Atlanta and will be rebranded as Outlogic. Its chief executive, Joshua Anton, will join Digital Envoy as chief strategy officer. The purchase price was not disclosed.

Acquiring X-Mode "allows us to further enhance our offering related to cybersecurity, AI, fraud and rights management," stated Digital Envoy CEO Jerrod Stoller. "It allows us to innovate in the space by looking at new solutions leveraging both data sets. And it also brings new clients and new markets."

Digital Envoy specializes in collecting and providing to its customers data on internet users, based on the IP address assigned to them by their ISP or cell phone carrier. The data can include approximate geolocation and is said to be useful in commercial applications, including advertising.
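For a sense of what IP-based geolocation looks like in practice, here is a brief, illustrative sketch using the geoip2 Python library against MaxMind's free GeoLite2 database. The database file path and the IP address are placeholder assumptions, and this is not a description of Digital Envoy's actual systems.

```python
# Illustrative only: look up the approximate location of an IP address
# using MaxMind's GeoLite2 database via the geoip2 library.
# The .mmdb path and the IP below are placeholder assumptions.
import geoip2.database

with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    resp = reader.city("203.0.113.7")  # reserved documentation-range IP
    print(resp.country.iso_code)       # e.g. "US"
    print(resp.city.name)              # approximate city
    print(resp.location.latitude, resp.location.longitude)
```

Lookups of this kind are typically accurate only to the city or region level, which is why such data is described as "approximate geolocation."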

X-Mode recently retired a visualization app called XDK and has changed practices by adding new guidance on where its data is sourced from, according to an account in Technically. This is the second time the company has rebranded since it was founded in 2013, when it started off as Drunk Mode.

Following the acquisition, Digital Envoy said in a statement that it has added a new code of ethics, a data ethics review panel, and a sensitive-app policy, and that it will be hiring a chief privacy officer.

Read the source articles and information in The New Yorker, in recode from Vox, in The Wall Street Journal, and in Technically.
