Unethical Use of AI Being Mainstreamed by Some Business Execs, Survey Finds 


With a number of business executives admitting to sometimes unethical use of AI in a recent survey, a new era of ethical AI awareness dawns. (Credit: Getty Images) 

By John P. Desmond, IAIDL Editor 

In a recent survey, senior business executives admitted that their companies’ use of AI and personal data is sometimes unethical. 

The admission came from respondents to a recent KPMG survey on data privacy of 250 director-level or higher executives at companies with more than 1,000 employees.  

Some 29% of the respondents admitted that their own companies’ collection of personal information is “sometimes unethical,” and 33% said consumers should be concerned about how their company uses personal data, according to a recent report in The New Yorker. 

Orson Lucas, principal, US privacy services team, KPMG

The result surprised the surveyors. “For some companies, there may be a misalignment between what they say they are doing on data privacy and what they are actually doing,” stated Orson Lucas, a principal on KPMG’s US privacy services team.  

One growing practice is a move to “collect everything” about a person, then figure out later how to use it. Proponents see the approach as an opportunity to better understand what customers want from the business, which could later lead to a transparent negotiation about what information customers are willing to provide and for how long.   

Most of these companies have not yet reached the transparent negotiation stage. Some 70% of the executives interviewed said their companies had increased the amount of personal information they collected in the past year. And 62% said their company should be doing more to strengthen data protection measures.   

KPMG also surveyed 2,000 adults in the general population on data privacy, finding that 40% did not trust companies to behave ethically with their personal information. In Lucas’ view, consumers will want to punish a business that demonstrates unfair practices around the use of personal data.   

AI Conferences Considering Wider Ethical Reviews of Submitted Papers  

Meanwhile, AI technology is sometimes on display at AI conferences with little sensitivity to its potential for unethical use, and at times that technology finds its way into commercial products. The IEEE Conference on Computer Vision and Pattern Recognition in 2019, for example, accepted a paper from researchers at MIT’s Computer Science and AI Laboratory on reconstructing a person’s face from audio recordings of that person speaking.  

The goal of the project, called Speech2Face, was to research how much information about a person’s looks could be inferred from the way they speak. The researchers proposed a neural network architecture designed specifically to perform the task of facial reconstruction from audio.   
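The paper describes a neural-network pipeline that maps audio of a speaker to an image of a face. As a rough illustration of that kind of voice-encoder/face-decoder design, and not the authors’ actual Speech2Face model, a minimal PyTorch sketch might look like the following; the layer sizes, the spectrogram input shape, and the 64x64 output resolution are all assumptions made for the example.

```python
# Illustrative voice-to-face encoder/decoder. NOT the Speech2Face architecture;
# all layer sizes and the spectrogram input shape are assumptions for this sketch.
import torch
import torch.nn as nn

class VoiceEncoder(nn.Module):
    """Maps a (batch, 1, freq, time) spectrogram to a fixed-size voice embedding."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling handles variable-length audio
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        features = self.conv(spectrogram).flatten(1)
        return self.fc(features)

class FaceDecoder(nn.Module):
    """Upsamples a voice embedding into a small RGB face image (64x64 here)."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        x = self.fc(embedding).view(-1, 128, 8, 8)
        return self.deconv(x)

# Example forward pass, with random data standing in for a real spectrogram.
spectrogram = torch.randn(1, 1, 128, 300)  # (batch, channels, freq bins, time frames)
face = FaceDecoder()(VoiceEncoder()(spectrogram))
print(face.shape)  # torch.Size([1, 3, 64, 64])
```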

Controversy erupted. Alex Hanna, a trans woman and sociologist at Google who studies AI ethics, asked via tweet for the research to stop, calling it “transphobic.” Hanna objected to the way the research sought to tie identity to biology. Debate ensued, and some questioned whether papers submitted to academic conferences need further ethical review.  

Michael Kearns, a computer scientist at the University of Pennsylvania and a coauthor of the book The Ethical Algorithm, stated to The New Yorker that we are in “a little bit of a Manhattan Project moment” for AI and machine learning. “The academic research in the field has been deployed at a massive scale on society,” he stated. “With that comes this higher responsibility.”  

Katherine Heller, computer scientist, Duke University

A paper on Speech2Face was accepted at the 2019 Neural Information Processing Systems (NeurIPS) conference, held in Vancouver, Canada. Katherine Heller, a computer scientist at Duke University and a NeurIPS co-chair for diversity and inclusion, told The New Yorker that the conference had accepted some 1,400 papers that year, and she could not recall facing comparable pushback on the subject of ethics. “It’s new territory,” she stated. 

For NeurIPS 2020, held remotely in December 2020, papers faced rejection if the research was found to pose a threat to society. Iason Gabriel, a research scientist at Google DeepMind in London who helps lead the conference’s ethics review process, said the change was needed to help AI “make progress as a field.” 

Ethics review is somewhat new territory for computer science. Whereas biologists, psychologists, and anthropologists are used to reviews that query the ethics of their research, computer scientists have not been raised that way. Review in computer science has focused more on matters of method and professional conduct, such as plagiarism and conflicts of interest.    

That said, a number of groups interested in the ethical use of AI have formed in the last several years. The Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction, for example, launched a working group in 2016 that has since become an ethics research committee, which offers to review papers at the request of conference program chairs. In 2019, the group received 10 inquiries, primarily around research methods.   

“Increasingly, we do see, especially in the AI space, more and more questions of, Should this kind of research even be a thing?” stated Katie Shilton, an information scientist at the University of Maryland and the chair of the committee, to The New Yorker. 

Shilton identified four categories of potentially unethical impact. First, AI that can be “weaponized” against populations, such as facial recognition, location tracking, and surveillance. Second, technologies such as Speech2Face that may “harden people into categories that don’t fit well,” such as gender or sexual orientation. Third, automated weapons research. Fourth, tools used to create alternate versions of reality, such as fake news, fake voices, or fake images.  

This is largely uncharted territory. Computer scientists usually have good technical knowledge. “But lots and lots of folks in computer science have not been trained in research ethics,” Shilton stated, noting that it is not easy to say that a line of research should not exist. 

Location Data Weaponized Against Catholic Priest 

The weaponization of location-tracking technology was amply demonstrated in the recent experience of a Catholic priest who was outed as a user of the Grindr dating app and subsequently resigned. Catholic priests take a vow of celibacy, which conflicts with participation in a dating app community of any kind.   

The incident raised a panoply of ethical issues. The story was broken by a Catholic news outlet called The Pillar, which had somehow obtained “app data signals from the location-based hookup app Grindr,” stated an account in Recode from Vox. It was not clear how the publication obtained the location data, beyond its statement that the data came from a “data vendor.”  

“The harms caused by location tracking are real and can have a lasting impact far into the future,” stated Sean O’Brien, principal researcher at ExpressVPN’s Digital Security Lab, to Recode. “There is no meaningful oversight of smartphone surveillance, and the privacy abuse we saw in this case is enabled by a profitable and booming industry.”  

One data vendor in this business is X-Mode, which collects data from millions of users across hundreds of apps. The company was kicked off the Apple and Google platforms last year over its national security work with the US government, according to an account in The Wall Street Journal. However, the company is being acquired by Digital Envoy, Inc. of Atlanta, and will be rebranded as Outlogic. Its chief executive, Joshua Anton, will join Digital Envoy as chief strategy officer. The purchase price was not disclosed. 

Acquiring X-Mode “allows us to further enhance our offering related to cybersecurity, AI, fraud and rights management,” stated Digital Envoy CEO Jerrod Stoller. “It allows us to innovate in the space by looking at new solutions leveraging both data sets. And it also brings new clients and new markets.”   

Digital Envoy specializes in collecting and providing to its customers data on internet users based on the IP address assigned to them by their ISP or cell phone carrier. The data can include approximate geolocation and is said to be useful in commercial applications, including advertising.   
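Digital Envoy’s own products are proprietary and not detailed in the reporting, but the general mechanics of IP-based geolocation can be illustrated with the open-source geoip2 Python package and MaxMind’s GeoLite2 database, used here only as stand-ins; the database path and IP address below are placeholders.

```python
# Illustrative IP-to-location lookup using MaxMind's GeoLite2 database via the
# geoip2 package (pip install geoip2). This is a generic stand-in example, not
# Digital Envoy's product; the database path and IP address are placeholders.
import geoip2.database

with geoip2.database.Reader("/path/to/GeoLite2-City.mmdb") as reader:
    response = reader.city("203.0.113.42")  # documentation-range IP, for illustration
    print(response.country.iso_code)        # e.g. "US"
    print(response.city.name)               # approximate city; may be None
    print(response.location.latitude, response.location.longitude)
```

Lookups like this resolve only to an approximate area tied to the ISP or carrier that assigned the address, which is why the data is described as useful for broad commercial targeting rather than pinpoint tracking.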

X-Mode recently retired a visualization app, called XDK, and has changed its practices by adding new guidance on where its data is sourced from, according to an account in Technical.ly. This is the second time the company has rebranded since it was founded in 2013, when it started off as Drunk Mode.  

Following the acquisition, Digital Envoy said in a statement that it has added a new code of ethics, a data ethics review panel, and a sensitive-app policy, and that it will be hiring a chief privacy officer. 

Read the source articles and information in The New Yorker, in Recode from Vox, in The Wall Street Journal and in Technical.ly. 
