
Content Moderation Becoming a Big Business with AI Enlisted to Help 

By John P. Desmond, AI Trends Editor  

Content moderation is becoming a bigger business, expected to reach $11.8 billion by 2027, according to estimates from Transparency Market Research. 

The market is being fueled by exponential increases in user-generated content in the form of short videos, memes, GIFs, live audio and video content, and news. Because some percentage of the uploaded content is fake news or malicious or violent material, social media sites are deploying armies of moderators equipped with AI and machine learning tools to attempt to filter out inappropriate content. 

Facebook has employed Accenture to help clean up its content, in a contract valued at $500 million annually, according to a recent account in The New York Times, based on extensive research into the history of content moderation at the social media giant.  

Julie Sweet, CEO, Accenture

The Times reported that Accenture CEO Julie Sweet ordered a review of the contract after her appointment in 2019, out of concern for what was then seen as growing ethical and legal risks, which could damage the reputation of the multinational professional services company.  

Sweet ordered the review after an Accenture worker joined a class action lawsuit to protest the working conditions of content moderators, who review hundreds of Facebook posts in a shift and have experienced depression, anxiety and paranoia as a result. The review did not result in any change; Accenture employs more than a third of the 15,000 people Facebook has hired to inspect its posts, according to the Times report.  

Facebook CEO Mark Zuckerberg has had a strategy of employing AI to help filter out the toxic posts; the thousands of content moderators are hired to remove inappropriate messages the AI does not catch.   

Cori Crider, Cofounder, Foxglove

The content moderation work and the relationship of Accenture and Facebook around it have become controversial. “You couldn’t have Facebook as we know it today without Accenture,” stated Cori Crider, a co-founder of Foxglove, a law firm that represents content moderators, to the Times. “Enablers like Accenture, for eye-watering fees, have let Facebook hold the core human problem of its business at arm’s length.” 

Facebook has hired at least 10 consulting and staffing firms, and a number of subcontractors, to filter its posts since 2012, the Times reported. Pay rates vary: Accenture takes in $50 or more per hour for each US moderator, while the moderators themselves in some US cities start at $18 per hour, the Times reported. 

Insights From an Experienced Content Moderator  

The AI catches about 90% of the inappropriate content. One supplier of content moderation systems is Appen, based in Australia, which works with its clients on machine learning and AI systems. In a recent blog post on its website, Justin Adam, a program manager overseeing several content moderation projects, offered some insights.   

The first is to update policies as real-world experience dictates. “Every content moderation decision should follow the defined policy; however, this also necessitates that policy must rapidly evolve to close any gaps, gray areas, or edge cases when they appear, and particularly for sensitive topics,” Adam stated. He recommended monitoring content trends specific to markets to identify policy gaps.  

Second, be aware of the potential demographic bias of moderators. “Content moderation is most effective, reliable, and trustworthy when the pool of moderators is representative of the general population of the market being moderated,” he stated. He recommended sourcing a diverse group of moderators as appropriate.    

Third, develop a content management strategy and have expert resources to support it. “Content moderation decisions are susceptible to scrutiny in today’s political climate,” Adam stated. His firm offers services to help clients employ a team of trained policy subject matter experts, establish quality control review, and tailor quality analysis and reporting.   

Techniques for Automated Content Moderation with AI  

The most common type of content moderation is an automated approach that employs AI, natural language processing and computer vision, according to a blog post from Clarifai, a New York City-based AI company specializing in computer vision, machine learning, and the analysis of images and videos.   

AI models are built to review and filter content. “Inappropriate content can be flagged and prevented from being posted almost instantaneously,” the company suggested, supporting the work of the human moderators.  
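
As a rough illustration of that flow, the sketch below shows a pre-publication gate in Python. A stand-in scoring function plays the role of a trained model, and anything above a threshold is held back for human review; the function names, word list, and threshold are illustrative assumptions, not any vendor's actual pipeline.

```python
# Minimal sketch of a pre-publication moderation gate (hypothetical names,
# not any vendor's actual pipeline). A real deployment would call a trained
# model; here a stub score stands in for the model output.

from dataclasses import dataclass

FLAG_THRESHOLD = 0.8  # assumed cutoff; tuned against policy in practice

@dataclass
class ModerationResult:
    allowed: bool
    score: float
    reason: str

def score_content(text: str) -> float:
    """Stand-in for a trained classifier returning P(inappropriate)."""
    banned = {"scam", "violence"}  # illustrative word list only
    hits = sum(word in text.lower() for word in banned)
    return min(1.0, hits / 2)

def moderate(text: str) -> ModerationResult:
    score = score_content(text)
    if score >= FLAG_THRESHOLD:
        # Flagged content is held back and routed to a human moderator queue.
        return ModerationResult(False, score, "held for human review")
    return ModerationResult(True, score, "published")

print(moderate("Check out this scam promising violence-free riches"))
print(moderate("Happy birthday to my grandmother!"))
```

In practice the stub score would come from trained text and image classifiers, and held items would be routed to a moderation queue rather than printed.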

Techniques for content moderation include image moderation, which uses text classification and computer vision-based visual search techniques. Optical character recognition (OCR) can identify text within an image so that it can be moderated as well. The filters look for abusive or offensive words, objects, and body parts within all types of unstructured data. Content flagged as inappropriate can be sent for manual moderation.  
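
A minimal sketch of OCR-based image text moderation follows, using the pytesseract wrapper for the Tesseract OCR engine (which must be installed locally) together with Pillow; the offensive-term list, helper name, and file name are hypothetical.

```python
# Sketch of OCR-based image text moderation: extract any text embedded in an
# uploaded image, then match it against a policy word list. Requires a local
# Tesseract install; the term list and file name are illustrative only.

from PIL import Image
import pytesseract

OFFENSIVE_TERMS = {"hate", "kill"}  # placeholder policy list

def flag_image_text(path: str) -> bool:
    """Return True if text extracted from the image matches the policy list."""
    extracted = pytesseract.image_to_string(Image.open(path)).lower()
    return any(term in extracted for term in OFFENSIVE_TERMS)

if flag_image_text("meme_upload.png"):  # hypothetical uploaded file
    print("Image sent to manual moderation queue")
```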

Another technique, video moderation, requires the video to be reviewed frame by frame, with the audio screened as well. For text moderation, natural language processing algorithms are used to summarize the meaning of the text or to gain an understanding of the emotions it expresses. Using text classification, categories can be assigned to help analyze the text or its sentiment.    
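
The sketch below illustrates the frame-by-frame approach with OpenCV, sampling roughly one frame per second and passing each sampled frame to a placeholder check that stands in for a trained computer vision model. The sampling rate and function names are assumptions, and audio screening is not shown.

```python
# Sketch of frame-by-frame video moderation with OpenCV: sample about one
# frame per second and run a per-frame image check. check_frame() is a
# stand-in for a trained model; the sampling rate is an assumption.

import cv2

def check_frame(frame) -> bool:
    """Placeholder: a real system would run an image classifier here."""
    return False  # pretend nothing inappropriate was found

def moderate_video(path: str) -> bool:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if metadata is missing
    step = int(fps)                        # roughly one frame per second
    index, flagged = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0 and check_frame(frame):
            flagged = True
            break
        index += 1
    cap.release()
    return flagged  # audio would be screened separately, e.g. via transcription

print(moderate_video("upload.mp4"))  # hypothetical uploaded file
```

A production system would batch the sampled frames through a GPU-backed model rather than checking them one at a time, but the sampling pattern is the same.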

Sentiment analysis identifies the tone of the text and can categorize it as anger, bullying, or sarcasm, for example, then label it as positive, negative, or neutral. The named entity recognition technique finds and extracts names, locations, and companies. Companies use it to track how many times their brand or a competitor’s brand is mentioned, or how many people from a given city or state are posting reviews. More advanced techniques can rely on built-in databases to make predictions about whether the text is appropriate, or is fake news or a scam.  
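
A brief sketch of how named entity recognition and a simple sentiment label could be combined to track brand mentions is shown below, using spaCy's small English model (which must be downloaded separately). The sentiment lexicon, example posts, and brand names are illustrative, and which entities are actually detected depends on the model.

```python
# Sketch of brand-mention tracking with spaCy NER plus a toy lexicon-based
# sentiment label. Requires: python -m spacy download en_core_web_sm
# The lexicon, posts, and brand names are illustrative only.

import re
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

POSITIVE = {"love", "great"}
NEGATIVE = {"hate", "awful", "scam"}

def sentiment(text: str) -> str:
    words = set(re.findall(r"[a-z]+", text.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

posts = [
    "I love the new Acme phone, great battery life",
    "Acme support was awful, switching to Globex",
]

brand_mentions = Counter()
for post in posts:
    for ent in nlp(post).ents:
        if ent.label_ == "ORG":          # companies / brands
            brand_mentions[ent.text] += 1
    print(sentiment(post), "->", post)

print(brand_mentions)
```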

There is little doubt that AI is needed for online content moderation to have a chance of succeeding. “The reality is, there is simply too much UGC for human moderators to keep up with, and companies are faced with the challenge of effectively supporting them,” the Clarifai post states. 

Limitations of Automated Content Management Tools  

The limitations of automated content moderation tools include accuracy and reliability when the content is extremist or hate speech, due to nuanced variations in speech related to different groups and regions, according to a recent account from New America, a research and policy institute based in Washington, DC. Developing comprehensive datasets for these categories of content was called “challenging” and developing a tool that can be reliably applied across different groups and regions was described as “extremely difficult.”  

In addition, the definitions of which types of speech fall into inappropriate categories are not clear.   

Moreover, “Because human speech is not objective and the process of content moderation is inherently subjective, these tools are limited in that they are unable to comprehend the nuances and contextual variations present in human speech,” according to the post. 

In another example, an image recognition tool could identify an instance of nudity, such as a breast, in a piece of content. However, it is not likely that the tool could determine whether the post depicts pornography or perhaps breastfeeding, which is permitted on many platforms.  

Read the source articles and information from Transparency Market Research, in The New York Times, in a blog post on the website of Appen, a blog post on the website of Clarifai, and an account from New America. 
