How to catch inappropriate content in the era of big models? EU bill requires AI companies to ensure users' right to know

Over the past 10 years, big tech companies have become very good at many technologies: language, prediction, personalization, archiving, text parsing, and data processing. But they are still terrible at catching, flagging, and removing harmful content. One need only look back at the election and vaccine conspiracy theories that have spread in the United States over the past two years to understand the real-world harm they cause.

This difference raises some questions. Why aren’t tech companies improving on content moderation? Can they be forced to do this? Will new advances in artificial intelligence improve our ability to catch bad information?

Most often, when tech companies are asked by the U.S. Congress to explain their role in spreading hate and misinformation, they tend to blame their failures on the complexity of language itself. Executives say that understanding and preventing hate speech in context, across many different languages and cultures, is a difficult task.

One of Mark Zuckerberg’s favorite sayings is that technology companies should not be responsible for solving all the world’s political problems.

(Image source: STEPHANIE ARNETT/MITTR | GETTY IMAGES)

Most companies currently use a combination of technology and human content moderators, and the moderators' work is undervalued, as reflected in their meager pay.

For example, AI is currently responsible for 97% of all content removed on Facebook.

However, AI is not good at interpreting nuance and context, so it is unlikely to fully replace human content moderators, even though humans are not always good at interpreting those things either, said Renée DiResta, research manager at the Stanford Internet Observatory.

Because automated content moderation systems are typically trained on English-language data, they also struggle with content in other languages and cultural contexts.

Hany Farid, a professor at the School of Information at the University of California, Berkeley, offers a blunter explanation: content moderation has not kept up with the risks because it is not in tech companies' financial interest. "It's all about greed. Stop pretending it's not about money," Farid said.

Due to the lack of federal regulation, it is difficult for victims of online abuse to hold platforms financially responsible.

Content moderation is a never-ending war between tech companies and bad actors. When tech companies roll out content moderation rules, bad actors often evade detection with emojis or intentional misspellings. The companies then try to close the loopholes, people find new ones, and the cycle continues.
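To make that cat-and-mouse game concrete, here is a minimal sketch of the kind of text normalization a moderation pipeline might run before matching a blocklist. The leetspeak map, zero-width character list, and blocklist term are illustrative assumptions, not any platform's actual rules.

```python
import unicodedata

# Illustrative obfuscations (assumed, not exhaustive): leetspeak digits and
# symbols that bad actors substitute for letters to dodge keyword filters.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})
# Zero-width characters used to invisibly split a banned word.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}

BLOCKLIST = {"badword"}  # hypothetical term a platform wants to catch

def normalize(text: str) -> str:
    # Fold compatibility characters (fullwidth letters, ligatures) to plain forms.
    text = unicodedata.normalize("NFKC", text)
    # Strip zero-width characters.
    text = "".join(ch for ch in text if ch not in ZERO_WIDTH)
    # Lowercase and undo simple leetspeak substitutions.
    return text.lower().translate(LEET_MAP)

def is_flagged(text: str) -> bool:
    cleaned = normalize(text)
    return any(term in cleaned for term in BLOCKLIST)

print(is_flagged("B4d\u200bW0rd"))    # True: leetspeak and zero-width space undone
print(is_flagged("a harmless post"))  # False
```

Each new mapping closes one loophole; homoglyphs, spacing tricks, and coded language immediately open others, which is exactly the cycle described above.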


Now come the large language models...

The situation is already difficult, and with the emergence of generative artificial intelligence and large language models such as ChatGPT, it may get worse. Generative technology has its own problems, such as a tendency to confidently make things up and present them as fact, but one thing is clear: AI is getting much better at language.

Both DiResta and Farid are cautious, saying it is too early to judge how things will develop. Although many large models like GPT-4 and Bard have built-in content moderation filters, they can still produce toxic output, such as hate speech or instructions on how to build a bomb.

Generative AI enables bad actors to conduct disinformation campaigns at greater scale and speed. This is a dire situation given that methods for identifying and labeling AI-generated content are woefully inadequate.

On the other hand, the latest large language models interpret text far better than previous artificial intelligence systems. In theory, they could be used to support automated content moderation.
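In practice, "using an LLM for moderation" could be as simple as prompting a general-purpose model to act as a classifier. The sketch below assumes the official OpenAI Python SDK and an API key in the environment; the model name, label scheme, and prompt are placeholders for illustration, not a production moderation design.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODERATION_PROMPT = (
    "You are a content moderation assistant. Classify the user's post as "
    "ALLOW, REVIEW, or REMOVE, then give a one-sentence reason. "
    "Consider context, sarcasm, and reclaimed slurs before deciding."
)

def classify_post(post: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model would do
        messages=[
            {"role": "system", "content": MODERATION_PROMPT},
            {"role": "user", "content": post},
        ],
        temperature=0,  # keep labels as deterministic as possible for auditability
    )
    return response.choices[0].message.content

print(classify_post("Example post to classify."))
```

The appeal is that the model can weigh context rather than just matching keywords; the open question, as the experts quoted here note, is whether that judgment is reliable enough to deploy.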

Achieving that would require tech companies to invest in adapting large language models to this specific goal. While companies like Microsoft have begun looking into the matter, there has yet to be significant progress.

Farid said: "While we have seen many technological advances, I am skeptical about any improvements in content moderation."

While large language models are advancing rapidly, they still face challenges in contextual understanding, which may prevent them from understanding subtle differences between posts and images as accurately as human moderators. Cross-cultural scalability and specificity also pose problems. "Do you deploy a model for a specific type of niche? Do you do it by country? Do you do it by community? It's not a one-size-fits-all question," DiResta said.


New tools based on new technologies

Whether generative AI ultimately harms or helps the online information landscape may depend largely on whether tech companies can come up with good, widely adopted tools that tell us whether content was generated by AI.

DiResta told me that detecting synthetic media is likely to be the technical challenge that must be prioritized, precisely because it is so hard. One approach is digital watermarking, which embeds a piece of code into content as a permanent mark that it was produced by artificial intelligence. Automated tools for detecting AI-generated or manipulated posts are attractive because, unlike watermarks, they do not require the creator of the content to actively tag it. However, current tools that try to identify machine-generated content do not yet perform well enough.
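As a toy illustration of the watermarking idea, the sketch below hides a provenance tag in zero-width characters and later reads it back. Real text watermarks for LLMs instead bias token choices statistically and are designed to survive editing; this fragile demo only makes the embed-and-detect loop concrete, and every constant in it is an assumption.

```python
from typing import Optional

# Toy text watermark: hide a bit pattern in zero-width characters.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner
TAG = "AI"  # assumed provenance label

def embed(text: str, tag: str = TAG) -> str:
    bits = "".join(f"{byte:08b}" for byte in tag.encode())
    mark = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
    return text + mark  # invisible to readers, present in the raw string

def detect(text: str) -> Optional[str]:
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    if not bits:
        return None  # no watermark found
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode(errors="replace")

marked = embed("A paragraph produced by a language model.")
print(detect(marked))                                   # "AI"
print(detect("An ordinary human-written paragraph."))   # None
```

Stripping the invisible characters destroys this mark entirely, which underlines the article's point: detection tools that work without cooperation from the generator are attractive but remain weak.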

Some companies have even proposed cryptographic signatures that use mathematics to securely record information such as how a piece of content was generated, but like watermarking, this would rely on voluntary disclosure.
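Here is a minimal sketch of that cryptographic-signature idea, using Ed25519 from the widely used `cryptography` package. The metadata fields are invented for illustration; real provenance schemes define much richer, standardized manifests.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The generator signs its content plus provenance metadata (fields are illustrative).
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

content = b"Text produced by some generative model."
manifest = json.dumps(
    {"generator": "example-model-v1", "created": "2023-05-15"},  # hypothetical fields
    sort_keys=True,
).encode()

signature = signing_key.sign(content + manifest)

# Anyone holding the public key can later verify the provenance claim.
try:
    verify_key.verify(signature, content + manifest)
    print("Provenance verified")
except InvalidSignature:
    print("Tampered or unsigned")
```

The limitation noted above applies here too: nothing compels a generator to sign its output in the first place.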

The latest version of the European Union's proposed Artificial Intelligence Act (AI Act), released just last week, requires companies that use generative artificial intelligence to notify users when content is indeed machine-generated. We will likely hear more about these emerging tools in the coming months as demand for transparency around AI-generated content grows.

Support: Ren

Original text:

https://www.technologyreview.com/2023/05/15/1073019/catching-bad-content-in-the-age-of-ai/
