
Six pitfalls to avoid with large language models

王林 · 2023-05-12 13:01:35

From security and privacy concerns to misinformation and bias, large language models bring risks and rewards.

There have been incredible advances in artificial intelligence (AI) recently, driven largely by progress in developing large language models. These models are at the core of text and code generation tools such as ChatGPT, Bard, and GitHub’s Copilot.

These models are being adopted across organizations of every kind. But how they are created and used, and how they can be misused, remains a source of concern. Some countries have decided to take a drastic approach and temporarily ban specific large language models until appropriate regulations are in place.

Here’s a look at some of the real-world adverse effects of tools based on large language models, as well as some strategies for mitigating these effects.

1. Malicious content

Large language models can improve productivity in many ways. Their ability to interpret people's requests and solve fairly complex problems means people can leave mundane, time-consuming tasks to their favorite chatbot and simply check the results.

Of course, with great power comes great responsibility. While large language models can create useful material and speed up software development, they can also quickly access harmful information, speed up bad actors' workflows, and even generate malicious content such as phishing emails and malware. When the barrier to entry is as low as writing a well-constructed chatbot prompt, the term "script kiddie" takes on a whole new meaning.

While there are ways to restrict access to objectively dangerous content, they are not always feasible or effective. For hosted services such as chatbots, content filtering can at least slow down inexperienced users. Implementing strong content filters should be considered necessary, but they are not foolproof.
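As a rough illustration, a hosted service can screen requests against a deny-list before they ever reach the model. The following is a minimal sketch in Python; the patterns and the call_llm placeholder are assumptions for illustration, not a production filter.

import re

# Hypothetical deny-list of request categories the service refuses outright.
# A real filter would be far broader and usually backed by a moderation model.
BLOCKED_PATTERNS = [
    r"\bwrite\s+(a\s+)?phishing\b",
    r"\bransomware\b",
    r"\bkeylogger\b",
]

def call_llm(prompt: str) -> str:
    # Placeholder for the provider's real model API.
    return "(model response)"

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt matches any deny-listed pattern."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def handle_request(prompt: str) -> str:
    if is_blocked(prompt):
        # Refuse and log instead of forwarding the request to the model.
        return "This request cannot be processed."
    return call_llm(prompt)

Keyword matching like this is trivially bypassed by rephrasing, which is exactly why such filters are necessary but not omnipotent; providers layer them with moderation models and human review.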

2. Prompt injection

Specially crafted prompts can force large language models to ignore content filters and produce disallowed output. This problem affects all LLMs, but it will be amplified as these models are connected to the outside world, for example as plugins for ChatGPT. Such plugins could allow a chatbot to "eval" user-generated code, leading to arbitrary code execution. From a security perspective, equipping chatbots with this functionality is highly problematic.
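To see why letting a chatbot "eval" generated code is so dangerous, consider the minimal, hypothetical sketch below. The call_llm function is a stand-in for any model API, and the unsafe path is shown only to contrast it with a gated alternative.

import subprocess

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; a prompt-injected page or message
    # could cause the returned code to do something entirely different.
    return "print('hello from generated code')"

def run_generated_code_unsafely(task: str) -> None:
    code = call_llm(f"Write Python code to {task}")
    exec(code)  # Dangerous: injected instructions become arbitrary code execution.

def run_generated_code_gated(task: str) -> None:
    code = call_llm(f"Write Python code to {task}")
    print("Proposed code:\n" + code)
    if input("Run in a sandboxed process? [y/N] ").strip().lower() == "y":
        # A separate, time-limited process is only a partial mitigation;
        # real sandboxes also restrict filesystem and network access.
        subprocess.run(["python3", "-c", code], timeout=5)

Even the gated version only reduces the blast radius; the safer default is simply not to execute generated code automatically at all.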

To help mitigate these risks, it's important to understand what your LLM-based solution does and how it interacts with external endpoints. Determine whether it is connected to an API, running a social media account, or interacting with customers without supervision, and evaluate the threat model accordingly.

While prompt injection may have seemed inconsequential in the past, these attacks can now have very serious consequences as models begin executing generated code, integrating with external APIs, and even reading browser tabs.

3. Privacy violations and copyright infringement

Training large language models requires a large amount of data, and some models have more than 500 billion parameters. At this scale, understanding provenance, authorship, and copyright status is a difficult, if not impossible, task. Unchecked training sets can lead to models leaking private data, falsely attributing quotes, or plagiarizing copyrighted content.

Data privacy laws regarding the use of large language models are also very vague. As we’ve learned from social media, if something is free, chances are the users are the product. It’s worth remembering that when we ask a chatbot to find bugs in our code or write sensitive documents, we are sending that data to third parties who may ultimately use it for model training, advertising, or competitive advantage. Data leaked through AI prompts can be particularly damaging in business settings.

As services based on large language models integrate with workplace productivity tools like Slack and Teams, it is critical to read the provider’s privacy policy carefully, understand how AI prompts are used, and regulate the use of large language models in the workplace accordingly. When it comes to copyright protection, we need to regulate access to and use of data through opt-ins or special licenses, without hampering the open and largely free Internet we have today.
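One practical workplace control is to strip obviously sensitive material from prompts before they leave the organization. The sketch below is only illustrative: the patterns are far from complete, and send_to_llm_provider is a hypothetical placeholder for whichever third-party API the privacy policy covers.

import re

# Illustrative patterns only; real redaction needs much broader coverage
# (names, credentials, customer records, source-code secrets, and so on).
REDACTIONS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "API_KEY": r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b",
    "IPV4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTIONS.items():
        text = re.sub(pattern, f"[{label} REDACTED]", text)
    return text

def send_to_llm_provider(prompt: str) -> str:
    # Placeholder for the external API call governed by the provider's policy.
    return "(model response)"

def ask_model(prompt: str) -> str:
    return send_to_llm_provider(redact(prompt))

Redaction reduces exposure, but it does not remove the need to read the provider’s data retention and training clauses.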

4. Misinformation

While large language models can convincingly pretend to be smart, they don’t really “understand” what they produce. Instead, their currency is probabilistic relationships between words. They cannot distinguish fact from fiction: some output may appear perfectly believable yet turn out to be a confidently phrased falsehood. One example is ChatGPT fabricating citations and even entire papers, as one Twitter user recently discovered firsthand.

The output of LLM tools should always be taken with a grain of salt. These tools can prove extremely useful across a wide range of tasks, but humans must stay involved in validating the accuracy, benefit, and overall plausibility of their responses. Otherwise, we are setting ourselves up for disappointment.
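A simple automated sanity check can support, though never replace, that human review. The sketch below assumes the model has been asked to list the URLs it cites and merely verifies that each one resolves at all; a live link still says nothing about whether the source supports the claim.

import urllib.request

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the cited URL answers an HTTP request at all."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (OSError, ValueError):
        # Network errors, HTTP errors, timeouts, or malformed URLs.
        return False

def flag_suspicious_citations(cited_urls: list[str]) -> list[str]:
    # Anything that does not resolve goes straight to a human reviewer.
    return [url for url in cited_urls if not url_resolves(url)]

A dead link is a strong hint that a citation was fabricated, but only a human reading the source can confirm that it actually says what the model claims.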

5. Harmful Advice

When chatting online, it is increasingly difficult to tell whether you are talking to a human or a machine, and some entities may try to take advantage of this. For example, earlier this year a mental health tech company admitted that some users seeking online counseling had unknowingly interacted with GPT-3-based bots instead of human volunteers. This raises ethical concerns about the use of large language models in mental health care and any other setting that relies on interpreting human emotions.

Currently, there is little regulatory oversight to ensure that companies cannot leverage AI in this way without the end-user’s explicit consent. Additionally, adversaries can leverage convincing AI bots to conduct espionage, fraud, and other illegal activities.

Artificial intelligence has no emotions, but its responses can still hurt people and even lead to tragic consequences. It is dangerous to assume that an AI solution can interpret human emotional needs and respond to them responsibly and safely.

The use of large language models in healthcare and other sensitive applications should be strictly regulated to prevent any risk of harm to users. Providers of LLM-based services should always tell users how much of the service is delivered by AI, and interacting with a bot should always be an opt-in choice, never the default.
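In practice, “opt-in, never the default” can be enforced at the routing layer of the service itself. The sketch below is hypothetical: the session fields and the two helper functions are assumptions meant only to show where the consent check and the disclosure belong.

from dataclasses import dataclass

DISCLOSURE = ("You are chatting with an AI assistant, not a human. "
              "Reply HUMAN at any time to be transferred to a person.")

@dataclass
class Session:
    user_id: str
    ai_opt_in: bool = False   # default: no bot involvement without explicit consent
    disclosed: bool = False

def enqueue_for_human(session: Session, message: str) -> str:
    # Placeholder: hand the conversation to a human agent queue.
    return "A human will be with you shortly."

def bot_reply(message: str) -> str:
    # Placeholder for the model-backed reply path.
    return "(AI-generated response)"

def route_message(session: Session, message: str) -> str:
    if not session.ai_opt_in or message.strip().upper() == "HUMAN":
        return enqueue_for_human(session, message)
    if not session.disclosed:
        session.disclosed = True
        return DISCLOSURE
    return bot_reply(message)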

6. Bias

AI solutions are only as good as the data they are trained on, and that data often reflects our biases around political affiliation, race, gender, and other demographics. Bias harms the affected groups when models make unfair decisions, and it can be both subtle and difficult to address. Models trained on unfiltered internet data will always reflect human biases, and models that continuously learn from user interactions are also susceptible to deliberate manipulation.

To reduce the risk of discrimination, large language model service providers must carefully evaluate their training data sets to avoid any imbalances that could lead to negative consequences. Machine learning models should also be checked regularly to ensure predictions remain fair and accurate.
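Such checks can be as simple as comparing outcome rates across groups on a held-out evaluation set. The sketch below computes a demographic-parity gap for a hypothetical binary classifier; the group labels, data, and threshold are made up for illustration, and real audits rely on richer metrics and domain review.

from collections import defaultdict

def positive_rate_by_group(records):
    """records: iterable of (group_label, predicted_positive) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        positives[group] += int(positive)
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(records) -> float:
    rates = positive_rate_by_group(records)
    return max(rates.values()) - min(rates.values())

# Toy evaluation data: (group, model predicted a positive outcome)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
if demographic_parity_gap(sample) > 0.2:  # illustrative threshold
    print("Warning: positive rates differ substantially across groups; review the model.")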

Large language models are redefining the way we interact with software and bringing countless improvements to our workflows. However, given the current lack of meaningful regulation of artificial intelligence and of security practices around machine learning models, a widespread, rushed rollout of large language models is likely to suffer major setbacks. That is why this valuable technology must be regulated and protected quickly.


Statement: This article is reproduced from 51cto.com.