
Ten thoughts on large model application design

王林
2023-12-04 17:17:21

Technology is not omnipotent, but without it little is possible, and the same may be true of large models. Application design based on large models must stay focused on the problem being solved. In natural language processing, a large model by itself merely unifies the various NLP tasks, to some extent, into a single sequence-to-sequence model. When we use large models to solve concrete problems in production and daily life, product and technical design remain indispensable.

So, if large models are reshaping the future of software engineering, are there basic principles we should follow?

1. Model first, continuous iteration

If the model can complete the task, there is no need to write code; the model will keep evolving, but the code will not.

In today's era, the value of models is increasingly prominent. Unlike traditional programming, current development thinking leans toward "model first": when we face a problem or task, we first consider whether an existing model can solve it, rather than immediately starting to write code. Code is fixed, while models have enormous room to grow. Over time, models show strong learning and adaptation capabilities and can optimize and improve themselves through continuous iteration. Our first task, therefore, is to tap the potential of the model and make it our primary tool for solving problems.

The goal of the whole system is to leverage the LLM's ability to plan and understand intent in order to build efficient applications. Along the way, we may be tempted to fall back into the imperative mindset and write code for every detail of the program. We must resist this temptation: to the extent that we can get the model to do something reliably today, it will only become better and more robust as the model evolves.
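
As a minimal sketch of the "model first" mindset (the llm() helper below is a hypothetical stand-in for whatever completion API you use, and the ticket-routing task is invented for illustration): instead of hand-writing a rule-based classifier, describe the task in a prompt and let the model do the work, so behaviour improves as the underlying model improves.

```python
# Sketch: "model first" -- describe the task instead of coding the rules.
# `llm` is a hypothetical helper standing in for any prompt -> completion API.
def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def classify_ticket(ticket_text: str) -> str:
    """Route a support ticket without a hand-written rule engine."""
    prompt = (
        "Classify the following support ticket as exactly one of: "
        "billing, bug, feature_request, other.\n\n"
        f"Ticket: {ticket_text}\n"
        "Answer with the single category name only."
    )
    return llm(prompt).strip().lower()
```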

2. Trade off accuracy and use interaction for disambiguation

Trade accuracy for leverage, and use interaction to resolve ambiguity. The right mindset when coding with an LLM is not "let's see what we can make the dancing bear do" but to extract as much leverage from the system as possible. For example, it is possible to build very general patterns like "build a report from a database" or "teach a year's worth of a subject" that can be parameterized with plain-text prompts, easily producing very valuable and differentiated results.

In pursuing high accuracy, we need to weigh it against other factors. To strike this balance, we can use interaction to eliminate possible ambiguities and misunderstandings. This strategy not only improves accuracy but also increases flexibility and efficiency.

When coding with an LLM, the key mindset is to think about how to get the most leverage from the system, not just "try it and see what it can do." We should not be satisfied with implementing simple functions; we should explore the system's potential deeply and let it create greater value for us.

In practice, we can build such general patterns. For example, a "generate a report from a database" pattern is highly adaptable: its parameters can be adjusted through simple text prompts to meet varied needs. Similarly, a "teach a year-long course" pattern can integrate rich educational resources and be adjusted interactively to meet personalized teaching needs.

Applying these general patterns not only improves efficiency but also makes it easy to produce valuable, distinctive results. Trading off accuracy and using interaction to disambiguate is an important way of thinking in large model application design.
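
Below is a hedged sketch of such a general, parameterizable pattern; the report template, its parameters, and the llm() helper are all assumptions made for illustration, not a prescribed interface.

```python
# Sketch of a general, parameterizable prompt pattern ("build a report from a database").
# The schema string and parameters are illustrative; `llm` is a hypothetical helper.
def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

REPORT_PATTERN = (
    "You are a data analyst. Given this database schema:\n{schema}\n\n"
    "Write a short report on: {topic}\n"
    "Audience: {audience}\nTime range: {time_range}\n"
    "If anything in the request is ambiguous, ask one clarifying question "
    "instead of guessing."
)

def build_report(schema: str, topic: str, audience: str, time_range: str) -> str:
    # One general pattern, specialized entirely through plain-text parameters.
    prompt = REPORT_PATTERN.format(
        schema=schema, topic=topic, audience=audience, time_range=time_range
    )
    return llm(prompt)
```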

3. Code is for syntax and procedure; models are for semantics and intent

In modern programming, the division of labor between code and models is becoming increasingly clear. Simply put, code is responsible for syntax and procedure, while models focus on generating and interpreting semantics and intent. In practice this division takes many forms, but the core idea is the same: code executes specific instructions and processes, while models reason about, generate, and understand the deeper meaning and goals of language.

Fundamentally, models are good at reasoning about the meaning and purpose of language, but they often underperform code when asked to carry out specific computations and procedures. For example, a capable model may find it easy to write code that solves a Sudoku puzzle, yet find it comparatively hard to solve the Sudoku itself.

Code and models each have their own strengths; the key is to choose the tool best suited to the specific problem. The boundary between syntax and semantics is a major challenge in large model application design, and we need a deeper understanding of the respective strengths and weaknesses of code and models in order to use them effectively.
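
One way to draw this boundary is sketched below, assuming a generic llm() helper and an invented date-difference task: the model only interprets the user's intent into structured arguments, while plain code performs the exact computation.

```python
# Sketch: the model handles intent, code handles the exact computation.
import json
from datetime import date

def llm(prompt: str) -> str:  # hypothetical prompt -> completion helper
    raise NotImplementedError("wire this to your model provider")

def days_between(question: str) -> int:
    """e.g. 'How many days from 2024-01-01 until the project deadline on 2024-03-20?'"""
    # 1) Model: extract the intent as structured data.
    extraction = llm(
        "Extract the two dates mentioned below as JSON "
        '{"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}. Reply with JSON only.\n'
        + question
    )
    args = json.loads(extraction)
    # 2) Code: do the arithmetic exactly, where a model might slip.
    start = date.fromisoformat(args["start"])
    end = date.fromisoformat(args["end"])
    return (end - start).days
```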

4. Avoid brittleness

When building any system, one fact cannot be ignored: the overall strength of a system is often determined by its weakest part. This applies not only to traditional software systems but also to applications built on large models.

In pursuit of flexibility and efficiency, hardcoding should be avoided as much as possible. Hardcoding means writing specific values or logic directly into the code without considering future changes or extensions. While convenient in the short term, in the long run it leads to code that is rigid and difficult to maintain. When writing code and algorithms, we should therefore emphasize reasoning and flexibility.

When designing prompts and interactions, we should include enough information and logic that the system can decide and reason autonomously rather than simply execute predefined commands. This not only reduces hardcoding but also makes better use of the LLM's capabilities, making the system smarter and more flexible.
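
The sketch below illustrates the difference; the refund policy, thresholds, and llm() helper are invented for illustration. The hardcoded version enumerates fixed rules that will drift out of date, while the flexible version states the policy in the prompt and lets the model reason about cases the rules never anticipated.

```python
# Sketch: replace brittle hardcoded branching with a policy stated in the prompt.
def llm(prompt: str) -> str:  # hypothetical helper
    raise NotImplementedError("wire this to your model provider")

# Brittle: every new refund situation needs another hand-written branch.
def approve_refund_hardcoded(amount: float, days_since_purchase: int) -> bool:
    return amount < 50 and days_since_purchase <= 30

# More flexible: give the model the policy and the case, let it reason.
REFUND_POLICY = (
    "Refunds are normally allowed within 30 days. Loyal customers or defective "
    "products may justify exceptions. Answer strictly 'approve' or 'escalate'."
)

def approve_refund(case_description: str) -> bool:
    answer = llm(f"{REFUND_POLICY}\n\nCase: {case_description}\nDecision:")
    return answer.strip().lower().startswith("approve")
```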

5. Data quality comes first: LLM applications depend on high-quality data

Large models do demonstrate extraordinary capabilities, like "well-educated" individuals, but in practical applications they still lack context and initiative.

Simply put, if you ask these models a simple or open-ended question, they will give you a simple or generic answer. Such answers may lack depth or detail and may not satisfy every need. To get more detailed, in-depth answers, the way you ask and your questioning strategy need to be smarter.

This is actually a manifestation of the "Garbage in, Garbage out" principle in the era of artificial intelligence. No matter how advanced the technology becomes, the quality of the incoming data remains critical. If the input data is ambiguous, inaccurate, or incomplete, the answers the model outputs are likely to be as well.

To ensure that an LLM gives high-quality, in-depth answers, the input data must be accurate, detailed, and rich in context. Data quality remains paramount: only by attending to and ensuring the quality of the data can we expect truly valuable, in-depth information from these models.
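
As a small, invented illustration of "garbage in, garbage out" at the prompt level: the vague question below invites a generic answer, while the version that carries accurate, detailed context gives the model something concrete to reason about.

```python
# Sketch: the same question asked with poor vs. rich input context.
vague_prompt = "Why is my service slow?"

contextual_prompt = (
    "Our Python API service latency rose from ~80 ms to ~450 ms after "
    "yesterday's deploy. The deploy added a per-request call to an external "
    "geolocation service with no caching and a 300 ms timeout. Traffic is "
    "unchanged. List the most likely causes of the latency increase, in order "
    "of probability."
)
```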

6. Treat uncertainty as an exception

Whenever the model encounters an uncertain situation, we cannot simply ignore it or return a vague answer. Instead, we should rely on interaction with the user to clarify the uncertainty.

In programming terms, when uncertainty arises within a set of nested prompts, for example when a prompt's result has multiple possible interpretations, we should adopt a strategy similar to throwing an exception: propagate the uncertainty up the stack until it reaches a level that can interact with the user or otherwise resolve it.

With this design strategy, a program can respond appropriately when faced with uncertainty and thus provide more accurate and reliable results.
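
A minimal sketch of this strategy follows; the AmbiguousResultError class, the "AMBIGUOUS" reply convention, and the llm() and ask_user() helpers are all invented for illustration. When a nested prompt's result is ambiguous, the code raises instead of guessing, and the level of the stack that owns the user interaction catches the exception and asks a clarifying question.

```python
# Sketch: propagate ambiguity up the stack like an exception and resolve it at
# the level that can talk to the user. `llm`, `ask_user`, and the AMBIGUOUS
# convention are all hypothetical.
import json

class AmbiguousResultError(Exception):
    def __init__(self, question: str):
        super().__init__(question)
        self.question = question

def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def parse_meeting_request(text: str) -> dict:
    raw = llm(
        "Extract date, time and attendees as JSON from the text below. "
        "If any field is ambiguous, reply only with 'AMBIGUOUS: <question>'.\n"
        + text
    )
    if raw.startswith("AMBIGUOUS:"):
        # Don't guess here; surface the uncertainty to a layer that can resolve it.
        raise AmbiguousResultError(raw.removeprefix("AMBIGUOUS:").strip())
    return json.loads(raw)

def schedule_meeting(text: str, ask_user) -> dict:
    """Top level: the only place that is allowed to interact with the user."""
    try:
        return parse_meeting_request(text)
    except AmbiguousResultError as exc:
        clarification = ask_user(exc.question)  # e.g. input() in a CLI
        return parse_meeting_request(f"{text}\n(Clarification: {clarification})")
```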

7. Text as a universal protocol

Text has become a universal protocol, largely because LLMs are so good at parsing natural language, intent, and semantics. Text is therefore the preferred format for passing instructions between prompts, modules, and LLM-based services.

Although natural language can be slightly imprecise in some scenarios, compared with structured languages such as XML it has the advantage of being concise, intuitive, and easy for humans to understand. For tasks that demand a high degree of precision and structure, structured formats can still be used sparingly as a supplement. But in most scenarios, natural language conveys instructions and intent quite well.

It is worth noting that as LLM-based technologies spread and advance, text, as a "future-proof" natural interface, will further promote interoperability and mutual understanding between different systems and different prompts. If two completely different LLM services can understand and respond to the same text instructions, collaboration between them becomes as natural and smooth as communication between people.
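
A small sketch of text as the interchange format between two independent LLM-backed modules (the summarize/translate split and the llm() helper are invented for illustration): the output of one step is plain text that can be handed, unchanged, to any other prompt or service.

```python
# Sketch: plain text as the "wire format" between two LLM-backed modules.
def llm(prompt: str) -> str:  # hypothetical helper
    raise NotImplementedError("wire this to your model provider")

def summarize(document: str) -> str:
    return llm(f"Summarize the following document in five sentences:\n{document}")

def translate(text: str, language: str) -> str:
    return llm(f"Translate the following text into {language}:\n{text}")

# The two modules know nothing about each other; plain text is the contract.
def summarize_for_locale(document: str, language: str) -> str:
    return translate(summarize(document), language)
```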


8. Decompose complex problems

A complex problem is a challenge not only for people but also for large models. In practice, if we embed a prompt for a complex problem directly into the program, we may run into trouble, because what we really need is only the result of the reasoning.

An effective way to address this is to use a "meta" prompt: one that not only poses the question but also supplies a detailed answer, and then asks the model to extract the key information from it. This works well because it turns a complex cognitive task into a relatively simple one. Imagine giving someone the task "read this article and find the answer": even without expertise in the field, they are very likely to succeed, because the power of natural language is enormous.

When designing applications based on large models, keep in mind that what ordinary people find difficult is often equally difficult for the model. In that situation, the best strategy is to break the complex problem or task into simpler steps, which lowers the processing difficulty and improves the stability and accuracy of the answers.
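
A hedged sketch of this decomposition follows; the prompts and the two-step split are invented for illustration. Instead of asking the model one monolithic question, first have it produce a detailed intermediate answer, then ask it to read that answer and extract the key information, turning one hard cognitive task into two easier ones.

```python
# Sketch: decompose a complex question into "write a detailed answer" and then
# "read that answer and extract the key point". `llm` is a hypothetical helper.
def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def answer_complex_question(question: str) -> str:
    # Step 1: an easier task, produce a detailed, reasoned write-up.
    detailed = llm(
        f"Write a detailed, step-by-step analysis of this question:\n{question}"
    )
    # Step 2: another easy task, read the write-up and pull out the answer,
    # much like asking a person to "read this article and find the answer".
    return llm(
        "Read the analysis below and state the final answer in one sentence.\n\n"
        f"{detailed}"
    )
```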

9. Wherever there is control, there is a model

A model is more than just a tool; it can also be a weapon against our own mistakes. We often imagine the operation of an LLM (large language model) as the inner workings of a "brain". However, it is important to recognize that despite some similarities between models and human thinking, there are many meaningful differences between the two.

One characteristic is particularly important: models often lack persistent memory within short interactions, meaning a model is unlikely to remember every detail from one minute of a conversation to the next. This characteristic gives us an opening for control.

This control is not limited to code review. The model can also act as a security monitor to help ensure code runs safely; as a component of the testing strategy, helping us develop more effective test plans; and even as a content filter, helping us produce high-quality content.

As long as we control and guide the model appropriately, it can become a powerful "assistant" in our work. The basis of this control is a deep understanding and command of the model's internal mechanisms and characteristics.
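
The sketch below shows the model used as a control point rather than only as a generator; the review and filtering prompts and the llm() helper are illustrative assumptions. The same kind of call can review code for unsafe patterns or filter generated content before it ships.

```python
# Sketch: using the model as a checker/guardrail over other outputs.
def llm(prompt: str) -> str:  # hypothetical helper
    raise NotImplementedError("wire this to your model provider")

def review_code(diff: str) -> str:
    """Model as reviewer / security monitor for generated or human-written code."""
    return llm(
        "Review this diff for bugs and unsafe patterns (injection, secrets, "
        "unchecked input). List issues or reply 'LGTM'.\n\n" + diff
    )

def content_is_acceptable(text: str) -> bool:
    """Model as a content filter in front of whatever was generated."""
    verdict = llm(
        "Does the following text contain harmful or low-quality content? "
        "Answer YES or NO only.\n\n" + text
    )
    return verdict.strip().upper().startswith("NO")
```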

10. Identify the boundaries; don't assume large models can do everything

The capabilities of large language models are genuinely impressive: they can process and parse large amounts of text and generate logical, coherent output, even surpassing human performance on some tasks. However, that does not mean we should blindly worship these models and assume they can do anything.

In reality, large models still have many limitations. Although they can process large amounts of text data, they do not truly understand the nuances of language and context the way humans do. Their performance is also constrained by the choice of training data and algorithms, and biases and errors can occur.

We should therefore maintain a rational and cautious attitude when using large models: while appreciating the convenience and progress they bring, we must also remain wary of their limitations and potential risks. Only in this way can these models be put to better use and large-model-based applications develop healthily.

