
What ChatGPT and generative AI mean in digital transformation

PHPz | 2023-05-15


OpenAI, the company behind ChatGPT, features a case study on its website conducted with Morgan Stanley, titled "Morgan Stanley Wealth Management deploys GPT-4 to organize its vast knowledge base." The case study quotes Jeff McMillan, head of analytics, data and innovation at Morgan Stanley: "The model will power an internal-facing chatbot that performs a comprehensive search of wealth management content and effectively unlocks the cumulative knowledge of Morgan Stanley Wealth Management."

McMillan goes further: "You essentially have the knowledge of the most knowledgeable person in wealth management, instantly. Think of it as having our chief investment strategist, chief global economist, global equities strategist, and every other analyst around the globe on call, every day. We believe this is a transformative capability for our company."

This is the ultimate goal of knowledge management: embodying the organization's knowledge and expertise in the systems, processes, and tools that interact with customers.

So has this goal really been achieved? Is generative AI the answer to knowledge access, retrieval, and application? Before declaring victory over information chaos, some fundamentals deserve consideration.

First, behind the perception that generative AI can overcome knowledge management challenges is the assumption that knowledge exists in an explicit, documented form. In most enterprises, however, knowledge is locked inside employees' heads, and where it is stored digitally, it is dispersed across silos of departments, technologies, and repositories. OpenAI notes on its website that Morgan Stanley publishes thousands of papers every year covering capital markets, asset classes, industry analysis, and global economic regions. This wealth of knowledge forms a unique internal content library that Morgan Stanley can process and parse with GPT-4 while keeping it under internal control. In other words, Morgan Stanley already had knowledge that could serve as the foundation for applying a large language model. If enterprise content and knowledge resources are inaccessible, of poor quality, or misaligned with the needs of customers and employees, ChatGPT will not have the specific knowledge needed to respond to those needs.

Second, generative AI creates content; it is not a retrieval mechanism. So how does it use the original knowledge base? This is where things get tricky. ChatGPT looks for patterns among terms, concepts, and their relationships so that it can predict which text should follow a prompt. The prompt is a signal, just as a search term is a signal. A search engine predicts which information to display based not only on the terms entered but also on other signals relevant to the query context, such as the searcher's industry or role. Context can be supplied to ChatGPT either as facts or documents included in the prompt, or programmatically, by pointing the model at specific information on which to base its response.
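For illustration, here is a minimal sketch of the first approach, supplying a document inside the prompt. It uses the pre-1.0 `openai` Python package (the interface current when this article was written); the document text and question are hypothetical.

```python
import os
import openai  # pip install "openai<1.0" for this interface

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical excerpt from an internal knowledge base; in a real system
# this would come from a document repository or search index.
context_document = (
    "Statements of Work (SOWs) must be approved by the regional practice "
    "lead before they are sent to a client."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Answer only from the provided context. If the answer "
                    "is not in the context, say you do not know."},
        {"role": "user",
         "content": f"Context:\n{context_document}\n\n"
                    "Question: Who has to approve an SOW?"},
    ],
)
print(response.choices[0].message["content"])
```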

The Large Language Model as a Thesaurus

A large language model is a mathematical representation of the terms, concepts, and relationships contained in a body of information. The power of large language models lies in their ability to discern user intent (what the user is looking for, however the request is phrased) and to predict the word patterns most likely to satisfy that intent. The model "understands" the request and predicts what should be returned. Search engines also make predictions from user queries, albeit through different mechanisms, and they can serve as the retrieval component in AI applications: content is retrieved with a semantic or neural search engine, and a large language model formats the response for the user.

A thesaurus maps non-preferred terms to preferred terms (for example, "SOW" and "Statement of Work" both map to "Proposal", the preferred term used to tag documents). One aspect of a large language model can be thought of as a thesaurus covering not just words but phrases and concepts. Users can ask the same question in many different ways. This intent classification is not new; it is the basis for chatbots that parse phrasing variations into specific actions, and language models provide the intent parsing and classification capability.
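As a minimal sketch of that idea (with the term entries purely illustrative), a classic thesaurus is just a mapping from variant terms to preferred ones:

```python
# Map non-preferred terms to the preferred term used to tag documents.
THESAURUS = {
    "sow": "Proposal",
    "statement of work": "Proposal",
}

def normalize_term(term: str) -> str:
    """Return the preferred form of a term, or the term unchanged if unmapped."""
    return THESAURUS.get(term.strip().lower(), term)

assert normalize_term("SOW") == "Proposal"
assert normalize_term("Statement of Work") == "Proposal"
# A large language model generalizes this idea from single terms to whole
# phrasings of the same intent.
```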

Large language models also understand the word patterns that should follow a prompt, which is what gives ChatGPT its conversational fluency. The key to making them useful to the enterprise is to tailor them to specific content or bodies of knowledge (as Morgan Stanley did in its deployment) and to incorporate terminology that is unique to the enterprise.

Many tutorials with example code illustrate how to use large language models with specific content, walking developers through the process of taking a model such as GPT-4 and pointing a chatbot at a particular body of knowledge and content.

Knowledge-Specific Bots for Enterprises

After reviewing these tutorials, here are some observations:

Customized, knowledge-specific chatbots can use a large language model to understand a user's request and then return results from a designated knowledge source. The tutorial developers note the need to "chunk" content into "semantically meaningful" sections: a component of content intended to answer a specific question must be complete and carry its own context. Knowledge does not usually exist in this state, so large documents and bodies of text must be broken into chunks, for example by dividing a user manual into chapters, sections, paragraphs, and sentences. In the world of technical documentation this is already standard practice: standards such as DITA (Darwin Information Typing Architecture) use a topic-based approach that is well suited to answering questions.
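As a sketch of what chunking can look like (the numbered-heading format is an assumption; production pipelines typically use DITA topics or token-aware splitters):

```python
import re

def chunk_manual(text: str) -> list[dict]:
    """Split a manual into section-level chunks, keeping each heading with
    its body so every chunk stays complete and contextual."""
    chunks = []
    # Assume headings look like "1. Setup", "2.3 Error Codes", etc.
    for section in re.split(r"\n(?=\d+(?:\.\d+)*\.?\s)", text):
        heading, _, body = section.partition("\n")
        if body.strip():
            chunks.append({"heading": heading.strip(), "body": body.strip()})
    return chunks

manual = """1. Setup
Connect the router and power it on.
2. Troubleshooting
If the power light blinks red, note the error code on the display."""

for chunk in chunk_manual(manual):
    print(chunk["heading"], "->", chunk["body"])
```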

The developers also talk about "semantics" and why it matters. Semantics is about meaning. Semantically rich content is tagged with metadata that enables precise retrieval of the required information along with its context. For example, if a user has a specific model of router and that router emits an error code, content tagged with those identifiers can be retrieved when the user asks a support bot for help. This process is also known as "interpolation" in the chatbot world.
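A sketch of that support-bot scenario, with the product and error-code fields as hypothetical metadata:

```python
# Content chunks tagged with metadata that identifies their context.
tagged_chunks = [
    {"text": "On the RT-100, error E42 indicates a firmware mismatch.",
     "metadata": {"product": "RT-100", "error_code": "E42"}},
    {"text": "On the RT-200, error E42 means the antenna cable is loose.",
     "metadata": {"product": "RT-200", "error_code": "E42"}},
]

def retrieve(chunks: list[dict], **filters: str) -> list[dict]:
    """Return only the chunks whose metadata matches every filter."""
    return [c for c in chunks
            if all(c["metadata"].get(k) == v for k, v in filters.items())]

# A support bot serving a user whose RT-200 reports error E42:
for chunk in retrieve(tagged_chunks, product="RT-200", error_code="E42"):
    print(chunk["text"])
```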

Custom content is ingested into what is called a "vector space", another mathematical representation of information that places documents in a multidimensional (mathematical) space so that similar documents can be clustered and retrieved. This representation is called an "embedding". Embeddings can carry metadata and identifiers (such as references to source documents) that help document why a specific answer was given to a user, which matters for legal liability and regulatory purposes and for assuring that the correct, most authoritative information reaches users.
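Here is a toy sketch of the mechanics, using a bag-of-words stand-in for a real embedding model and cosine similarity for retrieval; every name and document in it is hypothetical:

```python
import numpy as np

VOCAB = ["router", "firmware", "error", "invoice", "payment", "antenna"]

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words vector over a fixed vocabulary. A real system would
    call an embedding model here instead."""
    words = text.lower().split()
    return np.array([float(words.count(w)) for w in VOCAB])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Each vector is stored with metadata naming its source document, so an
# answer can always cite where it came from.
index = [
    {"vector": embed("router firmware error"), "source": "manual_ch2.pdf",
     "text": "Firmware errors on the router usually require a reflash."},
    {"vector": embed("invoice payment terms"), "source": "billing_faq.md",
     "text": "Invoices are payable within 30 days."},
]

query = embed("my router shows a firmware error")
best = max(index, key=lambda item: cosine(query, item["vector"]))
print(best["text"], "| source:", best["source"])
```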

What "Training" Means for AI

There are several notions of "training". ChatGPT and other large language models are trained on vast amounts of content, which is what allows them to understand user queries and respond with well-formatted, conversational answers. One way to "train" the tool on your own material is to include it in the prompt: "Answer this question based on the following information..."

But there are two problems here:

First, ChatGPT can only process a limited amount of content in the prompt, so the questions this approach can answer are very limited. Content can instead be ingested into the tool to support additional training. But second, content added to ChatGPT is merged into the public model, compromising the company's intellectual property. This risk has led many businesses to ban ChatGPT and other AI tools after intellectual property was lost through the inadvertent upload of corporate secrets.

There is, however, another way to bring enterprise content into play. Large language models can be trained on enterprise-specific knowledge as part of their corpus, but that requires a version running behind the firewall; fortunately, large language models are rapidly becoming commoditized, and some can even run locally on a laptop, though this kind of training remains computationally expensive. A lighter-weight mechanism is to use the large language model to interpret the user's goal (the intent) and then use vector embeddings to programmatically supply context from specific data or content sources.

The language model then processes and formats the response to make it conversational and complete. In this way the knowledge is kept separate from the large language model, so a company's trade secrets and intellectual property are not compromised.
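Putting the pieces together, a sketch of that separation (both helpers are stubs standing in for the retrieval and model calls sketched above):

```python
def retrieve_relevant_chunks(question: str) -> str:
    """Stub for the embedding search sketched earlier; a real system would
    query a vector index of enterprise content here."""
    return "Invoices are payable within 30 days. (source: billing_faq.md)"

def call_language_model(prompt: str) -> str:
    """Stub for a chat-completion call; a real system would call a hosted
    or firewalled language model here."""
    return f"(model's conversational answer, grounded in: {prompt[:50]}...)"

def answer(question: str) -> str:
    # 1. Enterprise knowledge stays in its own store and is fetched per query.
    context = retrieve_relevant_chunks(question)
    # 2. The model sees that knowledge only at inference time, inside the
    #    prompt, so the corpus is never merged into the model itself.
    prompt = f"Using only this context:\n{context}\n\nQuestion: {question}"
    return call_language_model(prompt)

print(answer("When are invoices due?"))
```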

All of these factors point to the need for knowledge management and knowledge architecture: organizing information into components so that users can get answers to specific questions. Large language models and the revolutionary ChatGPT provide the conversational fluency needed to support a positive customer experience with near-human levels of interaction, but the critical ingredient is access to well-structured enterprise knowledge. ChatGPT looks like magic, yet it rests on statistical processing of information and pattern prediction. Information, organized and integrated correctly, can be an important part of a business's digital transformation.

