
How to manage generative AI


Author | Dom Couldwell

Compiled by | Noah

Produced by | 51CTO Technology Stack (WeChat ID: blog51cto)

McKinsey & Company estimates that generative artificial intelligence could add between US$2.6 trillion and US$4.4 trillion to the global economy annually. The forecast is based on 63 new use cases expected to deliver improvements, efficiency gains, and new products to customers across multiple markets. This is undoubtedly a huge opportunity for developers and IT leaders.

The core of generative AI lies in data. Data not only gives generative AI the ability to understand and analyze the world around it, but also powers its transformative potential. To succeed in the field of generative AI, companies need to effectively manage and prepare data.

To successfully build and operate AI services at scale and support generative AI projects, you need to do your homework on data preparation and adopt a smart, sustainable funding strategy. A slow pace and waning support will not produce an advantage in artificial intelligence. Beyond scaling up AI services, projects also need stable funding sources to sustain long-term development and continuous innovation.

The huge potential of generative AI will be wasted if we don't improve how we manage data or take the right approach to scale and cost control. Here are some thoughts on how to improve data management and support generative AI projects over the long term.

1. Where does the data come from?

Data exists in many forms. Used properly, each form of data can enhance the richness and quality of generative AI insights.

The first form is structured data, which is organized in a regularly ordered and consistent manner and includes items such as product information, customer demographics, or inventory levels. This type of data provides an organized fact base that can be added to generative AI projects to improve the quality of responses.

In addition, you may have external data sources that complement your internal structured data, such as weather reports, stock prices, or traffic flows. This data can bring real-time, real-world context to decision-making, and integrating it into a project provides additional high-quality data without you having to generate it yourself.

Another common data set is derived data, which covers data created through analysis and modeling scenarios. Such insights might include customer intent reports, seasonal sales forecasts, or segment analysis.

The last common form is unstructured data, which differs from the regular report or data formats analysts are used to. It includes formats such as images, documents, and audio files, which capture the nuances of human communication and expression. Generative AI programs often revolve around images or audio, which are common inputs and outputs for generative AI models.

2. Applying generative AI at scale

Each of these diverse data sets exists in its own environment. To make them useful for generative AI projects, the key is to make this diverse data landscape accessible in real time. With so much potential data involved, any approach must be able to scale dynamically as demand grows and replicate data globally, ensuring resources are close to users when they make requests, avoiding downtime and reducing latency on transaction requests.

In addition, this data needs to be preprocessed so it can be used effectively by generative AI systems. This involves creating embeddings: mathematical values, or vectors, that represent semantic meaning. Embeddings enable generative AI systems to go beyond exact text matching and instead work with the meaning and context carried in the data. Regardless of the original data form, creating embeddings means the data can be understood and used by generative AI systems while retaining its meaning and context.
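
To make this concrete, here is a minimal Python sketch of creating embeddings, assuming the open-source sentence-transformers library and the all-MiniLM-L6-v2 model; both are illustrative choices, and any embedding model or hosted embedding API would serve the same purpose.

```python
# Minimal sketch: turn records from different sources into embeddings.
# The library and model below are illustrative choices, not a recommendation.
from sentence_transformers import SentenceTransformer

# Records flattened to text before embedding: structured, external, and derived.
records = [
    "Product SKU 1042: waterproof hiking boot, sizes 6-13",       # structured
    "Heavy rain expected in the Manchester area this afternoon",  # external
    "Q3 forecast: boot sales typically rise 18% in wet weather",  # derived
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(records)  # one vector per record

print(embeddings.shape)  # (3, 384): a 384-dimensional vector for each record
```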

With these embeddings, enterprises can support vector or hybrid searches across all their data, combining exact values with semantic meaning. The results are then collected and passed to a large language model (LLM) that integrates them into a response. By providing more data from multiple sources, rather than relying solely on the LLM itself, your generative AI projects can give users more accurate results and reduce the risk of hallucinated content.
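
Continuing the sketch above, the snippet below illustrates this retrieval step with an in-memory cosine-similarity search over the embeddings created earlier; in practice the search would run against a vector or hybrid index, and the commented-out call at the end stands in for whatever LLM client you use.

```python
# Sketch of retrieval-augmented generation: rank stored records against the
# question, then pass the best matches to an LLM as grounding context.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

question = "Should we increase boot inventory this week?"
q_vec = model.encode([question])[0]  # embed the question with the same model

# Rank stored records by semantic similarity to the question, keep the top two.
scores = [cosine_similarity(q_vec, vec) for vec in embeddings]
top_records = [records[i] for i in np.argsort(scores)[::-1][:2]]

# Ground the LLM in retrieved data instead of relying on its own knowledge.
prompt = (
    "Answer using only the context below.\n"
    "Context:\n" + "\n".join(top_records) + "\n"
    f"Question: {question}"
)
# response = llm_client.generate(prompt)  # placeholder for a real LLM call
```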

To achieve this in practice, the right underlying data architecture must be chosen. Avoid dispersing data across different solutions in a fragmented patchwork, because each such solution becomes an island of data that requires long-term support, querying, and management. Users should be able to ask the LLM a question and get a response quickly, rather than waiting for multiple components to respond and for the model to weigh their results. A unified data architecture should provide seamless data integration, allowing generative AI to make full use of the entire available data spectrum.

3. Advantages of a modular approach

To scale generative AI implementations, there needs to be a balance between accelerating adoption and maintaining control of critical assets. Taking a modular approach to building generative AI agents makes this easier because it breaks up the implementation and avoids potential bottlenecks.

Much like microservices design in applications, a modular approach to AI services encourages best practices around application and software design, eliminating single points of failure and giving more potential users access to the technology. This approach also makes it easier to monitor the performance of AI agents across the enterprise and to pinpoint more precisely where problems occur.
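
To make the parallel with microservices concrete, here is a hypothetical Python sketch of an agent split into a retrieval component and a generation component behind narrow interfaces; the class and method names are made up for illustration and do not come from any particular framework.

```python
# Illustrative modular agent: retrieval and generation sit behind narrow
# interfaces, so each can be secured, monitored, and replaced independently.
from dataclasses import dataclass
from typing import List, Protocol


class Retriever(Protocol):
    def retrieve(self, query: str) -> List[str]: ...


class Generator(Protocol):
    def generate(self, query: str, context: List[str]) -> str: ...


@dataclass
class Agent:
    retriever: Retriever
    generator: Generator

    def answer(self, query: str) -> str:
        context = self.retriever.retrieve(query)        # data access isolated here
        return self.generator.generate(query, context)  # LLM access isolated here


# Stub implementations, just to show the wiring between components.
class StaticRetriever:
    def retrieve(self, query: str) -> List[str]:
        return ["Q3 forecast: boot sales typically rise 18% in wet weather"]


class EchoGenerator:
    def generate(self, query: str, context: List[str]) -> str:
        return f"Answer to '{query}' based on {len(context)} context record(s)"


agent = Agent(StaticRetriever(), EchoGenerator())
print(agent.answer("Should we increase boot inventory?"))
```

Because each component sits behind its own interface, it can be logged, tested, and swapped without touching the rest of the agent, which is what enables the monitoring and interpretability benefits described below.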

The first benefit of modularity is interpretability: because the components of a generative AI system are separated from each other, it is easier to analyze how the agent operates and makes decisions. AI is often viewed as a "black box," and modularity makes results easier to track and interpret.

The second benefit is security, as individual components can be protected with appropriate authentication and authorization mechanisms, ensuring that only authorized users can access sensitive data and functionality. Modularity also makes compliance and governance easier, as personally identifiable information (PII) or intellectual property (IP) can be secured and kept separate from the underlying LLM.
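
As one simplified illustration of keeping PII away from the underlying LLM, the sketch below redacts it in a separate step before any prompt is built; the regex patterns are deliberately naive placeholders, and a production system would rely on a dedicated PII-detection service and proper governance controls.

```python
# Simplified sketch: strip PII in its own component before text reaches an
# external LLM. The patterns are naive placeholders, not production-grade.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

safe_context = redact("Contact Jane at jane.doe@example.com or +1 555 010 2020")
print(safe_context)  # "Contact Jane at [EMAIL] or [PHONE]"
```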

4. Provide an ongoing, flexible funding model

In addition to adopting a microservices approach, a platform mindset should be applied to generative AI projects as a whole. This means replacing the traditional project-based software funding model with one that is ongoing and flexible. This approach empowers participants to make value-based decisions, respond to emerging opportunities, and develop best practices without being constrained by rigid funding cycles or business cases.

Managing budgets in this way also encourages developers and business teams to treat generative AI as part of the organization's existing infrastructure. It becomes easier to smooth out the spikes and troughs in planned workloads, to take a "center of excellence" approach, and to maintain consistency over the long term.

A similar approach is to regard generative AI as a product operated by the enterprise itself, rather than as pure software. AI agents should be managed as products, as this more effectively reflects the value they create and makes support resources for integrations, tools, and prompts more readily available. This model helps spread understanding of generative AI across the organization, promotes the adoption of best practices, and creates a culture of shared expertise and collaboration in generative AI development.

Generative AI has huge potential, and companies are racing to put new tools, agents, and prompts into their operations. However, moving these projects into production requires effective data management, a foundation for scaling the system, and a budget model in place to support the team. Getting your processes and priorities right will help you and your team unlock the transformative potential of this technology.

Reference address: https://www.infoworld.com/article/3713461/how-to-manage-generative-ai.html

