
What can be learned from analyzing failed AI projects?

WBOY
2023-04-08 18:21:14

AI projects rarely fail because of one big catastrophe; they fail over small details. Faced with so many exciting possibilities, companies are usually full of confidence when they first launch an AI project. In practice, however, implementation problems can quickly extinguish that enthusiasm, leaving the project shelved or ultimately failed. One of the most common causes of failure is that the organization never accurately accounts for the project's long-term cost: management budgets only for the initial build and overlooks the ongoing expense of maintenance and updates.

The research firm Cognilytica analyzed hundreds of failed AI projects and found that many organizations do not recognize that the AI project life cycle is continuous. Organizations often budget only for the first few iterations of a project, covering data preparation, cleaning, labeling, model training, evaluation, and initial iteration, but set aside nothing for the iterations that follow. Yet organizations must continuously monitor model and data decay, retrain models as needed, and plan for future expansion and iteration. Over time, this gap inevitably skews, or even wipes out, the organization's expected return on investment.


How should teams think about the cost of continuously iterating on a model? The challenge most organizations face is that they treat AI projects as one-time proofs of concept or pilots, without setting aside the funds, resources, and staff needed for ongoing evaluation and retraining. But as quintessentially data-driven projects, AI initiatives are never a one-time investment. Once a model is in production, the organization must keep allocating funds, resources, and people to iterate on and develop it.

Organizations that budget only for model construction therefore run into trouble after launch. When weighing an AI project's cost and return on investment, project owners need to ask how much it costs to keep the model running, and how much they are willing to invest in subsequent data preparation and model iteration.
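As an illustration only, here is a minimal back-of-the-envelope sketch of how ongoing costs change the picture. Every figure and name below (initial_build, retrains_per_year, and so on) is hypothetical, invented for this example rather than drawn from the article.

```python
# Hypothetical total-cost-of-ownership sketch for an AI project.
# All numbers are invented for illustration; plug in your own estimates.

initial_build = 250_000          # data prep, labeling, first training runs
annual_maintenance = 60_000      # monitoring, infrastructure, on-call staff
retrains_per_year = 4            # cadence driven by observed data drift
cost_per_retrain = 15_000        # data refresh, labeling, training, evaluation
years = 3                        # planning horizon

ongoing = years * (annual_maintenance + retrains_per_year * cost_per_retrain)
total = initial_build + ongoing

print(f"Initial build:       ${initial_build:>10,}")
print(f"Ongoing ({years} years):   ${ongoing:>10,}")
print(f"Total cost:          ${total:>10,}")
# With these assumed numbers, the ongoing spend ($360,000) exceeds the
# initial build, which a build-only budget would miss entirely.
```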

One thing successful AI projects have in common is that their functionality is not delivered all at once. Instead, they treat the AI solution as a continuous, iterative cycle with no fixed end point. Just as cybersecurity is never a one-off project, data-driven projects such as AI must keep operating to adapt to changing realities and changing data. Even a model that performs well at first will gradually degrade over time, because data drift and model drift are inevitable. And as the organization itself evolves, its AI expertise, use cases, models, and data will keep changing as well.

Furthermore, the global economy and world events fluctuate in unexpected ways, so any long-term project, including a complex AI project, will inevitably have to adjust. Retailers could not have anticipated the supply chain and labor market shocks of the past two years, nor could organizations have foreseen the rapid shift to working from home. Rapid changes in the real world and in user behavior inevitably change the data, so the model must change too. This is why models must be continuously monitored and iterated on, with data drift and model drift fully accounted for.
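The article does not prescribe a specific drift check, but one common lightweight approach is the Population Stability Index (PSI), which compares a feature's live distribution against the distribution the model was trained on. The thresholds mentioned below (0.1 and 0.25) are conventional rules of thumb, not values from the article; this is a minimal sketch, not a production monitor.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a live feature distribution against the training baseline.

    By common rule of thumb: PSI < 0.1 is stable, 0.1-0.25 warrants
    attention, and > 0.25 suggests significant drift and is a candidate
    retraining trigger.
    """
    # Bin edges come from the training (expected) data; live values
    # outside the training range simply fall out of the bins.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)

    # Convert counts to proportions; epsilon avoids log(0).
    eps = 1e-6
    exp_pct = exp_counts / max(exp_counts.sum(), 1) + eps
    act_pct = act_counts / max(act_counts.sum(), 1) + eps

    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical usage: a training-time feature vs. the same feature in
# production after the world has shifted.
rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)
live_feature = rng.normal(loc=0.4, scale=1.2, size=10_000)

psi = population_stability_index(train_feature, live_feature)
print(f"PSI = {psi:.3f}")  # a value above ~0.25 would flag retraining
```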

Thoughts on iteration: Methodology and ML Ops

When an organization plans to expand or enhance a model, it also needs an iteration mechanism to match. For example, if a North American business wants to extend its purchase-pattern prediction model to other markets, it will have to keep iterating on the model and its data to fit the new markets' data.

These factors mean organizations must keep funding iteration to ensure the model stays aligned with its data sources and other key inputs. Organizations that succeed with AI also realize they need to follow empirically proven iterative, agile methods to scale AI projects successfully. Building on agile methodology and data-centric project management, frameworks such as the Cross-Industry Standard Process for Data Mining (CRISP-DM) are being extended for AI to ensure that iterative projects do not skip key steps.

As the AI market has matured, an emerging discipline for operating machine learning models, known as ML Ops, has gained traction. ML Ops covers the entire life cycle of model development, deployment, and operation, and its methods and tools are designed to help organizations manage and monitor AI models in a continuously changing environment. ML Ops also stands on the shoulders of giants: it absorbs DevOps' development-centered ideas about iteration and delivery, and DataOps' experience managing large, ever-changing data sets.
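To make the life-cycle idea concrete, below is a minimal sketch of the monitor-evaluate-retrain-redeploy loop that ML Ops tooling automates. All function names and thresholds here (collect_live_metrics, ACCURACY_FLOOR, and so on) are hypothetical placeholders invented for illustration, not the API of any particular ML Ops product.

```python
# Hypothetical stubs standing in for real pipeline components; an actual
# ML Ops platform would supply these through its own APIs.
def collect_live_metrics() -> dict:
    return {"accuracy": 0.88, "psi": 0.31}  # pretend production telemetry

def retrain_model() -> str:
    return "model-v2"  # pretend training job returning a new version id

def deploy(version: str) -> None:
    print(f"deployed {version}")

ACCURACY_FLOOR = 0.90   # illustrative thresholds, not values from the article
PSI_CEILING = 0.25

def monitoring_cycle() -> None:
    """One pass of the monitor -> evaluate -> retrain -> redeploy loop."""
    metrics = collect_live_metrics()
    drifted = metrics["psi"] > PSI_CEILING
    degraded = metrics["accuracy"] < ACCURACY_FLOOR
    if drifted or degraded:
        deploy(retrain_model())
    else:
        print("model healthy; no action")

monitoring_cycle()  # in production this would run on a schedule
```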

The goal of ML Ops is to give organizations visibility into model drift, model governance, and version control, and thereby support AI project iteration. Although the market is flooded with ML Ops tools, ML Ops, like DevOps, is primarily a practice the organization must adopt itself, not a problem money alone can solve. ML Ops best practices cover model governance, version control, discovery, monitoring, transparency, and model security and iteration. ML Ops solutions can also support multiple versions of the same model simultaneously, customizing each version's behavior to specific needs, and they track, monitor, and control who has access to which models while enforcing strict governance and security principles.
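As a rough illustration of the version-control and access-tracking side, here is a toy in-memory model registry. Real ML Ops platforms offer far richer versions of these ideas; the class below is a hypothetical sketch, not any product's API, and the artifact paths and user names are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelVersion:
    version: int
    artifact_uri: str
    stage: str = "staging"          # e.g., staging / production / archived

@dataclass
class ModelRegistry:
    """Toy registry: versioning, staged rollout, and an access audit log."""
    versions: dict[int, ModelVersion] = field(default_factory=dict)
    audit_log: list[tuple[str, str, int]] = field(default_factory=list)

    def register(self, artifact_uri: str) -> ModelVersion:
        version = max(self.versions, default=0) + 1
        mv = ModelVersion(version, artifact_uri)
        self.versions[version] = mv
        return mv

    def promote(self, version: int) -> None:
        # Governance hook: only one version serves production at a time.
        for mv in self.versions.values():
            if mv.stage == "production":
                mv.stage = "archived"
        self.versions[version].stage = "production"

    def fetch(self, user: str, version: int) -> ModelVersion:
        # Track who accessed which model version, and when.
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append((stamp, user, version))
        return self.versions[version]

registry = ModelRegistry()
registry.register("s3://models/churn/v1")   # hypothetical artifact path
v2 = registry.register("s3://models/churn/v2")
registry.promote(v2.version)
print(registry.fetch("analyst@example.com", v2.version).stage)  # production
```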

Given the real need for AI iteration, ML Ops has become an important part of the overall model-building and management environment. Going forward, these capabilities are expected to become a standard part of AI and ML tool sets, landing in cloud solutions, open source products, and machine learning platforms.

Failure is the mother of success

The success of ML Ops and of AI projects depends on the support and guidance of best practices. Problems themselves do not cause AI projects to fail; failing to address them accurately is the root cause. Organizations need to treat AI projects as an iterative, step-by-step process, and use approaches such as Cognitive Project Management for AI (CPMAI) and evolving ML Ops tools to discover the best practices that suit them. Think big, start small, and let continuous iteration run through the entire life cycle of the AI project. These failures are by no means the end of the story, but a new beginning.

