How AI Improves Observability
We can understand nostalgia for the past, but we must recognize that we operate in a very different environment today. Observability will never be the same again.
Observability has become more and more complex in recent years, certainly far more complicated than IT monitoring was in its early days, when everything ran on mainframes and all available logs and monitoring data could be easily collected and visualized.
Even after more modern applications became the core of most organizations, things were still much simpler. In today's world of Kubernetes, microservices, and serverless, however, things look very different. Imagine taking a hammer to the easily observed flow of the past and watching it shatter into hundreds of pieces; yet all of those small pieces must still remain tightly connected and in constant communication. Essentially, this situation began with the introduction of abstraction and virtualization. When Kubernetes emerged, its ephemeral, fast-changing, and distributed nature added even more complexity. Everything became harder to manage, harder to monitor, and harder to troubleshoot; many people feel lost and wonder what they have gotten themselves into. We might ask ourselves: does it really need to be this complicated?
We can understand the nostalgia for the past, but given the environment we are in now, observability will never be the same again.
Revisiting the boundaries of "modern" observability
Next, let's think about the art of designing reliable observability systems in today's world. Particularly now that coding and infrastructure problems have evolved into big data problems, we need to find ways to improve the compute, network, and storage efficiency of these modern observability systems. It is important to note that more data does not necessarily mean better insights.
It turns out that abstraction, virtualization, and microservices are just the tip of the iceberg. With the emergence and continued adoption of artificial intelligence tools such as Copilot, CodeWhisperer, and others, it has become practically impossible for humans to process, analyze, and correlate billions of different events to understand whether the code they write runs as expected. Once again, observability has become an urgent big data problem.
Even if an engineer has the skills to understand observability signals and analyze telemetry data (a talent that is hard to come by), sorting through such a massive amount of data is unrealistic, even daunting. The fact is that the vast majority of this data is not particularly useful for gaining insight into the performance of business-critical systems. More does not mean better. At the same time, most popular observability solutions suggest that handling the volume and complexity of this big data problem requires many sophisticated features and additional tools, all of which carry a hefty price tag to cope with the data expansion. But there is still hope.
Embrace the era of AI observability
How does this work? LLMs are becoming adept at processing, learning from, and identifying patterns in large-scale, repetitive text data, which is exactly the nature of log data and other telemetry in highly distributed and dynamic systems. LLMs know how to answer basic questions and draw useful inferences, hypotheses, and predictions.
This approach is not perfect, because LLMs are not yet designed for real-time use and are not yet accurate enough at determining the full context to solve every observability challenge. However, it is far easier to first establish a baseline with an LLM, understand what is going on, and get helpful recommendations than it is for a human to build that context from large amounts of machine-generated data in a reasonable amount of time.
LLMs are therefore highly relevant to solving observability problems. They are designed to work with text-based data, analyze it, and provide insights, and this can readily be applied to observability through integrations that surface meaningful recommendations.
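As a concrete illustration, here is a minimal sketch of what such an integration might look like: a batch of log lines is sent to an LLM, which returns a summary of normal behavior, the anomalies it sees, and one or two root-cause hypotheses. This assumes the OpenAI Python SDK; the model name and the log lines are illustrative placeholders, and a real integration would pull logs from your pipeline and feed them in batches.

```python
# Minimal sketch: asking an LLM to baseline a batch of log lines and flag anomalies.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY in the
# environment; the model name and log lines below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

log_lines = [
    "2024-05-01T12:00:01Z checkout-svc INFO order 8812 created",
    "2024-05-01T12:00:02Z checkout-svc ERROR payment gateway timeout after 30s",
    "2024-05-01T12:00:02Z checkout-svc ERROR payment gateway timeout after 30s",
    "2024-05-01T12:00:05Z checkout-svc WARN retry queue depth above threshold (120)",
]

prompt = (
    "You are assisting with observability triage. Given these application log "
    "lines, summarize normal behavior, list anomalies, and suggest one or two "
    "likely root-cause hypotheses:\n\n" + "\n".join(log_lines)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# Print the model's summary, anomaly list, and hypotheses for the on-call engineer.
print(response.choices[0].message.content)
```

In practice the value comes from wiring this into the alerting or triage workflow, so the summary and hypotheses arrive alongside the raw telemetry rather than replacing it.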
We believe one of the greatest values of LLMs in this field is in better supporting and enabling practitioners who may not have deep technical proficiency in handling large and complex data problems. Most production problems that need to be solved leave enough time for an LLM to provide assistance based on historical contextual data. In this way, LLMs create opportunities to make observability simpler, more cost-effective, and more disruptive. Next comes the ability to write queries and investigate in natural language with an LLM rather than in arcane query languages: a huge boon for users of all levels, but especially for those with less hands-on experience, including business unit managers.
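As a rough sketch of what natural-language querying could look like, the snippet below asks an LLM to translate a plain-English question into a Lucene-style query string over common log fields. The model name, field names, and example question are assumptions for illustration; a production integration would validate the generated query before running it against the log store.

```python
# Minimal sketch: translating a natural-language question into a Lucene-style
# log query with an LLM. Field names, model name, and the example question are
# illustrative; validate generated queries before executing them.
from openai import OpenAI

client = OpenAI()

question = "Show me all errors from the checkout service in the last hour"

system_prompt = (
    "Translate the user's question into a single Lucene query string for "
    "application logs with fields: service, level, message, @timestamp. "
    "Return only the query, with no explanation."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ],
)

lucene_query = response.choices[0].message.content.strip()
# Expected output is something like:
#   service:"checkout" AND level:"ERROR" AND @timestamp:[now-1h TO now]
print(lucene_query)
```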
Now users no longer need to be experts in all of the underlying data; they can write queries around common parameters in the natural language that business unit executives use, not just production engineers. This opens observability up to a wide range of new processes and stakeholders.

At Logz.io we have started integrating with LLMs and are hard at work developing exciting new platform features designed to take full advantage of these emerging AI capabilities. We believe this will bring critical innovation to organizations facing big data challenges and seeking essential observability. While cost and complexity remain pressing issues in the market, we believe this gives everyone many reasons to remain optimistic.