Evaluation and improvement of smart cockpit software performance and reliability

2024-03-14

Author | Zhang Xuhai

With the rapid development of smart cars, smart cockpits have shown problems in performance and reliability, resulting in a poor user experience and a rising number of complaints. This article discusses, from an engineering perspective, the importance of building a smart cockpit software evaluation framework, as well as methods to continuously improve performance and reliability.

1. Poor performance and reliability of smart cockpit software

According to the "2023 Smart Cockpit White Paper - Focus on the Second Half of Electrification" released by KPMG, China's automotive smart cockpit market continues to expand, with the compound annual growth rate from 2022 to 2026 expected to exceed 17%, showing the huge development potential of this field. As the market grows, smart cockpit software functions will become more diverse and powerful, and the overall level of intelligence will also improve significantly. The automotive industry is moving in a more intelligent and connected direction, providing consumers with a smarter, more convenient, and more comfortable driving experience.

(Figure omitted; source: "2023 Smart Cockpit White Paper - Focus on the Second Half of Electrification")

As the market expands, the share of consumer complaints about smart cockpit software is also increasing year by year. The complaints mainly concern the operating experience, performance, and reliability of smart cockpit software, highlighting the challenges brought by the continuous addition of smart functions.

According to Chezhi.com's car complaint analysis reports for the four quarters of 2023, quality problems involving the smart cockpit (the in-vehicle infotainment system) account for a significant share of complaints. Among the top 20 complaint fault points, infotainment-related items (audio and video system failures, navigation problems, in-vehicle connectivity failures, driving safety assistance system failures, etc.) accounted for 15.89%, 10.99%, 10.56% and 9.56% of total complaints in Q1 through Q4 respectively.

(Figure omitted; source: Chezhi.com)

Looking further into the specific complaints, you will find that problems such as crashes, black screens, freezes, and slow responses are very common. They seriously affect the user's driving experience and erode the user's confidence in and recognition of the brand.

Combining the development trends of smart cockpit software with user complaints, it becomes clear that, besides ease of operation, performance and reliability are the most critical factors affecting user experience. These two factors are not only directly related to user satisfaction but also largely determine the competitiveness of smart cockpit software in the market.

  • Performance is the cornerstone of smooth smart cockpit software operation. As functions continue to increase, the software requires more efficient processors and optimized algorithms to ensure instant response to user operations and high system fluency.
  • Reliability is the key to users trusting the smart cockpit software in all usage scenarios. Users expect not to be disturbed by software failures while driving; ideally the system runs stably and avoids problems such as crashes or freezes.

In the following sections, we will combine software development best practices with the characteristics of software in the smart cockpit field to explore methods for evaluating and improving its performance and reliability.

2. Evaluation framework for performance and reliability

If you can't measure it, you can't improve it.

The smart cockpit software system is, after all, software, and its development follows the common process of architecture design, implementation, and quality verification. Therefore, before discussing how to improve, we should first clarify: how do we correctly evaluate the performance and reliability of a software system?

1. Software Architecture Characteristics Model

Mark Richards and Neal Ford described "architectural characteristics" in "Software Architecture: A Guide to Architectural Patterns, Characteristics, and Practices":

Architects may work with others to identify domain or business requirements, but a key responsibility of the architect is to define, discover, and analyze the domain-independent things necessary for the software: architectural features.

Architecture characteristics are the qualities, independent of domain or business requirements, that architects must consider when designing software, such as auditability, performance, security, scalability, and reliability. In many cases we also call them nonfunctional requirements or quality attributes.

Obviously, key architectural characteristics need to be considered from the very beginning of architecture design and given continued attention throughout development. So when developing a software system, what are the key architectural characteristics to consider?

ISO/IEC 25010:2011 is a standard published by the International Organization for Standardization (since updated to a 2023 edition). It belongs to the ISO Systems and Software Quality Requirements and Evaluation (SQuaRE) family and defines a set of system and software quality models. These quality models are widely used to describe and evaluate software quality, and can guide us well in modeling the key architectural characteristics of software.

The quality model described by ISO 25010 is as follows (the parts related to performance and reliability are highlighted in the figure):

(Figure omitted: the ISO 25010 quality model)

ISO 25010 divides software architecture characteristics (called "quality attributes" in the standard's own text) into many aspects, such as functionality, reliability, performance efficiency, maintainability, and portability. Each characteristic defines the key aspects related to it, and each includes multiple sub-characteristics that describe its specific dimensions in more detail. This quality model thus provides a comprehensive and general framework for understanding and evaluating software quality.

For performance efficiency, the model defines three sub-characteristics: time behaviour, resource utilization, and capacity. For reliability, it defines four sub-characteristics: maturity, availability, fault tolerance, and recoverability.

Of course, every piece of software has its own characteristics and operating environment. Software that satisfies every architectural characteristic in the above model would be excellent, but the cost is bound to be high; an internal system with only 3 users, for example, does not need to be designed for elastic scaling to guarantee availability. Likewise, in the smart cockpit field, evaluating performance and reliability in terms of user experience is clearly more consistent with the design goals of cockpit software than evaluating them in terms of throughput or elastic-scaling ratios.

2. Evaluate architectural characteristics through the indicator system

Analyzing the quality model above, we find that it mainly defines how the architectural characteristics of software "should behave", but does not explain "how to evaluate" whether the requirements of those characteristics have been met. The characteristics and sub-characteristics in the quality model are qualitative descriptions; how to evaluate architectural characteristics quantitatively is not covered.

In fact, SQuaRE also provides an evaluation framework for quality models (see ISO/IEC 25020:2019 for details):

(Figure omitted: the SQuaRE quality evaluation framework)

This evaluation framework essentially uses a set of weighted indicators to evaluate an architectural characteristic (or sub-characteristic). Each indicator can be calculated from indicator elements, and the indicator elements can be measured through measurement methods applied during software development activities.
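
To make this concrete, here is a minimal sketch (in Python) of how a sub-characteristic score might be aggregated from weighted, normalized indicators; the indicator names, weights, and values are entirely hypothetical:

```python
# Minimal sketch: aggregate normalized indicator values into a
# sub-characteristic score using weights. All names, weights, and
# values below are hypothetical, for illustration only.

def characteristic_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalized indicator values (each in 0..1)."""
    total_weight = sum(weights.values())
    return sum(indicators[name] * weights[name] for name in weights) / total_weight

# Hypothetical "time behaviour" sub-characteristic of performance efficiency:
indicators = {
    "cold_start_time": 0.8,   # normalized: 1.0 = fully meets its target
    "touch_response": 0.9,
    "page_render_time": 0.7,
}
weights = {"cold_start_time": 0.5, "touch_response": 0.3, "page_render_time": 0.2}

print(f"time-behaviour score: {characteristic_score(indicators, weights):.2f}")
```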

In the software industry, many evaluation indicators enjoy consensus across business areas, such as response time, throughput, RTO, RPO, and MTTR; companies can adopt them directly when establishing indicator systems for their own business areas.

The following are some examples of relatively common software performance and reliability indicators, which are applicable to most software:

(Tables omitted: examples of common software performance and reliability indicators)
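
As an illustration of how such indicators are derived, here is a minimal sketch that computes MTTR, MTBF, and availability from incident records; the data and the observation window are hypothetical:

```python
# Minimal sketch: derive MTTR, MTBF, and availability from a list of
# incident records. Data and time windows are hypothetical.
from datetime import datetime, timedelta

incidents = [  # (failure time, recovery time)
    (datetime(2024, 1, 3, 10, 0), datetime(2024, 1, 3, 10, 12)),
    (datetime(2024, 1, 20, 8, 30), datetime(2024, 1, 20, 8, 35)),
]
observation = timedelta(days=31)  # total observation window

downtime = sum(((rec - fail) for fail, rec in incidents), timedelta())
mttr = downtime / len(incidents)                 # mean time to repair
mtbf = (observation - downtime) / len(incidents) # mean time between failures
availability = 1 - downtime / observation

print(f"MTTR: {mttr}, MTBF: {mtbf}, availability: {availability:.5%}")
```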

Of course, due to differences in functional domains and operating environments, the indicator systems used to evaluate architectural characteristics are bound to differ somewhat.

First, different business scenarios weight the evaluation indicators differently. For the performance-efficiency evaluation of smart cockpit systems and software, time behaviour is crucial because it directly affects the user's driving experience; for Web applications providing Internet services, capacity characteristics deserve more attention in order to serve more users.

Second, specific domains have their own unique indicators, which need to be extracted from the actual business. For example, the fluency of a UI cannot be evaluated simply by response time; it must be judged comprehensively through indicators such as frame rate and the number of dropped frames.
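
As a sketch of this idea (the timestamps, the 60 Hz target, and the jank threshold are hypothetical assumptions; a real system would obtain frame timings from its rendering pipeline):

```python
# Minimal sketch: compute average frame rate and dropped-frame count
# from frame-completion timestamps (in seconds). The sample data and
# the 60 Hz / double-budget jank threshold are hypothetical.
TARGET_FRAME_S = 1 / 60            # ~16.7 ms budget at 60 Hz
JANK_THRESHOLD_S = 2 * TARGET_FRAME_S

frame_timestamps = [0.000, 0.016, 0.033, 0.084, 0.100, 0.117]  # sample data

intervals = [b - a for a, b in zip(frame_timestamps, frame_timestamps[1:])]
avg_fps = len(intervals) / (frame_timestamps[-1] - frame_timestamps[0])
# Each interval that spans N frame budgets implies N-1 dropped frames:
dropped = sum(max(0, round(iv / TARGET_FRAME_S) - 1) for iv in intervals)
janky = sum(1 for iv in intervals if iv > JANK_THRESHOLD_S)

print(f"avg fps: {avg_fps:.1f}, dropped frames: {dropped}, janky intervals: {janky}")
```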

3. Finding data sources for indicator elements

After establishing the indicator system, the next problem is finding reasonable indicator elements from which to calculate the indicator values.

Likewise, many common indicator elements can be adopted directly, such as cyclomatic complexity, module coupling, CPU usage, memory usage, transaction execution time, and concurrency. However, indicator elements are even more tied to the business domain than the indicators themselves, and domain knowledge is needed to find suitable ones.

The GQM method is an effective way to find and establish indicator elements. GQM stands for "Goal - Question - Metrics", a long-established analysis method introduced by Victor Basili and David Weiss in 1984.

Essentially, GQM structures the analysis as a tree, progressing layer by layer: first pose questions about how each goal can be achieved, then break each question down into multiple indicator elements that can support answering it, and finally select the most appropriate indicator elements.

Below, taking the search for indicator elements to evaluate the performance and reliability of smart cockpit software as an example, we build GQM analysis trees for two goals: "evaluate the fluency of smart cockpit home-screen operations" and "calculate the failure rate and availability of smart cockpit systems and applications":

(Figures omitted: GQM analysis trees for the two goals above)

At the beginning of the analysis, to broaden thinking, you can first identify as many candidate indicator elements as possible without considering their value or the difficulty of obtaining them, then analyze the value and acquisition difficulty of each element, and prioritize and filter accordingly to select the most suitable ones. This process can follow two priority principles:

  • The more questions an element can support, the higher its priority
  • The easier an element is to collect and calculate, the higher its priority

Based on the GQM method, we can break abstract indicators down into clearer calculation formulas and concrete data collection points. At this point, a complete evaluation framework is in place.
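
As an illustration, the GQM breakdown can be kept close to the code that later collects the metrics by representing it as plain data. A minimal sketch, with a hypothetical breakdown of the home-screen fluency goal:

```python
# Minimal sketch: represent a GQM (Goal - Question - Metrics) tree as
# nested data. The questions and metrics below are hypothetical
# illustrations of the "home-screen fluency" goal, not a definitive breakdown.
gqm_tree = {
    "goal": "Evaluate the fluency of smart cockpit home-screen operations",
    "questions": [
        {
            "question": "Does the UI render smoothly while scrolling?",
            "metrics": ["average frame rate", "dropped frame count", "jank ratio"],
        },
        {
            "question": "Does the system respond to touch promptly?",
            "metrics": ["touch-to-render latency", "input event queue depth"],
        },
    ],
}

def list_metrics(tree: dict) -> list[str]:
    """Flatten all candidate indicator elements for later prioritization."""
    return [m for q in tree["questions"] for m in q["metrics"]]

print(list_metrics(gqm_tree))
```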

3. Engineering methods to continuously improve performance and reliability

Based on the evaluation framework introduced above, we now have analysis methods in hand and a clear direction for improving the performance and reliability of smart cockpit software.

After evaluation, the next step is improvement. This section discusses how to use engineering methods to continuously improve the performance and reliability characteristics of smart cockpit software, ensuring that as the software iterates, these characteristics not only do not deteriorate but steadily improve over the long term.

1. Architecture modeling guides research and development

Modeling is an effective practice for analyzing the business domain and architectural characteristics during the design phase. When designing software architecture, many organizations tend to focus on business domain modeling and underestimate architectural characteristic modeling. This often results in design concerns such as security, reliability, and performance being seriously neglected, only to be forced into improvement by production problems after the software is released.

In fact, early architectural characteristic modeling can not only guide coding in the subsequent development process, but can also be naturally converted into white-box tests that verify whether the code meets the design.

For performance modeling, a performance model can be formed by identifying the performance concerns of the software architecture together with predefined performance indicators. The author has introduced this in "What is Performance Engineering".

For reliability modeling, thanks to the many mature modeling methods in the automotive manufacturing field, such as Fault Tree Analysis (FTA) and Failure Mode and Effects Analysis (FMEA), the software field can directly reference and tailor them.

(Figure omitted; source: GB/T 7826-2012, the national standard describing FMEA procedures)
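
To make the FMEA idea concrete, here is a minimal sketch of the classic risk priority number calculation (RPN = severity × occurrence × detection); the failure modes and ratings are hypothetical:

```python
# Minimal sketch: rank failure modes by FMEA risk priority number
# (RPN = severity * occurrence * detection, each rated 1..10).
# The failure modes and ratings below are hypothetical.
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # impact of the failure effect (1..10)
    occurrence: int  # likelihood of the failure cause (1..10)
    detection: int   # difficulty of detecting it before release (1..10)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("home screen freeze", severity=8, occurrence=4, detection=3),
    FailureMode("navigation audio dropout", severity=6, occurrence=3, detection=5),
    FailureMode("black screen on boot", severity=9, occurrence=2, detection=2),
]

# Highest-risk failure modes first, to prioritize mitigation work:
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN={m.rpn}")
```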

To prevent the established model from being valid only at the architecture review meeting and then ignored entirely during implementation, corresponding fitness functions must be built from the model to ensure the architecture does not slowly deteriorate. The next section introduces architecture fitness functions.

2. Continuous care through fitness functions

With the indicator system, we can quantitatively analyze and evaluate the performance and reliability of the smart cockpit software. However, if the evaluation process is too complex, lengthy, and difficult to run quickly, then over time evaluating these architectural characteristics becomes a heavy burden on the team: evaluations happen less and less often, feedback gets slower and slower, the practice becomes unsustainable, and eventually it stagnates.

Everything that can be automated should be automated.

When evaluating whether software functions meet requirements, we build large numbers of automated tests, forming a safety net that continuously ensures the software behaves as required. For evaluating architectural characteristics, however, the traditional approach is more like a periodic, campaign-style evaluation:

  • On the R&D side, a dedicated performance or reliability testing team is periodically assembled to test and evaluate, from a black-box perspective and against the indicator system, whether the indicator requirements are met, and to produce test reports;
  • On the design side, architecture discussions and review meetings are regularly arranged to evaluate the design itself and whether the software correctly implements the design, producing large numbers of documents.

ASPICE is a typical case: due to the complexity of its process and documentation, and its strict requirements for each development stage, design and testing easily remain stuck at an earlier snapshot version and can never keep up with the speed of software change.

(Figure omitted; source: An ASPICE Overview)

In "Evolutionary Architecture", co-authored by Neal Ford, Patrick Kua and Rebecca Parsons, the fitness function is defined as "an objective function that summarizes how close the intended design solution is to achieving the set goal". Introducing fitness functions means that architecture evaluation can be automated and normalized through engineering means.

(Figure omitted; source: "Evolutionary Architecture")

When our indicators and models are converted into fitness functions, they can be bound into the R&D pipeline, enabling automated evaluation of architectural characteristics.
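
For example, a performance indicator can be turned into an automated check in the pipeline. Here is a minimal sketch in the style of a pytest test; the 2-second budget and the measure_cold_start() helper are hypothetical assumptions:

```python
# Minimal sketch of an architecture fitness function bound into the CI
# pipeline as a pytest test. The 2-second threshold and the
# measure_cold_start() helper are hypothetical; a real project would
# pull this measurement from its own test bench or telemetry.
COLD_START_BUDGET_S = 2.0

def measure_cold_start() -> float:
    """Placeholder: return the measured cold-start time in seconds."""
    return 1.8  # stub value; replace with a real measurement

def test_cold_start_within_budget():
    assert measure_cold_start() <= COLD_START_BUDGET_S, (
        "Cold-start time exceeds the architectural budget; "
        "the performance fitness function fails the build."
    )
```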

With automation as a premise, architecture care can then be used to drive continuous improvement.

Based on the established fitness functions, their execution results during daily builds, iterative testing, integration testing, and other processes can form a complete set of performance and reliability evaluation reports. Taking the previous version's evaluation results as the baseline and comparing them with the latest version's results, we can closely monitor the software's performance and reliability, making it obvious at a glance which parts of the new version have improved and which have deteriorated.
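
A minimal sketch of such a baseline comparison; the metric names, values, and the 5% regression tolerance are hypothetical:

```python
# Minimal sketch: compare the latest evaluation results against the
# previous version's baseline and flag regressions. Metric names,
# values, and the 5% tolerance are hypothetical.
baseline = {"cold_start_s": 1.90, "avg_fps": 58.0, "crash_rate_pct": 0.12}
latest   = {"cold_start_s": 2.10, "avg_fps": 59.0, "crash_rate_pct": 0.10}

LOWER_IS_BETTER = {"cold_start_s", "crash_rate_pct"}
TOLERANCE = 0.05  # allow 5% drift before flagging a regression

for name, base in baseline.items():
    new = latest[name]
    change = (new - base) / base
    regressed = change > TOLERANCE if name in LOWER_IS_BETTER else change < -TOLERANCE
    status = "REGRESSED" if regressed else "ok"
    print(f"{name}: {base} -> {new} ({change:+.1%}) {status}")
```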

3. An observability toolset to aid analysis

So far we have means to support continuous evaluation of performance and reliability, but evaluation essentially exposes problems; the subsequent analysis and optimization are the hard parts of continuous improvement.

Once a problem is exposed, optimization often needs to happen as quickly as possible. In business-oriented organizations, teams spend most of their time on business functionality and lack the capability to analyze and optimize performance and reliability issues, so the organization usually turns to in-house or hired technical experts for help. However, as a scarce resource, technical experts are often stretched thin across a wide variety of problems.

Therefore, for organizations that hope to achieve continuous improvement, establishing engineering methods for analysis and optimization is essential to improving efficiency. The first of these is building an observability toolset. In the evaluation framework described earlier, the role of indicators is mainly to describe the current status: indicators can judge better or worse, but they cannot help analyze root causes. Analyzing software problems requires reproducing what happened while the system was running, how components interacted, and what data was generated; this information must be captured and recorded by observability tools.

With such a toolset in place, when evaluation finds that certain indicators have deteriorated, the runtime context and observation records can be quickly correlated through some basic information, allowing the problem to be analyzed, located, and optimized quickly.
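
A minimal sketch of the idea: attach a shared trace ID to both metrics and runtime records so that a degraded indicator can be traced back to what the system was doing at the time. The record formats and data are hypothetical:

```python
# Minimal sketch: correlate a degraded indicator with runtime
# observation records via a shared trace_id. Record formats and data
# are hypothetical; a real system would use an observability stack
# (structured logs, traces, metrics) rather than in-memory lists.
metrics = [
    {"trace_id": "t-101", "name": "page_render_ms", "value": 35},
    {"trace_id": "t-102", "name": "page_render_ms", "value": 420},  # degraded
]
logs = [
    {"trace_id": "t-102", "msg": "GC pause 180ms"},
    {"trace_id": "t-102", "msg": "map tile cache miss, fetching over network"},
    {"trace_id": "t-101", "msg": "render completed"},
]

THRESHOLD_MS = 100
for m in metrics:
    if m["value"] > THRESHOLD_MS:
        # Pull the runtime context recorded under the same trace:
        context = [entry["msg"] for entry in logs if entry["trace_id"] == m["trace_id"]]
        print(f"{m['name']}={m['value']}ms (trace {m['trace_id']}): {context}")
```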

Summary

The smart car market has broad prospects and is developing rapidly. As competition deepens, the ultimate smart cockpit experience is bound to become a major goal for automobile manufacturers.

From the perspective of software development and delivery, and drawing on excellent practices and explorations in the software field, this article has discussed methods for continuously evaluating and continuously improving the performance and reliability of smart cockpit software.

As more and more external investment and cross-field talent pour into the smart car field, I believe related industries will continue to create enormous value.
