
Android App Testing: Process and Common Problems


1. Automated testing

Automated testing mainly consists of several parts: automated UI function testing, automated interface (API) testing, and other specialized automated testing.

1.1 UI Function Automated Testing

UI function automated testing, often just called UI automation, is testing driven through the UI: scripts reproduce the taps and other interactions a manual tester would perform, replacing manual execution with automation.

Its advantage is that it effectively frees testing manpower from highly repetitive functional testing of interface features, using script execution to achieve fast, efficient regression.

Its shortcomings, however, are just as obvious: high maintenance costs, a tendency toward misjudgments, and poor compatibility. Because the tests operate on the interface, UI stability becomes the biggest constraint on script maintenance: frequently changing interactions mean the test-case scripts must be constantly updated, consuming a large amount of testing resources.


Misjudgments arise mainly because identification based on UI controls is easily disturbed by slow or abnormal loading due to network conditions, device configuration, the test environment, and so on, making some assertions during test execution inaccurate and undermining the accuracy of the results. Poor compatibility means that running the scripts on different devices, operating systems, and hardware environments can produce unpredictable behavior, again leading to inaccurate results.

Weighing these advantages and disadvantages, our UI function automated testing mainly covers the app's core paths, applying UI automation to functional modules that require many repeated executions, many repeated verifications, and whose UI changes infrequently.

Many repeated executions and repeated verifications mean high utilization once the automation is built, and a low frequency of UI changes means subsequent maintenance costs stay low. Use cases with these three characteristics offer relatively high return on investment, so we give them the highest priority in our UI automation practice.

During UI function automated testing, the relevant controls, test cases, and test sets should be organized and managed effectively, and duplicated work merged promptly to avoid wasting resources. When UI functions change, the suite can then be maintained at a relatively small cost.
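As a concrete illustration, here is a minimal sketch of a core-path UI test using the uiautomator2 Python library. The package name, resource IDs, and login flow are hypothetical placeholders, not taken from this article.

```python
# Minimal core-path UI test sketch using uiautomator2.
# Package name and resource IDs below are placeholders.
import uiautomator2 as u2

d = u2.connect()                            # connect to a device over adb
d.app_start("com.example.app", stop=True)   # cold-start the app under test

# Drive the core path: log in and verify the home screen loads.
d(resourceId="com.example.app:id/username").set_text("test_user")
d(resourceId="com.example.app:id/password").set_text("test_pass")
d(resourceId="com.example.app:id/login_btn").click()

# Assert on a stable control rather than pixels, to reduce misjudgments.
assert d(resourceId="com.example.app:id/home_feed").wait(timeout=10.0)
```

Asserting on control existence with a generous timeout, rather than on screenshots or fixed sleeps, is one way to reduce the misjudgments described above.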

1.2 Interface Automated Testing

In the section on UI function automation, we mentioned the key constraint on automation: stability. Precisely because the UI is unstable, automating UI functions is relatively expensive, so we naturally turn to the layer that is more stable and better suited to automation than the UI: the interface (API).

An app's UI may change as product requirements evolve across different stages, but the interfaces behind it are usually relatively stable, which gives automated interface testing a solid foundation.

We need to inventory the interfaces the app calls, sort and group them by functional module, and prioritize them for automation. For each interface, we should understand its meaning, the value ranges of its parameters, and how different inputs produce different outputs, and catalogue the error and exception responses, to ensure that interface testing is both effective and complete.
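A minimal sketch of what such interface cases can look like, using pytest and requests; the endpoint, parameters, and expected status codes are illustrative assumptions, not a real API.

```python
# Parametrized interface-test sketch with pytest + requests.
# Endpoint, parameters, and expected codes are placeholders.
import pytest
import requests

BASE_URL = "https://api.example.com"

@pytest.mark.parametrize("user_id, expected_status", [
    (1001, 200),        # normal value
    (0, 400),           # boundary / invalid value
    (-1, 400),          # out-of-range value
    ("abc", 400),       # wrong type
])
def test_get_user_profile(user_id, expected_status):
    resp = requests.get(f"{BASE_URL}/v1/user/profile",
                        params={"user_id": user_id}, timeout=5)
    assert resp.status_code == expected_status
    if expected_status == 200:
        body = resp.json()
        # Verify the response shape, not just the status code.
        assert "nickname" in body and "avatar_url" in body
```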

Once interface automation is underway, an interface document needs to be maintained jointly with the development engineers, so that whenever an interface is added, removed, or changed, the test engineers learn of it immediately and adjust the interface automation test cases accordingly.

1.3 Other Specialized Automated Tests

Beyond the two categories above, automation can also support specialized tests that improve test quality and efficiency. This requires thinking actively in daily testing work: which tasks can be accomplished through automation, which tests become more efficient when automated, which function points can be placed under long-term automated monitoring, and so on.

For example, a project I am responsible for has a feature that manual testing could only exercise with a limited number of clicks, at a low frequency. Through scripts, we could click it faster and for far longer during testing, and run the same script not only on our own test device but across different devices. This automated test was effective and improved both test efficiency and test quality. Although for various reasons it will not be added to the UI function automation suite, in the current version automation genuinely helped us, and that is exactly what we should advocate.
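A minimal sketch of this kind of click-stress script, driving taps through adb; the device serial, coordinates, and counts are placeholders chosen for illustration.

```python
# Stress-click sketch: repeat a tap far beyond what manual testing can do.
# Device serial and screen coordinates are placeholders.
import subprocess
import time

DEVICE = "emulator-5554"
X, Y = 540, 1200          # screen coordinates of the target control

for i in range(5000):     # thousands of clicks instead of a handful
    subprocess.run(
        ["adb", "-s", DEVICE, "shell", "input", "tap", str(X), str(Y)],
        check=True)
    time.sleep(0.05)      # throttle so input events are not dropped
```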

In short, using a variety of automated testing tools and methods to assist our testing is an approach worth encouraging.

2. Performance testing

In the testing system of the project I am responsible for, performance testing covers three dimensions: the time dimension, the resource dimension, and fluency.

2.1 Time Dimension

Performance testing in the time dimension mainly measures how quickly functional features respond after an operation; familiar examples are first-screen loading time and the time for a page to open after a tap.

There are many ways to measure time: recording the screen and counting frames, inserting timestamps into the program, using third-party scripts, applying image-recognition techniques, and so on.
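For instance, activity launch time can be read from `adb shell am start -W`, which reports launch timing in milliseconds. A minimal sketch, assuming a placeholder package and activity name:

```python
# Launch-time measurement sketch using `adb shell am start -W`.
# Package and activity names are placeholders.
import re
import subprocess

def measure_launch_ms(package: str, activity: str) -> int:
    # Force-stop first so we measure a cold start.
    subprocess.run(["adb", "shell", "am", "force-stop", package], check=True)
    out = subprocess.run(
        ["adb", "shell", "am", "start", "-W", f"{package}/{activity}"],
        capture_output=True, text=True, check=True).stdout
    # `am start -W` prints a TotalTime line in milliseconds.
    match = re.search(r"TotalTime:\s+(\d+)", out)
    return int(match.group(1)) if match else -1

print(measure_launch_ms("com.example.app", ".MainActivity"))
```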

During the testing process, we need to research the tooling against the project itself: Is this a one-off test, or will it need to run continuously in the future? Does it need to be turned into a tool for long-term use? Will it run on a single device, or must it work across different device environments? Is the tool open source, or does it provide a data interface for later integration with the team's test platform? And so on.

2.2 Resource Dimension

Performance testing in the resource dimension mainly measures the system resources consumed while the app is in use, including CPU, memory, power, and network traffic.
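A minimal memory-sampling sketch via `adb shell dumpsys meminfo`; the package name is a placeholder, and the output parsing is deliberately simplified since the exact dumpsys format varies across Android versions.

```python
# Resource-sampling sketch: poll an app's memory (PSS) via adb.
# Package name is a placeholder; parsing is simplified.
import subprocess
import time

PACKAGE = "com.example.app"

def sample_pss_kb(package: str) -> int:
    out = subprocess.run(["adb", "shell", "dumpsys", "meminfo", package],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        parts = line.split()
        # The main table's TOTAL row carries PSS in kB as its second column.
        if len(parts) > 1 and parts[0] == "TOTAL" and parts[1].isdigit():
            return int(parts[1])
    return -1

samples = []
for _ in range(60):                          # one sample per second for a minute
    samples.append(sample_pss_kb(PACKAGE))
    time.sleep(1)
print(f"avg PSS: {sum(samples) / len(samples):.0f} kB, peak: {max(samples)} kB")
```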

The choice of testing tools depends on the test device, and the dimensions to monitor during the test are determined by the project; the specific testing methods will not be discussed here.

Worth noting is that resource-dimension performance testing involves two kinds of work: performance testing during the test process, and collection of online performance data.

Performance testing during the test process can be scoped according to the business testing needs: Which scenarios need to be tested? Is it a one-off test of the current version, or a test that each subsequent version must be compared against? Is performance data needed only from this version on one machine, or must it be collected across multiple devices? Does only this app need testing, or should it be compared against competing products as well?

On this basis, evaluate whether the test cases should be implemented as automated scripts for later reuse. If longitudinal comparisons against historical versions will be needed, keep the test environment and test devices as consistent as possible so the results are authentic and reliable.

One more small point: the processing and calculation of test data can also be handled by automated scripts, saving human computation effort. If necessary, you can even build a simple platform and store all test data on it for later analysis and reference.
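A minimal sketch of such a processing script; the sample values are illustrative placeholders, as if produced by the collection script sketched earlier.

```python
# Data-processing sketch: summarize raw samples into report statistics.
# `samples` stands in for collected data; values are illustrative only.
import statistics

samples = [53120, 54800, 60210, 55990, 57340]   # e.g. PSS samples in kB

summary = {
    "count": len(samples),
    "mean_kb": statistics.mean(samples),
    "median_kb": statistics.median(samples),
    "peak_kb": max(samples),
}
print(summary)   # persist to a file or a simple platform for version-to-version comparison
```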

Collecting online performance data requires development engineers to report the relevant metrics during feature implementation. After the feature is released, the online data is retrieved, processed, and analyzed to surface potential problems. Combined with the development engineers' logs and the logs of users who hit errors, the relevant performance issues can then be located, analyzed, and resolved.

2.3 Fluency Test

Fluency is the most direct element of the user experience, so fluency testing is a staple of performance testing. The testing method itself needs no elaboration here, but a few points deserve attention:

First, how to plan the fluency test cases; second, how to analyze the result data after the test and turn it into improvements; third, how to monitor fluency from online data after the app is released.

Fluency test cases should be designed around the app's core functions and common user paths, ideally backed by online data rather than guesswork. The in-app navigation paths that most users actually take, identified from that data, are what the tests should focus on, and during testing we should also pay attention to the lag-prone paths flagged by online monitoring.

Analyzing and using the data after a fluency test, and monitoring fluency data online, require joint planning and investigation by test engineers and development engineers; this article will not elaborate on them.
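For the measurement side, one common approach is reading frame statistics from `adb shell dumpsys gfxinfo`. A minimal sketch, assuming a placeholder package name; the two counters parsed are those printed in gfxinfo's frame-stats summary.

```python
# Frame-stats sketch: read jank counters from `dumpsys gfxinfo` for one app.
# Package name is a placeholder; only two well-known counters are parsed.
import re
import subprocess

def gfxinfo_stats(package: str) -> dict:
    out = subprocess.run(["adb", "shell", "dumpsys", "gfxinfo", package],
                         capture_output=True, text=True).stdout
    stats = {}
    for key in ("Total frames rendered", "Janky frames"):
        m = re.search(rf"{key}:\s*(\d+)", out)
        if m:
            stats[key] = int(m.group(1))
    return stats

print(gfxinfo_stats("com.example.app"))
```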

3. Stability test

This work can be divided into two stages: the testing stage before the app is released, and the online operation stage after release.

During the testing stage, stability testing can be organized around Monkey testing and code review; teams with the capacity can also run static code scanning tools at this stage. During Monkey testing, pay attention to the devices, the environment, and the frequency of test execution; analyze the problems found along the way and give extra attention to the parts that prove problem-prone. Code walkthroughs can focus on the modules that tend to crash during functional testing, with developers pairing up to inspect those modules for latent problems. As for static code scanning, developers need to fix the flagged issues and build good coding habits so that such problems stop slipping through.
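A minimal Monkey invocation sketch; the package name, seed, throttle, and event count are illustrative choices rather than recommendations.

```python
# Monkey-run sketch: a reproducible stress run against one package.
# Package name, seed, throttle, and event count are illustrative.
import subprocess

PACKAGE = "com.example.app"

subprocess.run([
    "adb", "shell", "monkey",
    "-p", PACKAGE,          # restrict events to the app under test
    "-s", "42",             # fixed seed so crashes can be reproduced
    "--throttle", "300",    # milliseconds between events
    "--ignore-timeouts",    # keep running past ANRs to cover more ground
    "-v", "-v",             # verbose logging for later analysis
    "100000",               # number of pseudo-random events
], check=True)
```

Fixing the seed is the key detail: the same seed replays the same event sequence, so a crash found during a run can be reproduced and verified after the fix.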

In the operation stage, stability work centers on the reporting and analysis of crash data from the field. Fixing these issues relies mostly on development engineers, but test engineers can analyze the reported data to identify basic patterns in the crashes, such as the most common systems and device models, and use them to improve and optimize daily stability testing.

