Can deep learning be used to replace other image processing algorithms?
With the rise of deep learning, visual image processing has become increasingly popular in recent years and is widely used across many fields, attracting a large number of practitioners. However, many people use only deep learning and believe that traditional image processing algorithms are obsolete. I once heard someone say that image processing has become so commonplace that traditional algorithms are outdated, the barrier to entry is low, and anyone can do it. Honestly, remarks like that sometimes leave me speechless.
I have some free time today, so I want to discuss this issue. First, let us analyze what image processing is mainly used for. Whatever the industry, its main functions include identification, classification, positioning, detection, size measurement, and visual guidance.
Since some people claim that deep learning has replaced traditional image processing technology, let me walk through a few concrete cases to see how deep learning is applied, and whether traditional image processing still has its place.
First, a brief introduction to the main vision functions mentioned above. Identification and classification are closely related. Face recognition, license plate recognition, character recognition, barcode/QR code recognition, product category recognition, fruit recognition, and so on are all image recognition technologies. After recognition, the result is sometimes output directly, and sometimes classification is also required; for example, products identified on a mixed production line must be sorted and packed.
There are many positioning methods. Sometimes you only need the rough target location; sometimes you need precise positioning so that a robot can grasp the target automatically. Detection includes target detection and defect detection: target detection usually only needs to determine whether the target is present in the scene, whereas defect detection must determine not only whether a defect exists but also its size and category. The goal of size measurement is clear: use vision to check whether a specific dimension of the target object meets the requirements. Visual guidance is combined with automatic robot grasping; it must locate the target precisely and also determine the exact grasp point so that the target does not fall while the robot moves.

Below I list a few specific cases to show how deep learning might be used to solve them. Because the company does not want these images made public, the pictures below show only small crops, from which the specific content cannot be determined.
All the following examples are real company requirements, and the images were captured on site. Let us first look at a simple character recognition case. The requirement is to verify whether the printed characters are correct, processing 20 characters per second. The budget is 20,000 yuan per vision system, there are 100 production lines, and the total comes to 2 million yuan. Should you take the project or not? Although 2 million yuan sounds like a lot, each vision system costs only 20,000 yuan; would it be profitable? In addition, when the host computer recognizes an erroneous character, it must automatically reject the product.
A brief analysis: 20 products are inspected per second, so each product must be completed within 50 milliseconds. The host computer must also send a signal to the rejection mechanism; to ensure signal stability, 20 milliseconds must be reserved, leaving 30 milliseconds for image capture and processing. For deep learning, the industrial PC configuration must be considered, along with the cost of the PLC, the rejection device, camera, light source, lens, cabinet, and other small accessories. How much does on-site manual commissioning cost? What is the total cost?

Here is another example. The picture below shows welding defect detection. There are many kinds of welding defects. Someone once spent a week training a deep learning model for this detection and told me the test results were great, but a month later they came back and said they could not afford the cost and the results were poor. Think about why this happens.
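The timing budget above is simple arithmetic; a few lines make the constraint explicit (the figures are the ones given in the text, the variable names are mine):

```python
# Timing budget for the character recognition line described above:
# 20 products per second, 20 ms reserved for a stable reject signal.
PRODUCTS_PER_SECOND = 20
cycle_ms = 1000 / PRODUCTS_PER_SECOND        # time available per product
signal_margin_ms = 20                        # reserved for the reject signal
processing_ms = cycle_ms - signal_margin_ms  # what is left for capture + OCR

print(f"{cycle_ms:.0f} ms per product, {processing_ms:.0f} ms for imaging and recognition")
```

Whatever algorithm you choose, classical or deep learning, it must fit inside that 30 ms window on hardware priced within the 20,000 yuan per-system budget.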
Another example: the picture below shows barcode recognition. The barcode is very blurry and difficult to read, right? We could try using deep learning to recognize it.

As a further example, look at the picture below. We need to check whether the thickness on both sides is consistent and whether there are defects on the surface. So how would deep learning solve this problem? Someone once spent half a year debugging on site, and in the end the customer was still not satisfied.

How would you use deep learning to guide a robot's automatic grasping and detect the tilt angle of the grasped object, so that the robot can adjust its posture?
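Whichever method locates the object, classical template matching or a learned detector, the tilt angle itself comes from plain geometry once two reference points on the object are known. A minimal sketch, with hypothetical coordinates:

```python
import math

def tilt_angle_deg(p1, p2):
    """Angle of the line p1 -> p2 relative to the image x-axis, in degrees."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical keypoints detected on the two ends of the grasped object.
left_edge = (100.0, 240.0)
right_edge = (400.0, 240.0)
angle = tilt_angle_deg(left_edge, right_edge)
print(angle)  # 0.0 -> the object is level, no posture correction needed
```

The hard part in practice is finding those two points reliably, not the trigonometry, which is exactly where the choice between deep learning and classical feature extraction matters.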
How would you use deep learning to measure the dimensions of a spring after a period of use to determine whether it is still qualified? How would you measure similar parameters of bearings, gears, threads, and so on?
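Size measurement ultimately reduces to converting pixel distances into physical units via calibration, regardless of how the edges were found. A toy sketch, where every number is hypothetical and chosen only for illustration:

```python
# Toy sketch of vision-based size measurement via pixel calibration.
# All numbers are hypothetical; a real system calibrates with a target
# of certified size and measures sub-pixel edges.

def measure_mm(pixel_length, mm_per_pixel):
    """Convert a measured pixel length to millimetres."""
    return pixel_length * mm_per_pixel

# Calibration: a reference object of known size (10.0 mm) spans 200 px.
mm_per_pixel = 10.0 / 200       # 0.05 mm per pixel

spring_px = 412                 # measured free length of the spring, in pixels
length_mm = measure_mm(spring_px, mm_per_pixel)

NOMINAL_MM = 20.6               # hypothetical nominal length
TOL_MM = 0.5                    # hypothetical tolerance
qualified = abs(length_mm - NOMINAL_MM) <= TOL_MM
print(round(length_mm, 3), qualified)
```

Note that at 0.05 mm per pixel, micron-level tolerances are simply out of reach of this optical setup, which is why lens, sensor, and lighting choices dominate such projects long before any algorithm is selected.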
The examples above are only a small fraction of many real cases. Similar detection tasks keep emerging, such as automatic bolt tightening, bin picking of disordered parts, and micron-level precise positioning. Unfortunately, many people use only deep learning, and some assert that image processing is outdated and that deep learning has replaced all other image processing methods, without even understanding basic image concepts. Anyone exposed to more real cases would not say such a thing.
Many people mistakenly believe that you only need to feed images into a deep learning model for training, and that if the results are poor, simply adding training samples or tuning parameters will achieve the desired effect. All I can say is that this understanding of images is too superficial. The most common image applications are face recognition and license plate recognition. For these tasks, deep learning works well enough because the requirements are not strict: even if recognition takes a long time or an error occurs, the consequences are minor. With face recognition payment, if the face cannot be recognized, you can still pay manually; with license plate or face-based access control, if recognition fails, the gate can be opened manually. In a fully automatic application scenario, however, such failures are not acceptable.
For the detection, classification, and identification of product defects, deep learning is a commonly used method, but whether to use it still depends on the specific situation. In addition, before training, it is usually necessary to apply other image processing algorithms first.
Some people train deep learning models only on public data sets and write papers about it, which is fine. However, there is still a long way from those models to practical applications. A company I knew recruited several Ph.D.s to work on visual inspection, but after half a year there were no results, and they were severely criticized by their leadership. Do you know why?
Deep learning has earned its place in applied vision; that is undeniable. However, it is only one part of visual inspection and cannot cover many other aspects. Current visual inspection technology can only handle relatively simple scenes; for many complex scenes, no algorithm yet achieves effective detection, so visual image processing algorithms still have a long way to go. When training on images, deep learning usually requires some preprocessing of the original image, such as filtering, enhancement, threshold segmentation, edge detection, and morphological operations. Sometimes the image is processed first and image features are extracted directly for deep learning training. Most people who work in vision understand this.
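Two of the preprocessing steps named above, threshold segmentation and morphological operations, can be illustrated in pure Python on a tiny grayscale "image". This is only a conceptual sketch; real pipelines would use an image library such as OpenCV:

```python
# Minimal pure-Python sketch of two classical preprocessing steps:
# threshold segmentation followed by a 3x3 morphological erosion.
# The 5x5 "image" below is a made-up example for illustration only.

def threshold(img, t):
    """Binarize: pixel -> 1 if its value is >= t, else 0."""
    return [[1 if p >= t else 0 for p in row] for row in img]

def erode(binary):
    """3x3 erosion: keep a pixel only if its whole 3x3 neighbourhood is 1."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(binary[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

image = [
    [ 10,  10,  10,  10,  10],
    [ 10, 200, 200, 200,  10],
    [ 10, 200, 200, 200,  10],
    [ 10, 200, 200, 200,  10],
    [ 10,  10,  10,  10,  10],
]
mask = threshold(image, 128)   # bright 3x3 blob becomes a block of 1s
eroded = erode(mask)           # erosion shrinks the blob to its centre pixel
```

Classical operations like these routinely run in microseconds, which is one reason they remain the workhorses for segmenting and cleaning images before any model ever sees them.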
As mentioned before, if you just want to write papers, studying one direction in depth is enough. In-depth theoretical research is also promising, but it demands stronger personal theoretical ability. If you have never been exposed to real vision applications, it is best not to casually claim that image processing is outdated, that other image processing algorithms are no longer used, or that deep learning has replaced them.
So, do we still need traditional image processing algorithms? I think the answer is clear. If you are still unsure, think carefully about how many vision-related projects you have actually worked on and what you still do not know. Do you really understand what visual inspection can do and how to do it? Very few visual inspection tasks are solved by a single image processing algorithm. At the application level, we need to be proficient with a variety of image processing algorithms and their combined use in order to work with ease in the vision industry. Summer vacation is coming soon, and it is an excellent time for learning; use this vacation and this learning platform to master the application of image processing algorithms.
The above is the detailed content of "Can deep learning be used to replace other image processing algorithms?". For more information, please follow other related articles on the PHP Chinese website!