
How to create a complete computer vision application in minutes with just two Python functions

Translator: Li Rui

Reviewer: Chonglou

This article begins with a brief introduction to the basic requirements of computer vision applications. It then introduces Pipeless, an open source framework that provides a serverless development experience for embedded computer vision. Finally, it provides a detailed step-by-step guide to creating and running a simple object detection application using just a couple of Python functions and a model.

Creating Computer Vision Applications

One way to describe computer vision is as "the field of using cameras and algorithms to recognize and process images". However, this simple definition does not fully convey what building such a system involves. To gain a deeper understanding of the process of building computer vision applications, we need to consider the functionality that each subsystem has to implement. Building a computer vision application involves several key steps: image acquisition, image processing, feature extraction, object recognition, and decision making. First, image data is acquired through a camera or other capture device. The images are then processed with algorithms, including operations such as denoising, enhancement, and segmentation, to prepare them for further analysis. During the feature extraction stage, the system identifies key features in the image, such as edges, shapes, and textures.

To process a 60 fps video stream in real time, each frame must be handled within about 16 milliseconds. This is usually achieved through multi-threading or multi-processing. Sometimes it is even necessary to start processing the next frame before the previous one has finished, to keep up with the frame rate.
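
As a minimal illustration (this is not Pipeless code), overlapping frame processing with a thread pool might look like the following sketch; the webcam index, worker count, and queue bound are arbitrary assumptions:

import cv2  # assumes OpenCV is installed
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    # Placeholder for the real per-frame work (inference, drawing, etc.)
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

cap = cv2.VideoCapture(0)  # default webcam
with ThreadPoolExecutor(max_workers=4) as pool:
    pending = []
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Submit the frame and keep reading: the next frame starts
        # processing before the previous one has finished.
        pending.append(pool.submit(process_frame, frame))
        if len(pending) >= 8:  # bound the queue to limit memory use
            pending.pop(0).result()
cap.release()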

As for the artificial intelligence models, fortunately there are many excellent open source models available today, so in most cases there is no need to develop one from scratch; you only need to fine-tune it for your specific use case. These models run inference on every frame, performing tasks such as object detection, segmentation, pose estimation, and more.
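
For example, a pretrained detector can be reused in a few lines. The sketch below assumes the ultralytics package is installed ("pip install ultralytics"); the image path is illustrative:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")    # downloads pretrained weights on first use
results = model("image.jpg")  # runs object detection on one image or frame
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, coordinates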

• Inference runtime: The inference runtime is responsible for loading the model and running it efficiently on the available devices (GPU or CPU).

To ensure the model runs fast during inference, a GPU is usually essential. GPUs can handle orders of magnitude more parallel operations than CPUs, which matters when processing large amounts of mathematical operations. You also need to consider where each frame lives: it can be stored in GPU memory or in CPU memory (RAM). Copying frames between these two memories can be slow, especially when frames are large, so memory placement and data transfer overhead must be weighed to achieve an efficient inference process.
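
As a sketch of how this looks with ONNX Runtime (the inference runtime used later in this article), you can list the available execution providers and prefer the GPU, falling back to the CPU; the model file name is illustrative:

import onnxruntime as ort

print(ort.get_available_providers())  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']

# Prefer CUDA when available, otherwise fall back to the CPU.
session = ort.InferenceSession(
    "yolov8n.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# Inputs are NumPy arrays in CPU memory (RAM); the runtime copies them to
# the GPU, which is exactly the transfer overhead discussed above.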

The multimedia pipeline is the set of components that takes a video stream from a source, splits it into frames, and feeds them to the model as input. Sometimes, these components also modify and rebuild the video stream for forwarding. They play a key role in handling video data, ensuring the stream is transmitted and processed efficiently.
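
A very simplified stand-in for such a pipeline, using OpenCV to decode a file into frames and re-encode them after processing (the paths and codec are illustrative assumptions):

import cv2

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... preprocessing, inference, and drawing would happen here ...
    out.write(frame)  # forward the (possibly modified) frame

cap.release()
out.release()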

• Video stream management: Developers may want the application to tolerate video stream interruptions, reconnect automatically, dynamically add and remove streams, handle several streams simultaneously, and so on (a rough sketch of this follows below).
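
The following is purely illustrative of the kind of resilience this implies (not how Pipeless implements it): reopen the source whenever the stream drops.

import time
import cv2

def read_forever(uri):
    while True:
        cap = cv2.VideoCapture(uri)
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break  # stream interrupted
            yield frame
        cap.release()
        time.sleep(1)  # back off before reconnecting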

All of these systems must be created or integrated into the project, and the resulting code must be maintained. The problem is that you end up maintaining a large amount of code that is not application specific, but is instead the subsystems surrounding the actual case-specific logic.

Pipeless Framework

To avoid building all of the above from scratch, you can use the Pipeless framework instead. This open source computer vision framework lets you supply a few case-specific functions while it takes care of everything else.

The Pipeless framework divides the application's logic into "stages", each of which is like a micro-application for a single model. A stage can include preprocessing, running inference on the preprocessed input, and postprocessing the model output to take action. You can then chain as many stages as you like to compose a complete application, even using multiple models.

To provide the logic for each stage, you simply add application-specific code functions, and Pipeless takes care of calling them when needed. This is why Pipeless can be described as a framework that provides a serverless development experience for embedded computer vision: you provide a few functions without worrying about all the surrounding subsystems.

Another important feature of Pipeless is the ability to automate video stream processing by dynamically adding, removing, and updating video streams via a CLI or REST API. You can even specify a restart policy, indicating when processing of a video stream should be restarted, for example after an error.

Finally, to deploy Pipeless you just install it and run it with your code functions on any device, whether in a cloud virtual machine or container, or directly on an edge device such as an Nvidia Jetson or Raspberry Pi.

Creating an Object Detection Application

The following is an in-depth look at how to create a simple object detection application using the Pipeless framework.

The first step is installation, which the installer script makes very simple:

curl https://raw.githubusercontent.com/pipeless-ai/pipeless/main/install.sh | bash

Next, create a project. A Pipeless project is a directory containing stages. Each stage lives in a subdirectory, and in each subdirectory you create the files containing the hooks (the specific code functions). The name of each stage folder is the stage name you must later give to Pipeless when you want to run that stage on a video stream.

pipeless init my-project --template empty
cd my-project

Here, the empty template tells the CLI to just create the directory. If no template is provided, the CLI will ask a few questions to create the stage interactively.

As mentioned above, you now need to add a stage to the project. Download a stage example from GitHub with the following command:

wget -O - https://github.com/pipeless-ai/pipeless/archive/main.tar.gz | tar -xz --strip=2 "pipeless-main/examples/onnx-yolo"

This will create a stage directory, onnx-yolo, which contains the application functions.
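
The resulting project layout looks like this:

my-project/
└── onnx-yolo/
    ├── pre-process.py
    ├── process.json
    └── post-process.py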

Then, review the contents of each stage file; these are the application hooks.

First, the pre-process.py file defines a function (hook) that receives a frame and a context. The function performs some operations to prepare the input data from the received RGB frame so that it matches the format the model expects. That data is added to frame_data['inference_input'], which is what Pipeless will pass to the model.

import cv2
import numpy as np

def hook(frame_data, context):
    frame = frame_data["original"].view()
    yolo_input_shape = (640, 640, 3)  # h,w,c
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame = resize_rgb_frame(frame, yolo_input_shape)
    frame = cv2.normalize(frame, None, 0.0, 1.0, cv2.NORM_MINMAX)
    frame = np.transpose(frame, axes=(2, 0, 1))  # convert to c,h,w
    inference_inputs = frame.astype("float32")
    frame_data["inference_input"] = inference_inputs

... (plus some other auxiliary functions, such as resize_rgb_frame, that the hook calls)

There is also a process.json file that tells Pipeless which inference runtime to use (ONNX Runtime in this case), where to find the model it should load, and optional parameters such as the execution_provider to use (CPU, CUDA, TensorRT, etc.).

{ "runtime": "onnx","model_uri": "https://pipeless-public.s3.eu-west-3.amazonaws.com/yolov8n.onnx","inference_params": { "execution_provider": "tensorrt" }}

Finally, the post-process.py file defines a function similar to the one in pre-process.py. This time, it receives the inference output that Pipeless stores in frame_data["inference_output"] and parses that output into bounding boxes. It then draws the bounding boxes on the frame and assigns the modified frame to frame_data['modified']. With that, Pipeless will forward the video stream you provide, but with the modified frames, including the bounding boxes.

def hook(frame_data, _):
    frame = frame_data["original"]
    model_output = frame_data["inference_output"]
    yolo_input_shape = (640, 640, 3)  # h,w,c
    boxes, scores, class_ids = parse_yolo_output(model_output, frame.shape, yolo_input_shape)
    class_labels = [yolo_classes[id] for id in class_ids]
    for i in range(len(boxes)):
        draw_bbox(frame, boxes[i], class_labels[i], scores[i])
    frame_data["modified"] = frame

... (plus some other auxiliary functions, such as parse_yolo_output and draw_bbox, that the hook calls)

The last step is to start Pipeless and provide a video stream. To start Pipeless, just run the following command in the my-project directory:

pipeless start --stages-dir .

Once Pipeless is running, provide it with a video stream from the webcam (v4l2) and display the output directly on the screen. Note that you must provide the list of stages the stream should execute in order; in this example, it is just the onnx-yolo stage:

pipeless add stream --input-uri "v4l2" --output-uri "screen" --frame-path "onnx-yolo"
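
Other stream sources can be added the same way, for example a video file (assuming your Pipeless version supports file URIs, as the project's examples do; the path is illustrative):

pipeless add stream --input-uri "file:///home/user/video.mp4" --output-uri "screen" --frame-path "onnx-yolo"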

Conclusion

Creating computer vision applications is a complex task, because of the many factors and subsystems that must be implemented around them. With a framework like Pipeless, getting up and running takes only a few minutes, so you can focus on writing the code for your specific use case. In addition, Pipeless "stages" are highly reusable and easy to maintain, keeping maintenance light and letting you iterate very quickly.

If you wish to participate in the development of Pipeless, you can do so through its GitHub repository.

Original title: Create a Complete Computer Vision App in Minutes With Just Two Python Functions, author: Miguel Angel Cabrera

Link: https://www.php.cn/link/e26dbb5b1843bf566ea7ec757f3325c4
