
Dreams and challenges of edge artificial intelligence

WBOY
2023-04-09 14:41:07


In this article, we focus on two main questions: why implement artificial intelligence in "small machines" at all, and what challenges stand in the way of building AI into small machines?

In an AI-powered future, we should have flying cars and robot butlers. We might even encounter sentient robots that decide to rebel against us. We are not quite there yet, but it is clear that artificial intelligence (AI) technology has already entered our world.

Every time we ask a smart voice assistant to do something, machine learning first figures out what we said and then tries to make the best decision about what we want it to do. Likewise, every time a video site or e-commerce platform recommends "movies you may like" or "products you may need", complex machine learning algorithms are working to make those suggestions as persuasive as possible, which is clearly more effective than the blanket promotions of the past.

While we may not all have self-driving cars, we are keenly aware of developments in this area and the potential that autonomous navigation offers.

Artificial intelligence technology holds a great promise: that machines can make decisions based on the world around them, processing information the way humans do, or even better than humans. But if we look at the examples above, we see that this promise has so far been realized only by "large machines", which tend to have no power, size, or cost constraints. In other words, they run hot, are mains-powered, are large, and are expensive. For example, services such as Alexa and Netflix rely on large, power-hungry servers in cloud data centers to infer users' intentions.

Self-driving cars do rely on batteries, but those batteries must turn the wheels and steer the vehicle, so their capacity is enormous compared with the energy cost of even the most expensive AI decisions.

So while artificial intelligence holds great promise, "little machines" are being left behind. Devices powered by smaller batteries, or constrained by cost and size, cannot participate in the idea that machines can see and hear. Today, these little machines can only use simple AI techniques, perhaps listening for a keyword or analyzing low-dimensional physiological signals such as photoplethysmography (PPG) for heart rate.

What if a small machine could see and hear?

But is there value in a small machine that can see and hear? It may be hard to imagine small devices like doorbell cameras running technologies as demanding as autonomous driving or natural language processing. Still, there are opportunities for less complex, less processing-intensive AI tasks such as word recognition, speech recognition, and image analysis:

  • Doorbell cameras and consumer-grade security cameras are often triggered by uninteresting events: plants moving in the wind, dramatic light changes caused by clouds, or even a dog or cat walking in front of the lens. The resulting false alarms train homeowners to ignore alerts and miss genuinely important events, especially when they are traveling or asleep while the camera keeps warning about every sunrise, cloud, and sunset. A smarter camera can identify meaningful changes, such as the outline of a human body, and avoid these false alarms.
  • A door lock or other access point can use facial recognition, or even voice recognition, to verify human access, in many cases eliminating the need for a key or IC card.
  • Many cameras need to trigger on specific events: a trail camera when a certain animal enters the frame, a security camera when a person appears or there is a noise such as a door opening or footsteps, and other cameras on a voice command.
  • Large-vocabulary commands are useful in many applications. "Hey Alexa" and "Hey Siri" solutions abound, but once you consider vocabularies of 20 or more words, you find uses in industrial equipment, home automation, cooking appliances, and many other devices, simplifying how people interact with machines.

These examples only scratch the surface. The idea of letting small machines see, hear, and solve problems that previously required human intervention is a powerful one, and we continue to find creative new use cases every day.


What are the challenges of getting small machines to see and hear?

So, if AI is so valuable for small machines, why aren't we already using it more widely? The answer is computing power. AI inference is the result of neural network computation. Think of a neural network model as a rough approximation of how your brain processes a picture or sound: it breaks the input into very small pieces and then recognizes patterns as those pieces are combined.
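The "small pieces" idea can be made concrete with a convolution, the basic operation behind the CNNs discussed below. The sketch below is a minimal, illustrative NumPy implementation (not from any particular library): a tiny kernel slides over an image, and each output value summarizes one small patch.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over the image, producing one response per patch."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]  # one "very small piece" of the image
            out[i, j] = np.sum(patch * kernel)
    return out

# A vertical-edge detector applied to a tiny 6x6 test image
image = np.zeros((6, 6))
image[:, 3:] = 1.0                  # right half bright, left half dark
kernel = np.array([[-1.0, 1.0]])    # responds where brightness jumps left-to-right
response = conv2d(image, kernel)
print(response.shape)               # (6, 5)
```

The response is nonzero only at the dark-to-bright boundary, which is exactly the kind of local pattern a CNN's first layers learn to detect.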

The workhorse model for modern vision problems is the convolutional neural network (CNN). These models excel at image analysis and are also very useful in audio analysis. The challenge is that they require millions or billions of mathematical operations. Traditionally, there have been three ways to implement such applications:

  • Use a cheap, low-power microcontroller. Average power consumption may be low, but a CNN can take several seconds to compute, so AI inference is not real-time and each inference drains a great deal of battery energy.
  • Buy an expensive, high-performance processor that can complete the math within the required latency. These processors are typically large, require many external components, and need a heat sink or similar cooling, but they do perform AI inference very quickly.
  • Don't implement it at all. The low-power microcontroller is too slow to be useful, while the high-performance processor blows the cost, size, and power budget.
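To see why CNNs are so demanding, a back-of-envelope multiply-accumulate (MAC) count for a single convolutional layer is instructive. The layer dimensions below are illustrative assumptions, not figures from the article:

```python
def conv_layer_macs(out_h, out_w, kernel_h, kernel_w, in_ch, out_ch):
    """MACs for one conv layer: one kernel-sized dot product per output element."""
    return out_h * out_w * kernel_h * kernel_w * in_ch * out_ch

# A single 3x3 convolution on a modest 96x96 feature map, 32 -> 64 channels:
macs = conv_layer_macs(96, 96, 3, 3, 32, 64)
print(f"{macs:,} MACs for one layer")  # 169,869,312
```

That is roughly 170 million operations for one layer of one frame, and real CNNs stack many such layers, which is how "millions or billions" of operations accumulate.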

What is needed is an embedded AI solution built from the ground up to minimize the energy consumption of CNN computation. AI inference must use orders of magnitude less energy than conventional microcontroller or processor solutions, without relying on external components such as memory that add energy, volume, and cost.

If an AI inference solution could eliminate the energy penalty of machine vision, then even the smallest devices could see and identify what is happening in the world around them.

Fortunately, we are at the beginning of this "little machine" revolution. Products are now available that can virtually eliminate the energy costs of AI inference and enable battery-powered machine vision. For example, a microcontroller can be used to perform AI inference while consuming only microjoules of energy.
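As a rough illustration of what microjoule-to-millijoule inference implies for battery life, the arithmetic below assumes a hypothetical 1 mJ per inference and a standard CR2032 coin cell; both figures are assumptions for illustration, not measurements from any product.

```python
# Hypothetical battery-life estimate for always-on edge inference.
battery_joules = 0.225 * 3600 * 3.0   # CR2032: ~225 mAh at ~3 V, ignoring losses
energy_per_inference_j = 1e-3         # assumed 1 millijoule per AI inference
inferences = battery_joules / energy_per_inference_j
print(f"{inferences:,.0f} inferences")  # ~2.4 million on inference energy alone
```

At a few inferences per minute, a budget like this stretches to years, which is what makes battery-powered machine vision plausible; at the several-joules-per-inference cost of a slow general-purpose solution, the same cell would last only hours.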



This article is reproduced from 51cto.com.