Dreams and challenges of edge artificial intelligence
In this article, we focus on two main questions: why implement artificial intelligence in "small machines" at all, and what challenges will developers of AI-enabled small machines face?
Visions of an AI-powered future often include flying cars and robot butlers; we might even encounter sentient robots that decide to rebel against us. We are not quite there yet, but it is clear that artificial intelligence (AI) technology has entered our world.
Every time we ask a smart voice assistant to do something, machine learning first figures out what you said and then tries to make the best decision about what you want done. And every time a video website or e-commerce platform recommends "movies you may like" or "products you may need," it is relying on complex machine learning algorithms to make those suggestions as persuasive as possible, which is clearly more effective than blanket promotions.
While we may not all have self-driving cars, we are keenly aware of developments in this area and the potential that autonomous navigation offers.
Artificial intelligence technology holds a great promise: that machines can make decisions based on the world around them, processing information like humans, or even better than humans. But if we look at the examples above, we see that this promise has so far been realized only by "large machines," which tend to have no power, size, or cost constraints. Put another way, they run hot, draw mains power, and are large and expensive. For example, services such as Alexa and Netflix, run by the world's leading IT companies, rely on large power-hungry servers in cloud data centers to infer users' intentions.
While self-driving cars do rely on batteries, those batteries must turn the wheels and steer, so their energy capacity is enormous; even the most expensive AI computations are a small expenditure by comparison.
So while artificial intelligence holds great promise, "little machines" are being left behind. Devices powered by smaller batteries, or constrained by cost and size, cannot participate in the idea that machines can see and hear. Today, these little machines can only employ simple AI techniques, perhaps listening for a keyword or analyzing a low-dimensional signal such as a photoplethysmography (PPG) heart-rate waveform.
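The kind of low-dimensional analysis mentioned above can be remarkably cheap. As a sketch (the peak-counting approach and all numbers here are illustrative, not from the article), here is how a device might estimate heart rate from a PPG waveform with nothing more than simple peak detection:

```python
import math

def estimate_heart_rate(ppg, sample_rate_hz):
    """Estimate heart rate (BPM) from a PPG waveform by counting peaks.
    A peak is a sample larger than both neighbours and above the signal mean."""
    mean = sum(ppg) / len(ppg)
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > ppg[i - 1] and ppg[i] > ppg[i + 1] and ppg[i] > mean]
    duration_s = len(ppg) / sample_rate_hz
    return 60.0 * len(peaks) / duration_s

# Synthetic PPG for illustration: a 1.2 Hz pulse (72 BPM) sampled at 50 Hz for 10 s
fs = 50
signal = [math.sin(2 * math.pi * 1.2 * n / fs) for n in range(fs * 10)]
print(round(estimate_heart_rate(signal, fs)))  # 72
```

A real PPG signal would need filtering and artifact rejection, but the point stands: this workload is a few comparisons per sample, nothing like the compute that vision or speech demands.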
But is there value in a small machine being able to see and hear? It may be difficult to imagine small devices like doorbell cameras using technologies associated with autonomous navigation or natural language processing. Still, opportunities exist for less complex, less processing-intensive AI tasks such as word recognition, speech recognition, and image analysis.
These examples only scratch the surface. The idea of letting small machines see, hear, and solve problems that previously required human intervention is a powerful one, and we continue to find creative new use cases every day.
So, if AI is so valuable for small machines, why aren't we already using it more widely? The answer is computing power. AI inference is the result of neural network computation. Think of a neural network model as a rough approximation of how your brain processes a picture or sound: it breaks the input into very small pieces and then recognizes patterns as those pieces are combined.
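At its core, each step of that computation is a weighted sum followed by a nonlinearity. The following minimal sketch (the layer size, weights, and inputs are invented for illustration) shows one fully connected layer of inference in plain Python:

```python
def relu(x):
    """Rectified linear unit: the most common neural network nonlinearity."""
    return max(0.0, x)

def dense(inputs, weights, biases):
    """One fully connected layer: each output neuron is a weighted sum
    of all inputs, plus a bias, passed through the activation."""
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical 3-input, 2-neuron layer
x = [0.5, -1.0, 2.0]
W = [[0.2, 0.8, -0.5],
     [1.0, 0.0, 0.3]]
b = [0.1, -0.2]
print(dense(x, W, b))  # approximately [0.0, 0.9]
```

Every output value costs one multiply-accumulate per input, and a full model chains thousands of such neurons, which is why the arithmetic adds up so quickly.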
The workhorse model for modern vision problems is the convolutional neural network (CNN). These models excel at image analysis and are also very useful for audio analysis. The challenge is that such models require millions or even billions of mathematical operations, so these applications have traditionally been difficult to implement on small devices.
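To see where the "millions of operations" claim comes from, we can count multiply-accumulates (MACs) for a few convolution layers. The layer shapes below are a hypothetical tiny vision model, not one named in the article:

```python
def conv2d_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate count for one conv layer (stride 1, 'same' padding):
    each of the h*w*c_out outputs needs a k*k*c_in weighted sum."""
    return h * w * c_out * (k * k * c_in)

# Hypothetical small model on a 96x96 grayscale image, three 3x3 conv layers
layers = [
    (96, 96, 1, 8, 3),    # input layer
    (48, 48, 8, 16, 3),   # after one 2x downsample
    (24, 24, 16, 32, 3),  # after another 2x downsample
]
total = sum(conv2d_macs(*layer) for layer in layers)
print(f"{total:,} MACs")  # 5,971,968 MACs
```

Nearly six million multiply-accumulates per frame, for a model far smaller than anything used in production vision, makes it clear why a general-purpose microcontroller struggles.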
What is needed is an embedded artificial intelligence solution built from the ground up to minimize the energy consumption of CNN calculations. AI inference needs to consume orders of magnitude less energy than it does on traditional microcontroller or processor solutions, and without relying on external components such as memory, which add energy, volume, and cost.
If AI inference solutions could eliminate the energy penalty of machine vision, then even the smallest devices could see and identify what is happening in the world around them.
Fortunately, we are at the beginning of this "little machine" revolution. Products are now available that can virtually eliminate the energy costs of AI inference and enable battery-powered machine vision. For example, a microcontroller can be used to perform AI inference while consuming only microjoules of energy.
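Back-of-the-envelope arithmetic shows why microjoule-scale inference matters for battery life. All of the figures below (battery size, energy per inference) are illustrative assumptions, not measurements from any specific product:

```python
def inferences_per_battery(capacity_mah, voltage_v, energy_per_inference_uj):
    """How many inferences a battery could power if inference were the only load."""
    capacity_j = capacity_mah / 1000 * voltage_v * 3600  # mAh at V -> joules
    return capacity_j / (energy_per_inference_uj * 1e-6)

# Hypothetical comparison on a 620 mAh coin-cell-class battery at 3 V:
conventional = inferences_per_battery(620, 3.0, 10_000)  # 10 mJ per inference
low_power = inferences_per_battery(620, 3.0, 10)         # 10 uJ per inference
print(f"{conventional:,.0f} vs {low_power:,.0f} inferences")
```

The thousandfold difference in energy per inference translates directly into a thousandfold difference in how long a battery-powered camera can keep watching.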
The above is the detailed content of Dreams and challenges of edge artificial intelligence. For more information, please follow other related articles on the PHP Chinese website!