The Edge AI Dream

This article is part of TechXchange: Cutting-edge AI

What you will learn:

  • The case for implementing AI in “small machines.”
  • The challenges of developing small AI-enabled machines.

At this point, we should have had flying cars. And robot butlers. And, with a bit of bad luck, sentient robots that decide to turn against us and bring about the apocalypse. Although we don’t have any of that, it’s clear that artificial intelligence (AI) technology has made its way into our world.

Every time you ask Alexa to do something, machine learning technology deciphers what you said and then tries to determine what you wanted done. Every time Netflix or Amazon recommends that “next movie” or “next buy,” it’s drawing on sophisticated machine learning algorithms that give you compelling recommendations far more appealing than the sales promotions of the past.

And while not all of us have self-driving cars, we are acutely aware of the developments in this space and the potential offered by autonomous navigation.

AI technology holds great promise: the idea that machines can make decisions based on the world around them, processing information the way a human would (or better). But if you think about the examples above, the promise of AI is only fulfilled by “big machines” – things that have no power, size, or cost constraints. Or, to put it another way, they can be hot, line-powered, big, and expensive. Alexa and Netflix rely on big, power-hungry servers in the cloud to figure out your intent.

While self-driving cars are likely to rely on batteries, their battery capacity is enormous, given that those batteries also have to turn the wheels and steer the vehicle. Compared to those expenditures, even the most demanding AI computations are a small energy cost.

So while the promise of AI is great, the “little machines” are being left behind. Devices that are powered by smaller batteries or have cost and size constraints cannot participate in the idea that machines can see and hear. Today, these little machines can only use simple AI technology, perhaps listening for a single keyword or analyzing low-dimensional signals such as photoplethysmography (PPG) for heart-rate monitoring.

What if small machines could see and hear?

But is there any point in small machines being able to see and hear? It’s hard to imagine a doorbell camera taking advantage of technologies like autonomous driving or natural language processing. Yet there is an opportunity for less complex, less CPU-intensive AI computations, such as vocabulary recognition, speech recognition, and image analysis:

  • Doorbell cameras and consumer security cameras often trigger during uninteresting events, such as plant movement caused by wind, drastic light changes caused by clouds, or even events like dogs or cats running past. This can lead to false triggers, causing the owner to start ignoring events. Also, if the owner is traveling in another part of the world, he or she is likely asleep while the camera is alarming on lighting changes caused by sunrise, clouds, and sunset. A smarter camera could trigger on more specific events, such as a person being in the frame.
  • Door locks or other access points can use facial identification or even voice recognition to grant access to authorized personnel, without the need for keys or badges in some cases.
  • Many cameras want to trigger on certain events: for example, trail cameras may want to trigger when there is a deer in the frame, security cameras may want to trigger on a person in the frame or on a sound like a door opening or footsteps, and a personal camera may want to trigger on a voice command.
  • Large-vocabulary commands can be useful in many applications. While there are many “Hey Alexa” solutions out there, once you start thinking of a vocabulary of 20 or more words, you may find uses in industrial equipment, home automation, cooking appliances, and many other devices to simplify human interaction.

These examples only scratch the surface. The idea of enabling small machines to see, hear, and solve problems that previously required human intervention is powerful, and we continue to find new, creative use cases every day.

What are the challenges in enabling small machines to see and hear?

So if AI can be so valuable for small machines, why don’t we have it yet? The answer is computing power. AI inferences are the result of calculating a neural network model. Think of a neural network model as a rough approximation of how your brain would process an image or sound, breaking it up into very small pieces and then recognizing the pattern when those small pieces are put together.
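The “very small pieces” idea above can be made concrete with a minimal sketch. A single neural-network feature detector is just a weighted sum (a dot product) over a small patch of input, followed by a simple activation; the patch and filter values below are hypothetical, purely for illustration:

```python
# A single neural-network "feature detector": a weighted sum over a
# small 3x3 patch of an image, followed by a simple activation.
# All values are hypothetical, purely for illustration.

patch = [              # 3x3 block of pixel intensities (0..1)
    [0.1, 0.9, 0.1],
    [0.1, 0.9, 0.1],
    [0.1, 0.9, 0.1],
]
weights = [            # a filter that responds to vertical edges
    [-1.0, 2.0, -1.0],
    [-1.0, 2.0, -1.0],
    [-1.0, 2.0, -1.0],
]

# Multiply-and-accumulate (MAC): the basic operation repeated
# millions of times in a full network.
acc = sum(patch[r][c] * weights[r][c]
          for r in range(3) for c in range(3))

activation = max(0.0, acc)   # ReLU: keep only positive responses
print(activation)            # strong response: a vertical edge is present
```

A full network stacks thousands of such detectors in layers, so that early layers find edges and textures while later layers recognize the patterns those pieces form.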

The workhorse model of modern vision problems is the convolutional neural network (CNN). These models are excellent for image analysis and very useful for audio analysis as well. The challenge is that they require millions or billions of mathematical calculations. Traditionally, applications have faced a difficult implementation choice:

  • Use an inexpensive, low-power microcontroller solution. Although the average power consumption may be low, computing the CNN may take several seconds, which means the AI inference is not real-time and, because the computation runs so long, it still consumes considerable battery energy.
  • Buy an expensive and powerful processor that can perform these mathematical operations within the required latency. These processors are typically large and require many external components, including heatsinks or similar cooling components. However, they perform AI inferences very quickly.
  • Don’t implement it at all. The low-power microcontroller solution would be too slow to be useful, and the high-power processor approach would break cost, size, and power budgets.
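The scale of that trade-off can be put into rough numbers. The sketch below counts the multiply-accumulate (MAC) operations in a small hypothetical CNN, then estimates latency and energy for the two hardware options; every throughput and power figure is an illustrative assumption, not a measurement of any specific part:

```python
# Back-of-the-envelope estimate of the power/latency trade-off.
# All throughput and power figures are illustrative assumptions.

# MACs (multiply-accumulates) in one convolutional layer:
#   out_h * out_w * out_channels * (kernel * kernel * in_channels)
def conv_macs(out_h, out_w, out_ch, k, in_ch):
    return out_h * out_w * out_ch * k * k * in_ch

# A small hypothetical CNN on a 96x96 RGB image.
macs = (conv_macs(96, 96, 16, 3, 3)      # first layer
        + conv_macs(48, 48, 32, 3, 16)   # second layer
        + conv_macs(24, 24, 64, 3, 32))  # third layer
print(f"{macs / 1e6:.1f} million MACs per inference")

# Option 1: low-power MCU (assume 10 million MACs/s at 10 mW).
t_mcu = macs / 10e6                      # seconds per inference
e_mcu = t_mcu * 10e-3                    # joules per inference

# Option 2: application processor (assume 100 billion MACs/s at 5 W).
t_cpu = macs / 100e9
e_cpu = t_cpu * 5.0

print(f"MCU: {t_mcu:.1f} s, {e_mcu * 1e3:.1f} mJ per inference")
print(f"Processor: {t_cpu * 1e3:.3f} ms, {e_cpu * 1e6:.0f} uJ per inference")
```

Even this toy network needs tens of millions of MACs, so under these assumptions the MCU takes seconds per inference (not real-time) while the processor is fast but burns watts whenever it runs.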

What is needed is an embedded AI solution designed from the ground up to minimize the power consumption of CNN computation. AI inferences should run using orders of magnitude less power than conventional microcontroller or processor solutions, and without the aid of external components such as memories, which add power, size, and cost.

If an AI inference solution could virtually eliminate the energy penalty of computer vision, then even the smallest devices could see and recognize things happening in the world around them.

Luckily for us, we are at the start of a “small machine” revolution. Products are now available that nearly eliminate the energy cost of AI inference and enable battery-powered machine vision. Microcontrollers, for example, are now designed to perform AI inferences while expending only microjoules of energy.
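“Microjoules of energy” translates directly into battery life. The arithmetic below assumes a CR2032 coin cell (roughly 225 mAh at 3 V) and a hypothetical 100-µJ inference cost; both figures are illustrative assumptions, and the estimate ignores everything outside the inference itself:

```python
# How far does "microjoules per inference" go on a small battery?
# Assumes a CR2032 coin cell (~225 mAh at 3 V) and a hypothetical
# 100-uJ inference; both figures are illustrative assumptions.

capacity_j = 0.225 * 3600 * 3.0       # 225 mAh at 3 V -> ~2430 joules
energy_per_inference_j = 100e-6       # assumed cost of one inference

inferences = capacity_j / energy_per_inference_j
per_day = 24 * 3600                   # one inference per second, all day
days = inferences / per_day

print(f"{inferences / 1e6:.1f} million inferences per battery")
print(f"~{days:.0f} days at one inference per second (inference cost only)")
```

Under these assumptions, a single coin cell funds tens of millions of inferences, which is why microjoule-scale inference makes always-on, battery-powered vision plausible.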
