Machine learning is possible on microcontrollers

ARM’s Zach Shelby introduced the use of microcontrollers for machine learning and artificial intelligence at the ECF19 event in Helsinki last Friday. The talk showed that artificial intelligence and machine learning can be applied to small embedded devices in addition to the cloud-based model. In particular, artificial intelligence is well suited to Internet of Things devices. Using machine learning in IoT also makes sense from an energy-efficiency point of view if unnecessary power-consuming communication can be avoided (for example, local keyword detection before sending voice data to the cloud for more detailed analysis).

According to Shelby, we are now moving to a third wave of IoT that comes with comprehensive device security and voice control. In this model, machine learning techniques are one new application that can be added to previous work done on IoT.

To use machine learning successfully on small embedded devices, the problem to be solved must have reasonably little incoming information and a very limited number of possible outcomes. An ARM Cortex-M4 processor equipped with a DSP unit is powerful enough for simple handwriting decoding or detecting a few spoken words with a machine learning model. In the examples shown, the machine learning models needed less than 100 kilobytes of memory.
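To make that 100-kilobyte figure concrete, here is a back-of-the-envelope sketch of how a tiny network's memory footprint can be estimated. The layer sizes below are invented for illustration, not taken from the talk:

```python
# Back-of-the-envelope memory estimate for a tiny keyword-spotting network.
# The architecture is invented for illustration (49x10 MFCC features
# -> dense 64 -> dense 32 -> 4 keywords), not taken from the presentation.

def dense_params(inputs, outputs):
    """Weights plus biases of one fully connected layer."""
    return inputs * outputs + outputs

layers = [(49 * 10, 64), (64, 32), (32, 4)]
total_params = sum(dense_params(i, o) for i, o in layers)

int8_bytes = total_params        # 8-bit quantization: one byte per parameter
float32_bytes = total_params * 4

print(total_params)              # 33636 parameters
print(int8_bytes < 100_000)      # True: fits the ~100 kB budget
print(float32_bytes < 100_000)   # False: unquantized, it would not
```

The arithmetic illustrates why 8-bit quantization matters: the same parameters stored as float32 would already blow the memory budget quoted in the talk.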


The presentation can now be viewed on YouTube:

Important tools and projects mentioned in the presentation:

TinyML

TensorFlow Lite

uTensor (ARM MicroTensor)

TensorFlow Lite Micro

Articles on the presentation:

https://www.uusiteknologia.fi/2019/05/20/ecf19-koneoppiminen-mahtuu-mikro-ohjaimeen/

http://www.etn.fi/index.php/72-ecf/9495-koneoppiminen-mullistaa-sulautetun-tekniikan

 

406 Comments

  1. Tomi Engdahl says:

    Machine learning at the network edge (Koneoppimista verkon reunalla)
    https://etn.fi/index.php/13-news/14243-koneoppimista-verkon-reunalla

    MACHINE LEARNING AT THE EDGE
    https://etn.fi/index.php/tekniset-artikkelit/14242-machiner-learning-at-the-edge

    Many customers fail to assess and demonstrate the benefits AI will bring to their application. To get applications started on the right foot, STMicroelectronics’ Edge AI Sprint brings a whole support system of experts who can guide developers through the minefields inherent to their application and use case.

    Traditionally, large companies looking to benefit from machine learning must hire one or more data scientists to collect a massive amount of data for months, clean it, and create AI models. Embedded developers then port the implementation to microcontrollers or use dedicated tools to convert neural networks into optimized code for MCUs.

    When a company wrestles with tight budget constraints, hiring one or more data scientists may be out of the question. Additionally, it may not be possible to outsource the job. Some situations are sensitive, while others require someone to be constantly on staff.

    Even with the right people and all the time in the world, obtaining quality data is still an issue. Despite all the advances in machine learning, getting reliable training samples can be a severe problem. For instance, if an application tries to detect abnormal behaviors, data may simply be unavailable: many datasets work for classification problems, but they are useless for tasks like anomaly detection that must recognize situations never seen before. It is also critical to obtain good-quality data, which is far from obvious. Even when samples aren’t plagued by typos or missing information, recording clean sets and precisely labeling them can demand serious investments.

    NanoEdge AI Studio is a utility that speaks to embedded developers, even to those with no data-science expertise. The magic lies in running the training phase that learns a complex nominal behavior and the inference on the same device. The entire process can thus run on the same STM32 microcontroller.
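The train-and-infer-on-one-MCU idea can be illustrated with a toy anomaly detector that first learns a nominal behavior from samples and then scores new ones. This is a stand-in sketch, not NanoEdge AI Studio's actual algorithm:

```python
import math

class NominalBehaviorModel:
    """Toy on-device anomaly detector: learns the nominal signal's mean and
    spread during a training phase, then scores new samples at inference.
    Illustrative stand-in only, not NanoEdge AI Studio's algorithm."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford's method)

    def learn(self, x):
        # Training phase: incremental statistics need only a few bytes of RAM.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def score(self, x):
        # Inference phase: deviation from nominal, in standard deviations.
        std = math.sqrt(self.m2 / self.n) if self.n > 1 else 1.0
        return abs(x - self.mean) / (std or 1.0)

model = NominalBehaviorModel()
for sample in [1.0, 1.1, 0.9, 1.05, 0.95, 1.0]:  # nominal vibration levels
    model.learn(sample)

print(model.score(1.0) < 3.0)   # nominal sample: low score
print(model.score(5.0) > 3.0)   # abnormal sample: flagged
```

Both phases run on the same object, mirroring the point in the article that training and inference can share one STM32.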

    Reply
  2. Tomi Engdahl says:

    TinyML-CAM pipeline enables 80 FPS image recognition on ESP32 using just 1 KB RAM
    https://www.cnx-software.com/2022/10/31/tinyml-cam-pipeline-esp32-fast-image-recognition/

    Reply
  3. Tomi Engdahl says:

    Swanholm Tech’s Connected Safety Vest Is a Wearable TinyML Lifesaver
    Built with an embedded inertial measurement unit, these vests can detect falls — and come with a panic button, too.
    https://www.hackster.io/news/swanholm-tech-s-connected-safety-vest-is-a-wearable-tinyml-lifesaver-c472f6ac8d17

    Reply
  4. Tomi Engdahl says:

    Watch as Massimo Banzi presents the Arduino Pro lineup along with a Nicla Vision object detection demo running a FOMO model to spot nuts and bolts. #2022Imagine

    Get started with Edge Impulse on the stamp-sized camera board: https://bit.ly/3Amk1Zz

    Edge Impulse Imagine 2022: Arduino Nicla Vision Demo
    https://m.youtube.com/watch?v=CIOVIH49Q10

    Reply
  5. Tomi Engdahl says:

    Arduino Releases the Nicla Vision
    By Richard Elliot / News / 9th March 2022
    https://www.electromaker.io/blog/article/arduino-releases-the-nicla-vision

    Reply
  6. Tomi Engdahl says:

    Unlike other glass breaking sensors, this vandalism detection device relies on an inexpensive Arduino with an Edge Impulse ML model to classify audio and then send an alert to a property owner.

    Detect vandalism using audio classification on the Nano 33 BLE Sense
    https://blog.arduino.cc/2022/12/01/detect-vandalism-using-audio-classification-on-the-nano-33-ble-sense/

    Unlike other glass breaking sensors, Nekhil’s project relies on a single, inexpensive Arduino Nano 33 BLE Sense and its onboard digital microphone to record audio, classify it, and then alert a property owner over WiFi via an ESP8266-01 board. The dataset used to train the machine learning model came from two sources: the Microsoft Scalable Noisy Speech Dataset for background noise, and breaking glass recorded on the device itself. Both of these were added to an Edge Impulse project via the Studio and split into two-second samples before being processed by a Mel-filterbank Energy (MFE) algorithm.
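To give a rough idea of what an MFE block computes, here is a minimal sketch of Mel-filterbank energy extraction for one audio frame. This is illustrative only; the parameters are invented and Edge Impulse's MFE block has its own implementation:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank_energies(frame, sample_rate=16000, n_filters=10):
    """Toy Mel-filterbank energy (MFE) features for one audio frame.
    Parameters are made up; not Edge Impulse's actual MFE block."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
    # Triangular filters spaced evenly on the mel scale.
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2),
                             n_filters + 2)
    hz_points = mel_to_hz(mel_points)
    energies = []
    for i in range(n_filters):
        lo, mid, hi = hz_points[i], hz_points[i + 1], hz_points[i + 2]
        rising = np.clip((freqs - lo) / (mid - lo), 0, 1)
        falling = np.clip((hi - freqs) / (hi - mid), 0, 1)
        weights = np.minimum(rising, falling)
        energies.append(float(np.sum(weights * spectrum)))
    return energies

# A 2-second clip at 16 kHz would be split into many short frames like this.
t = np.arange(512) / 16000.0
frame = np.sin(2 * np.pi * 440.0 * t)   # 440 Hz test tone
mfe = mel_filterbank_energies(frame)
print(len(mfe))                          # 10 energies per frame
```

Each frame thus shrinks to a handful of perceptually spaced energy values, which is what makes the downstream classifier small enough for a Nano 33 BLE Sense.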

    Reply
  7. Tomi Engdahl says:

    In this work, researchers developed a lightweight deep learning model for real-time sleep stage classification using single-channel EEG data: https://arxiv.org/pdf/2211.13005.pdf

    Reply
  8. Tomi Engdahl says:

    Espressif’s Ali Hassan Shah Walks Through Putting a TinyML Gesture Recognition Model on an ESP32-S3
    Designed around Espressif’s ESP-DL, Shah’s tutorial walks through the creation, optimization, and deployment of a real-world ML model.
    https://www.hackster.io/news/espressif-s-ali-hassan-shah-walks-through-putting-a-tinyml-gesture-recognition-model-on-an-esp32-s3-e395622577b8

    Reply
  9. Tomi Engdahl says:

    An excellent example of how the ML capabilities of the Arduino Pro Portenta can supplement existing Industry 4.0 processes and lead to substantial increases in efficiency.

    My ML Brings All the Trucks to the Yard
    https://www.edgeimpulse.com/blog/my-ml-brings-all-the-trucks-to-the-yard

    Large manufacturing facilities, warehouses, and distribution centers have very complex yard processes that can be a logistical nightmare. Getting all the day’s trucks into the right places at the right times to be loaded or unloaded and sent on their way can be a very daunting task. The consequences of inefficient yard management range from financial losses and missed opportunities to customer dissatisfaction.

    Reply
  10. Tomi Engdahl says:

    Find out how Blues Wireless works with Edge Impulse by checking out this Smart Energy Meter project from Christopher Mendez Martinez:
    https://bit.ly/smartenergymeter

    Enter to win a Blues Wireless Starter Kit by taking our quiz on the project:
    https://bit.ly/TakeItToTheEdge

    Reply
  11. Tomi Engdahl says:

    Ready to start seeing the world with your Arduino Pro Nicla Vision? This tutorial takes you through the process of training and deploying a custom computer vision model using Edge Impulse: https://docs.arduino.cc/tutorials/nicla-vision/image-classification

    Reply
  12. Tomi Engdahl says:

    Deployed directly onto an Arduino Nano 33 BLE Sense, this Edge Impulse machine learning model can alert you before a motor fails.

    This Arduino-Powered TinyML Project Uses an Edge Impulse Model to Listen Out for Motor Failure
    https://www.hackster.io/news/this-arduino-powered-tinyml-project-uses-an-edge-impulse-model-to-listen-out-for-motor-failure-d167e4347629

    Reply
  13. Tomi Engdahl says:

    AR Pong Game with Object Detection
    The ML model uses Edge Impulse’s FOMO (Faster Objects, More Objects) to detect and differentiate the players and their coordinates.
    https://www.hackster.io/jallsonsuryo/ar-pong-game-with-object-detection-095f45

    Reply
  14. Tomi Engdahl says:

    Machine Learning Makes Sure Your LOLs Are Genuine
    https://hackaday.com/2023/01/04/machine-learning-makes-sure-your-lols-are-genuine/

    There was a time not too long ago when “LOL” actually meant something online. If someone went through the trouble of putting LOL into an email or text, you could be sure they were actually LOL-ing while they were typing — it was part of the social compact that made the Internet such a wholesome and inviting place. But no more — LOL has been reduced to a mere punctuation mark, with no guarantee that the sender was actually laughing, chuckling, chortling, or even snickering. What have we become?

    To put an end to this madness, [Brian Moore] has come up with the LOL verifier. Like darn near every project we see these days, it uses a machine learning algorithm — EdgeImpulse in this case. It detects a laugh by comparing audio input against an exhaustive model of [Brian]’s jocular outbursts — he says it took nearly three full minutes to collect the training set. A Teensy 4.1 takes care of HID duties

    https://twitter.com/lanewinfield/status/1610294277434933249

    Reply
  15. Tomi Engdahl says:

    Applications Processor Demo Does Image Recognition and Gesture Capture
    Dec. 23, 2022
    The NXP i.MX 93 applications processor family can be used for many things. In this case, we have a demonstrator that’s being used for live facial recognition and gesture capture.
    https://www.electronicdesign.com/industrial-automation/video/21256870/electronic-design-application-processor-demo-does-image-recognition-and-gesture-capture

    Reply
  16. Tomi Engdahl says:

    Arduino Puts a Syntiant NDP Machine Learning Chip on Its New Nicla Voice TinyML Development Board
    Featuring a Syntiant NDP120 ultra-low-power accelerator, this new dev board looks to bring on-device voice processing to your next project.
    https://www.hackster.io/news/arduino-puts-a-syntiant-ndp-machine-learning-chip-on-its-new-nicla-voice-tinyml-development-board-79b64fbc7fef

    Reply
  17. Tomi Engdahl says:

    Drowsiness is a major contributing factor to motor vehicle accidents, which can have serious consequences. This POC system uses a FOMO-based model with Arduino’s Nicla Vision to detect and alert if a driver’s eyes are closed for more than two seconds: https://bit.ly/3Xj7u2M

    Reply
  18. Tomi Engdahl says:

    Inspired by the Apple Pencil, Nekhil Ravi and Shebin Jose Jacob of Coder’s Cafe came up with their own tinyML-powered device that translates air-written letters into text on a computer.

    This DIY Apple Pencil writes with gestures
    https://blog.arduino.cc/2023/01/20/this-diy-apple-pencil-writes-with-gestures/

    Reply
  19. Tomi Engdahl says:

    Google Opens Pre-Orders for the Coral Dev Board Micro, Its First Microcontroller Development Board
    https://www.hackster.io/news/google-opens-pre-orders-for-the-coral-dev-board-micro-its-first-microcontroller-development-board-73664dd48266

    Launching at $79.99, the board is TensorFlow Lite Micro (TFLM) and standard TensorFlow Lite compatible — and includes Arduino IDE support.

    Reply
  20. Tomi Engdahl says:

    Microsoft Research India’s EdgeML tool enables you to perform machine learning tasks such as gesture recognition on tiny devices like an Arduino Uno.

    TINY MACHINE LEARNING ON AS LITTLE AS 2 KB OF RAM
    https://hackaday.com/2023/02/24/tiny-machine-learning-on-as-little-as-2-kb-of-ram/

    All of the machine learning stuff coming out lately doesn’t affect you if you are developing with embedded microcontrollers, right? Perhaps not. Microsoft Research India wants you to use their EdgeML tool to do machine learning tasks such as gesture recognition on tiny devices like an Arduino Uno. According to the developers, you might need as little as 2 KB of RAM. There’s no network connection required, and the work uses TensorFlow underneath, so it is compatible with much of what you’ll find for bigger computers.

    https://microsoft.github.io/EdgeML/
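The flavor of model that fits in 2 KB can be sketched as a nearest-prototype classifier, loosely in the spirit of EdgeML's ProtoNN. The prototypes, features, and labels below are invented for illustration:

```python
# Toy prototype-based classifier, loosely in the spirit of EdgeML's ProtoNN,
# which keeps only a handful of learned prototypes so the model fits in a
# few KB of RAM. These hand-picked prototypes are invented for illustration.
PROTOTYPES = [
    # (feature vector, label) — e.g. accelerometer stats for two gestures
    ((0.1, 0.9), "wave"),
    ((0.9, 0.2), "shake"),
]

def classify(features):
    """Nearest-prototype classification: O(#prototypes) work per inference,
    and no floats are needed on an MCU if everything is quantized to int8."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PROTOTYPES, key=lambda p: dist2(features, p[0]))[1]

print(classify((0.2, 0.8)))   # "wave"
print(classify((0.8, 0.1)))   # "shake"

# Memory: 2 prototypes x 2 features x 1 byte (int8) plus labels — far
# under a 2 KB budget even with many more prototypes and features.
```

The real ProtoNN learns its prototypes and a low-dimensional projection jointly; the point here is only that the stored model is a short list of vectors, not a large weight matrix.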

    Reply
  21. Tomi Engdahl says:

    Plumerai Brings Its TinyML People Detection Model to Espressif’s Low-Cost ESP32-S3
    Running at a usable 3.3 frames per second, the newly-shrunken model can track 20 people at distances over 65 feet.
    https://www.hackster.io/news/plumerai-brings-its-tinyml-people-detection-model-to-espressif-s-low-cost-esp32-s3-22d1dc58cd72

    Reply
  22. Tomi Engdahl says:

    What’s the Difference Between Machine Learning and TinyML?
    April 10, 2023
    Machine learning, or ML, gave us TinyML. What are the differences between the two, and what makes them unique?
    https://www.electronicdesign.com/markets/automation/article/21263631/electronic-design-whats-the-difference-between-machine-learning-and-tinyml

    Reply
  24. Tomi Engdahl says:

    No Cameras Please
    An image-free object detection algorithm uses ML to efficiently determine the class, location, and size of all targets in a scene.
    https://www.hackster.io/news/no-cameras-please-ace89f4835c7

    Object detection technology has been making significant strides in recent years, and it is increasingly being used across various industries to improve efficiency, productivity, and safety. This technology is most commonly based on computer vision algorithms that enable computers to identify and locate objects within digital images or videos.

    The applications of object detection are numerous and varied.

    Computer vision-based object detection algorithms can require a significant amount of computational resources and energy, however. This is because these algorithms analyze individual pixels and features within images or videos to identify and locate objects, which requires a very large number of calculations.

    A promising new technique has recently been described by a team of researchers at the Beijing Institute of Technology that has the potential to perform real-time object detection without the need for a lot of computational horsepower. Their image-free method uses a single pixel detector and a machine learning analysis pipeline to detect objects with fairly remarkable levels of accuracy. And whereas previous image-free efforts have failed to get the class, location, and size information of all objects in a frame at the same time, this team has shown that their system is up to the task.

    This novel single-pixel object detection technique illuminates an area with a carefully crafted sequence of structured light patterns, and the intensity of the light is recorded by a single-pixel detector. This scanning process is very quick, requiring just a tiny fraction of a second to complete. And because the measurements are sparse, the algorithms that process them can be lightweight.

    The data was first processed by a transformer-based encoder, which was used to extract the most relevant and informative features from it. These features were then forwarded into a multi-scale attention network-based decoder that is capable of predicting the class, location and size of all detected targets at the same time.

    Researchers detect and classify multiple objects without images
    https://www.optica.org/en-us/about/newsroom/news_releases/2023/may/researchers_detect_and_classify_multiple_objects_w/

    WASHINGTON — Researchers have developed a new high-speed way to detect the location, size and category of multiple objects without acquiring images or requiring complex scene reconstruction. Because the new approach greatly decreases the computing power necessary for object detection, it could be useful for identifying hazards while driving.

    “Our technique is based on a single-pixel detector, which enables efficient and robust multi-object detection directly from a small number of 2D measurements,” said research team leader Liheng Bian from the Beijing Institute of Technology in China. “This type of image-free sensing technology is expected to solve the problems of heavy communication load, high computing overhead and low perception rate of existing visual perception systems.”
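The single-pixel measurement scheme described above can be sketched in a few lines: each structured light pattern yields one total-intensity number, and the decoder works on that short vector instead of a full image. The scene and patterns below are invented for illustration:

```python
# Toy model of single-pixel sensing: each structured light pattern is
# projected onto the scene and the detector records one total intensity
# per pattern. The ML decoder then works on this short measurement vector
# instead of a full image. Scene and patterns are made up for illustration.
SCENE = [  # 4x4 "scene" with a bright object in the top-left corner
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]

def measure(scene, pattern):
    """One single-pixel measurement: total light passing through one pattern."""
    return sum(s * p for row_s, row_p in zip(scene, pattern)
                     for s, p in zip(row_s, row_p))

# Two example patterns: left half vs. top half of the field of view.
left_half = [[1, 1, 0, 0]] * 4
top_half = [[1] * 4] * 2 + [[0] * 4] * 2

measurements = [measure(SCENE, left_half), measure(SCENE, top_half)]
print(measurements)  # [4, 4] — a few numbers instead of 16 pixels
```

Because only a small number of such measurements is taken, the downstream encoder/decoder networks can stay lightweight, which is the paper's core efficiency argument.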

    Reply
  25. Tomi Engdahl says:

    Tokay Lite: Multi-purpose ESP32 AI Camera

    Battery-powered IoT camera with nightvision, motion detection and TensorFlow Lite support

    https://hackaday.io/project/189135-tokay-lite-multi-purpose-esp32-ai-camera

    Reply
  26. Tomi Engdahl says:

    AI is coming even to 8-bit devices
    https://etn.fi/index.php?option=com_content&view=article&id=15296&via=n&datum=2023-09-08_15:28:44&mottagare=30929

    Machine learning (ML) is becoming a standard requirement for embedded designers working to develop or improve products. To meet this need, Microchip has introduced tools that bring machine learning and AI models even to 8-bit microcontrollers.

    The MPLAB Machine Learning Development Suite software can be used across Microchip’s entire range of microcontrollers (MCUs) and microprocessors (MPUs) to add machine-learning-based inference to the processors quickly and efficiently.

    ML uses a set of algorithmic methods to find patterns in large datasets to enable decision-making. It is usually faster, easier to update, and more accurate than manual processing. One example of how Microchip customers use this tool is enabling predictive maintenance, where AI notices the aging or wear of equipment.

    The MPLAB Machine Learning Development Suite helps engineers build highly efficient, small-footprint ML models. Backed by AutoML, the toolkit eliminates many repetitive, tedious, and time-consuming model-building tasks, including extraction, training, validation, and testing. It also offers model optimizations that take the memory constraints of MCU and MPU chips into account.

    The price of the new tools varies by license. A free version of the MPLAB Machine Learning Development Suite is available for evaluation.

    Reply
  27. Tomi Engdahl says:

    Handwritten Digit Recognition Using TensorFlow Lite Micro on i.MX RT devices
    https://www.allaboutcircuits.com/partner-content-hub/nxp-semiconductors/handwritten-digit-recognition-using-tensorflow-lite-micro-on-i.mx-rt-devices/

    This application note focuses on handwritten digit recognition on embedded systems through deep learning. It explains the process of creating an embedded machine learning application that can classify handwritten digits and presents an example solution based on NXP’s SDK and the eIQTM technology.
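One concrete step in such an application is quantizing the input image to int8 before feeding it to the TensorFlow Lite Micro interpreter. A minimal sketch of that input stage follows; the scale and zero-point values are made up, since real ones come from the converted model's metadata:

```python
# Before a TFLM model classifies a digit, the 28x28 grayscale input is
# typically quantized to int8 using the model's scale and zero point.
# These particular values are invented for illustration; a real model's
# quantization parameters are read from its converted .tflite metadata.
INV_SCALE = 255      # i.e. scale = 1/255
ZERO_POINT = -128

def quantize(pixel):
    """Map a normalized pixel in [0.0, 1.0] to an int8 value."""
    q = round(pixel * INV_SCALE) + ZERO_POINT
    return max(-128, min(127, q))   # clamp to the int8 range

row = [0.0, 0.25, 1.0]              # a few normalized pixels
print([quantize(p) for p in row])   # [-128, -64, 127]
```

The interpreter then runs the int8 model on this buffer; dequantizing the output scores is the mirror image of the same formula.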

    Reply
