ARM’s Zach Shelby introduced the use of microcontrollers for machine learning and artificial intelligence at the ECF19 event in Helsinki last Friday. The talk showed that artificial intelligence and machine learning can be applied to small embedded devices in addition to the cloud-based model. In particular, artificial intelligence is well suited to Internet of Things devices. Using machine learning in IoT also makes sense from an energy-efficiency point of view when unnecessary power-consuming communication can be avoided (for example, local keyword detection before sending voice data to the cloud for more detailed analysis).
According to Shelby, we are now moving to a third wave of IoT that comes with comprehensive device security and voice control. In this model, machine learning techniques are one new application that can be added to previous work done on IoT.
To successfully use machine learning in small embedded devices, the problem to be solved needs to have reasonably little incoming information and a very limited number of possible outcomes. An ARM Cortex-M4 processor equipped with a DSP unit is powerful enough for simple handwriting recognition or for detecting a few spoken words with a machine learning model. In the examples shown, the machine learning models needed less than 100 kilobytes of memory.
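To give an idea of what such a sub-100 kB model looks like in practice, here is a minimal keyword-spotting inference sketch using TensorFlow Lite for Microcontrollers on a Cortex-M class device. The model array name (g_kws_model_data), the arena size, and the operator list are assumptions for illustration; header paths and the MicroInterpreter constructor arguments differ slightly between TFLM releases.

```cpp
// Minimal keyword-spotting inference sketch with TensorFlow Lite for
// Microcontrollers. g_kws_model_data is a hypothetical quantized model
// exported as a C array; audio features (e.g. an MFCC spectrogram) are
// assumed to be produced elsewhere.
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_kws_model_data[];   // quantized model as a C array

namespace {
constexpr int kArenaSize = 30 * 1024;            // well under the ~100 kB budget
alignas(16) uint8_t tensor_arena[kArenaSize];
tflite::MicroInterpreter* interpreter = nullptr;
}  // namespace

// Call once at boot.
void SetupKeywordModel() {
  const tflite::Model* model = tflite::GetModel(g_kws_model_data);

  // Register only the operators the model uses to keep flash usage small.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  static tflite::MicroInterpreter static_interpreter(model, resolver,
                                                     tensor_arena, kArenaSize);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();                // plan memory inside the arena
}

// Run one inference on a block of quantized audio features and return the
// index of the most likely keyword, or -1 on failure.
int RecognizeKeyword(const int8_t* features, size_t feature_len) {
  TfLiteTensor* input = interpreter->input(0);
  size_t n = feature_len < input->bytes ? feature_len : input->bytes;
  for (size_t i = 0; i < n; ++i) input->data.int8[i] = features[i];

  if (interpreter->Invoke() != kTfLiteOk) return -1;

  TfLiteTensor* output = interpreter->output(0);
  int classes = output->dims->data[output->dims->size - 1];
  int best = 0;
  for (int i = 1; i < classes; ++i)
    if (output->data.int8[i] > output->data.int8[best]) best = i;
  return best;
}
```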
The presentation can now be viewed on YouTube:
Important tools and projects mentioned in the presentation:
uTensor (ARM MicroTensor)
Articles about the presentation:
https://www.uusiteknologia.fi/2019/05/20/ecf19-koneoppiminen-mahtuu-mikro-ohjaimeen/
http://www.etn.fi/index.php/72-ecf/9495-koneoppiminen-mullistaa-sulautetun-tekniikan
419 Comments
Tomi Engdahl says:
Tennis Smith’s Cat Doorbell Uses On-Device Machine Learning to Spot a Cold Cat via Sight and Sound
TensorFlow running on a Raspberry Pi triggers SMS alerts if a cat is both seen and heard at the door.
https://www.hackster.io/news/tennis-smith-s-cat-doorbell-uses-on-device-machine-learning-to-spot-a-cold-cat-via-sight-and-sound-82709e6913d1
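The interesting part is the two-modality gate: an alert goes out only when the camera and the microphone models agree. A minimal sketch of that gating logic is below (the original project runs two TensorFlow models on a Raspberry Pi and sends SMS through a messaging service); the detector and alert functions are hypothetical stand-ins.

```cpp
// Sketch of the "seen AND heard" gating logic only.
#include <chrono>
#include <cstdio>
#include <thread>

bool cat_seen_by_camera() { return false; }   // stand-in for the image model
bool cat_heard_by_mic()   { return false; }   // stand-in for the audio (meow) model
void send_sms_alert(const char* msg) { std::puts(msg); }  // stand-in for the SMS path

int main() {
  using clock = std::chrono::steady_clock;
  auto last_alert = clock::now() - std::chrono::minutes(10);

  while (true) {
    // Require BOTH modalities to agree before alerting, so a passing shadow
    // or a random noise alone does not trigger an SMS.
    if (cat_seen_by_camera() && cat_heard_by_mic()) {
      if (clock::now() - last_alert > std::chrono::minutes(5)) {  // cooldown
        send_sms_alert("Cat waiting at the door");
        last_alert = clock::now();
      }
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
  }
}
```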
Tomi Engdahl says:
MediaPipe for Raspberry Pi released – No-code/low-code on-device machine learning solutions
https://www.cnx-software.com/2023/08/21/mediapipe-for-raspberry-pi-released-no-code-low-code-on-device-machine-learning-solutions/
Google has just released MediaPipe Solutions for no-code/low-code on-device machine learning for the Raspberry Pi (and an iOS SDK) following the official release in May for Android, web, and Python, but it’s been years in the making as we first wrote about the MediaPipe project back in December 2019.
Tomi Engdahl says:
https://etn.fi/index.php/tekniset-artikkelit/15513-uusilla-laajennuskaeskyillae-tekoaelyae-arm-ohjaimille
Tomi Engdahl says:
Cutting the Cord
https://www.hackster.io/news/cutting-the-cord-c0098f22d4b1
This custom voice assistant uses tinyML to control smart home appliances without relying on the cloud, bypassing common privacy concerns.
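The cloud-free pattern it describes is simple: a local keyword spotter maps recognized words directly to GPIO outputs, so audio never leaves the device. Below is a minimal sketch of that mapping, assuming an Arduino-class MCU with relays on two pins; recognize_keyword() is a hypothetical stand-in for the tinyML model.

```cpp
// Map locally recognized keywords to relay pins; no cloud round-trip.
#include <Arduino.h>

const int kLampRelayPin = 5;   // assumed wiring
const int kFanRelayPin  = 6;   // assumed wiring

// Returns 1 for "lamp", 2 for "fan", -1 when nothing was recognized.
int recognize_keyword() { return -1; }  // stand-in for the on-device model

void setup() {
  pinMode(kLampRelayPin, OUTPUT);
  pinMode(kFanRelayPin, OUTPUT);
}

void loop() {
  switch (recognize_keyword()) {
    case 1: digitalWrite(kLampRelayPin, !digitalRead(kLampRelayPin)); break;  // toggle lamp
    case 2: digitalWrite(kFanRelayPin,  !digitalRead(kFanRelayPin));  break;  // toggle fan
    default: break;  // nothing recognized; audio stays on the device
  }
  delay(100);
}
```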
Tomi Engdahl says:
This Compact Espressif ESP32-Powered Autonomous Robot Has a Machine Learning Brain Written in PHP
Streaming live video to a remote web server, this robot receives its commands from a PHP-based machine learning model.
https://www.hackster.io/news/this-compact-espressif-esp32-powered-autonomous-robot-has-a-machine-learning-brain-written-in-php-801b90223e68
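The "remote ML brain" pattern here is an ESP32 uploading camera frames to a web server and acting on the command the server's model returns. A hedged sketch of the device side follows: the PHP endpoint URL, Wi-Fi credentials, and command names are assumptions, and camera initialisation (esp_camera_init with a board-specific config) is omitted for brevity.

```cpp
// ESP32 side of the robot: post one JPEG frame, read back a motion command.
#include <WiFi.h>
#include <HTTPClient.h>
#include "esp_camera.h"

const char* kSsid     = "my-network";                          // placeholder
const char* kPassword = "my-password";                         // placeholder
const char* kInferUrl = "http://192.168.1.10/robot/infer.php"; // hypothetical PHP endpoint

void setup() {
  Serial.begin(115200);
  WiFi.begin(kSsid, kPassword);
  while (WiFi.status() != WL_CONNECTED) delay(200);
  // NOTE: esp_camera_init(&config) with board-specific pins goes here.
}

void loop() {
  camera_fb_t* fb = esp_camera_fb_get();          // grab one JPEG frame
  if (fb == nullptr) { delay(100); return; }

  HTTPClient http;
  http.begin(kInferUrl);
  http.addHeader("Content-Type", "application/octet-stream");
  int status = http.POST(fb->buf, fb->len);       // send the frame to the PHP model
  esp_camera_fb_return(fb);

  if (status == 200) {
    String cmd = http.getString();                // e.g. "forward", "left", "stop"
    Serial.printf("server command: %s\n", cmd.c_str());
    // Map the command to motor outputs here (PWM / H-bridge pins).
  }
  http.end();
  delay(200);                                     // crude pacing of the control loop
}
```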
Tomi Engdahl says:
MLCommons Releases Latest MLPerf Tiny Benchmark Results for On-Device TinyML
Devices from Bosch, Qualcomm, Renesas, STMicro, Skymizer, and Syntiant put to test in the latest MLPerf Tiny 1.2 benchmark.
https://www.hackster.io/news/mlcommons-releases-latest-mlperf-tiny-benchmark-results-for-on-device-tinyml-3f820ae12aae
Tomi Engdahl says:
https://www.hackster.io/news/eivind-holt-s-portenta-h7-powered-tinyml-camera-tracks-dangerous-icicle-formations-a5cea5800e4b
Tomi Engdahl says:
Generative AI is coming to IoT devices
https://etn.fi/index.php/13-news/16169-generatiivinen-tekoaely-tulee-iot-laitteisiin
The British-origin Arm has its own family of neural network processors called Ethos, and a new version has now been added to it. The Ethos-U85 is designed to support transformer operations in low-power devices. In practice, Arm is bringing generative AI models to IoT devices.
It is worth remembering that IoT devices will still not be able to process large language models, i.e. AI computation based on LLMs. At this stage, Arm says it has ported, for example, the ViT-Tiny computer vision model and the TinyLlama-1.1B generative language model to the Ethos-U85.
The Ethos-U85 was already a big topic a month ago at the Embedded World trade fair in Nuremberg. Many Arm customers praised the new NPU and said they were already bringing it to their own chips, although publicly they were not yet allowed to talk about it.
The Ethos-U85 has a third-generation microarchitecture. Compared to the second-generation U65, the U85 in its largest configuration is 4 times more powerful and 20 percent more energy-efficient.
Tomi Engdahl says:
https://etn.fi/index.php/tekniset-artikkelit/16191-tekoaelyae-hyvin-pienellae-virralla
Tomi Engdahl says:
https://etn.fi/index.php/13-news/16223-korttifarmi-tuo-koneoppimisen-verkon-reunalle
STMicroelectronics and Amazon Web Services have joined forces to create a machine learning application for audio event detection, which LACROIX, a member of the ST partner program, intends to use in smart cities. The combination of ST and AWS technologies opens up new possibilities for building machine learning applications at the edge.
The solution uses the Audio Event Detection model from the ST Model Zoo, deployed on the Discovery Kit for IoT node with the STM32U5 microcontroller series. To ensure seamless cloud connectivity, it uses an expansion pack that integrates FreeRTOS with AWS IoT Core, and the architecture supports the entire MLOps process. The machine learning stack handles data processing, model training and evaluation, while the IoT stack takes care of automatic device flashing and OTA updates, ensuring that every device has the latest firmware security patches.
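A minimal sketch of the device-side reporting step, under stated assumptions: mqtt_publish_json() is a hypothetical wrapper for whatever the FreeRTOS / AWS IoT Core expansion pack exposes (the real solution publishes through that stack), and the topic name and JSON fields are illustrative.

```cpp
// Report one audio-event detection from the STM32U5 model to the cloud side,
// where the MLOps pipeline can log it for monitoring and retraining.
#include <cstdint>
#include <cstdio>

bool mqtt_publish_json(const char* topic, const char* payload) {
  // Stand-in: the real firmware hands this to the AWS IoT Core MQTT client.
  std::printf("publish %s -> %s\n", topic, payload);
  return true;
}

void report_audio_event(const char* label, float confidence, uint32_t timestamp) {
  char payload[128];
  std::snprintf(payload, sizeof(payload),
                "{\"event\":\"%s\",\"confidence\":%.2f,\"ts\":%lu}",
                label, confidence, static_cast<unsigned long>(timestamp));
  mqtt_publish_json("smartcity/audio-events", payload);
}
```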
Tomi Engdahl says:
In the articles, Renesas presents its new RZ/V2H controller chip, which brings AI computation to the network edge and to IoT devices. Compared to its predecessor, the chip is more than 10 times more energy-efficient.
https://etn.fi/index.php/13-news/16259-muista-osallistua-etndigi-kisaan-palkintona-oneplussan-aelykellouutuus
Tomi Engdahl says:
Dual AI Camera
Dual AI Camera using Grove Vision AI V2 and Xiao ESP32S3 Sense to detect and capture images of hummingbirds.
https://www.hackster.io/Ralphjy/dual-ai-camera-e04757
Tomi Engdahl says:
This paper proposes StreamTinyNet, a novel tinyML architecture that’s able to perform multiple-frame video streaming analysis on devices as small as the Arduino Nicla Vision.
StreamTinyNet: video streaming analysis with spatial-temporal TinyML
https://arxiv.org/html/2407.17524v1?fbclid=IwY2xjawEUCaxleHRuA2FlbQIxMQABHdqVH-ar26wRAtvomNBPqIFbRD2wY73yiAI1JIFI9kiZlxVfQzaASae4Gg_aem_9YSeVO0eIBsEt5MxPbdsLg
In this paper, we present StreamTinyNet, the first TinyML architecture to perform multiple-frame VSA, enabling a variety of use cases that require spatial-temporal analysis and that were previously impossible to carry out at the TinyML level. Experimental results on publicly available datasets show the effectiveness and efficiency of the proposed solution. Finally, StreamTinyNet has been ported and tested on the Arduino Nicla Vision, showing the feasibility of the proposed approach.
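A conceptual sketch of the spatial-temporal split the paper describes (not the authors' code): a per-frame CNN extracts a compact feature vector, a ring buffer keeps the last N vectors, and a small temporal head classifies the whole window. The dimensions and the extract_frame_features() / classify_window() functions are placeholders for the quantized networks running on the MCU.

```cpp
// Spatial-temporal streaming classification: reuse per-frame features,
// decide on a window of frames.
#include <array>
#include <cstdint>

constexpr int kFeatureDim = 32;   // per-frame embedding size (assumption)
constexpr int kWindow     = 8;    // number of frames analysed together (assumption)

using Feature = std::array<int8_t, kFeatureDim>;

Feature extract_frame_features(const uint8_t*) { return Feature{}; }          // stand-in spatial CNN
int classify_window(const std::array<Feature, kWindow>&) { return 0; }        // stand-in temporal head

class StreamingClassifier {
 public:
  // Push one camera frame; returns a class index once enough frames are buffered.
  int push_frame(const uint8_t* frame) {
    window_[next_] = extract_frame_features(frame);   // per-frame work done once
    next_ = (next_ + 1) % kWindow;
    if (filled_ < kWindow) { ++filled_; return -1; }  // window not full yet
    return classify_window(window_);                  // temporal decision
  }

 private:
  std::array<Feature, kWindow> window_{};
  int next_ = 0;
  int filled_ = 0;
};
```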
Tomi Engdahl says:
Bringing Big AI to Tiny Devices
StreamTinyNet enables multi-frame video analysis on resource-constrained devices, like the Arduino Nicla Vision, to find temporal patterns.
https://www.hackster.io/news/bringing-big-ai-to-tiny-devices-64a40641413c
Tomi Engdahl says:
Following a recent tinyML workshop, Joao Vitor Freitas da Costa prototyped a facial recognition vehicle security system using an Arduino Nicla Vision that only allows the car to start if its owner is sitting in the driver’s seat.
https://blog.arduino.cc/2024/08/05/making-a-car-more-secure-with-the-arduino-nicla-vision-board/?fbclid=IwY2xjawEhJhhleHRuA2FlbQIxMQABHfmgObBufnCYHfP89dza5-OHCarFtdIx4_ZyN80dWa104H4h40tclJMuOg_aem_ynYJVwCT275hB8f7uNvLQA
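A generic sketch of the gating logic such a system needs (not the original project's code): only close the starter relay after the face model has matched the owner in several consecutive frames, so a single false positive cannot start the car. The pin, threshold, and owner_match_score() are hypothetical.

```cpp
// Require several consecutive owner matches before enabling the starter.
#include <Arduino.h>

const int   kStarterRelayPin = 4;     // hypothetical relay output
const float kMatchThreshold  = 0.80;  // minimum similarity to count as the owner
const int   kRequiredMatches = 5;     // consecutive frames required

// Stand-in for the on-device face model; returns similarity to the owner (0..1).
float owner_match_score() { return 0.0f; }

int consecutive_matches = 0;

void setup() {
  pinMode(kStarterRelayPin, OUTPUT);
  digitalWrite(kStarterRelayPin, LOW);     // starter disabled by default
}

void loop() {
  if (owner_match_score() >= kMatchThreshold) {
    if (++consecutive_matches >= kRequiredMatches) {
      digitalWrite(kStarterRelayPin, HIGH);  // allow the car to start
    }
  } else {
    consecutive_matches = 0;
    digitalWrite(kStarterRelayPin, LOW);     // lock out again when the owner leaves
  }
  delay(100);
}
```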
Tomi Engdahl says:
https://www.hackster.io/news/seeed-studio-s-respeaker-lite-and-voice-assistant-kit-offer-low-power-on-device-voice-control-25ff3cb53cd5?fbclid=IwY2xjawEhKudleHRuA2FlbQIxMQABHS92r6v18TL7Sjqckxnn-5v4T2ng69ydC2Bye6QJV2cqnY6VJ3We6rx04g_aem_F7kFcAh7DD2Vj5m9e8qsFw
Tomi Engdahl says:
MechDog AI Robot Dog features ESP32-S3 controller, supports Scratch, Python, and Arduino programming
Hiwonder’s MechDog is a compact AI robot dog powered by an ESP32-S3 controller that drives eight high-speed coreless servos. It features built-in inverse kinematics for precise and agile movements and has ports for various I2C sensors such as ultrasonic and IMU sensors. The robot is equipped with a durable aluminum alloy frame and a removable 7.4V 1,500mAh lithium battery for power.
https://www.cnx-software.com/2024/08/08/mechdog-ai-robot-dog-features-esp32-s3-controller-supports-scratch-python-and-arduino-programming/
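"Built-in inverse kinematics" here means the controller converts a desired foot position into hip and knee angles for each leg. The following is a generic two-link planar IK sketch of that computation; the link lengths and joint conventions are illustrative, not MechDog's actual firmware values.

```cpp
// Two-link (hip + knee) planar inverse kinematics for one robot leg.
#include <cmath>
#include <cstdio>

struct LegAngles { double hip; double knee; bool reachable; };

// Solve hip and knee angles (radians) that place the foot at (x, y) in the
// leg's plane, for upper/lower leg lengths l1 and l2.
LegAngles solve_leg_ik(double x, double y, double l1, double l2) {
  double d2 = x * x + y * y;
  double cos_knee = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
  if (cos_knee < -1.0 || cos_knee > 1.0) return {0.0, 0.0, false};  // out of reach

  double knee = std::acos(cos_knee);                    // knee-forward solution
  double hip  = std::atan2(y, x) -
                std::atan2(l2 * std::sin(knee), l1 + l2 * std::cos(knee));
  return {hip, knee, true};
}

int main() {
  const double kRadToDeg = 57.29577951308232;
  // Example: 60 mm upper leg, 60 mm lower leg, foot 40 mm forward, 90 mm down.
  LegAngles a = solve_leg_ik(40.0, -90.0, 60.0, 60.0);
  if (a.reachable)
    std::printf("hip %.1f deg, knee %.1f deg\n", a.hip * kRadToDeg, a.knee * kRadToDeg);
  return 0;
}
```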
Tomi Engdahl says:
Computer Vision at the Edge? Just Zip It!
ZIP-CNN simplifies deploying CNNs on microcontrollers by estimating costs and applying reduction techniques to meet hardware constraints.
https://www.hackster.io/news/computer-vision-at-the-edge-just-zip-it-7934164ef4a1
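A rough illustration of the kind of cost estimation such a tool performs before deployment (not the paper's actual estimator): count multiply-accumulates and parameter bytes per convolution layer and compare against an assumed microcontroller budget to decide whether reduction (pruning, quantization, smaller input) is needed. The example network and budgets are made up.

```cpp
// Estimate per-layer MACs and weight storage, then check an MCU budget.
#include <cstdint>
#include <cstdio>

struct ConvLayer {
  int in_ch, out_ch, kernel, out_h, out_w;  // square kernel, int8 weights
};

uint64_t layer_macs(const ConvLayer& l) {
  return (uint64_t)l.out_h * l.out_w * l.out_ch * l.in_ch * l.kernel * l.kernel;
}

uint64_t layer_param_bytes(const ConvLayer& l) {
  // int8 weights plus an int32 bias per output channel
  return (uint64_t)l.out_ch * (l.in_ch * l.kernel * l.kernel + 4);
}

int main() {
  // Hypothetical 3-layer network on a 96x96 grayscale input.
  ConvLayer net[] = {
    {1, 8, 3, 48, 48}, {8, 16, 3, 24, 24}, {16, 32, 3, 12, 12},
  };
  const uint64_t kFlashBudget = 256 * 1024;  // assumed flash available for weights
  const uint64_t kMacBudget   = 5000000;     // assumed per-inference MAC budget

  uint64_t macs = 0, bytes = 0;
  for (const ConvLayer& l : net) { macs += layer_macs(l); bytes += layer_param_bytes(l); }

  std::printf("total MACs: %llu, parameter bytes: %llu\n",
              (unsigned long long)macs, (unsigned long long)bytes);
  if (macs > kMacBudget || bytes > kFlashBudget)
    std::printf("over budget: apply reduction (pruning/quantization) before deploying\n");
  return 0;
}
```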
Tomi Engdahl says:
MicroFlow is an open-source, Rust-based inference engine for the deployment of tinyML models to highly resource-constrained devices — ranging from the Arduino Nano 33 BLE Sense’s 32-bit nRF52840 SoC to the 8-bit ATmega328 MCU on the UNO Rev3 — using less flash and RAM memory than TensorFlow Lite and other state-of-the-art frameworks.
For Better TinyML, Just Go with the Flow
MicroFlow, a Rust-based framework, optimizes AI for microcontrollers and outperforms even TensorFlow Lite in terms of memory utilization.
https://www.hackster.io/news/for-better-tinyml-just-go-with-the-flow-df8fe56c3635?fbclid=IwY2xjawF1fSJleHRuA2FlbQIxMQABHZ1EwOyrlgKH-k3OGJaC2z7h8I_ebDIj2T92m9HGQ2JqP0lBEYQUYSozYg_aem_nHUVFIeXiZH-nRqevL5dyg