Computing at the Edge of IoT – Google Developers – Medium
We’ve seen that demand for low latency, offline access, and enhanced machine learning capabilities is fueling a move towards decentralization with more powerful computing devices at the edge.

Nevertheless, many distributed applications benefit more from a centralized architecture and the lowest-cost hardware, powered by MCUs.

Let’s examine how hardware choice and use case requirements factor into different IoT system architectures.


  1. Tomi Engdahl says:

    Tech Talk: Data-Driven Design

    How more data is shifting memory architectures.

  2. Tomi Engdahl says:

    Defining Edge Memory Requirements

    Edge compute covers a wide range of applications. Understanding bandwidth and capacity needs is critical.

    Defining edge computing memory requirements is a growing problem for chipmakers vying for a piece of this market, because it varies by platform, by application, and even by use case.

    Edge computing plays a role in artificial intelligence, automotive, IoT, data centers, and wearables, and each has significantly different memory requirements. So it’s important to nail down memory requirements early in the design process, along with the processing units and the power, performance, and area tradeoffs.

    “In the IoT space, ‘edge’ to companies like Cisco is much different than ‘edge’ to companies like NXP,” observed Ron Lowman, strategic marketing manager for IoT at Synopsys. “They have completely different definitions and the scale of the type of processing required looks much different. There are definitely different thoughts out there on what edge is. The hottest trend right now is AI and everything that’s not data center is considered edge because they’re doing edge inference, where optimizations will take place for that.”
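One reason these bandwidth and capacity needs are hard to pin down is that they follow directly from the workload. As a rough illustration (not from the article — the model sizes and frame rate below are assumed for the sketch), memory bandwidth for an edge inference device can be estimated from the bytes each inference must move:

```python
# Back-of-envelope estimate of memory bandwidth for an edge inference
# workload. All numbers are illustrative assumptions, not measurements.

def inference_bandwidth_gbs(weight_bytes, activation_bytes, inferences_per_sec):
    """Approximate DRAM traffic in GB/s, assuming each inference streams
    the model weights plus intermediate activations once (no on-chip
    weight caching)."""
    bytes_per_inference = weight_bytes + activation_bytes
    return bytes_per_inference * inferences_per_sec / 1e9

# Example: a 5 MB quantized model with 2 MB of activations at 30 inferences/s.
bw = inference_bandwidth_gbs(5e6, 2e6, 30)
print(f"{bw:.2f} GB/s")  # 0.21 GB/s
```

Rerunning the same arithmetic for a wearable (tiny model, 1 inference/s) versus an automotive camera pipeline (large model, 60 fps) spans several orders of magnitude, which is why a single "edge" memory spec doesn't exist.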

