Computing at the Edge of IoT – Google Developers – Medium
We’ve seen that demand for low latency, offline access, and enhanced machine learning capabilities is fueling a move towards decentralization with more powerful computing devices at the edge.

Nevertheless, many distributed applications benefit more from a centralized architecture and the lowest-cost hardware powered by MCUs.

Let’s examine how hardware choice and use case requirements factor into different IoT system architectures.


  1. Tomi Engdahl says:

    Designing For The Edge

    Growth in data is fueling many more options, but so far it’s not clear which of them will win.

    Chip and system architectures are beginning to change as the tech industry comes to grips with the need to process more data locally for latency, safety, and privacy/security reasons.

    The emergence of the intelligent edge is an effort to take raw data from endpoints, extract the data that requires immediate action, and forward other data to various local, regional or commercial clouds. The basic idea is to prioritize what data goes where, what data should be discarded altogether, and what data needs to be analyzed on a broader scale using more sophisticated tools than are available locally.
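    That triage can be sketched as a simple gateway-side filter: act locally on urgent readings, forward a reduced stream for broader analysis, and discard the rest. The thresholds and record fields below are hypothetical illustrations, not part of any specific product.

```python
# Minimal sketch of edge-side data triage: act locally on urgent readings,
# forward a reduced stream to the cloud, and discard routine data.
# Thresholds and field names are hypothetical.

def triage(reading, alarm_threshold=90.0, report_threshold=50.0):
    """Classify one sensor reading: 'act', 'forward', or 'discard'."""
    value = reading["value"]
    if value >= alarm_threshold:
        return "act"        # requires immediate local action
    if value >= report_threshold:
        return "forward"    # worth deeper analysis in the cloud
    return "discard"        # routine data, not worth the bandwidth

readings = [{"sensor": "temp1", "value": v} for v in (20.0, 55.0, 95.0)]
decisions = [triage(r) for r in readings]
print(decisions)  # ['discard', 'forward', 'act']
```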

  2. Tomi Engdahl says:

    InferX X1 Coprocessor Takes on Inference at the Edge

    The InferX X1 edge inference coprocessor developed by Flex Logix delivers artificial-intelligence inference to IoT applications.

    Flex Logix’s nnMax machine-learning (ML) inference engine technology, originally developed for embedded FPGAs (eFPGAs), will now be available in the InferX X1 coprocessor.

  3. Tomi Engdahl says:

    Preparing For War On The Edge

    The tech world is planning for an onslaught of data, but it’s far from clear who will own it.

    War clouds are gathering over the edge of the network.

    The rush by the reigning giants of data—IBM, Amazon, Facebook, Alibaba, Baidu, Microsoft and Apple—to control the cloud by building mammoth hyperscale data centers is being met with uncertainty at the edge of the network. In fact, just the emergence of the edge could mean that all bets are off when it comes to data dominance.

    It’s not that the big cloud operations are in danger of losing business. But they may be in danger of losing control over some or all of the most valuable data, and that could have significant repercussions on a global level. So what will they or others do about these changes? At this point, it’s not clear how all of this will play out.

    The edge, which even a year ago was largely ignored by investors and chipmakers, has suddenly become the next big thing. Companies are putting labels on it, defining its boundaries, and weaving marketing messages about how all of the pieces will fit together.

  4. Tomi Engdahl says:

    Rushing To The Edge

    Why the next big thing is unnerving tech giants and pushing design in new directions.

    Virtually every major tech company has an “edge” marketing presentation these days, and some even have products they are calling edge devices. But the reality is that today no one is quite sure how to define the edge or what it will become, and any attempts to pigeon-hole it are premature.

    What is becoming clear is the edge is not simply an extension of the Internet of Things. It is the result of the IoT, a way of managing and utilizing the vast and exploding amount of data produced by sensors from connected devices everywhere. But it also is its own distinct segment, even though it hasn’t been fully articulated.

  5. Tomi Engdahl says:

    AI, IoT, 5G, and Edge Computing Shape Thermal Design

    In the coming year, new thermal design priorities will largely be driven by technologies such as artificial intelligence (AI), the internet of things (IoT), 5G, and edge computing, according to recent research from Future Facilities, which makes thermal design software. “Recent advancements in technology over the past few years have resulted in unprecedented changes in the way engineers view their designs,” said Chris Aldham, product manager at Future Facilities. “The introduction of AI, 5G, edge computing, and the internet of things all have major implications for how — and where — electronics need to operate, and that, in turn, means a whole host of new considerations from a thermal perspective.”

    In a digital roundtable event, thermal designers, engineers, and experts from a variety of organizations — including Facebook, HP Enterprise, QuantaCool, Engineered Fluids, CommScope, Vertiv, 6SigmaET, and Binghamton University — gathered to compare notes. The group identified a handful of thermal design priorities, including:

    The need for hybrid cooling to cope with new IoT environments
    Remote monitoring of cooling systems in edge-computing devices
    More accurate monitoring and simulation of energy use in data centers
    Thermal cooling solutions for 5G base stations and new AI hardware
    Tools that can accurately simulate these new technologies and environments

  6. Tomi Engdahl says:

    Mary Jo Foley / ZDNet:
    Microsoft announces Azure SQL Database Edge, which will run on compute-constrained devices, and new AI, mixed reality, and IoT capabilities powered by Azure — A couple days ahead of Build 2019, Microsoft is showcasing some new AI, MR, IoT and blockchain capabilities powered by Azure …

    Microsoft adds more AI, mixed-reality, IoT services to its Azure line-up

    A couple days ahead of Build 2019, Microsoft is showcasing some new AI, MR, IoT and blockchain capabilities powered by Azure which it will be showing off at its developer confab.

  7. Tomi Engdahl says:

    Benchmarking Edge Computing

    Comparing Google, Intel, and NVIDIA accelerator hardware

  8. Tomi Engdahl says:

    Google Is Bringing AI to the Edge for Everyone in 2019

    The 2019 Google I/O developer conference brought a wave of artificial intelligence announcements as the company touted a shift toward edge-based AI and a new focus on improved privacy.

  9. Tomi Engdahl says:

    Why Machine Learning Is Important to Embedded

    Machine learning is opening up new features and applications that will forever change how users expect their systems to behave.

    Machine learning for embedded systems has been gaining a lot of momentum over the past several years. For embedded developers, machine learning was something that data scientists were concerned with and something that lived up on the cloud, far from the resource-constrained microcontrollers that embedded developers work with on a daily basis.

    What seems like almost overnight, however, machine learning is suddenly finding its way to microcontroller and edge devices. To some developers, this may seem baffling or at least intriguing. But why is machine learning so important to embedded developers now? Let’s explore a few possibilities.

  10. Tomi Engdahl says:

    Google Is Bringing AI to the Edge for Everyone in 2019

    The 2019 Google I/O developer conference brought a wave of artificial intelligence announcements as the company touted a shift toward edge-based AI and a new focus on improved privacy.

  11. Tomi Engdahl says:

    Bottlenecks For Edge Processors

    New processors will be blazing fast, but that doesn’t guarantee improvements in system speed.

    New processor architectures are being developed that can provide two to three orders of magnitude improvement in performance. The question now is whether the performance in systems will be anything close to the processor benchmarks.

    Most of these processors do one thing very well. They handle specific data types and can accelerate the multiply-accumulate functions for algorithms by distributing the processing across multiple processing elements in a chip. In effect, they are parallelizing operations while also pruning algorithms and adjusting the output to whatever precision level is necessary, and they are storing and retrieving bits from memory in multiple directions rather than just left-to-right.
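    What “parallelizing while pruning and adjusting precision” amounts to can be sketched in scalar form: a multiply-accumulate over quantized values that skips near-zero weights. The quantization scale and pruning threshold below are illustrative, not taken from any particular accelerator.

```python
# Sketch of the core operation an inference accelerator parallelizes:
# a multiply-accumulate (MAC) over pruned, low-precision values.
# Scale and threshold are illustrative.

def quantize(x, scale=127):
    """Map a float in [-1, 1] to an int8-style integer."""
    return max(-scale, min(scale, round(x * scale)))

def pruned_mac(weights, activations, threshold=0.05):
    """Accumulate products, skipping near-zero (pruned) weights."""
    acc = 0
    for w, a in zip(weights, activations):
        if abs(w) < threshold:               # pruning: skip negligible weights
            continue
        acc += quantize(w) * quantize(a)     # int8 x int8 -> wide accumulator
    return acc

weights = [0.5, -0.01, 0.25, 0.0]
activations = [0.8, 0.9, -0.4, 0.7]
print(pruned_mac(weights, activations))
```

    A real chip runs thousands of these MACs in parallel across its processing elements; the pruning test is why reducing the data moved through the chip matters as much as raw compute.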

    There are several inherent challenges with these architectures, though. First, moving data through a chip is not particularly energy-efficient if you aren’t also reducing the amount of data that needs to be processed along the way.

    Data centers have been wrestling with this problem for decades, and hyperscale clouds have added an element of heterogeneity to the mix. The cloud essentially load-balances processing and uses high-speed optical interconnects to ship data around at the speed of light. But as this kind of operation moves closer to the data, such as in a car or an edge cloud, the ability to load balance is much more limited. There is finite real estate in an edge cloud, and far less in an autonomous vehicle.

    The second challenge is that it’s not obvious how devices will be connected throughout the edge, and that can provide its own bottlenecks. Designing a system with a data pipe running at 100Gbps or 400Gbps looks very good on paper, but the speeds are only as fast as the slowest component on the network. Anyone with a gigabit Internet connection knows the flow of data is only as fast as the server on the other end.

    Third, the economics of any compute operation that isn’t a hyperscale cloud data center are very fuzzy. Because these new processor architectures are highly tuned for certain algorithms or data types, they are unlikely to achieve the economies of scale that have made this kind of processing possible in the first place.

  12. Tomi Engdahl says:

    The distributed intelligence triple-whammy: 5G, AI and tinyML

    Back in September 2018, Forbes contributor Prakash Sangam heralded two major tech trends, AI and 5G, and made the case, in autonomous driving, for distributed intelligence: AI running at the intelligent edge AND in the intelligent cloud. His point is to put critical ‘sensing’ that is required to act immediately in the car, while processing-intensive functions in the cloud. 5G ends up being the connective glue between both intelligent systems, offering low-latency, high bandwidth and high data transfer speeds.

  13. Tomi Engdahl says:

    Hands-On with the SmartEdge Agile
    Out-of-the-box artificial intelligence at the edge

  14. Tomi Engdahl says:

    Bottlenecks For Edge Processors

    New processors will be blazing fast, but that doesn’t guarantee improvements in system speed.

  15. Tomi Engdahl says:

    Rethinking Enclosures To Support IoT & The Edge

    A watchword of business intelligence nowadays, the Internet of Things (IoT) 4.0 offers nothing less than the digitization of your company’s entire operations. IoT data-collection devices are already ubiquitous in the manufacturing world, and are projected to reach over 5 billion by 2020.

    The Internet of Things 4.0 utilizes sensors to automatically gather your data at distributed points, transmitting critical business intelligence including analytics and diagnostics. All this data and decision-making is rapidly becoming decentralized, making businesses more agile and able to make decisions remotely. It all translates to more efficient processes, improved security, and lower operational costs.

    However, to handle all this big data and to successfully implement IoT 4.0, industrial and IT businesses will need extremely short latency times for effective M2M communication. That’s where Edge computing delivers the speed you need.

    The Edge is nothing new–IT professionals and plant managers have always deployed computers in distributed locations and uncontrolled environments. Edge computing shifts data control from your central network to the periphery (or ‘edge’) of the internet where your smart devices, wireless sensors and other Internet-enabled devices are located. Using localized computing, Edge deployments store all this big data to reduce your cloud dependence.

    For example, Edge sites manage your smart grid, i.e. your energy distribution, grid services, and components at a local level, providing your high-speed internet and television. In fact, Edge data centers support 75% of all local internet usage. Municipal utilities including water and power also utilize the Edge, while electronic tolling is another common example (E-ZPass, or automated license-plate readers).

    Improving Latency, Business Intelligence
    While the Internet of Things enables you to centralize control over the systems that run your plant processes or IT infrastructure, it needs to approach real-time speeds. Naturally, with all its smart devices and sensors, IoT 4.0 is already starting to heavily depend on Edge deployments. Edge computing supports the IoT world by enabling more efficient communication and bringing the network closer to the data to reduce latency.

    In many business areas, the datacenter has been replaced with a cloud datacenter which in turn has been replaced with a ‘fog’ datacenter. The fog offers specific cloud services for data storage, while some information is also collected/sent at the local level as with Edge computing.

    Not every IT staffer or factory manager has the free choice of where to build, the power to modify servers, or the staff to design a datacenter from scratch. Consequently, Edge deployments are located on oil rigs, in hazardous plant areas, on cruise ships–just about anywhere.

    As important as the Edge is, even more so is its implementation. The Edge and the IoT are a closed loop supporting each other. However, deploying edge computing for IoT devices can be a complex task. Just as with traditional IT footprints, physical Edge deployments require their own diverse considerations, precautions, and equipment.

    Prioritize for your specific Edge deployment. For example, PUE (Power Usage Effectiveness) is a very important performance metric for both traditional and Edge datacenters.

    A comprehensive assessment of current and future IoT infrastructure can save long-term headaches and ensure success. Such an assessment should cover:

    • Safety and Security
    • Protection
    • Scalability
    • Climate Control

    In addition, the ecosystem of cloud providers, mobile network companies and microprocessor companies can be overwhelming for many industrial managers trying to understand the integration of IoT. Working with infrastructure manufacturers, like Rittal, that have global relationships with industrial, IT and IoT companies can provide insight and direction to finding the right solutions providers for your company’s needs.

  16. Tomi Engdahl says:

    The Need for Tiered Security at the Edge

    Enabling the networks of tomorrow requires organizations to radically reimagine the security tools they have in place today.

    One of the most disruptive results of digital transformation for many organizations has been the rapid emergence of the edge, which in many ways is what has been replacing the traditionally static network perimeter. The advent and support of an edge-based networking model enables organizations to more dynamically expand their networks, embrace mobile users, IoT, and end-user devices, build flexible and responsive WAN and cloud connections, and enable distributed processing.

  17. Tomi Engdahl says:

    Big 5G stakeholders roll eyes over edge computing hype: Light Reading

    Reporting from Light Reading’s Big 5G Event, held this month in Denver (May 6-8), Mike Dano notes that “top players in the mobile networking and data center industries are voicing serious concerns” about edge computing.

    Jim Poole, VP of ecosystem business development at data center giant Equinix, said that mobile operators would need to completely revise their network designs away from voice services to get edge computing to work in a 5G world. “This whole thing needs to be changed, rearchitected,” he said. “5G is an extraordinarily daunting change.”

    The topic of edge computing has generated a significant amount of hype, and many in the space do agree it could play a key role in the ultimate development of 5G technology. But some top players in the mobile networking and data center industries are voicing serious concerns about edge computing in the near and even the medium term.

    “Spend enough time in the telecom and technology industries and it becomes clear that the hype of many new technologies usually precedes the reality by 5-10 years. We believe that is the case with micro edge data centers,” wrote Raul Martynek, the CEO of data center provider DataBank, in a lengthy post on LinkedIn.

    Data Center Firms, Mobile Operators Pour Cold Water on Edge Computing

    Edge computing proponents argue that the mostly centralized nature of the Internet today won’t support the snappy, real-time services that 5G providers hope to offer, like autonomous vehicles and streaming virtual reality. Such services require almost immediate connections between computing services and users, and an edge computing design would enable that instant connection by physically locating data centers geographically next to the users that need them. Such a design — dispersed computing instead of consolidated in one location — potentially could eliminate the tens or even hundreds of milliseconds it takes for a user’s request to travel across a network to a computer that can answer it.

    At least, that’s the idea.

    DataBank’s Martynek argued that, at least so far, there’s very little need for hundreds or thousands of mini computing locations spread out all over the country. Specifically, he noted that there are already several data centers physically located in most major metro markets in the US.

    “The incremental improvement from going from one data center location to 5 micro data center locations only improves your round trip latency by less than 1-2ms,” he wrote.

    As a result, he argued, “deploying in tens-hundreds-thousands of micro-data centers would only improve latency by 1ms or less, and in some cases introduce latency depending on where the peering occurs.” Those comments essentially represent a dig at the likes of EdgeMicro and Vapor IO that are hoping to build out mini data centers in dozens of cities across the US.
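    Martynek’s numbers line up with simple propagation arithmetic: light in fiber travels at roughly 200,000 km/s, so each 100 km of fiber adds only about 1 ms of round-trip delay. The distances below are hypothetical round figures, not from the article.

```python
# Back-of-envelope check of the "1-2 ms" claim: propagation delay in fiber
# is roughly 5 microseconds per km one way (light at ~200,000 km/s in glass).
# Distances are hypothetical round numbers.

def rtt_ms(distance_km, fiber_speed_km_s=200_000):
    """Round-trip propagation delay in milliseconds over fiber."""
    return 2 * distance_km / fiber_speed_km_s * 1000

metro_dc = rtt_ms(100)   # one regional data center ~100 km away
micro_dc = rtt_ms(20)    # a nearby micro data center ~20 km away
print(f"{metro_dc:.1f} ms -> {micro_dc:.1f} ms, saving {metro_dc - micro_dc:.1f} ms")
```

    Shrinking the haul from ~100 km to ~20 km saves well under a millisecond of propagation delay, which is why the peering point and the server on the far end usually dominate.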

    Similarly, Equinix’s Poole noted that edge computing is already available in a basic form today, considering that Equinix operates roughly 200 data centers around the world.

    However, most speakers agreed that, eventually, 5G would help spark more demand for edge computing services. “Does 5G need edge computing? I’d say the answer is yes. Does edge computing need 5G? The answer is no.” Equinix’s Poole said.

    “I think edge computing is one of the two or three things that make 5G different,” Gedeon said.

    But Equinix’s Poole argued that wireless networks need to essentially be redesigned in order to fully take advantage of the edge computing opportunity. Instead of routing all traffic through a handful of on-ramps, mobile operators will need to instead create ways for applications to immediately access local mobile users — and to interoperate.

  18. Tomi Engdahl says:

    Is ADAS The Edge?

    Uncertainty about where processing will occur is causing confusion over definitions.

    Debate is brewing over whether ADAS applications fall on the edge, or if they are better viewed squarely within the context of the automotive camp.

    There is more to this discussion than just semantics. The edge represents a huge greenfield opportunity for electronics of all sorts, and companies from the mobile market and from the cloud are both rushing to stake their claim. At this point there is no single instruction-set architecture that owns this space, and it’s not clear exactly how this market will be carved up. There are end devices that can do pre-processing, and various levels of servers and local clouds before data is processed in the large cloud operations of companies such as Google, Amazon and Microsoft.

  19. Tomi Engdahl says:

    Mike Wheatley / SiliconANGLE:
    Nvidia unveils its first AI platform for edge devices, Nvidia EGX, compatible with AWS and Azure IoT offerings; launch partners include Cisco, HPE, and Lenovo

    Nvidia announces its first AI platform for edge devices

    Nvidia Corp. is bringing artificial intelligence to the edge of the network with the launch early Monday of its new Nvidia EGX platform that can perceive, understand and act on data in real time without sending it to the cloud or a data center first.

    Delivering AI to edge devices such as smartphones, sensors and factory machines is the next step in the technology’s evolutionary progress. The earliest AI algorithms were so complex that they could be processed only on powerful machines running in cloud data centers, and that means sending lots of information across the network. But this is undesirable because it requires lots of bandwidth and results in higher latencies, which makes “real-time” AI something less than that.

    What companies really want is AI to be performed where the data itself is created, be it at manufacturing facilities, retail stores or warehouses.

  20. Tomi Engdahl says:

    AI to the network edge with a hundred-euro board

    Nvidia is known for its powerful graphics processors, but over the past couple of years it has begun transforming into an AI company. Its goal is to bring machine learning, using the same models, to all kinds of devices, including the network edge with the new Jetson Nano board.

    The new Jetson Nano board is a good example of this. It is intended for 5-10 watt devices that need half a teraFLOPS of performance for running neural networks. So Nvidia is not yet talking about microcontrollers in connection with AI.

  21. Tomi Engdahl says:

    Opto 22 explains edge programmable industrial controllers for IIoT

    New white paper shows how an edge programmable industrial controller can simplify, reduce costs, and increase security for Industrial Internet of Things (IIoT) and other data-intensive applications.

  22. Tomi Engdahl says:

    Edge AI Going Beyond Voice and Vision

    Widespread public awareness of systems such as the Amazon Alexa and camera-enabled autonomous cars have made voice and vision almost automatically come to mind when discussing the role of AI in edge-device design. But AI technology is applicable well beyond voice and vision interpretation, supporting the implementation of complex system behaviors that can be intractable using conventional algorithmic development. The trick is to move the AI as close as possible to the edge.

    The two signature AI applications of voice and vision happen to also illustrate the two architectural alternatives for designing AI into an embedded system. In the case of voice, both of AI’s major tasks, learning and inferencing, are handled in the cloud, where substantial processing power is available. This allows the edge device to get along with much less processing capability. It spends most of its limited capacity capturing and forwarding data to the cloud and implementing any commands coming back. This approach has the advantage of allowing a relatively inexpensive edge device design but suffers from the high bandwidth demands and latency effects of substantial WAN communications activity.

    Vision systems, on the other hand, provide considerable local processing power to make real-time inferences and response decisions from live data.

    The two major benefits that stem from local processing have prompted a surge in development of AI processing options for the edge that seek to lower the cost of local AI inferencing.
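    The bandwidth half of that trade-off is easy to put rough numbers on: a voice-style device streaming raw audio to the cloud versus one that infers locally and sends only results. The event sizes and rates below are illustrative assumptions, not measurements.

```python
# Rough comparison of the two architectures: stream raw audio to the cloud
# for inference vs. infer locally and upload only results.
# Event sizes and rates are illustrative assumptions.

def daily_upload_mb(bytes_per_event, events_per_day):
    """Total daily upload in megabytes."""
    return bytes_per_event * events_per_day / 1e6

# Cloud-side inference: ship ~10 s of 16 kHz / 16-bit audio per event.
raw_audio = daily_upload_mb(10 * 16_000 * 2, events_per_day=500)

# Local inference: ship only a small result message per event.
result_only = daily_upload_mb(200, events_per_day=500)

print(f"cloud: {raw_audio:.0f} MB/day, local: {result_only:.2f} MB/day")
```

    Even at modest event rates the raw-data approach uploads orders of magnitude more, which is the bandwidth pressure (alongside latency) pushing inference toward the edge.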

  23. Tomi Engdahl says:

    Hardware Helping Move AI to the Edge

    While artificial intelligence and machine learning computation is often performed at large scale in datacentres, the latest processing devices are enabling a trend towards embedding AI/ML capability into IoT devices at the edge of the network. AI on the edge can respond quickly, without waiting for a response from the cloud. There’s no need for an expensive data upload, or for costly compute cycles in the cloud, if the inference can be done locally. Some applications also benefit from reduced concerns about privacy.

    Outside of the two most common applications, speech processing and image recognition, machine learning can of course be applied to data from pretty much any type of sensor.

    NXP showed a module for exactly this purpose at the Microsoft Build developer conference earlier this month. The 30x40mm board combines a handful of sensors with a powerful microcontroller – the i.MX RT1060C, a high-performance Cortex-M device which runs at a very fast 600MHz – plus connectivity capability.

  24. Tomi Engdahl says:

    Smart AI Assistants are the Real Enabler for Edge AI

    AI at the edge will reduce overwhelming volumes of data to useful and relevant information on which we can act.

    A recent McKinsey report projects that by 2025, the CAGR for silicon containing AI functionality will be 5× that for non-AI silicon. Sure, AI is starting small, but that’s a pretty fast ramp. McKinsey also shows that the bulk of the opportunity is in AI inference and that the fastest growth area is on the edge. Stop and think about that. AI will be growing very fast in a lot of edge designs over the next six-plus years; that can’t be just for novelty value.

    Artificial-intelligence hardware: New opportunities for semiconductor companies

    Our analysis revealed three important findings about value creation:

    AI could allow semiconductor companies to capture 40 to 50 percent of total value from the technology stack, representing the best opportunity they’ve had in decades.
    Storage will experience the highest growth, but semiconductor companies will capture most value in compute, memory, and networking.
    To avoid mistakes that limited value capture in the past, semiconductor companies must undertake a new value-creation strategy that focuses on enabling customized, end-to-end solutions for specific industries, or “microverticals.”

    The AI technology stack will open many opportunities for semiconductor companies

  25. Tomi Engdahl says:

    Hardware Helping Move AI to the Edge

    Outside of the two most common applications, speech processing and image recognition, machine learning can of course be applied to data from pretty much any type of sensor.

  26. Tomi Engdahl says:

    The word edge in this context means literal geographic distribution. Edge computing is computing that’s done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn’t mean the cloud will disappear. It means the cloud is coming to you.

  27. Tomi Engdahl says:

    According to Arpit Joshipura, The Linux Foundation’s general manager of networking, edge computing will overtake cloud computing by 2025.

    Linux Foundation exec believes edge computing will be more important than cloud computing

  28. Tomi Engdahl says:

    Satya Nadella looks to the future with edge computing

    Speaking today at the Microsoft Government Leaders Summit in Washington DC, Microsoft CEO Satya Nadella made the case for edge computing, even while pushing the Azure cloud as what he called “the world’s computer.”

    While Amazon, Google and other competitors may have something to say about that, marketing hype aside, many companies are still in the midst of transitioning to the cloud. Nadella says the future of computing could actually be at the edge where computing is done locally before data is then transferred to the cloud for AI and machine learning purposes. What goes around, comes around.

  29. Tomi Engdahl says:

    Microsoft’s new backpack-friendly Azure device weighs under 10 pounds, runs on batteries, and meets the MIL-STD-810G ruggedized standard.

    Microsoft Unveils Battery-Powered Version of Azure That Fits in a Backpack

  30. Tomi Engdahl says:

    Running EdgeX on a Raspberry Pi
    This project will walk you through installing and running the essential EdgeX Foundry microservices on a Raspberry Pi.

  31. Tomi Engdahl says:

    How global cloud platforms will offer 1 ms latency for 5G: AWS Wavelength promises to do it by extending your VPCs into Wavelength Zones, where you can run local EC2 instances and EBS volumes at the edge.

    Announcing AWS Wavelength for delivering ultra-low latency applications for 5G

    AWS Wavelength embeds AWS compute and storage services at the edge of telecommunications providers’ 5G networks, and provides seamless access to the breadth of AWS services in the region. AWS Wavelength enables you to build applications that serve mobile end-users and devices with single-digit millisecond latencies over 5G networks, like game and live video streaming, machine learning inference at the edge, and augmented and virtual reality.

    AWS Wavelength brings AWS services to the edge of the 5G network, minimizing the network hops and latency to connect to an application from a 5G device. Wavelength delivers a consistent developer experience across multiple 5G networks around the world.

  32. Tomi Engdahl says:

    Infrastructure and Operations (I&O) leaders who manage cloud and edge infrastructure should develop a multiyear edge computing strategy that encompasses the diversity of use cases needed by their digital organizations. According to an analyst report from Gartner, the edge computing use-case landscape is broad and early deployments are highly customized.

    Gartner highlights key topics:

    Edge computing complements cloud computing for digital business use cases, solving latency, bandwidth, autonomy and privacy requirements.
    Edge computing use-case categories are defined by digital business interactions between people, things and businesses.
    More than 90% of enterprises start with a single unique use case for edge computing; over time, a typical enterprise will have many.

    “By year-end 2023, more than 50% of large enterprises will deploy at least six edge computing use cases for IoT or immersive experiences, versus less than 1% in 2019.”

  33. Tomi Engdahl says:

    The Rough Edge of Rugged Computing

    Rough around the edges. Cutting edge. On the ragged edge.
    You’d think that an edge device by nature would be able to withstand intense environments.

    However, not all edge computing devices are created equal. And companies are quickly realizing that what works in the lab for initial testing and software development doesn’t necessarily work in the field.

  34. Tomi Engdahl says:

    Smart factory controllers bring security and connectivity
    Powerful edge controllers are replacing PCs on the factory floor and going where PCs can’t, providing a variety of “apps” for every task.

    What is edge computing again?

    Edge computing refers to the trend of increasing processing power and storage in devices that reside close to where real-time data is generated by sensors, equipment, and users. For control systems, edge controllers bring general-purpose computing power, connectivity, data processing, and storage to the process level in a compact, industrial form factor, along with input/output (I/O), real-time control and visualization options.

    On-board OPC and more

    Integrating multi-vendor programmable logic controllers (PLCs) or aggregating data from heterogeneous devices might be handled using a dedicated OPC server. It could be hosted on anything from a consumer-grade laptop on a shelf, to a rack-mounted server, to a virtual machine (VM) in an information technology (IT)-managed infrastructure. Regardless, this dependence on PCs requires additional licensing costs and management overhead.

    IT management complexity, in particular, is a thorn in the side of factory automation. Every new PC requires configuration, user access controls, antivirus, driver and operating system updates, and so on, all of which invite potential disruptions to production through maintenance or unexpected downtime. Each of these components may also introduce further costs in the form of licensing and long-term support agreements. System ownership can become contentious as well when maintenance procedures don’t integrate cleanly with any particular group’s operations.

    Unlike legacy PLCs, edge controllers can provide a complete connectivity solution, including acting as OPC or Message Queuing Telemetry Transport (MQTT) servers. Unlike PC-based solutions, edge controllers require little IT involvement, because they’re built for industrial environments and are secure out of the box.
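
    MQTT, mentioned above, routes messages by hierarchical topic, with `+` matching exactly one topic level and `#` matching all remaining levels, as defined in the MQTT specification. A minimal, illustrative matcher in plain Python (not tied to any broker implementation) shows how a controller decides which subscriptions a published topic reaches:

```python
def topic_matches(filter_str, topic):
    """Match an MQTT topic against a subscription filter.
    '+' matches exactly one level; '#' matches all remaining levels."""
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True                 # '#' swallows the rest
        if i >= len(t_parts):
            return False                # topic ran out of levels
        if f != "+" and f != t_parts[i]:
            return False                # literal level mismatch
    return len(f_parts) == len(t_parts)

print(topic_matches("plant/+/temperature", "plant/line1/temperature"))  # True
print(topic_matches("plant/#", "plant/line1/pressure"))                 # True
print(topic_matches("plant/+/temperature", "plant/line1/pressure"))     # False
```

    This topic hierarchy is what lets one edge controller aggregate data from heterogeneous devices under a single namespace.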

  35. Tomi Engdahl says:
    SE: Where is AI going in the future?
    Sabonnadière: I am a strong believer that edge AI will change our lives. Today’s microelectronics are organized with 80% of processing in the cloud and 20% on the edge. Five years from now, it will be reversed: 80% on the edge and only 20% in the cloud. There is a clear rationale telling us it will go in this direction. It is a question of the privacy of data.

  36. Tomi Engdahl says:

    Putting AI into the Edge Is a No-Brainer; Here’s Why

    In 2020, Deloitte predicts that more than 750 million edge AI chips — full chips or parts of chips that perform or accelerate machine learning tasks on-device, rather than in a remote data center — will be sold, representing US$2.6 billion in revenue. Furthermore, the edge AI chip market will grow much more quickly than the overall chip market. By 2024, we expect unit sales of edge AI chips to exceed 1.5 billion, possibly by a great deal. That represents compound annual unit sales growth of at least 20%, more than double the longer-term forecast of 9% CAGR for the overall semiconductor industry.
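
    The growth figure can be sanity-checked with the standard compound-annual-growth-rate formula, treating 2020’s 750 million units as the baseline (Deloitte’s exact baseline year and 2024 figure may differ, which would nudge the rate above 20%):

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1 / years) - 1

# 750M units (2020) growing to 1.5B units (2024):
rate = cagr(750e6, 1.5e9, 4)
print(f"{rate:.1%}")  # roughly 19% per year on these assumptions;
                      # an earlier baseline or higher 2024 figure
                      # pushes the rate past 20%
```

    Either way, doubling unit volume in four years comfortably outpaces the 9% CAGR cited for the overall semiconductor industry.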

  37. Tomi Engdahl says:

    Choosing a Linux Solution for the Intelligent Edge

    There is no one-size-fits-all solution for building embedded systems with Linux. Glenn Seiler, vice president of Linux Solutions at Wind River Systems, walks through why he believes embedded Linux with long-term vendor support has some key advantages.

    Linux is the default environment for most software developers and is a popular choice for many embedded solutions. But one of Linux’s greatest strengths, and to some extent its biggest challenge, is that it comes in so many flavors and varieties—each suited to a particular use case.

    While most acknowledge the advantages of the Red Hat, SUSE and Ubuntu distributions for general-purpose enterprise use cases, they are generally not suitable for embedded solutions. Compared to general computing, embedded systems have tight constraints, ranging from higher reliability and security requirements to tighter resource availability and the need for engineering support that often spans up to 10 years or more.

    Everything starts with the Linux kernel. To build a full operating system for application development and deployment, additional packages are required, and knowing what you are going to build determines what packages your distribution needs. There is no one-size-fits-all solution, especially for embedded systems. A Linux-based project can be classified into three major categories.

  38. Tomi Engdahl says:

    THE OUTER LIMITS: Successfully Implementing AI at the Edge
    An Electronics Design live webinar, sponsored by Avnet

    The explosive—and often disruptive—growth of the Internet of Things has accelerated its expansion into the vertical markets of countless industries. In response, edge computing has presented itself as a solution to issues ranging from heavy reliance on server-side IoT functionality and excessive bandwidth consumption to demands for stronger security and enhanced functionality.

    As AI has evolved into a significant force-multiplier in intelligent IoT devices and products, striking a balance between cloud and edge intelligence has become crucial to implementation.

  39. Tomi Engdahl says:

    A Much Smarter Edge for IoT

    The internet of things has been touted for several years as the answer to many challenges. Connected IoT devices can improve efficiencies and productivity in industrial systems, provide valuable feedback mechanisms for connected health-care systems and wearables, and provide a vast array of capabilities to improve driver assistance and enable a path toward autonomous vehicles.

    Fulfilling the promise of IoT has relied to a large extent on sensors in a network collecting data that is transmitted via a gateway to the cloud, processed, analyzed, and returned to the local system or sensor as actionable feedback.

    Over the past couple of years, however, developers and systems integrators have come to realize that this approach poses issues around latency, data security, and bandwidth cost. Adding more intelligence at the edge can mitigate those challenges by speeding response times, keeping data secure and private, and minimizing data communication costs. So putting intelligence at the edge is a no-brainer, right?

    It is — but there are, of course, practical limitations.

  40. Tomi Engdahl says:

    Understanding Edge Computing

    In this Q&A, Stratus Technologies’ CTO John Vicente discusses the key specifics involved in fostering successful edge computing, from real-time processing to ROI yield.

    How can edge operators better plan for modernization projects and use a proven model for implementation success?

    What are the most common misconceptions about the edge?

    I think one of the misconceptions I hear is that edge computing is mostly hype. That edge computing is what people have been doing for a long time or it’s the same as existing private, on-premise enterprise devices like gateways or proxies or DCS and SCADA systems in plant manufacturing.

    Another misconception is that edge computing is basically the Internet of Things, or IoT. That it’s sensors and small devices that collect data.

    But it’s much more than that. Edge computing is powering the fourth Industrial Revolution through the advancement of technologies in cloud systems, software, computing, communications, advanced storage, and memory. It’s powering what we will come to see as the age of artificial intelligence.

    What are key aspects of the edge and edge computing, regardless of industry?

    The key aspects of edge computing include low latency, the ability to perform deterministic real-time computing, support for mission-critical or safety-critical use cases, and the ability to extend computing beyond humans to the extremes of the environment and things.

    Why is real-time data processing so critical?

    There are a couple of reasons. First is time criticality. Some decisions or actions need to be executed within milliseconds or even microseconds. Think about autonomous vehicles recognizing a pedestrian or hazard in the roadway. The vehicle needs to make a deterministic decision about how to avoid injury or hazard, and there isn’t time to send that data to a cloud for processing and then send it back to act on it. Thus, time-critical processing needs to be done in the vehicle.
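
    The time-criticality point is easy to make concrete with illustrative numbers (not from the interview): the distance a vehicle covers while a decision is in flight is simply speed times latency.

```python
def distance_during_latency(speed_kmh, latency_ms):
    """Distance (metres) a vehicle travels while a decision is in flight."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return speed_ms * (latency_ms / 1000.0)

# 100 km/h with a 100 ms cloud round trip vs. 5 ms on-board inference:
print(distance_during_latency(100, 100))  # ~2.8 m travelled "blind"
print(distance_during_latency(100, 5))    # ~0.14 m
```

    A 100 ms round trip to the cloud means nearly three metres travelled before any reaction; keeping inference on board shrinks that to centimetres.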

    Second, there are many factory production scenarios where large amounts of machine or vision data need to be processed in real time (e.g., motion control) to perform human-assisted (e.g., safety-critical) robotic control, or to coordinate many networked robots in assembly or production.

    Edge computing allows for truly autonomous installations in locations that are remote or unmanned, such as in the energy industry. Edge-computing technologies enable applications to run autonomously, where standard operational services such as security, application, or systems management can run in the background or be managed out of band with zero-touch administration.

  41. Tomi Engdahl says:

    Conflicting Demands At The Edge
    Experts at the Table: Cost, power and security clash with the need for intelligence, localized processing and customization.

    SE: The edge is an ill-defined, transitional area, but there is widespread agreement this is going to be a big opportunity for the entire semiconductor industry. Where do you see the edge and what sorts of problems are ahead?

    Enserguiex: At Arm we tend to speak about the end point. The edge spans from the first gateway that connects to the endpoint all the way up to the cloud. Each company has its own definition, but what’s important is how we can add scalable intelligence, how we can move workloads here, and how we can add end-to-end security from the node up to the cloud. There’s a question of what is the best way to liberate all this compute power that is now distributed within the node or the edge. Network bridges and switches have been replaced by new edge servers. You see those in 5G base stations and in the gateway. They can do inference, training, and data analytics on the data plane and on metadata. We don’t need to send everything back to the cloud. There is so much we can do closer to where the data is generated. Scalable networking across the edge is going to be key for the next few years.

  42. Tomi Engdahl says:

    An overview of industrial IoT, from edge to cloud
    Next generation distributed I/O brings users one step closer to seamless connectivity

    When remote monitoring and control becomes essential for manufacturing operations
    The COVID-19 pandemic is forcing companies to adjust their business practices and settle into a new normal. See four tips on how edge computing and the Industrial Internet of Things (IIoT) can help companies adjust.

  43. Tomi Engdahl says:

    IoT Edge Server Selection Criteria

    A guide to help system integrators and product developers evaluate the pros and cons of new IoT edge server platforms.

  44. Tomi Engdahl says:

    Seamless Communication: The Backbone of Industry 4.0

    To gain a competitive edge in today’s Industry 4.0 landscape means having multiple advanced technologies work harmoniously together. A hybrid scenario involving the cloud and edge hardware might be the best solution.

  45. Tomi Engdahl says:

    OpenStack adds the StarlingX edge computing stack to its top-level projects

    The OpenStack Foundation today announced that StarlingX, a container-based system for running edge deployments, is now a top-level project. With this, it joins the main OpenStack private and public cloud infrastructure project, the Airship lifecycle management system, Kata Containers and the Zuul CI/CD platform.

