Computing at the Edge of IoT – Google Developers – Medium
We’ve seen that demand for low latency, offline access, and enhanced machine learning capabilities is fueling a move towards decentralization with more powerful computing devices at the edge.

Nevertheless, many distributed applications benefit more from a centralized architecture and the lowest cost hardware powered by MCUs.

Let’s examine how hardware choice and use case requirements factor into different IoT system architectures.


  1. Tomi Engdahl says:

    Tech Talk: Data-Driven Design

    How more data is shifting memory architectures.

  2. Tomi Engdahl says:

    Defining Edge Memory Requirements

    Edge compute covers a wide range of applications. Understanding bandwidth and capacity needs is critical.

    Defining edge computing memory requirements is a growing problem for chipmakers vying for a piece of this market, because it varies by platform, by application, and even by use case.

    Edge computing plays a role in artificial intelligence, automotive, IoT, data centers, as well as wearables, and each has significantly different memory requirements. So it’s important to have memory requirements nailed down early in the design process, along with the processing units and the power, performance and area tradeoffs.

    “In the IoT space, ‘edge’ to companies like Cisco is much different than ‘edge’ to companies like NXP,” observed Ron Lowman, strategic marketing manager for IoT at Synopsys. “They have completely different definitions and the scale of the type of processing required looks much different. There are definitely different thoughts out there on what edge is. The hottest trend right now is AI and everything that’s not data center is considered edge because they’re doing edge inference, where optimizations will take place for that.”

  3. Tomi Engdahl says:

    Tech Talk: Connected Intelligence
    A look at the slowdown in Moore’s Law, and what comes next.

    Gary Patton, CTO at GlobalFoundries, talks about computing at the edge, the slowdown in scaling, and why new materials and packaging approaches will be essential in the future.

  4. Tomi Engdahl says:

    Digital twin AIs designed to learn at the edge

    Artificial intelligence (AI) startup SWIM is aiming to democratize both AI and digital twin technologies by placing them at the edge, eliminating the need for large-scale number-crunching and making the technology affordable.

    With Pure Storage and NVIDIA recently launching their artificial intelligence (AI) supercomputer, it is easy to believe enterprise-grade AI is solely about throwing massive number-crunching ability at Big Data sets and seeing what patterns emerge. But while these technologies notionally are aimed at all types of business, the cost of optimized AI hardware that can be slotted into a data center may be too high for many organizations.

    At the other end of the scale are technologies such as IBM’s Watson and Watson Assistant—which can be deployed as cloud services—and of course numerous suite-based AI tools currently offered by many companies. However, for many Internet of Things (IoT) and connected-device deployments, neither data center nor cloud options are realistic, which is why many AI systems are moving elsewhere, fast.

    For time-critical processing—such as when an autonomous vehicle needs to avoid a collision—the edge environment and the distributed core are where the real number crunching needs to take place. This is why companies such as Microsoft and Dell have announced new IoT strategies focused principally on the edge and/or the distributed core. The ability to add AI at the edge is an increasingly important element in the IoT, avoiding the need to transfer large amounts of data to supercomputers or the cloud and back again to IoT networks.

    Startup SWIM.AI aims to “turn any edge device into a data scientist.”

    The company’s AI edge product, EDX, is designed to autonomously build digital twins directly from streaming data in the edge environment. The system is built for the emerging IoT world in which real-world devices are not just interconnected, but also offer digital representations of themselves, which can be automatically created from, and continually updated by, data from their real-world siblings.

    Digital twins are digital representations of a real-world object, entity, or system, and are created either purely in data or as 3-D representations of their physical counterparts.

    SWIM’s EDX system is designed to enable digital twins to analyze, learn, and predict their future states from their own real-world data. In this way, systems can use their own behavior to train accurate behavioral models via deep neural networks.
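
    The idea of a twin that learns its own behavior from streamed readings can be sketched minimally. This is an illustrative toy, not SWIM’s actual EDX API: the twin mirrors the device’s last observed state and keeps a simple learned estimate (an exponential moving average) of what comes next.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DigitalTwin:
    """Toy digital twin: mirrors a device's state and learns a simple
    predictive model (exponential moving average) from its data stream."""
    device_id: str
    alpha: float = 0.3                # smoothing factor for the estimate
    state: Optional[float] = None     # last observed real-world value
    estimate: Optional[float] = None  # learned expectation of the next value

    def ingest(self, value: float) -> None:
        # Each streamed reading updates both the mirrored state and the model.
        self.state = value
        if self.estimate is None:
            self.estimate = value
        else:
            self.estimate = self.alpha * value + (1 - self.alpha) * self.estimate

    def predict(self) -> Optional[float]:
        # The twin's forecast of the device's next reading.
        return self.estimate

twin = DigitalTwin("pump-17")
for reading in [10.0, 10.4, 10.2, 10.6]:
    twin.ingest(reading)
print(round(twin.predict(), 3))   # 10.281
```

    A real deployment would replace the moving average with a trained neural model, but the shape is the same: the twin is continually updated by data from its real-world sibling.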

    Gartner views digital twins as one of the top strategic enterprise trends in 2018. However, a key challenge is how enterprises can implement the technology, given their investments in legacy assets.

    SWIM believes limited skill sets in streaming analytics, coupled with an often poor understanding of the assets that generate data within complex IoT systems, make deploying digital twins too complex for some. Meanwhile, the prohibitive cost of some digital twin infrastructures puts other organizations off.

  5. Tomi Engdahl says:

    Edge Devices are Hot for IoT

    Dell has continuously improved its Edge Gateway IoT models. The model 5100 is a fan-less, convection-cooled design built to operate reliably in extreme temperatures and harsh industrial or enterprise environments while connecting endpoints.

  6. Tomi Engdahl says:

    Addressing ‘Memory Wall’ is Key to Edge-Based AI

    Addressing the “memory wall” and pushing for a new architectural solution enabling highly efficient performance computing for rapidly growing artificial intelligence (AI) applications are key areas of focus for Leti, the French technology research institute of CEA Tech.

    Speaking to EE Times at Leti’s annual innovation conference here, Leti CEO Emmanuel Sabonnadière said there needs to be a highly integrated and holistic approach to moving AI from software and the cloud into an embedded chip at the edge.

    “We really need something at the edge, with a different architecture that is more than just CMOS, but is structurally integrated into the system, and enable autonomy from the cloud — for example for autonomous vehicles, you need independence of the cloud as much as possible,” Sabonnadière said.

  7. Tomi Engdahl says:

    With its Snowball Edge, AWS now lets you run EC2 on your factory floor

    AWS’s Snowball Edge devices aren’t new, but they are getting a new feature today that’ll make them infinitely more interesting than before. Until now, you could use the device to move lots of data and perform some computing tasks on them, courtesy of the AWS Greengrass service and Lambda that run on the device. But AWS is stepping it up and you can now run a local version of EC2, the canonical AWS compute service, right on a Snowball Edge.

  8. Tomi Engdahl says:

    It’s worth noting that this was also the original idea behind OpenStack (though setting that up is far more complicated than ordering a Snowball Edge) and that Microsoft, with Azure Stack and its various edge computing services, offers similar capabilities.

  9. Tomi Engdahl says:

    Google is making a fast specialized TPU chip for edge devices and a suite of services to support it

    In a pretty substantial move into trying to own the entire AI stack, Google today announced that it will be rolling out a version of its Tensor Processing Unit — a custom chip optimized for its machine learning framework TensorFlow — optimized for inference in edge devices.

  10. Tomi Engdahl says:

    Announcing the New AIY Edge TPU Boards
    Custom ASIC for accelerated machine learning on the edge

    Earlier this morning, during his keynote at the Google Next conference in San Francisco, Injong Rhee, the VP of IoT, Google Cloud, announced two new AIY Project boards—the AIY Projects Edge TPU Dev Board, and the Edge TPU Accelerator—both based around Google’s new purpose-built Edge TPU.

  11. Tomi Engdahl says:

    Bringing intelligence to the edge with Cloud IoT

    But just as opportunities increase with IoT, so does data. IDC estimates that the total amount of data generated from connected devices will exceed 40 trillion gigabytes by 2025. This is where advanced data analytics and AI systems can help, to extract insights from all that data quickly and easily.

    There are also many benefits to be gained from intelligent, real-time decision-making at the point where these devices connect to the network—what’s known as the “edge.” Manufacturing companies can detect anomalies in high-velocity assembly lines in real time. Retailers can receive alerts as soon as a shelved item is out of stock. Automotive companies can increase safety through intelligent technologies like collision avoidance, traffic routing, and eyes-off-the-road detection systems.
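
    A minimal version of that real-time anomaly check can run entirely on the edge device itself. Here is a sketch using a rolling z-score; the window size and threshold are illustrative assumptions:

```python
from collections import deque
from statistics import mean, stdev

class EdgeAnomalyDetector:
    """Flags readings that deviate sharply from the recent rolling window,
    so alerts can fire at the edge without a cloud round trip."""
    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value: float) -> bool:
        anomalous = False
        if len(self.window) >= 5:            # need a minimal baseline first
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

det = EdgeAnomalyDetector()
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 9.0]  # last value is a spike
flags = [det.check(r) for r in readings]
print(flags[-1])   # True: the spike is flagged locally
```

    Only the flagged events need to travel upstream, which is exactly the latency and bandwidth win the edge promises.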

    But real-time decision-making in IoT systems is still challenging due to cost, form factor limitations, latency, power consumption, and other considerations. We want to change that.

  12. Tomi Engdahl says:

    MicroZed Chronicles: The Ultra96 and Machine Learning

    A short time ago we looked at the Ultra96 board. One of its great use cases is implementing machine learning at the edge.

  14. Tomi Engdahl says:

    Deep Learning at the Edge on an Arm Cortex-Powered Camera Board

    It’s no secret that I’m an advocate of edge-based computing, and after a number of years in which cloud computing has definitely been in the ascendancy, the swing back towards the edge is now well underway, driven not by the Internet of Things as you might expect, but by the movement of machine learning out of the cloud.

  15. Tomi Engdahl says:

    Create Intelligence at the Edge with the Ultra96 Board

    What intelligent applications could you create with the power of programmable logic?

  16. Tomi Engdahl says:

    Google unveils tiny new AI chips for on-device machine learning

    The hardware is designed for enterprise applications, like automating quality control checks in a factory

  17. Tomi Engdahl says:

    DWDM Optical Modules Take It to the Edge

    The need for low latency and quality of service is driving cloud traffic ever closer to the edge of the network. In response, cloud providers are moving toward a new distributed data center architecture of multiple edge data centers rather than a single mega-data center in a geographic market. This distributed data center model requires an orders-of-magnitude increase in optical connectivity among the edge data centers to ensure reliable and robust service quality for the end users.

    As a result, the industry is clamoring for low-cost and high-bandwidth transceivers between network elements. The advent of pluggable 100G Ethernet DWDM modules in QSFP28 form factor holds the promise of superior performance, tremendous cost savings, and scalability.

    Moving data to the edge

    According to Cisco, global IP traffic will increase nearly threefold over the next 5 years, and will have increased 127-fold from 2005 to 2021. In addition, almost half a billion (429 million) mobile devices and connections were added in 2016. Smartphones accounted for most of that growth, followed by machine-to-machine (M2M) modules. As these devices continue to multiply, the need to bring the data center closer to the sources, devices, and networks all producing data is driving the shift to the network’s edge.

    With 5G on the horizon, bandwidth will continue to be a major challenge. Cisco predicts that although 5G will only be 0.2% of connections (25 million) by 2021, it will generate 4.7 times more traffic than the average 4G connection.

    The exponential increase in point-to-point connections and the growing bandwidth demands of cloud service providers (CSPs) have driven demand for low-cost 100G optical communications. However, in contrast to a more traditional data center model (where all the data center facilities reside in a single campus), many CSPs have converged on distributed regional architectures to be able to scale sufficiently and provide cloud services with high availability and service quality. Pushing data center resources to the network’s edge and thereby closer to the consumer and enterprise customers reduces latency, improves application responsiveness, and enhances the overall end-user experience.

  18. Tomi Engdahl says:

    Energy At The Edge
    How much energy will billions of complex devices require?

    Ever since the first mention of the IoT, everyone assumed there would be billions of highly efficient battery-powered devices that drew milliwatts of power. As it turns out, we are about to head down a rather different path.

    The enormous amount of data that will be gathered by sensors everywhere cannot possibly be sent to the cloud for processing. The existing infrastructure cannot handle it, and there are doubts that even 5G using millimeter-wave technology would suffice. This realization, which has become a hot topic of discussion across the electronics industry in the past few months, has broad implications.

    Rather than a collection of dumb, simple, mass-produced sensors, edge devices will have to be much more sophisticated and do far more processing than previously thought. They will need to assess which data should be relayed to data centers, which data should be stored locally, and which data can be thrown away.

    These are complex transactions by any metric. Data types vary greatly. Vision data is different from voice data, which is different again from data about mechanical vibration or near-field scans from an industrial or commercial operation. Understanding how data can be used, and what is useful within that data, requires sophisticated collection, partitioning and purging, which is the kind of stuff that today is being done by very powerful computers.
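
    The relay/store/discard decision described above can be sketched as a simple policy function. The fields and rules here are hypothetical, purely to illustrate the shape of such edge triage logic:

```python
def triage(reading: dict) -> str:
    """Hypothetical edge triage policy: decide whether a sensor reading is
    relayed to the data center, stored locally, or thrown away."""
    if reading.get("anomaly"):                # unusual events go upstream
        return "relay"
    if reading.get("priority") == "audit":    # compliance data stays on-site
        return "store"
    return "discard"                          # routine in-range samples add no value

samples = [
    {"sensor": "vib-3",  "value": 0.02, "anomaly": False},
    {"sensor": "vib-3",  "value": 4.8,  "anomaly": True},
    {"sensor": "temp-1", "value": 21.5, "anomaly": False, "priority": "audit"},
]
print([triage(s) for s in samples])   # ['discard', 'relay', 'store']
```

    Real policies would of course be far richer, differing by data type exactly as the paragraph above describes, but the point stands: this classification work now happens on the device itself.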

    How this affects electricity usage on a global scale remains to be seen.

  19. Tomi Engdahl says:

    More Processing Everywhere

    Arm’s CEO contends that a rise in data will fuel massive growth opportunities around AI and IoT, but there are significant challenges in making it all work properly.

  20. Tomi Engdahl says:

    Pace Quickens As Machine Learning Moves To The Edge

    More powerful edge devices means everyday AI applications, like social robots, are becoming feasible.

  21. Tomi Engdahl says:

    More Performance At The Edge

    Scaling is about to take on a whole different look, and it’s not just from shrinking features.

    Shrinking features has been a relatively inexpensive way to improve performance and, at least for the past few decades, to lower power. While device scaling will continue all the way to 3nm and maybe even further, it will happen at a slower pace. Alongside that scaling, though, there are different approaches on tap to ratchet up performance even with chips developed at older nodes.

    This is particularly important for edge devices, which will be called on to do pre-processing of an explosion of data. Performance improvements there will come from a combination of more precise design, less accurate processing for some applications, and better layout using a multitude of general-purpose and specialized processors. There also will be different packaging options available, which will help with physical layouts to shorten the distance between processors and both memory and I/O. And there will be improvements in memory to move data back and forth faster using less power.
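
    One concrete form of “less accurate processing” is reduced-precision arithmetic. A minimal sketch of symmetric int8 quantization, with made-up weights, shows the accuracy-for-efficiency trade:

```python
def quantize_int8(values, scale=None):
    """Symmetric int8 quantization: map floats onto [-128, 127] integers,
    trading numeric accuracy for smaller, faster edge arithmetic."""
    if scale is None:
        scale = max(abs(v) for v in values) / 127.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.52, -1.27, 0.08, 1.0]      # illustrative model weights
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(weights, approx))
print(q)                 # [52, -127, 8, 100]
print(round(err, 6))     # quantization error is tiny for this example
```

    Storing and multiplying 8-bit integers instead of 32-bit floats cuts memory traffic and silicon area, which is exactly the kind of gain available at older nodes.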

  22. Tomi Engdahl says:

    All edge data centers require these 3 things

    “large, centralized data centers house the racks, servers and other hardware needed to support cloud services, content delivery networks, customized enterprise workloads and other functionality. With the emergence of 5G and the latency reductions needed to support a range of applications like mobile gaming, industrial automation and autonomous driving, for instance, there’s a concurrent move to take that centralized compute power and distribute it to edge data centers.”

    “If you think about what you need to build edge computing infrastructure, you need three things: A way to house the equipment and cool it. You need real estate and ideally real estate that’s colocated with the wireless network infrastructure…and the third thing you need is fiber in order to interconnect to other sites, backhaul networks and peering sites.”

    Edge data centers need three things: Equipment housing, real estate and fiber

    Vapor IO working with Crown Castle to deploy edge data centers

  23. Tomi Engdahl says:

    Local data center can serve a local cloud

    Technology Update: Smart data management may include keeping a data center on location as part of a cybersecurity strategy, for manufacturers, aviation, defense, and other applications. An example shows how.

    For data-intensive industries such as manufacturing, aviation, defense, energy, and healthcare, debate continues about application of cloud computing; smart data management may include an on-site component to augment or replace massive off-site data center storage. No approach works for all situations but taking the cloud from the sky and adding local storage can be an option.

    “For hospitals, manufacturers, and many other industries, there’s a big struggle right now as to the right mix between the cloud and internal management of data,” said Bob Venero, CEO and founder of Future Tech Enterprise Inc. “It’s about ensuring the availability of data. If the connection to the cloud goes down, they still need to be able to work. The other challenge is figuring out which data sets are classified in which area. It’s all a delicate balance across many industries.”

    Smart hybrid data management

    Future Tech Enterprise Inc. is a proponent of iFortress, a modular, hermetically sealed, flexible data center design (courtesy: Future Tech Enterprise Inc.). Combining on-premises and cloud solutions on different hardware platforms can work well for many applications.

    Organizations should analyze potential risks and benefits of cloud use; there’s often a good case for non-sensitive data to be stored in cloud with the goal of reducing IT costs and driving efficiency.

    The convenience-related benefits of cloud options can be incorporated into on-premise locations.

    An internal cloud network was constructed in collaboration with a major security company to “provide services and record operations. Everything syncs back to a main data center with information never crossing into the public cloud,” Venero said.

    Other possibilities include investing in advanced hardware options with superior computing power, for data intensive industries such as healthcare. Incorporating hardware-agnostic software is another option that helps reduce costs and provides flexibility.

  24. Tomi Engdahl says:

    AI Flood Drives Chips to the Edge
    Deep learning spawns a silicon tsunami

    It’s easy to list semiconductor companies working on some form of artificial intelligence — pretty much all of them are. The broad potential for machine learning is drawing nearly every chip vendor to explore the still-emerging technology, especially in inference processing at the edge of the network.

    “It seems like every week, I run into a new company in this space, sometimes someone in China that I’ve never heard of,” said David Kanter, a microprocessor analyst at Real World Technologies.

    Deep neural networks are essentially a new way of computing. Instead of writing a program to run on a processor that spits out data, you stream data through an algorithmic model that filters out results in what’s called inference processing.
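
    That contrast can be made concrete with a toy example: a single hand-coded neuron whose weights stand in for a trained model. Inference is just streaming samples through it (the weights here are made-up placeholders, not from any real model):

```python
import math

# Weights a training phase would have produced; inference treats them as fixed.
WEIGHTS = [0.8, -0.4]
BIAS = 0.1

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def infer(features):
    """Inference: filter a data sample through a fixed model, rather than
    executing a hand-written rule program."""
    activation = sum(w * f for w, f in zip(WEIGHTS, features)) + BIAS
    return sigmoid(activation)

# Stream samples through the model; each produces a filtered result.
for sample in [[1.0, 0.5], [0.2, 2.0]]:
    print(round(infer(sample), 3))
```

    Edge inference chips accelerate exactly this multiply-accumulate-and-activate pattern, at vastly larger scale.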

  25. Tomi Engdahl says:

    Digital twin AIs designed to learn at the edge

    Artificial intelligence (AI) startup SWIM is aiming to democratize both AI and digital twin technologies by placing them at the edge, eliminating the need for large-scale number-crunching and making the technology affordable.

    Twin management

    “Digital twins need to be created based on detailed understanding of how the assets they represent perform, and they need to be paired with their real-world siblings to be useful to stakeholders on the front line,” said SWIM in a statement. “Who will operate and manage digital twins? Where will the supporting infrastructure run? How can digital twins be married with enterprise resource planning (ERP) and other applications, and how can the technology be made useful for agile business decisions?”

    The company claims SWIM EDX addresses these challenges by enabling any organization with lots of data to create digital twins that learn from the real world continuously, and to do so easily, affordably, and automatically.

  27. Tomi Engdahl says:

    Managing IoT: A problem and solution for data center and IT managers

    The Internet of Things is one of several challenges facing network administrators, but it’s also one of the solutions to those challenges.

    You’ve probably seen some of the projections regarding the growth in the Internet of Things (IoT) in the coming years. Cisco projects there will be 23 billion devices connected to Internet Protocol (IP) networks by 2021. Gartner says 20.8 billion by 2020, while IDC puts the 2020 number at 28.1 billion.

    While there’s some discrepancy in the numbers, there’s little debate that IoT is growing fast. Whether it is enabling smart homes, smart factories, or smart cities, this growth is being driven by IoT’s potential to improve efficiency, productivity and availability.

    But IoT applications also can generate huge volumes of data that must be transmitted, processed and stored, creating data management challenges information technology (IT) professionals must prepare to address. One of the ways they can address them is by applying IoT technology to improve the management of data centers and edge sites.

    IoT in the data center

    According to the Cisco Visual Networking Index, global IP traffic will grow from 1.2 zettabytes in 2016 to 3.3 zettabytes by 2021. While that represents a tripling of data in just five years, not all of that data will originate or end up in a traditional data center. A large percentage of IoT data, for example, will be generated, processed and stored at the network edge. Only a fraction will need to be transmitted to a central data center for archiving and deep learning.

    Yet, the data center is also an extremely complex and diverse environment that has left much of that operating data stranded within devices due to the variety of protocols in use and the lack of a system-level control layer.

    Using an IoT strategy provides a framework for capturing and using this data to enhance reliability and efficiency as well as enable automation. For example, system-level controls, such as those available for thermal management, enable machine-to-machine communication and coordination across units to optimize performance across the facility. They also support continuous monitoring to enhance availability.

    Management gateways designed specifically for the data center are now available to enable true, real-time, integrated monitoring, access and control across IT and facilities systems. These flexible gateways aggregate and normalize the incoming data and provide a local control point necessary for the latency requirements of some of the edge archetypes. The gateways consolidate data from multiple devices using different protocols to support centralized data center management.
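
    The aggregate-and-normalize step can be pictured with a toy payload translator. The device payload formats here are invented for illustration, not any vendor’s real protocol:

```python
def normalize(raw: dict) -> dict:
    """Hypothetical gateway sketch: map readings arriving in different
    vendor formats onto one schema before forwarding upstream."""
    if "temp_f" in raw:                      # device A reports Fahrenheit
        return {"device": raw["id"], "temp_c": (raw["temp_f"] - 32) * 5 / 9}
    if "temperature" in raw:                 # device B reports Celsius
        return {"device": raw["name"], "temp_c": raw["temperature"]}
    raise ValueError("unknown payload format")

payloads = [
    {"id": "crah-1", "temp_f": 68.0},
    {"name": "pdu-2", "temperature": 21.0},
]
print([normalize(p) for p in payloads])
```

    Once everything speaks one schema, the central management layer can compare, alarm on, and automate across devices that never shared a protocol.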

    IoT on the edge

    The diversity of edge use cases has led to the recognition of four edge archetypes that can guide decisions regarding edge infrastructure, particularly at the local level. These four archetypes are described here.

    Data intensive—encompasses use cases where the amount of data is so large that layers of storage and computing are required between the endpoint and the cloud to reduce bandwidth costs or latency.
    Human-latency sensitive—includes applications where latency negatively impacts the experience of humans using a technology or service.
    Machine-to-machine latency sensitive—similar to the human-latency sensitive archetype except the tolerance for latency in machines is even less than it is for humans because of the speed at which machines process data.
    Life critical—applications that impact human health or safety and so have very low latency and very high availability requirements.
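
    The four archetypes can be captured as a small lookup table an infrastructure planner might start from. The driver entries condense the descriptions above; the examples are partly assumed:

```python
# The four edge archetypes, as a planning lookup (examples illustrative).
EDGE_ARCHETYPES = {
    "data_intensive": {"driver": "bandwidth cost",        "example": "video analytics"},
    "human_latency":  {"driver": "user experience",       "example": "AR/VR, web apps"},
    "m2m_latency":    {"driver": "machine response time", "example": "trading, robotics"},
    "life_critical":  {"driver": "health and safety",     "example": "autonomous vehicles"},
}

def classify(use_case: str) -> str:
    """Return the first archetype whose example list mentions the use case."""
    for name, props in EDGE_ARCHETYPES.items():
        if use_case in props["example"]:
            return name
    return "unclassified"

print(classify("robotics"))   # m2m_latency
```

    A real taxonomy would key on measured latency and bandwidth requirements rather than string matching, but the archetype-to-requirement mapping is the useful planning artifact.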

  28. Tomi Engdahl says:

    What edge computing means for the future of the data center

    In a new article for Computer Business Review, Raghavan Srinivasan, senior director of enterprise data solutions at Seagate Technology, predicts that “the large traditional data center has been the mainstay of computing and connectivity networks for more than half a century – and essentially all processing of transactions have been carried out in a centralized core – but mobility, technological advancements and economic demand mean that businesses will increasingly add edge elements to this essential core.”

    Edge Computing and the Future of the Data Center

    The rise of edge computing could see the advent of a rapidly growing array of smaller data centers built closer to population centers, says Seagate Technology’s Raghavan Srinivasan

    In almost every respect, the world is getting faster. We expect customer service problems to be resolved right away, we want our goods to arrive the day after we order them and have become used to communicating with anyone, anywhere, at any time.

    For enterprises, this trend is reflected in the increased demand for real-time data processing. Look at the trends that will power the next generation of business innovation: AI, the internet of things and 5G are driving a surge in data production that, according to a Gartner study, could see more than 7.5 billion connected devices in use in enterprises by 2020.

    This shift may power next-generation technologies from connected cars and smart drones to manufacturing and intelligent retail. More data will need to be analyzed in real time –according to the DataAge 2025 study commissioned by Seagate, by 2025, almost 20 percent of data created will be real-time in nature – rather than be sent to the core of the network for processing. This means enterprises will build on their central cloud computing architecture and develop the ability to process – and, equally importantly, securely store – more data at the edge.

    A New Network from the Old

    Micro data centers could be deployed at the base of telecom towers and other important points in the existing wireless network. There could therefore be far more data centers overall, but most will look nothing like the warehouse-sized facilities of today.

  29. Tomi Engdahl says:

    The evolution of cellular air conditioning

    Mobile networks have an interesting cost driver lurking behind the scenes: air conditioning at remote sites. From the introduction of cellular in the early 1980s through the deployment of 3G service in the 2000s, antennas were mounted on towers or buildings and the signals were transmitted through lossy metal coaxial cables to a small weatherproof cabinet near the base of the tower, which contained power-hungry radio equipment and amplifiers. The equipment consumed a lot of power and generated a lot of heat. The cabinets needed air conditioning to avoid equipment damage, and cooling could account for 20 to 30 percent of the annual cost of operating the tower.

    The 3G standard brought many technology advances, some of which alleviated the air conditioning expense. An architectural change was responsible for the largest shift in energy usage and heat production; 3G radio equipment was split into two parts.

    Baseband processing (which converts a digital bitstream from the network into a baseband radio signal) was separated from up-conversion and amplification (where the baseband radio signal becomes a higher-power RF radio signal). Up-conversion and amplification components were packaged into a remote radio head (RRH) and mounted on the cell tower near the antenna. The proximity to the antenna meant that far less power was required to overcome cable losses, and thus the amplifiers no longer had to be actively cooled.

    Low-loss fiber connectivity to the remote radio head allowed distances of up to 6 miles between the radio head and baseband unit (BBU). This enabled massive consolidation, moving the bulk of baseband processing into a regional office, often dubbed a “baseband hotel” because it housed multiple BBUs. The co-location ushered in a whole host of additional optimizations including lower-latency coordination between cell sites, reduced intra-site interference, more reliable user handoffs, and improved coverage via coordinated multi-point (CoMP) transmissions.
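
    The energy argument behind the RRH split can be checked with back-of-envelope numbers. The loss figures below are illustrative assumptions, not from the article: coax attenuates several dB over a tower run, while single-mode fiber loses only a fraction of a dB per kilometer.

```python
def power_ratio_db(loss_db: float) -> float:
    """Fraction of transmitted power surviving a given dB loss."""
    return 10 ** (-loss_db / 10)

# Assumed figures: a 60 m coax run losing ~3 dB per 30 m, versus
# ~6 miles of single-mode fiber at ~0.35 dB per km.
coax_loss = 3.0 * (60 / 30)        # 6 dB total for the tower run
fiber_loss = 0.35 * 6 * 1.609      # ~3.4 dB over the whole fiber span

print(round(power_ratio_db(coax_loss), 2))    # ~0.25: a quarter survives coax
print(round(power_ratio_db(fiber_loss), 2))   # ~0.46 over a span 150x longer
```

    Under these assumptions the short coax run wastes three quarters of the amplifier’s output as heat at the cabinet, which is exactly the heat the air conditioning had to remove. Moving amplification up the tower eliminates most of that loss.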

    History repeats itself
    Many technology transitions are cyclic. Consider the shift from centralized mainframe computing to independent PCs with local file storage. This trend has gone full circle, leading to the recent mass centralization of compute resources in public clouds, with many mourning the demise of the PC.

    I expect a similar cycle in cellular, as network function virtualization (NFV) and software defined networking (SDN) dramatically change the way networks are built. 5G operators can leapfrog some of the tribulations of this cycle by learning from the last decade of public cloud evolution.

    Amazon, Microsoft, and Google centralized massive amounts of compute and networking into mega data centers, but customers quickly found that application response time suffered. Cloud providers adapted with a hybrid architecture that pushes latency-sensitive operations to the edge of the network, while keeping many non-latency sensitive functions in the core.

    Evolution of the baseband hotel
    In a 5G network, NFV separates software from hardware. Services that once ran on proprietary hardware, like routing, load balancing, firewalls, video caching, and transcoding can be deployed on standard servers, and these workloads can be placed anywhere in the network.

    5G also introduces network slicing functionality to maximize the utility of these capabilities. In network slicing, an operator can provision specific sub-interfaces at the air interface level, and map these to specific network function chains. This allows providers to deliver highly differentiated services, similar to the way a local area network can offer quality of service for different traffic flows.
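
    A slice table of the kind described might look roughly like this. The slice names follow the common eMBB/URLLC/mMTC categories, and the function chains are invented for illustration:

```python
# Hypothetical slice table: each slice maps an air-interface sub-interface
# to a chain of virtualized network functions, analogous to per-flow QoS.
SLICES = {
    "embb":  {"subif": 1, "chain": ["firewall", "video_cache", "router"]},
    "urllc": {"subif": 2, "chain": ["firewall", "router"]},  # short chain, low latency
    "mmtc":  {"subif": 3, "chain": ["firewall", "aggregator", "router"]},
}

def chain_for(slice_name: str) -> list:
    """Look up the network function chain provisioned for a slice."""
    return SLICES[slice_name]["chain"]

print(chain_for("urllc"))   # ['firewall', 'router']
```

    The differentiated-service idea is visible even in this sketch: the latency-critical slice traverses the fewest functions, while the bandwidth-heavy slice routes through a cache.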

    One of the most interesting new deployment models that extends from network slicing is multi-access edge computing (MEC, formerly mobile edge computing). MEC is an architectural approach where the 5G carrier moves specific services much closer to the edge of the network, similar to the way Amazon provides Lambda processing at the edge of its cloud.

    The result is much lower latency, which helps meet the requirements of next-generation applications, such as the 15 ms motion-to-photon response target needed to minimize user discomfort in augmented reality and virtual reality applications. MEC can also reduce core loading by caching data such as video at the edge.
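    A rough back-of-envelope calculation shows why edge placement matters for a 15 ms motion-to-photon budget. The sketch below assumes light propagates in fiber at roughly two-thirds of c (about 200 km/ms) and uses an assumed fixed 5 ms of processing overhead; the distances are hypothetical examples, not measured deployments.

    ```python
    FIBER_SPEED_KM_PER_MS = 200.0  # light in fiber travels at roughly 2/3 of c

    def round_trip_ms(distance_km, processing_ms=5.0):
        """Two-way propagation delay over fiber plus an assumed
        fixed processing time at the far end."""
        return 2 * distance_km / FIBER_SPEED_KM_PER_MS + processing_ms

    edge_ms = round_trip_ms(20)      # metro edge site ~20 km away -> ~5.2 ms
    cloud_ms = round_trip_ms(1500)   # regional cloud DC ~1500 km away -> ~20 ms
    ```

    Under these assumptions only the edge deployment fits within the 15 ms budget; the regional data center is disqualified by propagation delay alone, before queuing and radio-access delays are even counted.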

  30. Tomi Engdahl says:

    Optimizing 5G With AI At The Edge

    5G is necessary to deal with the increasing amount of data being generated, but successful rollout of mmWave calls for new techniques.

    For example, AI techniques are essential to the successful rollout of 5G wireless communications. 5G is the developing standard for ultra-fast, ultra-high-bandwidth, low-latency wireless communications systems and networks whose capabilities and performance will leapfrog that of existing technologies.

    5G-level performance isn’t a luxury; it’s a capability the world critically needs because of the exploding deployment of wirelessly connected devices. A crushing amount of data is poised to overwhelm existing systems, and the amount of data that must be accessed, transmitted, stored and processed is growing fast.

    5G needed for the upcoming data explosion
    Every minute, by some estimates, users around the world send 18 million text messages and 187 million emails, watch 4.3 million YouTube videos and make 3.7 million Google search queries. In manufacturing, analysts predict the number of connected devices will double between 2017 and 2020. Overall, by 2021 internet traffic will amount to 3.3 zettabytes per year, with Wi-Fi and mobile devices accounting for 63% of that traffic (a zettabyte is 12 orders of magnitude larger than a gigabyte, or 10^21 bytes).
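    To put the 3.3-zettabyte figure in perspective, the arithmetic below converts it to an average sustained throughput and computes the projected Wi-Fi/mobile share. The numbers come straight from the estimates above; the conversion itself is the only thing added.

    ```python
    ZB = 10**21  # bytes; 12 orders of magnitude above a gigabyte (10**9 bytes)

    annual_traffic_bytes = 3.3 * ZB
    seconds_per_year = 365 * 24 * 3600

    # Average global throughput implied by 3.3 ZB/year, in terabits per second
    avg_tbps = annual_traffic_bytes * 8 / seconds_per_year / 1e12

    # Projected Wi-Fi and mobile share of that traffic, in zettabytes
    wifi_mobile_zb = 0.63 * annual_traffic_bytes / ZB
    ```

    That works out to an average of roughly 840 terabits per second flowing continuously, around the clock, with about 2.1 ZB of it carried over Wi-Fi and mobile connections.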

    The new 5G networks are needed to handle all of this data. The new networks will roll out in phases, with initial implementations leveraging the existing 4G LTE and unlicensed access infrastructure already in place. However, while these initial Phase 1 systems will support sub-6GHz applications and peak data rates >10 Gbps, things really begin to get interesting in Phase 2.

    In Phase 2, millimeter-wave (mmWave) systems will be deployed enabling applications requiring ultra-low latency, high security, and very high cell edge data rates. (The “edge” refers to the point where a device connects to a network. If a device can do more data processing and storage at the edge – that is, without having to send data back and forth across a network to the cloud or to a data center – then it can respond more quickly and space on the network will be freed up.)

  31. Tomi Engdahl says:

    Startup’s Funds Fuel Edge Networks
    Vapor IO plans 100 data centers in the U.S.

    A startup snagged a large Series C round to build dozens of medium-sized data centers for edge networks, at least some with its own novel gear. The news underscores work on a new class of distributed networks for carriers and web giants.

    Vapor IO said that it won more than $100 million in financing, although it declined to reveal an exact amount. It aims to have more than 18 sites for 150-kilowatt data centers under construction by the end of the year and more than 100 by the end of 2020.

    The 25-person Vapor IO operates two of its five planned 150-kW data centers in Chicago. It is hiring to expand to three metro regions this year and 20 by the end of 2020. The startup aims to use software-defined networking and APIs to give carriers, cloud-computing providers, content distribution networks, and others a unified view of distributed networks that it maintains for them.

    “Since we were founded in 2015, we have done nothing but attempt to perfect a design that’s the equivalent of the electrical grid for the digital economy,” said Cole Crawford, founder and chief executive of Vapor IO.

    Crawford is best known as the former executive director of the Open Compute Project, a non-profit set up by Facebook to promote open hardware designs for the world’s largest data centers.

    Edge networks use much smaller versions of the multi-megawatt data centers that companies such as Facebook run. They aim to deliver response times of a few milliseconds, an order of magnitude lower than today's data centers, by sitting in the same city as their users on fiber-optic rings near large cellular base stations. The facilities are meant to enable emerging applications ranging from AR/VR and network slicing to robocar navigation.

    Carriers say that the edge networks could sometimes be as small as a coat closet that pairs a few servers with a cellular base station. Base station giant Nokia recently rolled out a family of compact servers for such deployments.

    Crawford sees his centers also playing a role as peering locations where carriers, content owners, and internet points of presence meet.

    Vapor IO started out with a unique doughnut-shaped design for a group of servers called a Vapor Chamber, replacing traditional 19-inch racks. Rather than forcing cool air laterally through a building, it uses one large fan in the center of a cylindrical design into which groups of servers fit like wedges of cheese.

    The design enables a 135-kW server cluster to be deployed in a single day. Vapor also has a design that packs servers into the rough equivalent of a shipping container, an approach popular in the early days of so-called hyper-scale data centers.

    In addition, Vapor designed its own boards for managing large server deployments. One monitors 72 sensors to track temperature, air pressure, vibration, and air quality. Another puts a programmable-logic controller on a mezzanine board to control server functions over a powerline network.

    The startup supports APIs that let users build programs that run on top of its control systems. “We’ve open-sourced code, turning register lookups into HTTP interfaces,” he said.
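    The idea of "turning register lookups into HTTP interfaces" can be sketched as a thin translation layer: a raw register address on a monitoring board is resolved to a named sensor reading and served as JSON. The register map and sensor names below are hypothetical, invented for illustration; real board layouts and Vapor IO's actual API will differ.

    ```python
    import json

    # Hypothetical register map for a monitoring board; addresses and
    # values are invented for illustration only.
    REGISTERS = {
        0x01: ("temperature_c", 24.5),
        0x02: ("air_pressure_kpa", 101.3),
        0x03: ("vibration_g", 0.02),
    }

    def http_read(register):
        """Resolve a raw register lookup into the JSON payload an
        HTTP handler might return to a monitoring client."""
        name, value = REGISTERS[register]
        return json.dumps({"register": register, "sensor": name, "value": value})
    ```

    A control program can then poll sensors with ordinary HTTP GETs instead of speaking a board-specific register protocol, which is what makes it possible to build portable tooling on top of the open-sourced control systems.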

