10GBase-T Technology

The growing importance of cloud computing, along with the increasing utilization of unified data/storage connectivity and the advent of server virtualization, has elevated the popularity of 10Gbps Ethernet. Several connectivity options are available for 10Gbps Ethernet, over both optical fiber and copper cables.

The 10GBase-T Technology Revisited article notes that the lack of economical cabling options for 10G Ethernet beyond a single or adjacent rack has led to the popularity of Top-of-Rack (ToR) architectures, in which a stack of rack-mounted servers is connected with short cables to a fixed-configuration switch in close proximity, typically on top of the server rack.

10GBase-T promises to change that. 10GBase-T is the fourth generation of IEEE-standardized Base-T technologies, all of which use RJ45 connectors and unshielded twisted-pair cabling to provide 10Mbps, 100Mbps, 1Gbps, and 10Gbps data transmission while remaining backward-compatible with prior generations.

10GBase-T is arguably the most flexible, economical, backward-compatible, and user-friendly connectivity option available. 10GBase-T lets you use the existing structured cabling infrastructure and allows cables to reach the full 100-meter length permitted by structured cabling rules. Compared to other 10Gbps connectivity solutions, one of the most important advantages of 10GBase-T is its ability to communicate and interoperate with legacy, often slower Base-T systems.

IEEE 802.3an, the 10-Gigabit Ethernet over twisted-pair standard, also known as 10GBase-T, was ratified in 2006. Unfortunately, this did not lead to an immediate proliferation of compliant switches and servers in data centers. However, steady advances in semiconductor lithography, along with sophisticated algorithms intended to increase electromagnetic interference (EMI) immunity and lower operating power, are making it more practical. For years 10GBase-T was considered very power hungry and expensive, the reason being the complexity of the signal processing required. The 10GBase-T transceiver uses full-duplex transmission with echo cancellation on each of the four twisted pairs available in standard Ethernet cables, thereby transmitting an effective 2.5Gbps on each pair.
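The per-pair figures above can be checked with simple arithmetic (a sketch only; the actual PHY's PAM-16/DSQ128 line coding is not modeled here):

```python
# Back-of-the-envelope check: 10GBase-T splits the 10Gbps payload
# across the four twisted pairs of a standard Ethernet cable.
TOTAL_RATE_GBPS = 10.0
NUM_PAIRS = 4

per_pair_gbps = TOTAL_RATE_GBPS / NUM_PAIRS
print(per_pair_gbps)  # 2.5 Gbps per pair, matching the figure in the text

# Full duplex means each pair carries 2.5Gbps in BOTH directions at once,
# so the transceiver must subtract its own transmitted signal (echo
# cancellation) to recover the far-end signal on the same two wires.
signal_on_each_pair_gbps = per_pair_gbps * 2
print(signal_on_each_pair_gbps)  # 5.0 Gbps of simultaneous traffic per pair
```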

The 10GBase-T Technology Revisited article explores the basic operation of a 10GBase-T transceiver and the inherent advantages of 10GBase-T technology compared to alternatives such as optical fiber and coaxial copper.

One of the arguments against 10GBase-T has been power dissipation, but this perspective is rooted mostly in early implementations of the technology. Recent advances in semiconductor lithography have allowed 10GBase-T transceivers to enjoy a dramatic reduction in the power they dissipate during normal operation: from a per-port power of over 6W just a few years ago to a typical active power dissipation of 1.5W. When using the EEE power-saving algorithm with typical computer data patterns at 30-meter reach, the newest ICs dissipate only 750mW.



  1. Tomi Engdahl says:

    Making connections: The world according to Intel

    We all know that Intel makes networking gear. Specifically, it makes some of the best PC network interface cards (NICs) available.

    It should be a shock to no one, then, that Intel is going whole-hog on 10 Gigabit Ethernet (GbE). Expect to see server LAN on motherboard ports moved entirely to 10GBase-T (10GbE over copper) very shortly.

    USB will connect our widgets to our Ultrabooks, and Thunderbolt will connect our Ultrabooks to their monitors. Those monitors will contain things like powerful external graphics cards, wired networking and the sorts of static peripherals we already expect from our desktops.

  2. Tomi Engdahl says:

    PLX refreshes 40-nm 10GBase-T chips

    PLX Technology announced three new transceivers for 10 Gbit/s Ethernet over copper, cutting die size and cost nearly in half while adding features and shaving half a Watt off power consumption. The TN8000 parts come at a time when the 10GBase-T standard is finally seeing some market traction, more than two years after the standard was ratified.

    The single-, dual- and quad-port devices are made in 40-nm technology and support distances of 120 m on Cat 6A cables. The chips provide new low power modes enhancing the Energy Efficient Ethernet standard (IEEE 802.3az).

    OEMs did not use the first generations of 10GBase-T transceivers because they consumed too much power. The new PLX chips consume 2.5W when sending data over 30 m and 3.5W when sending over 100 meters.

    The company competes with Broadcom, Marvell and startup Aquantia who also sell physical-layer chips for 10 Gbit/s Ethernet over copper.

  3. Tomi Engdahl says:

    No more tiers for flatter networks

    The traditional three-tier, hierarchical data centre networks as defined and championed by Cisco Systems since the commercialisation of the internet protocol inside the glass house no longer match the systems and applications that are running in those data centres.

    Traditionally, traffic through data centres flowed up and down through the network in a north-south orientation – from access, distribution and core layers and back again.

    But no more. According to recent vendor surveys, as much as 80 to 85 per cent of the traffic in virtualised server infrastructure – what we now call clouds – moves from server node to server node. The east-west traffic problem is what is really killing the three-tier network in the data centre.

    The leaf-spine network architecture takes a top-of-rack switch that can reach down into server nodes directly and links it back to a set of non-blocking spine switches that have enough bandwidth to allow for clusters of servers to be linked to each other in the tens of thousands.
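    The leaf-spine wiring described above can be sketched numerically. The port counts below are hypothetical, chosen only to illustrate how downlink and uplink bandwidth on a leaf switch determine its oversubscription ratio:

```python
# Hypothetical leaf-spine sizing sketch (port counts are illustrative,
# not taken from the article).
servers_per_leaf = 40     # 10GbE downlinks from each top-of-rack leaf
downlink_gbps = 10
uplinks_per_leaf = 4      # 40GbE uplinks from each leaf to the spines
uplink_gbps = 40

downlink_bw = servers_per_leaf * downlink_gbps   # Gbps toward servers
uplink_bw = uplinks_per_leaf * uplink_gbps       # Gbps toward spines

oversubscription = downlink_bw / uplink_bw
print(oversubscription)  # 2.5 -> a 2.5:1 oversubscribed leaf

# A non-blocking leaf needs uplink_bw == downlink_bw. Because every leaf
# connects to every spine, adding spine switches adds east-west capacity
# without re-cabling the servers.
```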

  4. Tomi Engdahl says:

    10GBASE-T PHYs take only two watts/channel

    The PLX Technology TN8000 family of 10GBASE-T PHYs include single-, dual- and quad-port devices and require only two watts/channel for 10-meter distances over standard copper cabling. If long reach is needed, the chips are capable of 10 Gbits/s rates for 120 meters on standard Cat6A cabling.

    The devices include Enhanced Energy Efficient Ethernet (eEEE) for both 10 Gigabit and Gigabit speeds — a customization of the IEEE 802.3az standard.

  5. Tomi Engdahl says:

    Cisco Nexus ports stretched to take 40GE and 100GE loads
    Catalyst campus switches bumped up to 40GE

    If you want 10 Gigabit Ethernet to take off on servers, then you need fat backbones on the campuses and in the data centers to absorb the increase in traffic. And so Cisco Systems is ramping up the bandwidth on its Nexus 7000 series of end-of-row converged Ethernet switches as well as on its Catalyst 6500 campus switches.

    Server motherboard makers are expected to start laying down 10GE ports on their boards in greater numbers, and that is when the 10GE ramp will take off in earnest.

  6. Tomi Engdahl says:

    Ethernet standards for hyper-scale cloud networking

    A hyper-scale Ethernet network will be global in scale and embrace tens of thousands of cables and switches, millions of ports, and trillions, perhaps quadrillions, of packets of data flowing across the network a year, possibly even more.

    The Ethernet that will be used in such a network has not been developed yet, but it is going in that direction, and it will be based on standards and speeds that are coming into use now.

    Currently we are seeing 10Gbit/s Ethernet links and ports being used for high data throughput end-points of Ethernet fabrics. The inter-switch links, the fabric trunk lines, are moving to 40Gbit/s with backbones, network spines, beginning to feature 100Gbit/s Ethernet.

    In an attempt to layer Fibre Channel storage networking on Ethernet, the IEEE is developing Data Centre Ethernet (DCE) to stop packet loss and provide predictable packet-delivery latency.

    Where standardisation efforts seem to be failing is in coping with the limitations of Ethernet’s Spanning Tree protocol.

    TRILL (Transparent Interconnection of Lots of Links) is an IETF standard aiming to get over this, with, for example, Brocade and Cisco supporting it. It provides for multiple-path use in Ethernet and so doesn’t waste bandwidth.

  7. Tomi Engdahl says:

    Reap the benefits of 10GBase-T connectivity in data centers–Part I

    Ethernet at 10G speeds has arrived! The growing importance of cloud computing and the increasing utilization of unified data/storage connectivity and server virtualization by enterprise data centers, have conspired to elevate the importance and popularity of 10Gbps Ethernet.

    10G Ethernet was not long ago considered an exotic connectivity option relegated to high-capacity backhaul, but more and more applications are now taking advantage of the availability and cost-effectiveness of 10GE links. As was the case with three prior generations of Ethernet, the ubiquity, the ready and familiar management tools, and the compelling cost structure are allowing 10G Ethernet (10GE) to quickly dominate the computer networking scene.

    Starting in 2002, the Institute of Electrical and Electronics Engineers (IEEE) has created several standards for 10G Ethernet connectivity.

    In addition, a non-IEEE-standard approach called SFP+ Direct Attach has also gained popularity. This method uses a passive twin-ax cable assembly, which connects directly into an SFP+ module housing.

    Ratified in June 2006, IEEE 802.3an provided a stable blueprint for chip manufacturers to develop and introduce compliant and interoperable devices allowing for 10Gbps communications over unshielded twisted pair cabling.

    The 10GBase-T transceiver uses full duplex transmission with echo cancellation on each of the four twisted pairs available in standard Ethernet cables, thereby transmitting an effective 2.5Gbps on each pair.

    While Cat6A cabling is ideal, Cat6 cabling, which was primarily targeted to support 100Base-T and 1000Base-T transceivers, may also be applicable to some 10GBase-T applications.

  8. Tomi Engdahl says:

    Reap the benefits of 10GBase-T connectivity in data centers–Part I

    This article explains the basics of 10GBase-T, its protocols, cabling options, and how electromagnetic interference is mitigated.

    Reap the benefits of 10GBase-T connectivity in data centers–PART II

    Part II dives deeper into how a new generation of 10GBase-T technology is being deployed in, and revolutionizing, the data center.

  9. Tomi Engdahl says:

    Designing with 10GBase-T transceivers

    Take advantage of 10GBase-T board layout and routing guidelines, power distribution and decoupling requirements, and EMI reduction design concepts to employ best practices in network designs.

    As was the case with three prior generations of Ethernet, the ubiquity, the ready and familiar management tools, and the compelling cost structure are allowing 10G Ethernet to quickly dominate the computer networking scene.

    Crehan Research, a leading industry analyst of data center technologies, estimates that by 2014, 10G Ethernet will overtake 1G Ethernet as the preferred network connectivity option in computer servers. And in one of its most recent reports on the subject, The Linley Group, another leading industry analyst, predicted robust 10GbE growth and estimated that 10GbE NIC/LAN-on-motherboard (LOM) shipments alone will surpass 16 million ports in 2014.

    Several standards-based options exist for 10G Ethernet and span the gamut, from single-mode fiber to twin-ax cable. But of all the options available, 10GBase-T, which is also known as IEEE 802.3an, is arguably the most flexible, economical, backward-compatible, and user-friendly 10G Ethernet connectivity option available. It was designed to operate with the familiar unshielded twisted-pair cabling technology, which is already pervasive for 1G Ethernet, and can interoperate directly with it.

    10GBase-T is capable of covering, with a single cable type, any distance up to 100 meters and thereby reaches 99% of the distance requirements in data centers and enterprise environments.

  10. Tomi Engdahl says:

    Debunking 10GBase-T Myths

    While it may be true that good things come to those who wait, too much waiting can lead to uncertainty. Take 10GBase-T networking products, for example. The 10GBase-T standard was published almost six years ago, and the long wait for network gear has provided fodder for the digital rumor mill to churn. This has led to the misperception that 10GBase-T is the end of the line for copper balanced twisted-pair media and network equipment. The fact is that the extended time to market can be explained by the recent economic recession and the desire to integrate significant power-efficiency enhancements into this new technology. These challenges have been overcome, and all indicators are that adoption of 10GBase-T solutions is poised to take off in 2012.

    With cost and power dissipation significantly reduced with the newer 40-nm PHY devices, and further reductions enabled by 28-nm devices expected in 2013, data center managers can now capitalize on the fundamental advantages offered by 10GBase-T technology.

    Interoperability with legacy Ethernet equipment via auto-negotiation is of particular significance as it enables data center expansions and expenditures to occur incrementally.

    A 10GBase-T switch can communicate effectively with legacy 1-Gbit/sec and 100-Mbit/sec servers today and allow 10-Gbit/sec servers to be introduced when required and supported by expense allocations tomorrow.
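    The incremental-upgrade point above can be illustrated with a toy model of Base-T auto-negotiation: each side advertises the speeds it supports, and the link comes up at the highest speed common to both. This is a simplification of the real IEEE 802.3 auto-negotiation protocol, which also resolves duplex mode and master/slave timing roles:

```python
# Toy model of Base-T auto-negotiation (simplified: real negotiation
# also covers duplex mode and master/slave roles).
def negotiate(speeds_a, speeds_b):
    """Return the highest speed (Mbps) both ends advertise, or None."""
    common = set(speeds_a) & set(speeds_b)
    return max(common) if common else None  # None: no common mode, no link

switch_10g = [100, 1000, 10000]   # Mbps modes of a 10GBase-T switch port
legacy_server = [100, 1000]       # legacy Gigabit server
new_server = [1000, 10000]        # newly introduced 10GBase-T server

print(negotiate(switch_10g, legacy_server))  # 1000: legacy box links at 1GbE
print(negotiate(switch_10g, new_server))     # 10000: new box links at 10GbE
```

This is why a 10GBase-T switch purchase does not force a simultaneous server upgrade: existing ports keep linking at their old speeds.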

  11. Tomi Engdahl says:

    Network switch device equipment balances performance, cost and power in the cloud

    Cloud and Web 2.0 applications deployed in private and public cloud environments are significantly influencing network infrastructure design due to their increasing scale and performance requirements. Data centers must be purpose-built to handle current and future workloads – evolving rapidly and driven by high volumes of end users, application types, cluster nodes, and overall data movement in the cloud. A primary design challenge in this networking landscape is to select and deploy intelligent network switches that robustly scale the performance of applications, and achieve this goal cost-effectively. Ethernet switches must be architected at the silicon level to ensure that cloud network requirements can be implemented comprehensively, economically and in volume scale.

    The design of a switch device’s memory management unit (MMU), including its packet buffering resources, is a key element in meeting network design challenges. The MMU directly impacts both the performance and the cost of network switching equipment.

    “Bursty” traffic patterns are prevalent in cloud data centers that have high levels of peak utilization, and workloads that are typically varied and non-uniform in nature.

    When application traffic exceeds the burst absorption capability in the access layer of a cloud network, TCP (Transmission Control Protocol) incast can become a problem. In this scenario, a parent server sends a barrier-synchronized request for data to many child nodes in a cluster. When multiple child nodes respond synchronously to the singular parent – either because they take the same time to complete the operation, or return partial results within a parent-specified time limit – significant congestion occurs at the network switch port to which the parent server is connected. If the switch’s egress port to the parent server lacks adequate burst absorption capability, packets overrun their buffer allocation and get dropped, causing the TCP back-off algorithm to kick in. If excessive frame loss occurs in the network, the result can be a TCP collapse phenomenon; many flows simultaneously reduce bandwidth resulting in link underutilization, and a catastrophic loss of throughput results from inadequate switch buffering.
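    The incast scenario above can be sketched with a toy buffer model: many synchronized responses arrive at one egress port at once, and whatever exceeds the port's drain capacity plus its buffer is dropped. All numbers below are illustrative, not from the article:

```python
# Toy model of TCP incast at a switch egress port (illustrative numbers).
def incast_drops(n_children, response_bytes, buffer_bytes, drained_bytes):
    """Bytes dropped when n_children reply simultaneously to one parent."""
    arriving = n_children * response_bytes
    # During the burst the port forwards `drained_bytes` and can absorb
    # `buffer_bytes`; anything beyond that overruns the buffer and drops.
    return max(0, arriving - drained_bytes - buffer_bytes)

# 64 child nodes each return a 256 KB partial result at the same instant;
# the egress port drains 1 MB during the burst and has a 2 MB buffer.
print(incast_drops(64, 256 * 1024, 2 * 1024**2, 1 * 1024**2))
# positive: packets are lost and the TCP back-off algorithm kicks in

# With a larger (16 MB) buffer the same burst is absorbed entirely:
print(incast_drops(64, 256 * 1024, 16 * 1024**2, 1 * 1024**2))  # 0
```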

    Overdesigning buffer capacity at each network node would certainly reduce the probability of congestion at any given egress port. However this is not realistic or viable given the critical cost and power factors constraining today’s data centers.

    Traditionally, switch MMU designs have enabled high burst absorption through the use of large, external packet buffer memories.

    As servers transition from GbE to 10GbE network interfaces, the packet processing bandwidth currently deployed in a fully integrated top-of-rack switch device ranges from 480 to 640 Gigabits per second (Gbps). Assuming a single, in-order processing pipeline in the switch device core, this processing bandwidth amounts to a “packet time” as fast as one nanosecond. In this scenario, each pipeline step or memory access required to resolve a packet (such as L2/L3 forwarding lookups, buffer admission control, credit accounting, and traffic management decisions) must be completed within each single nanosecond in order to maintain wire-rate performance. This sharp increase in aggregate switching throughput per access switch system has important implications for switch silicon architectures.
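    The one-nanosecond "packet time" quoted above follows directly from the arithmetic, assuming minimum-size 64-byte frames plus the standard 20 bytes of per-frame preamble and inter-frame gap:

```python
# Worked check of the ~1 ns packet time for a fully integrated ToR switch.
switch_bw_bps = 640e9      # 640 Gbps aggregate switching bandwidth
min_frame_bytes = 64       # minimum Ethernet frame size
overhead_bytes = 20        # preamble (8 bytes) + inter-frame gap (12 bytes)

bits_on_wire = (min_frame_bytes + overhead_bytes) * 8
packet_time_ns = bits_on_wire / switch_bw_bps * 1e9
print(round(packet_time_ns, 2))  # ~1.05 ns per minimum-size packet

# Every forwarding lookup and buffering decision in a single in-order
# pipeline must therefore complete at roughly this rate to hold wire speed,
# which is what pushes switch silicon toward multiple pipelines.
```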

    Much like microprocessors – which several years ago hit a scalability ceiling in terms of single-core processing throughput – switch chip architectures now face aggregate processing bandwidth requirements that favor a multi-core approach in order to meet data center performance, cost, and power requirements. Yet adopting a multi-pipeline design creates MMU partitioning challenges that demand careful consideration

    Centralized, Shared, Intelligent MMU is the Solution

    Data center workloads demand high throughput and robust, consistent performance from Ethernet switches; these performance features are required in order to handle characteristic traffic patterns in their networks. Cloud-centric workloads such as Hadoop/MapReduce require network switches with excellent burst absorption capabilities in order to avoid TCP incast problems. With the current transition of server interfaces from GbE to 10GbE performance, demands in server access infrastructure necessitate highly integrated network switch devices that utilize multiple switching cores and pipelines. At the same time, cost and power metrics in the cloud drive the need for fully integrated buffers and sophistication in switch MMU design.

  12. Tomi Engdahl says:

    MRJ21 preterminated copper cabling system for 10-GbE

    The MRJ21 XG from TE Connectivity is a preterminated copper cabling system designed to deliver 10-Gbit Ethernet throughput in all architectures where copper cabling is required. It is based on TE Connectivity’s MRJ21 high-density interface and, according to the manufacturer, helps facilitate management of data center rack and floor space, density and throughput – all while capitalizing on its ease-of-deployment and quick installation.

  13. Tomi Engdahl says:

    Broadcom launches Trident II switch chip
    Blasting over 100 10GE ports into the clouds

    All of those apps you run on your smartphones and tablets and the surfing you do from PCs and other devices ultimately ends up whacking some data center network somewhere in the world. The appetite for bandwidth and low latency continues apace, and switch and adapter chip maker Broadcom aims to keep up with that demand with its new Strata XGS Trident II switch ASICs.

    The company is touting the fact that this is the first switch ASIC that can drive more than a hundred 10GE ports from a single chip, and that there is enough bandwidth in there to make a pretty fat 40GE switch, too. The prior Trident ASICs offered 640Gbps of Ethernet switching capacity, but the new Trident II will boost that to 980Gbps for some models and up to 1.28Tbps for other models.

    Broadcom helped cook up the VXLAN standard along with VMware and Cisco to virtualize the Layer 3 network and separate out Layer 2 networks on the fly, and the Trident II chip has support for VXLAN etched into its circuits. Specifically, the chip has a VXLAN transit switch and gateway.

    VXLAN is a Layer 2 overlay for a Layer 3 network that gives each Layer 2 segment a 24-bit segment identification called the VXLAN Network Identifier, or VNI. This 24-bit ID allows up to 16 million VXLAN segments to coexist on the same network administration domain, which is a lot more than the 4,094 VLANs supported with the current virtual LAN technology in Ethernet switches.
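    The segment counts quoted above fall straight out of the identifier field widths: VXLAN's 24-bit VNI versus the 12-bit VLAN ID, of which two values are reserved:

```python
# Segment-count arithmetic for VLAN vs VXLAN identifiers.
vlan_id_bits = 12
vxlan_vni_bits = 24

usable_vlans = 2**vlan_id_bits - 2   # VLAN IDs 0 and 4095 are reserved
vxlan_segments = 2**vxlan_vni_bits

print(usable_vlans)     # 4094 VLANs, as in the article
print(vxlan_segments)   # 16777216 (~16 million) VXLAN segments
```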

    VXLAN is made for clouds – very large clouds – while VLAN was made for regular-sized data centers. VXLAN can support up to 8,000 separate tenants, managing network isolation and providing quality of service provisioning for them.

  14. Tomi Engdahl says:

    Ethernet switch sales sizzle
    Everybody needs – and is buying – bigger pipes

    The server market may have stalled a bit as Intel, AMD, IBM, Oracle, and Fujitsu work through various stages of processor transitions, but the Ethernet switch market is going gangbusters.

    According to the box counters at IDC, the worldwide market for Layer 2 and 3 switching gear that adheres to the Ethernet protocol accounted for $5.52bn in revenues in the second quarter as companies begin the transition from Gigabit to 10 Gigabit Ethernet switching in the data center.

    There is a move toward flatter and fatter Layer 2-3 networks for many workloads, rather than the tiered networks that have been common for the past two decades.

    In the quarter ended in June, IDC reckons that Gigabit Ethernet switches collectively accounted for 55 million ports and revenues rose 6.5 per cent.

    10GbE ports are taking off: shipments rose above 3 million ports in the second quarter (up 22.9 per cent).

    With the latest Intel Xeon E5 processors, server makers are also putting 10GbE ports on their motherboards, essentially making 10GbE networking free, as 100Mbit and Gigabit were before it.

  15. Tomi Engdahl says:

    IEEE forms study group to explore next-generation 802.3 BASE-T

    “As high-density 10GBASE-T switches become more common in data center and enterprise environments, the approval of this study group to review the next-generation BASE-T technology is timely,” said a representative for IEEE. “A next-generation BASE-T technology will complement the rich and diverse higher-speed Ethernet interfaces, ensuring that next-generation switch and server application requirements are addressed.”

    “extension to 40 Gigabit Ethernet and higher speeds will be required in coming years,”

  16. Tomi Engdahl says:

    Tips & Trends: 10GBASE-T adoption status and forecast
    9/27/2012 1:57 PM EDT

    10GBASE-T is the standard technology that enables 10 Gigabit Ethernet operations over balanced twisted-pair copper, including Category 6A unshielded and shielded cabling. 10GBASE-T provides great flexibility in network design due to its 100-meter reach capability, and also provides the requisite backward compatibility that allows most end users to transparently upgrade from existing 100/1000-Mbps networks.

    New 10GBASE-T physical layers allow lower-cost and lower-power high-density designs. The latency of the 10GBASE-T PHY has also been improved and allows the building of 10GBASE-T networks to support most of today’s applications.

    10GBASE-T equipment from multiple vendors is available in the marketplace.

    The article is a review of progress in removing the technical and economic barriers that prevented broad 10GBASE-T deployment.

    With Intel’s March 2012 launch of its Romley generation server platforms with 10GBASE-T LOM connectivity, the 10GBASE-T interconnect market is now seeing an explosive uptick. Intel’s Romley platform is also driving the adoption of 10GbE LOMs as the I/O subsystem will need to catch up to the improved processors.

    The cost of 10GBASE-T PHYs in terms of Gbps/port has been declining dramatically and is down approximately 70 percent since 2008, with further declines forecast for the next year.

    PHYs are following Moore’s Law, and new processes have significantly decreased both cost and power use, and subsequent process improvements will continue to enhance these decreases.

    A range of products is available today that takes advantage of this new technology, and the coming year promises to deliver additional offerings across the server, switch, adapter, networking appliance, and storage vendor ecosystem. Therefore, vendors should start planning now to integrate 10 Gigabit Ethernet and 10GBASE-T into their next-generation designs.

    The 10GBASE-T ecosystem continues to grow and offers a robust number of options in the marketplace.

  17. Tomi Engdahl says:

    White paper explains why alien crosstalk matters most for 10GBase-T

    A white paper recently published by Superior Essex explains why, for 10GBase-T, alien crosstalk is a twisted-pair cabling system’s most-critical performance characteristic. The document, titled “Alien Crosstalk: The Limiting Noise Factor in Category 6A Channel Performance,” describes the importance of alien crosstalk, particularly in light of 10GBase-T transceivers’ ability to cancel internally generated noise.

