Data center backbone design

The article “Cells vs. packets: What’s best in the cloud computing data center?” from a few years back argues that resource-constrained data centers cannot afford to waste anything on their way to efficiency. One important piece of this is choosing the right communications technology between the different parts of the data center.

In the late 1990s and early 2000s, proprietary switch fabrics were developed by multiple companies to serve the telecom market with features for lossless operation, guaranteed bandwidth, and fine-grained traffic management. During this same time, Ethernet fabrics were relegated to the LAN and enterprise, where latency was not important and quality of service (QoS) meant adding more bandwidth or dropping packets during congestion.

Over the past few years, 10Gb Ethernet switches have emerged with congestion management and QoS features that rival proprietary telecom fabrics. With the emergence of more feature-rich 10GbE switches, InfiniBand no longer has a monopoly on low-latency fabrics. It’s important to find the right 10GbE switch architecture that can function effectively in a 2-tier fat tree.

137 Comments

  1. Tomi Engdahl says:

    Exercises to keep your data centre on its toes
    Flatten the structure to stay nimble
    http://www.theregister.co.uk/2012/05/08/ethernet_standards_developments/

    Given the size of networks today, networking should be open to promote interoperability, affordability and competition among suppliers to provide the best products.

    Let’s drill down a little to explore new developments in the ubiquitous Ethernet standard and see how open networking can help you do jobs more efficiently.

    Currently Ethernet networks have a very controlled and directed infrastructure, with edge devices talking to each other via core switches and pathways through the network controlled by a technology called spanning tree.

    This design prevents network loops by ensuring that there is only one path across the network between devices at the edge of the network.

    Air travel infrastructure has been painstakingly built up to enhance safety and stop planes colliding, as well as to take advantage of economies of scale at hub airports.

    The hub-and-spoke design helps airline and airport operators but not passengers. They could get to their destination much faster by not flying through Heathrow and Chicago.

    So too with Ethernet and packets at the Layer 2 level. Data would arrive at its destination more quickly if it could cross the network without having to go up the network tree (“northward”) to the main or core switches and into Layer 3, get processed and then return down the tree (“southward”) to the destination edge device.

    This Layer 3 supervision is an obstacle to packets travelling more directly, east-west as it were, and only in Layer 2 between the edge devices.

    Ethernet is being transformed to provide edge-to-edge device communications within Layer 2 and without direct core switch supervision.

    What is needed is for the network to be virtualised, to have its data traffic and its control or management traffic separated, and to give networking staff the ability to reconfigure the network dynamically, setting up different bandwidth allocations, routing decisions, and so forth.

    With servers, admin staff can spin up virtual machines and tear them down on demand, with no need to install and decommission physical machines.

    Open secret

    There have to be standards to do this, otherwise it won’t be open.

    One approach to overcoming this challenge is the OpenFlow protocol. The idea is that networks should be software-defined and programmable to improve traffic flows and facilitate the introduction of new networking features.
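
    As a rough illustration of the match-action idea behind OpenFlow, here is a minimal Python sketch of a flow table: a controller installs prioritized rules, and packets that match no rule are punted to the controller. The class, field names and port numbers are made up for the sketch and do not correspond to any real controller API.

    ```python
    # Toy match-action flow table in the spirit of OpenFlow; illustrative only.

    class FlowTable:
        def __init__(self):
            self.entries = []                      # (priority, match, action)

        def install_flow(self, match, action, priority=0):
            """Install a rule; the highest-priority matching rule wins."""
            self.entries.append((priority, match, action))
            self.entries.sort(key=lambda e: e[0], reverse=True)

        def lookup(self, packet):
            """Return the action of the first (highest-priority) matching rule."""
            for priority, match, action in self.entries:
                if all(packet.get(k) == v for k, v in match.items()):
                    return action
            return "send_to_controller"            # table miss: punt upstream

    # A controller could program a direct east-west path between two edge
    # switches instead of hair-pinning the traffic through the core:
    table = FlowTable()
    table.install_flow({"dst_mac": "00:aa:bb:cc:dd:02"}, "output:port_7", priority=10)
    table.install_flow({"eth_type": 0x0806}, "flood", priority=1)   # ARP
    print(table.lookup({"dst_mac": "00:aa:bb:cc:dd:02"}))           # output:port_7
    ```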

  2. Tomi Engdahl says:

    Report: 10-GbE entering next major stage of volume server adoption
    http://www.cablinginstall.com/articles/2013/february/crehan-server-adoption-report.html

    In its recently released 4Q12 Server-class Adapter & LAN-on-Motherboard (LOM) Report, Crehan Research sees 10 Gigabit Ethernet (10-GbE) entering its next major stage of volume adoption, driven by many public cloud, Web 2.0, and massively scalable data center companies deploying 10GbE servers and server-access data center switches.

    “Although there is some overlap in terms of timelines and segments, we believe that we are now in the second of three major adoption stages that 10GbE server networking will follow,”

    First Stage: The majority of the transition during this stage happens in the 2009-2013 timeframe and mostly involves blade servers.

    Second Stage: This stage, which Crehan considers to be the current one, mostly involves public cloud and massively scalable data center companies. Although there have already been some early adopters of 10-GbE server and server-access switches within this segment, the confluence of high-bandwidth applications, technology maturity and attractive pricing are now leading to increased deployments, says the researcher.

    Third Stage: This stage is characterized by the upgrading of the traditional enterprise segment’s large installed base of rack and tower server ports from 1-GbE to 10-GbE. Crehan expects this stage to gain good traction in 2013. Since much of the infrastructure in this segment is 1GBase-T, Crehan forecasts that this is where 10GBase-T will start to see mainstream adoption, and it predicts strong growth for this technology. Crehan also anticipates that this stage will offer the largest server-access port and revenue opportunity.

  3. Tomi Engdahl says:

    Are data centers making ‘market corrections’ on risk assessment?
    http://www.cablinginstall.com/articles/2013/february/data-center-risk.html

A short report from DCD Intelligence, the research arm of DatacenterDynamics, points out that data center administrators previously unconcerned about the costs associated with risk aversion are now taking such costs into consideration. As a result, they are now taking a harder look at their real risks and making budget-based decisions accordingly, in contrast to the previous common practice that probably amounted to overspending.

    The research conducted for the report “strongly indicates that companies are more willing to take on risk than they were before the crisis,”

    Hayes pointed out, “All of this is not to say that companies are taking unnecessary risks. Indeed it would appear that for the past decade companies have been overestimating risk-based concerns since when money was readily available this was the more cautious approach.”

    “Now, even where a high degree of resilience is warranted, a Tier 3 facility is being looked on as sufficient to save on the significant cost of building a Tier 4 facility,”

  4. Tomi Engdahl says:

    Systems planning for new data center projects
    http://www.cablinginstall.com/articles/2013/february/data-center-systems-planning.html

    A new white paper from APC-Schneider Electric advances the idea that the planning of a data center physical infrastructure project need not be a time consuming or frustrating task.

    “Experience shows that if the right issues are resolved in the right order by the right people, vague requirements can be quickly translated into a detailed design,” states the paper’s introduction.

  5. Tomi Engdahl says:

    Google’s 10 tips for better data center design
    http://www.cablinginstall.com/articles/2013/03/google-data-center-design.html

    As reported by GigaOM’s Ucilia Wang, Google’s vice president of data centers, Joe Kava, recently described how the Internet search giant’s pursuit of innovative data center designs largely corresponds to the company’s original ten governing rules or core defining principles.

  6. Tomi Engdahl says:

    Tapping: It’s not just for phones anymore
    http://www.cablinginstall.com/articles/print/volume-21/issue-3/features/tapping-its-not-just-for-phones-anymore.html?cmpid=$trackid

    Integrated tapping technology allows administrators to monitor data center traffic without disrupting the production environment.

  7. Tomi Engdahl says:

    TIA data-center standard
    http://www.cablinginstall.com/articles/print/volume-12/issue-8/contents/design/tia-data-center-standard-nearing-completion.html?cmpid=$trackid

    The TIA/EIA-942 standard will define new terms and address media selections, the center’s physical environment, and equipment placement.

  8. Tomi Engdahl says:

    Google’s 10 rules for designing data centers
    http://gigaom.com/2013/03/05/googles-10-rules-for-designing-data-centers/

Google’s vice president of data centers, Joe Kava, outlines how the search giant’s pursuit of data center designs corresponds nicely to the company’s ten governing rules. Well, almost.

  9. Tomi Engdahl says:

    Silicon photonics to re-vamp the data center?
    http://www.cablinginstall.com/articles/2013/04/luxtera-ethernet-summit.html

“Silicon CMOS Photonics continues to move into mainstream market sectors by enabling flexibility, scalability and throughput, for truly universal optical connectivity,” comments Bergey. “At this year’s Ethernet Summit, we look forward to discussing how we are best equipped to support high-performance data centers’ insatiable demand for low-cost bandwidth.”

    Silicon CMOS photonics technology is set to figure in as a disruptive technology for data center networking and computing demands, contends the company. The presentation by Luxtera will present the opportunities around Silicon CMOS photonics and how the technology will solve issues of today and tomorrow inside the data center, paving the way from 100Gb to 400Gb bandwidth, and beyond.

  10. Tomi Engdahl says:

    Where to place your cooling units
    http://www.cablinginstall.com/articles/print/volume-21/issue-4/features/where-to-place-your-cooling-units.html?cmpid=$trackid

    Options abound and the stakes are high, so choosing where to put cooling units in a data center is of paramount concern.

    There is an awful lot of heat to remove from the racks in your data center. And with every technology refresh, there is ever-more heat. Per-rack dissipation has gone from 1 to 4 or 5 kW, with some facilities at 12 to 15 kW and 60 kW possible. But you need to juggle space, cooling efficiency and a load of other factors. So where should you put your chillers–within row, within rack, at the top, bottom or side?

    There are many ways to reduce the heat in the data center

    Once the technology has been chosen, the next step is deciding where to place it. Despite the rise of free air cooling and liquid cooling solutions, CRAC units are still the most common way of cooling a data center. As mentioned, however, once installed CRAC units are inflexible.

    With hot/cold aisle containment, CRAC units need to be perpendicular to the hot aisle; careful monitoring of airflow is important to ensure that heat is evenly removed from the aisle. Otherwise, hot spots will still occur.

    For blade servers and HPC, consider in-rack cooling.
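
    To put those per-rack heat figures in perspective, the short sketch below applies the common rule-of-thumb airflow formula CFM ≈ 3.16 × watts / ΔT(°F). The 20°F supply/return temperature difference is an assumption, so treat the results as ballpark numbers only.

    ```python
    # Rough airflow sizing using the rule of thumb CFM ≈ 3.16 × watts / ΔT(°F).

    def required_airflow_cfm(heat_watts, delta_t_f=20.0):
        """Approximate airflow (CFM) needed to remove heat_watts of heat
        at a supply/return temperature difference of delta_t_f degrees F."""
        return 3.16 * heat_watts / delta_t_f

    for rack_kw in (5, 15, 60):                    # per-rack loads cited above
        print(f"{rack_kw:>2} kW rack: ~{required_airflow_cfm(rack_kw * 1000):,.0f} CFM")
    ```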

  11. Tomi Engdahl says:

    Cisco taps Microsoft’s Cloud OS for new datacenter portfolio
    http://www.zdnet.com/cisco-taps-microsofts-cloud-os-for-new-datacenter-portfolio-7000013836/

    Summary: Cisco is launching new datacenters that revolve around Microsoft technologies to deliver go-to-market initiatives.

    Cisco and Microsoft have teamed up on a series of new datacenter ventures designed to simplify cloud deployments.

    Based on Microsoft’s Cloud OS, the suite of solutions rely on the combination of Cisco’s Unified Data Center architecture with Microsoft’s Fast Track architecture for private clouds.

    The tech giants tout that the merger of these architectures should streamline the management of combined Cisco and Microsoft data center environments while improving agility to provision resources.

    Microsoft customers, in particular, will also gain access to Cisco’s Unified Computing System of servers, which combine networking, virtualization, and storage access into a single architecture.

  12. Tomi Engdahl says:

    Cisco ports Nexus 1000V virtual switch to Microsoft’s Hyper-V
    Hooks UCS servers into Systems Center, fast-tracks Windows infrastructure clouds
    http://www.theregister.co.uk/2013/04/11/cisco_nexus_ucs_microsoft_hyperv_sc2012/

    Upstart server-maker Cisco is bounding around the Microsoft Management Summit this week in Las Vegas to talk about how it is plugging its technologies into Redmond’s cloud stack.

    First up, as it promised it would do back in the summer of 2011, Cisco has ported its Nexus 1000V virtual switch to run atop Microsoft’s Hyper-V server virtualization hypervisor.

    While Microsoft has its own virtual switch, called Hyper-V Extensible Switch, there are others that run in conjunction with Windows, such as the vNetwork switch buried inside of VMware’s ESXi hypervisor. The Open vSwitch created by Nicira is popular on Linux-based virtualization platforms and is now controlled by VMware. NEC in January launched its own freebie ProgrammableFlow 1000 Virtual Switch for Hyper-V.

    Still, support for the Nexus 1000V is important for the Windows stack because Cisco is still, by far, the dominant supplier of physical switches in the data center.

    The important thing for Cisco is that its Nexus 1000V virtual switch is the common element across whatever hypervisors enterprises choose. This is why Cisco already has the Nexus 1000V plugged into Red Hat’s KVM hypervisor and is working on integrating it with Citrix Systems’ Xen hypervisor – which has fallen into fourth place in the server virtualization beauty pageant unless you count all of the public clouds that use either XenServer (like Rackspace Hosting does) or a tweaked homegrown Xen (as Amazon Web Services does).

The Nexus 1000V comes in a freebie Essential Edition with the basic switch. The Advanced Edition costs $695 and has a plug-in for VMware’s vCenter Server management console, as well as extending Cisco’s TrustSec security policies for physical switches down to virtual switches.

  13. Tomi Engdahl says:

    Intel unveils new reference architectures for data centers, telecom networks
    http://www.cablinginstall.com/articles/2013/04/intel-unveils.html

    Three strategic reference architectures that will enable the IT and telecom industries to accelerate hardware and software development for software-defined networking (SDN) and network function virtualization (NFV) were recently announced by Intel at the Open Networking Summit conference.

    Aimed at the telecommunications, cloud data center and enterprise data center infrastructure market segments, the new reference architectures combine open standards for SDN and NFV with Intel’s hardware and software to enable networks to be more agile and intelligent so they can adapt to changing market dynamics, says the company.

    SDN and NFV are complementary networking technologies poised to transform how networks are designed, deployed and managed across data center and telecom infrastructure environments, claims Intel.

    By separating control and data planes, SDN allows the network to be programmed and managed externally at much larger and more dynamic scale for better traffic control across the entire data center. NFV allows service providers to virtualize and manage networking functions such as firewall, VPN or intrusion detection service as virtual applications running on a high-volume Intel x86-based server.

    “SDN and NFV are critical elements of Intel’s vision to transform the expensive, complex networks of today to a virtualized, programmable, standards-based architecture running commercial off-the-shelf hardware,”

  14. Tomi Engdahl says:

    Infographic: 5 vital signs of healthy data center infrastructure
    http://www.cablinginstall.com/articles/2013/05/data-center-infographic.html

    Emerson Network Power (NYSE: EMR) has released a list of five vital signs to help critical facilities managers assess the health of their data center infrastructure. The list, and accompanying infographic, details five vital signs of a healthy 5,000 square-foot data center.

    According to Emerson Network Power, data center managers can begin assessing the state of their own data center by examining the current performance of the following vital signs.

    1. Effective Cooling: Cooling accounts for approximately 40 percent of total energy used within the average data center.

    2. Flexibility and Scalability: Healthy data center designs should incorporate well thought out floor layouts, systems and equipment to meet current data center requirements, while ensuring the ability to adapt to future growth and demands.

    3. Reliable and Cost-Saving Power and Energy: Emerson’s Energy Logic industry intelligence has shown that 1 W of savings at the server component level can create 2.84 W of savings at the facility level.

    4. Routine Service and Maintenance: For established facilities, preventive maintenance has proven to increase system reliability.

    5. Proper Planning and Assessment: Preventive maintenance should be supplemented by periodic data center assessments, which can help identify vulnerabilities and inefficiencies resulting from constant change.

  15. Tomi Engdahl says:

    CloudEthernet Forum formed to scale networks into virtual age
    All the kids in the Metro Ethernet Forum want to be members of the new club
    http://www.theregister.co.uk/2013/05/27/cloud_ethernet_forum_launched/

    Nine of the majors in the Ethernet market have joined up to create the CloudEthernet Forum which they say will help the venerable networking protocol adapt to the challenges of large-scale cloud services.

    The forum is being spun out of the Metro Ethernet Forum – MEF – with the initial cabal comprising Alcatel-Lucent, Avaya, Equinix, HP, Juniper Networks, PCCW, Spirent Communications, Tata Communications and Verizon.

    There’s no doubt that Ethernet’s showing its age: in a world where tens of thousands of virtual server machines can live in a data centre, some of Ethernet’s constraints look quaint (for example, 802.1Q’s 4,096 limit on the VLAN number).

    As well as VLAN scaling, the forum will be addressing performance at Layer 2, large network resilience, and trying to displace the last pockets of non-Ethernet networking (particularly in big storage).
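
    The 4,096 figure falls out of the 12-bit VLAN ID field in the 802.1Q tag control information (3-bit PCP, 1-bit DEI, 12-bit VID). The short sketch below packs and unpacks that field; the example values are arbitrary.

    ```python
    # 802.1Q Tag Control Information: 3-bit PCP | 1-bit DEI | 12-bit VID.

    def pack_tci(pcp, dei, vid):
        assert 0 <= vid < 2 ** 12, "only 4096 VLAN IDs fit in 12 bits"
        return (pcp & 0x7) << 13 | (dei & 0x1) << 12 | vid

    def unpack_tci(tci):
        return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0x0FFF

    print(2 ** 12)                                  # 4096 possible VLAN IDs
    print(hex(pack_tci(pcp=5, dei=0, vid=100)))     # 0xa064
    print(unpack_tci(0xA064))                       # (5, 0, 100)
    ```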

  16. Tomi Engdahl says:

    Building a People-Centric Datacenter
    http://www.zdnet.com/building-a-people-centric-datacenter-7000016428/

    Summary: When redesigning your datacenter, people-centric IT must be a priority.

    If you’re ever looking for a way to frustrate your users, making it difficult to log on is a great way to start. Unfortunately, the pressure to adopt a wide variety of cloud-based services is forcing many companies in exactly that direction.

    We saw some of the benefits of a hybrid cloud in Keeping Your Options Open with a Hybrid Cloud. Flexible sourcing of IT allows organizations to optimize each service for cost, functionality and usability, so it is a great opportunity. However, left unchecked, this approach has the potential to lead to an authentication nightmare. Duplicate credentials to remember, re-authentication with each service…what a way to alienate the business.

    Of course, that’s not the only downside. Poor identity management also makes the systems less secure. Users find their own ways to cope: for example, re-using passwords across many systems, choosing weak credentials, or writing passwords down in an accessible location.

    A much more attractive option is to make people-centric IT a priority as you redesign your datacenter. You will need a central identity store if you want to manage your users across multiple datacenters and cloud providers. The store itself may be on-premises (e.g. Windows Server Active Directory) or it could be hosted in the cloud (e.g. Windows Azure Active Directory).

  17. Tomi Engdahl says:

    Animated CAD illustrates options for sustainable data center wiring
    http://www.cablinginstall.com/articles/2013/06/sust-data-center-video.html

    This new video from Hubbell Premise Wiring relies on a kind of animated CAD presentation to illustrate the many options and angles for wiring up and otherwise provisioning a sustainable data center.

  18. Tomi Engdahl says:

    Sears converts retail stores into data centers
    http://www.cablinginstall.com/articles/2013/05/sears-data-centers.html

    “Recognizing the world needs less space for retail and more to store data, Sears plans to turn some of Sears and Kmart locations into data centers and disaster recovery spaces.”

  19. Tomi Engdahl says:

    Ensuring code compliance in your data center
    http://www.cablinginstall.com/articles/print/volume-21/issue-5/features/ensuring-code-compliance-in-your-data-center.html

    There’s more than just the NEC to think about, as a data center’s electrical equipment is subject to other NFPA as well as OSHA specifications.

    Working with electrical equipment in a data center can be a dangerous job. Electrocutions are the fourth-leading cause of traumatic occupational fatalities, and according to the American Society of Safety Engineers, the United States averages more than 3,600 disabling electrical contact injuries annually. On average, one person dies in the workplace from electrocution every day.

While working on electrical equipment can be dangerous, the U.S. Occupational Safety and Health Administration (OSHA) and the National Fire Protection Association (NFPA) have gone to great lengths to create standards and codes to help create safe environments and to prevent electrical accidents. Most recently, in 2012, the NFPA released an update to NFPA 70E: Standard for Electrical Safety in the Workplace, making significant changes in the areas of safety, maintenance and training.

    Compliance with OSHA codes and regulations is mandatory. Compliance with NFPA 70E, while technically not required, is about the only practical way to demonstrate and assure compliance with OSHA requirements. If a data center is noncompliant, it not only jeopardizes the safety of its workers, but it also faces costly fines, shutdowns or even litigation.

  20. Tomi Engdahl says:

    Everything you need to know about physical security and cybersecurity in the Google data center

    The following video from Google covers most major aspects of physical security and cybersecurity measures for the Internet search monolith’s data center operations. Topics including IP-based key access and video monitoring are covered, as are fire-suppression and data backup technologies.

    Google data center security YouTube
    http://www.youtube.com/watch?feature=player_embedded&v=wNyFhZTSnPg

  21. Tomi Engdahl says:

    Data Center Game-Changer: How Will You Be Impacted by Data Center Fabrics?
    http://www.belden.com/blog/datacenters/Data-Center-Game-Changer-How-Will-You-Be-Impacted-by-Data-Center-Fabrics.cfm

    A traditional three-tier switching architecture using core, aggregation and access switches is not ideal for large, virtualized data centers. For one server to communicate with another, the data may need to traverse north from an access switch along a hierarchical path through aggregation switches and a core switch and then south again through more switches before reaching the other server.

    Data center switch fabrics that typically use only one or two tiers of switches are now widely viewed as the optimal architectures to enable east-west traffic. These flattened architectures provide low-latency and high-bandwidth communications between any two points to meet the needs of virtualized networks and ever-increasing application and traffic load.

In data center switch fabric architectures, any server can communicate with any other server via no more than one interconnection switch path between any two access switches. Data center switch fabric architectures feature switches with large numbers of connections to other switches that are all active.
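
    As a back-of-the-envelope illustration of how such a two-tier leaf/spine fabric scales, the sketch below computes server capacity and oversubscription. The port counts and the 10/40-GbE link speeds are illustrative assumptions, not figures from the article.

    ```python
    # Two-tier leaf/spine sizing sketch; all parameters are assumed examples.

    def leaf_spine_capacity(spine_ports, leaf_ports, uplinks_per_leaf,
                            server_gbps=10.0, uplink_gbps=40.0):
        leaves = spine_ports                        # each spine port feeds one leaf
        downlinks = leaf_ports - uplinks_per_leaf   # server-facing ports per leaf
        servers = leaves * downlinks
        oversub = (downlinks * server_gbps) / (uplinks_per_leaf * uplink_gbps)
        return leaves, servers, oversub

    leaves, servers, oversub = leaf_spine_capacity(spine_ports=32, leaf_ports=48,
                                                   uplinks_per_leaf=4)
    print(f"{leaves} leaves, {servers} servers, {oversub:.1f}:1 oversubscription")
    # Any server reaches any other in at most leaf -> spine -> leaf, i.e. two hops.
    ```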

  22. Tomi Engdahl says:

    Technological Advancements Enable Easy Back-to-Basic Structured Cabling Design
    http://www.belden.com/blog/datacenters/Technological-Advancements-Enable-Easy-Back-to-Basic-Structured-Cabling-Design.cfm

    In the Data Center, migration to 40 and 100-gigabit infrastructure deployment and flattened architectures are causing optical loss budgets to shrink.

    Unfortunately, the loss values of many pre-terminated fiber solutions have only allowed for two mated pairs in a channel, which has limited the ability to deploy manageable, scalable and secure networks.

    In fact, the current insertion loss of 0.75 dB per mated pair defined by TIA allows for just one mated pair in both 10- and 40-GbE fiber channels

    How many connection points is optimum? Let’s take a look.

    2-Point Topology

While less expensive in materials, a two-point topology with just two mated pairs requires high-density patching at the core. This can cause difficult, insecure access to critical switch ports, creating the risk of interrupting live traffic.

    5-Point Topology

    Considered the pinnacle solution, a five-point topology allows for both a cross-connect at the core and ZDAs at each equipment row. This offers low-density, easy-access patching at the core and enables all cabling to be preinstalled from the core to the ZDAs.

    While few vendors’ pre-terminated assemblies can even support four (or even three) mated connections, Belden offers the industry’s lowest loss connectivity (0.2 dB for MPOs and 0.15 dB for LCs) to support a four-point topology in 40 GbE channels using both OM3 and OM4 fiber, or a five-point topology in both 40 GbE and 16 Gb Fibre Channel applications using OM4.
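
    A worked channel-loss sketch along these lines is shown below. The 0.75 dB, 0.2 dB and 0.15 dB mated-pair figures come from the text above; the 1.5 dB channel budget for 40GBASE-SR4 over OM4 and the 3.5 dB/km multimode attenuation at 850 nm are assumed values used only for the example.

    ```python
    # Channel loss = fiber attenuation over the length + sum of mated-pair losses.
    FIBER_DB_PER_KM = 3.5        # assumed multimode attenuation at 850 nm
    BUDGET_40G_OM4 = 1.5         # assumed 40GBASE-SR4 channel budget, dB

    def channel_loss(length_m, mated_pair_losses):
        return FIBER_DB_PER_KM * length_m / 1000.0 + sum(mated_pair_losses)

    # Two mated pairs at the TIA worst-case 0.75 dB exceed the budget on their own:
    print(channel_loss(100, [0.75, 0.75]), ">", BUDGET_40G_OM4)                 # ~1.85 dB

    # Five connection points built from low-loss parts can still fit within it:
    print(channel_loss(100, [0.2, 0.2, 0.15, 0.15, 0.2]), "<", BUDGET_40G_OM4)  # ~1.25 dB
    ```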

  23. Tomi Engdahl says:

    Maintaining Polarity—A Not-So-Simple Necessity
    http://www.belden.com/blog/datacenters/Maintaining-Polarity-A-Not-So-Simple-Necessity.cfm

    In today’s Data Centers, 12-fiber preterminated array cabling is frequently used to establish an optical path between switch tiers. Accomplishing this path in a way that matches the transmit signal (Tx) on one switch port to the corresponding receive signal (Rx) on the other switch port is referred to as polarity.

    In August 2012, TIA published Addendum 2 to the ANSI/TIA 568-C.0 Generic Telecommunications Cabling for Customer Premises Standard that provides three example methods to establish polarity of optical fiber array systems—Connectivity Method A, B and C.

    The systems components deployed for multiple duplex signals includes a breakout cassette at each end that consists of a multi-fiber push-on (MPO) adapter to multiple duplex adapters, typically LC. Preterminated 12-fiber trunk cables (or multiples thereof) connect to the MPO adapter on the back of the two cassettes, and duplex patch cords are used to connect the equipment to the front of the cassettes.

A simple way to look at these polarity methods is that Method A is the most straightforward but requires a different patch cord at one end. Method B uses the same patch cord at both ends, but the cassettes (circled in red in Figure 1) must be flipped over at one end so that the fiber that originated in position 1 is mapped to position 12. Method C is a variant of Method A, but with the cross-over implemented in the trunk cable instead of the patch cord. Both Method B and Method C have the advantage of using the same patch cords at both ends.
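
    The fiber-position mapping each trunk type implies can be captured in a few lines, as in the sketch below. This is a simplified view of the TIA method descriptions (straight-through for A, reversed for B, pair-wise flipped for C) and leaves out the cassette and patch-cord details.

    ```python
    # Where trunk fiber N at one end lands at the other end, per polarity method.

    def trunk_map(method, position):
        if method == "A":                 # straight-through: 1->1 ... 12->12
            return position
        if method == "B":                 # reversed: 1->12, 2->11, ...
            return 13 - position
        if method == "C":                 # pair-wise flip: 1->2, 2->1, 3->4, ...
            return position + 1 if position % 2 else position - 1
        raise ValueError("unknown polarity method")

    for m in ("A", "B", "C"):
        print(m, [trunk_map(m, p) for p in range(1, 13)])
    # A: 1..12 in order, B: 12..1, C: 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11
    ```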

  26. Tomi Engdahl says:

    Technical paper: Reconsidering physical topologies with 10GBase-T
    http://www.cablinginstall.com/articles/2013/06/broadcom-10gbaset-physical-topologies.html

    A new technical white paper from Broadcom illustrates how 10GBase-T and twisted-pair cabling can dramatically lower the capex of interconnect in the data center. The paper highlights the ways in which data center interconnect includes both the connection between the Top-of-Rack switch (ToR) and the server, and the connection between the ToR and the spine switch.

  27. Tomi Engdahl says:

    Study: Outmoded data center infrastructure hampering virtualization, cloud adoption
    http://www.cablinginstall.com/articles/2013/07/brocade-antiquated-datacenter-survey.html

    A global study commissioned by Brocade (NASDAQ: BRCD) reveals that many organizations still depend on antiquated data center infrastructure, with significant negative impact on both productivity and end-user experience.

    The premise of the study was that the data center network has never been placed under greater strain, as today’s organizations interact with data and applications constantly, whether for video conferencing or accessing database applications on remote devices. However, 61 percent of data center personnel confided that their corporate networks are not fit for the intended purpose, with almost half (41 percent) admitting that network downtime has caused their business financial hardship either directly — through lost revenue or breached SLAs — or from their customers’ lack of confidence.

    “Many data centers that exist today are based on 20-year-old technologies, and the simple fact is that they can no longer keep up with demand,” remarks Jason Nolet, vice president, data center switching and routing at Brocade. “Virtualization and cloud models require greater network agility and performance, as well as reduced operational cost and complexity. The findings clearly show that despite apparent investment in the past few years, most organizations are still ill-equipped for current business demands.”

  28. Tomi Engdahl says:

    10 mistakes to avoid when commissioning a data center
    http://www.cablinginstall.com/articles/2013/07/ten-data-center-mistakes.html

    According to the paper, the ten mistakes are as follows:

    1. Failure to engage a commissioning agent early on.

    2. Failure to align with current technology.

    3. Failure to identify clear roles.

    4. Failure to validate script.

    5. Failure to avoid budget cuts.

    6. Failure to simulate heat loads.

    7. Failure to identify weak links.

    8. Failure to publish emergency procedures.

    9. Failure to limit human fatigue.

    10. Failure to update documentation.

  29. Tomi Engdahl says:

    A 24-fiber interconnect solution: The right migration path to 40/100G
    http://www.cablinginstall.com/articles/print/volume-20/issue-9/features/a-24-fiber-interconnect-solution-the-right-migration-path-to-40-100g.html?cmpid=$trackid

    Maximum fiber use, reduced cable congestion and increased fiber density make a 24-fiber trunking and interconnect a preferred option to prepare for next-generation speeds.

    Video views on YouTube climbed from 100 million per day in 2006 to well over 4 billion per day in 2012. Song downloads from iTunes increased from 5 billion in 2008 to more than 16 billion by 2012. According to Cisco’s Visual Networking Index Global Mobile Data Traffic Forecast Update 2011-2016, average smartphone use tripled in 2011 and forecasts estimate that by the end of 2012, the number of mobile-connected devices will exceed the number of people on earth. Over the next two years, our world will create, process and store more data than in the entire history of mankind.

    Data centers are at the heart of the tremendous amount of business data needing to be transmitted, processed and stored. In the data center, fiber-optic links are vital for providing the bandwidth and speed needed to transmit huge amounts of data to and from a large number of sources.

    Typical transmission speeds in the data center are beginning to increase beyond 10 Gbits/sec. In 2010 the Institute of Electrical and Electronics Engineers (IEEE) ratified the 40- and 100-Gbit Ethernet standard, and already leading switch manufacturers are offering 40-GbE blades and more than 25 percent of data centers have implemented these next-generation speeds. It is anticipated that by the end of 2013, nearly half of all data centers will follow suit. Today’s enterprise businesses are therefore seeking the most effective method to migrate from current 10-GbE data center applications to 40/100-GbE in the near future.

    A 24-fiber data center fiber trunking and interconnect solution allows enterprise data center managers to effectively migrate from 10-GbE to 40/100-GbE.

    In 2002, the IEEE ratified the 802.3ae standard for 10-GbE over fiber using duplex-fiber links and vertical-cavity surface-emitting laser (VCSEL) transceivers. Most 10-GbE applications use duplex LC-style connectors; in these setups, one fiber transmits and the other receives.

    To run 100-GbE, two 12-fiber MPO connectors can be used—one transmitting 10-Gbits/sec on 10 fibers and the other receiving 10-Gbits/sec on 10 fibers. However, the recommended method for 100-GbE is to use a 24-fiber MPO-style connector with the 20 fibers in the middle of the connector transmitting and receiving at 10-Gbits/sec and the 2 top and bottom fibers on the left and right unused.

To keep costs down, the objective of the IEEE was to leverage existing 10-GbE VCSELs and OM3/OM4 multimode fiber. The standards therefore relaxed transceiver requirements, allowing 40- and 100-GbE to use arrayed transceivers containing 4 or 10 VCSELs and detectors, respectively. This prevented 40-GbE transceivers from costing 4 times as much as existing 10-GbE transceivers, and 100-GbE transceivers from costing 10 times as much. According to the IEEE 802.3ba standard, multimode optical fiber supports both 40- and 100-GbE over link lengths up to 150 meters when using OM4 optical fiber and up to 100 meters when using OM3 optical fiber.

    It is important to note that singlemode fiber can also be used for running 40- and 100-GbE to much greater distances using wavelength division multiplexing (WDM). While this is ideal for longer-reach applications like long campus backbones, metropolitan area networks and other long-haul applications, the finer tolerances of singlemode fiber components and optoelectronics used for sending and receiving over singlemode are much more expensive and are therefore not feasible for most data center applications of less than 150 meters. Copper twinax cable is also capable of supporting 40- and 100-GbE but only to distances of 7 meters.
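
    The sketch below summarises the parallel-optic fiber usage discussed above. The 40GBASE-SR4 positions follow the conventional 12-fiber MPO layout (an assumption here, since the article does not spell them out), while the 24-fiber 100-GbE layout follows the text: the middle 10 fibers of each row are active and the outermost positions are left dark.

    ```python
    # MPO fiber usage for the parallel-optic layouts described above.

    def lane_map_40g_sr4():
        """12-fiber MPO: 4 x 10G transmit, 4 x 10G receive, middle 4 unused."""
        return {"tx": list(range(1, 5)),            # fibers 1-4
                "unused": list(range(5, 9)),        # fibers 5-8
                "rx": list(range(9, 13))}           # fibers 9-12

    def lane_map_100g_24f():
        """24-fiber MPO (two rows of 12): 10 x 10G each way on the middle fibers."""
        return {"tx": [("top", p) for p in range(2, 12)],
                "rx": [("bottom", p) for p in range(2, 12)],
                "unused": [(row, p) for row in ("top", "bottom") for p in (1, 12)]}

    for name, lanes in (("40GBASE-SR4 (12-fiber)", lane_map_40g_sr4()),
                        ("100GbE (24-fiber)", lane_map_100g_24f())):
        print(f"{name}: {len(lanes['tx'])} Tx, {len(lanes['rx'])} Rx, "
              f"{len(lanes['unused'])} unused")
    ```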

  30. Tomi Engdahl says:

    Survey: Most heavy data center equipment is manually lifted
    September 6, 2013
    http://www.cablinginstall.com/articles/2013/09/serverlift-survey.html

    ServerLift, a company that provides equipment used to lift and move heavy data center equipment, recently reported the results of an anonymous survey it conducted “at a large tech trade show,” the company said.

When reporting these results, ServerLift cited an OSHA document as support for the use of equipment like its own to move heavy objects—in data centers or any other work environment. The company says that OSHA’s Technical Lifting Manual “recommends using ‘mechanical means to avoid injury when lifting equipment heavier than 50 pounds.’”

  31. Tomi Engdahl says:

    Schneider Electric, Intel integrate DCIM, KVM for improved data center server access
    http://www.cablinginstall.com/articles/2013/08/schneider-intel-dcim-kvm.html

    Schneider Electric has introduced what it claims is the first data center infrastructure management (DCIM) software that provides server access without the need for additional hardware. The new product module for Schneider’s StruxureWare Data Center Operation is the result of leveraging Intel’s Virtual Gateway technology to provide full server lifecycle access and power cycling for remote management.

    “Virtual Gateway is an extension of Intel’s Data Center Manager (DCM) software, and provides important technological advances for our middleware,” says Jeff Klaus, general manager of Intel Data Center Solutions. “The joint effort with Schneider Electric broadens the use of our technology and will help data centers eliminate unnecessary hardware spend.”

  33. Tomi Engdahl says:

    Too soon for 40GBase-T? Structured cabling giant R&M warns firms against ‘premature investment’
    http://www.cablinginstall.com/articles/2013/07/rm-warns-40g-investment.html

    40GBASE-T data center cabling standardization [predicted] to be introduced by 2016; R&M cautions businesses against premature investment

    While R&M is convinced that the advantages 40GBASE-T offers in terms of speed and data volume will outclass the entire performance of previous copper cabling in data centers, the cabling specialist has warned Middle East organizations against premature investments as long as standards are not defined and appropriate components are not fully developed.

    While 10GBASE-T was defined for general applications, 40GBASE-T is intended for use directly in data centers. Jean-Pierre Labry added that the market cannot ignore 40GBASE-T. “Its economic potential is simply too significant judging from current R&M market observations and experience with millions of installations for its own high-end Cat. 6A and fiber optic solutions,” he said.

    With a range of 30 meters, the future standard closes the gap between direct-attach cables 7 or 15 meters long for intra-rack cabling and structured fiber cabling with a range of up to 150 meters. An inexpensive copper alternative capable of carrying greater data volumes more quickly than before is therefore needed for structured cabling over medium distances, e.g. between cabinets in an aisle in a computer room.

  34. Tomi Engdahl says:

    40-, 100 Gigabit Ethernet seen topping $4B by 2017, driven by cloud
    http://www.cablinginstall.com/articles/2013/07/delloro-40-100g.html

    Dell’Oro Group is forecasting that, within the larger Ethernet switch market, revenues for 40 Gigabit Ethernet and 100 Gigabit Ethernet will exceed $4 billion by 2017.

    According to the firm’s latest research, the L2-3 Ethernet Switch market is forecast to approach $25B in 2017, with future growth to be driven primarily by sales of higher speed Ethernet switches optimized for larger data center deployments, as the core of the data center quickly migrates to 40 Gigabit and 100 Gigabit Ethernet.

    “The data center will be the site of almost all revenue growth during the forecast horizon, as the cloud forever changes how networks are built,” comments Alan Weckel, vice president at Dell’Oro Group. “In general, we are moving toward a period of data center consolidation, where there will be fewer, larger data centers and the ownership of data center equipment will change.”

  35. Tomi Engdahl says:

    Why ‘multi-everything’ is normal for cabling-certification
    http://www.cablinginstall.com/articles/print/volume-21/issue-7/features/why-multi-everything-is-normal-for-cablling-certification.html

    Today’s contractors must be able to manage multiple environments, media, standards and technologies in order to succeed.

Today’s information technology (IT) discussions are filled with terms like cloud, virtualization, SAN, SaaS and SLA. Rarely is the physical layer part of the buzz, but as we know in our industry, all network technologies lead back to that critical, foundational layer and the cabling infrastructure that supports it. Like the technologies around it, Layer 1 of the seven-layer OSI Model is changing. Consultants and network owners who do not embrace this change by addressing the mounting complexities of installation and certification will struggle for profitability and their very survival as a business.

    Certification’s future

    One possible answer to today’s demanding requirements is to add more expert project managers to the process. They could apply the insight, training and oversight needed to eliminate errors and improve efficiency. This option, however, is not economically feasible in many circumstances. In those cases, another solution is a testing tool that can help take on that role, managing the test process as well as the test itself.

A test solution with these capabilities is more agile, has the ability to address all six steps in the certification process, and can manage multiple testing scenarios. To solve the multiple challenges that exist in today’s certification environment, a tool will need to be built from the ground up for the “multi” environment. If it is built that way, it can help project managers and technicians meet the evolving challenges associated with cable certification.

  36. Tomi Engdahl says:

    BSRIA: Global structured cabling market will exceed $8B by 2020
    http://www.cablinginstall.com/articles/2013/09/bsria-market-2020.html?cmpid=$trackid

    A newly released “hot topic study” from BSRIA charts the global structured cabling market through the year 2020, concluding that by then the total market will exceed $8 billion. The market totaled $6 billion in 2012, BSRIA says, and will chart a course of 4-percent annual growth through 2020.

“The structured cabling market is facing a turbulent time,” BSRIA said when announcing the study’s availability. “Structured cabling in data centers continues to move toward the use of fiber. The number of smaller data centers that mainly use copper will decline as the penetration of cloud services increases and outsourcing becomes more prevalent.”

    The global structured cabling market is expected to continue to grow, from $6 billion in 2012 to $8.3 billion in 2020.

    Cabling in data centers accounts for $1.1 billion in 2012 and is expected to grow to $1.6 billion in 2020. The increasing use of smart phones, tablets and laptops and the need for file sharing, networking and instant access will massively increase the need for data storage and speed.

  37. Tomi Engdahl says:

    Tutorial: Installing overhead infrastructure in raised-floor applications
    http://www.cablinginstall.com/articles/2013/09/leviton-tutorial-video.html

    The following short (4:27) video provides step-by-step walk-through instructions for installing Leviton’s newly launched Overhead Infrastructure Platform in a raised-floor application. The tutorial covers mounting and assembling a single bay of the platform on a raised floor.

  38. Tomi Engdahl says:

    11 hot cabling tips for cool data centers
    http://www.cablinginstall.com/blogs/2013/09/11-hot-cabling-tips-for-cool-data-centers.html

    Directly quoted, they are as follows:

    1. Measure twice, cut once.
    2. Label, label, label.
    3. Don’t skimp on terminations.
    4. Don’t skip the test.
    5. Keep patch cables short.
    6. Color code.
    7. Upsize your conduit.
    8. Make your design cable-friendly.
    9. Separate Cat 5 and power lines.
    10. Keep cables cool.
    11. Spaghetti prevention.

  39. Tomi Engdahl says:

    New power protection devices from ABB safeguard industrial, data center environments
    http://www.cablinginstall.com/articles/2013/09/abb-power-protection.html

    ABB’s Power Conversion business (New Berlin, WI) has released its PCS100 UPS-I and PCS100 AVC power protection devices to the North American market. These inverter-based systems protect sensitive industrial loads from voltage sags and other voltage disturbances with fast, accurate regulation and load voltage compensation, says the company. The AVC is effective in a wide range of manufacturing and industrial settings, and the UPS-I is specially designed for semiconductor fabrication and data center applications. The North American introduction of the products follows a successful launch in Europe and Asia.

    Voltage sags and other voltage disturbances are common in industrial electricity supplies, accounting for up to 70 percent of all unscheduled production downtime, and resulting in expensive damage to equipment and product loss, estimates ABB. The problem has increased in recent years as modern industrial facilities have installed more complex equipment such as PLCs, control relays, variable speed drives and robots that are more sensitive to voltage sags and resulting outages.

Voltage sag events, also known as voltage dips or brownouts, are a reduction in the incoming voltage for a short period of time, typically less than 0.25 seconds. They are characterized by amplitudes below 90 percent of the nominal range. While not complete voltage interruptions, they are the most common industrial power quality problem, and are often deep enough to cause equipment control circuits to drop out and reset. The consequences of sags for industrial operations include unexpected downtime, lost revenue, wasted materials, poor product quality, equipment damage, and in the worst scenarios, injury to personnel. The aggregate cost of unreliable electricity to the US economy is approximately $160 billion annually, with the average premium grid manufacturing facility experiencing six to twenty significant voltage sags per year.
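
    A minimal classifier for those thresholds might look like the sketch below. The 90 percent and 0.25-second figures are the ones quoted above; the 10 percent “interruption” cut-off and the event labels are extra assumptions added for the example.

    ```python
    # Classify an RMS voltage reading using the sag thresholds quoted above.

    def classify_event(v_rms, nominal, duration_s):
        level = v_rms / nominal
        if level >= 0.90:
            return "normal"
        if level < 0.10:                       # assumed interruption threshold
            return "interruption"
        return "sag" if duration_s < 0.25 else "sustained undervoltage"

    print(classify_event(198.0, 230.0, 0.12))  # 86% of nominal for 120 ms -> sag
    print(classify_event(228.0, 230.0, 0.12))  # within 90% of nominal -> normal
    ```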

The cost of a single voltage sag ranges from several thousand to several million dollars or higher. In one high-profile example, a voltage sag at a major Japanese electronics manufacturer caused a production disruption of a popular computer chip, resulting in a 20 percent drop in shipments for the following two months, curtailing the availability of many consumer electronic devices.

    The ABB AVC and UPS-I are battery-free power protection solutions focused on significantly reducing unplanned process downtime by ensuring that industrial loads continue to receive a clean, uninterrupted flow of power during major grid disturbances.

  40. Tomi Engdahl says:

    Voltage performance monitor sniffs out data center, critical IT equipment failures
    http://www.cablinginstall.com/articles/2013/07/ideal-voltage-performance-monitor.html

New from Ideal Industries, the VPM Voltage Performance Monitor works where the symptoms of poor-quality voltage occur: at the point where equipment is connected. The company contends that, when a voltage problem is suspected as a cause of equipment failure, the traditional solution has been to place an analyzer on the main service. However, Ideal notes that this approach misses problems at the branch level where sags, swells, impulses, harmonics and other voltage events can adversely affect electronics.

    Simple to use, the VPM offers real time monitoring of TRMS voltage, frequency and harmonics. Once plugged into an outlet, the VPM will measure, categorize and list each voltage event, including its magnitude, duration and the exact time the event occurred.

    The VPM can also be used to determine if voltage is stable enough to connect additional equipment to a circuit or if power conditioners, such as a UPS or surge protector, are required. This is an especially important capability for hospital and IT maintenance engineers who need to monitor voltage quality for mission-critical equipment in the data center or operating room.

  41. Tomi Engdahl says:

    The boom continues – another new data center in Finland

    The data center boom in Finland continued today with the opening of a plush Hansa data center in Vantaa, which takes advantage of the latest technologies in energy use and cooling. The facility is run by TelecityGroup, a leading European carrier-neutral data center company. The new data center is aimed at customers looking to establish an East-West junction.

    TelecityGroup acquired the Finnish companies Academica and Tenue last year. It already had four data centers in the metropolitan area; the Vantaa site is its fifth.

    Hansa is located in Vantaa, in a former Hansa printing house, in the shadow of the smoke stacks of Vantaa Energy’s 195-megawatt power plant. A 10 MVA power line pulled from the plant supplies the two data halls, although this was a less important reason for choosing the site.

    “Electricity is cheaper in Finland than in Britain, but even bigger savings come from data center cooling. The cold weather here keeps the data center cool free of charge about 80 percent of the year,” TelecityGroup CEO Mike Tobin said.

    “We always implement something new in our new data centers, and here the new invention is server cabinets stacked on top of one another, with cold air flowing between them from below. This is the first time in the world this has been done. The solution performs very well.”

    Tobin noted that Finland is a very important hub between East and West for the company. Finland attracts with good transport links, good infrastructure, good availability of skills, and security.

    “Finland is very important to us for many reasons, one of which is that Helsinki is a transit port. Russia’s connections go through here. The submarine cable project to Germany is important to us because it emphasizes Finland’s role. Finland’s position with regard to data protection laws is also good,” Tobin said. As a new trump card he mentioned the planned energy tax reduction for data centers.

    The company is seeking data center customers not only from Finland but also from Russia, the Baltic states and Western Europe.

    The Vantaa facility’s capacity is 5 MW and it can accommodate a couple of thousand rack cabinets. For now the halls hold only a couple of rows of server racks, and the cabinets are still empty.

    “The goal is that the space is filled within five years,” said Sami Holopainen of TelecityGroup Finland sales.

    New kinds of cooling solutions make Hansa unique in the world. A special feature is the way the rows of server racks are placed in two layers on top of each other so that two rows form a cabinet pair. Between them is a two-storey-high hot aisle, while the back sides face the cold aisles from which cooling air is drawn through the cabinets. The hot exhaust air rises up from the aisle and the waste heat is recovered. Cold and warm air do not mix.

    The data center’s cold air is at 24 degrees rather than at freezing Finnish winter temperatures; the cooling fans handle the air-conditioning duty despite the heavy load.

    Source: http://www.3t.fi/artikkeli/uutiset/talous/jatkoa_buumille_taas_uusi_konesali_suomeen

  42. Tomi Engdahl says:

    Closet cleanup: Before and after photos
    http://www.cablinginstall.com/articles/slideshow/2013/09/closet-cleanup-before-and-after-photos.html

    It took four technicians a full week to straighten out five years’ worth of telecom room neglect.

  43. Tomi Engdahl says:

    PLX, FCI team to demo PCIe data center connectivity over optical cabling
    http://www.cablinginstall.com/articles/2013/09/pcie-over-optical-cabling.html

    PLX Technology (NASDAQ: PLXT), a specialist in PCI Express (PCIe) silicon and software connectivity for enabling emerging data center architectures, and FCI, a manufacturer of connectors and interconnect systems, are collaborating on a live demonstration of PCIe over optical cabling, showcased at the 39th European Conference on Optical Communications (ECOC) event in London (September 23-25). PLX and FCI are demonstrating the use of FCI’s new mini-SAS high-density (MSHD) active optical cable (AOC) to provide 32Gbps (PCIe Gen3, x4) optical connectivity in a small-form-factor solution.

    The demonstration highlights how a PLX PCIe switch card connects to a PLX five-bay PCIe expansion card through the use of a standard MSHD connector and FCI’s new MSHD AOC. The expansion card allows any devices connected (such as PCIe adaptors, solid-state drives and NIC cards) to interact with the main motherboard/server as if it were a device installed inside the chassis. The demonstration platform can be used by those interested in developing system solutions for PCIe over non-standard PCIe cabling.
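
    For reference, the quoted 32Gbps figure is simply four PCIe Gen3 lanes at 8 GT/s each; the quick calculation below also shows the usable rate after 128b/130b line coding (higher-layer protocol overhead not included).

    ```python
    # PCIe Gen3 x4 bandwidth: 8 GT/s per lane, 128b/130b line coding.
    GEN3_GT_PER_LANE = 8.0
    ENCODING_EFFICIENCY = 128 / 130

    def pcie_gen3_bandwidth_gbps(lanes):
        raw = GEN3_GT_PER_LANE * lanes
        return raw, raw * ENCODING_EFFICIENCY

    raw, usable = pcie_gen3_bandwidth_gbps(4)
    print(f"raw {raw:.0f} Gb/s, usable ~{usable:.1f} Gb/s (~{usable / 8:.2f} GB/s)")
    ```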

    “We have seen a significant number of customers coming to PLX and asking for high-density, high-performance, low-cost connectivity solutions to expand PCI Express outside the box,” comments Reginald Conley, vice president, applications engineering, PLX. “To be effective and meet the needs of a wide range of PCI Express users, these solutions must possess what we call the ‘triple threat’ connectivity option — copper, optical and AOC. In doing so, the widest range of performance and cost metrics can be met, and Mini-SAS HD has the potential to be one such solution.”

  44. Tomi Engdahl says:

    Schneider Electric unveils pre-fabricated data centers up to 2MW, plus reference designs
    http://www.cablinginstall.com/articles/2013/10/schneider-prefab-data-centers.html

    To aid data center operators in their on-going quest for increased capacity at reduced cost, Schneider Electric has introduced 15 new prefabricated data center modules and 14 “industry-first” data center reference designs.

    The prefabricated data center modules deliver IT, power, and/or cooling integrated with best-in-class data center infrastructure components and Schneider Electric’s StruxureWare Data Center Infrastructure Management (DCIM) software for an easy-to-deploy, predictable data center. Prefabricated modules range in capacities from 90kW to 1.2MW and are customizable to meet end user requirements. The reference designs detail complete data centers scalable in 250kW to 2MW increments and meet Uptime Tier II and Tier III standards.

  45. Tomi Engdahl says:

    Tier 1 and Tier 2 testing, troubleshooting and documentation
    http://www.cablinginstall.com/articles/print/volume-21/issue-10/features/tier-1-and-tier-2-testing-troubleshooting-and-documentation.html

    Familiarity with test standards is critical to ensuring performance in high-speed fiber-optic networks.

    Controlling network loss has become increasingly important as loss budgets get smaller and demands on networks increase. High losses and optical network failure are often caused by contaminated, damaged or poorly polished connectors, poor splices, and micro or macrobends introduced in shipping or installation. The best way to keep your network running efficiently? Test and inspect.

Contamination of connector ends is the leading cause of optical network failures, according to information from NTT Advanced Technology.

    Before we go on to the reasons for testing fiber networks, the types of testing needed, and how to perform this testing, remember that safety is paramount and keep the following in mind.

    Laser eye safety–Do not look directly into lasers on network equipment or test equipment.

    Fiber scrap and shards from fiber preparation–These must be handled properly to avoid punctures and cuts.

    Ensuring all systems are powered off–Unless specifically testing a live system, all network equipment should be shut off.

  46. Tomi Engdahl says:

    Services fuel the next generation data centre
    It’s more than just boxes
    http://www.theregister.co.uk/2013/11/17/services_fuel_data_centre_evolution/

    In its basic form, a data centre is just a big room full of cages and cabinets, with highly reliable power, efficient security, fire and flood protection and a variety of internal and external network connectivity.

    In recent years providers have tried to differentiate their offerings but there is not really that much you can do to dress up what is basically a big noisy room. So for “cold aisle technology”, for instance, read “we have improved the airflow a bit”.

    How, then, can data centres evolve into anything better? Simple: if there is not much you can do with the environment, develop what you can do in that environment and what you can connect it to.

    Keep it private

    If you are a data centre provider with multiple premises, you have the opportunity to provide high-speed physical links between locations.

    Giving your customers the ability to extend their LAN between premises has huge benefits for their disaster recovery strategies and capabilities: with a low-latency physical Gigabit Ethernet link between your premises you can do real replication.

    If your data centres are not close to each other, though, point-to-point links are too costly. One alternative is to look at a layer 3 offering – managed MPLS services are ten-a-penny these days.

    The emerging concept, though, is the virtual private LAN service, or VPLS. This is a virtualised layer 2 service – think of it as a virtual LAN switch in the cloud, into which you can plumb your endpoints.

    VPLS is less well known but the concept has been around for some time.

    There is, of course, a fundamental problem with VPLS. In fact, on reflection, there are two.

    The first is that if layer 2 were a good way to do wide area networking (WAN), we wouldn’t bother with layer 3 in any of our WAN applications. Layer 2 is brilliant when you want to connect A to B in a point-to-point sense because it gives a native connection to the endpoints that looks just like they are plugged into the same LAN switch.

    The big problem with layer 2 networks is broadcast domains. A layer 2 network or VLAN is a single broadcast domain, and if you suddenly connect five distant things together via VPLS you have made yourself a great big broadcast domain whose broadcast traffic grows sharply as you introduce new nodes.

    Connect three offices at 10Mbps and two data centres at 100Mbps (a fairly typical starting point) without enough thought, and a broadcast storm on a data centre edge port will wipe out your offices’ connectivity. Not great.
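
    To make that failure mode concrete, here is a back-of-the-envelope Python sketch; the site names, link speeds and storm_impact helper are invented for illustration and simply mirror the scenario above. Because broadcasts are flooded to every port in the domain, a storm sourced at a 100Mbps data centre edge port fills each 10Mbps office link completely.

        # Back-of-the-envelope sketch: broadcast frames are flooded to every port
        # in a layer 2 domain, so a storm at a fast data centre port saturates the
        # slower office links. Figures are illustrative, not from any real network.

        SITES = {              # access link capacity per site, in Mbps
            "office-1": 10,
            "office-2": 10,
            "office-3": 10,
            "dc-1": 100,
            "dc-2": 100,
        }

        def storm_impact(storm_source, storm_rate_mbps):
            """Show how a broadcast storm sourced at one site loads every other site."""
            for site, capacity in SITES.items():
                if site == storm_source:
                    continue
                # Each remote link receives the full storm rate, capped only by its
                # own capacity -- i.e. the link is simply full.
                offered = min(storm_rate_mbps, capacity)
                print(f"{site}: {offered:.0f} of {capacity} Mbps consumed "
                      f"({offered / capacity:.0%} of the link)")

        if __name__ == "__main__":
            storm_impact("dc-1", storm_rate_mbps=100)   # storm at line rate in one DC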

    So what you will end up doing is putting in layer 3 transit networks to control the traffic flowing over the VPLS network and restrict wide area layer 2 operations to the devices that really need to talk natively at layer 2 to their distant counterparts.

    The second issue is that a VPLS service will, by its nature, operate over some kind of layer 3 (IP/MPLS) network. So in the scenario above you are running layer 3 on a layer 2 tunnel that is established through a layer 3 network, which sits on top of layer 2 technologies.

    So yes, it may well be slower than just having a boring old MPLS network in the first place.
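
    The raw header cost of that layering can be estimated with rough arithmetic. The sketch below assumes a plain, untagged VPLS encapsulation (one MPLS transport label plus one pseudowire label around the customer’s full Ethernet frame); real deployments add VLAN tags, control words and other options, so treat the numbers as indicative only.

        # Rough per-packet overhead arithmetic for carrying a customer Ethernet
        # frame over VPLS. Illustrative only -- the exact encapsulation varies.

        INNER_PAYLOAD = 1500   # customer IP packet, bytes
        INNER_ETH_HDR = 14     # customer Ethernet header carried across the VPLS
        PW_LABEL = 4           # pseudowire (VC) label
        LSP_LABEL = 4          # MPLS transport label
        OUTER_ETH = 18         # provider Ethernet header plus FCS

        inner_frame = INNER_PAYLOAD + INNER_ETH_HDR
        on_wire = inner_frame + PW_LABEL + LSP_LABEL + OUTER_ETH
        overhead = on_wire - INNER_PAYLOAD

        print(f"{overhead} bytes of headers to move {INNER_PAYLOAD} bytes of payload "
              f"({overhead / on_wire:.1%} of each full-size packet)")

    On those assumptions the byte overhead is modest; the slowdown the article is getting at comes more from the extra layers of lookup and tunnelling than from the headers themselves.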

    A quick re-cap: we have said that the way forward for data centre evolution revolves around VPLS, but that VPLS isn’t actually good for very much.

    In fact, VPLS is only a bad choice if you are trying to shoehorn it into a traditional network model. If you use it for the cool things it can provide, it is perfect.

    Like many data centre operators, you don’t provide internet connectivity yourself but instead have three pet internet providers – let’s call them X, Y and Z – with presentations in your telco room. Each of your customers signs up to one of them for its internet service and you patch it into the right ISP with an Ethernet cross-connect. All very traditional.

    Now let’s flex this and introduce VPLS. You define two virtual private switches, one for each customer, in your VPLS network.

    Data centre evolution is initially all about services

    In fact I have known providers whose entire raison d’être is to sell services; they are almost reluctant to rent rack space and do so only because the customer demands it. Others have continued to simply be big, noisy, low-risk rooms.

    More than this, though, data centre evolution is about connectivity to services. Yes, you can achieve a lot using layer 3 networking, but the VPLS model makes service provision an order of magnitude more flexible and quicker to market.

    Reply
  47. Tomi Engdahl says:

    Collaborating to build faster, more energy-efficient data centers
    http://www.te.com/everyconnectioncounts/en/home.s_cid_corp_corp_te_var_aol_ECCgeneral_Onlineadv_701G0000000anHBIAY.html#datacenter
    http://www.te.com/everyconnectioncounts/en/home.s_cid_corp_corp_te_var_aol_ECCgeneral_Onlineadv_701G0000000anHBIAY.html#datacentervideo

    With the rise in popularity of cloud computing and users’ need to access and store vast amounts of data, traditional data centers are seeking new ways to deliver more speed while reducing overall operational costs.

    TE Connectivity engineers are working with our customers to improve data center infrastructure by upgrading the speed, energy efficiency, and density of connections. By delivering innovative new data center components and products at the connection level, TE Connectivity gives today’s modern data center architects greater design flexibility and the ability to manage operational costs more effectively than ever before.

    Reply
  48. Tomi Engdahl says:

    Review: Puppet vs. Chef vs. Ansible vs. Salt
    The leading configuration management and orchestration tools take different paths to server automation
    http://www.infoworld.com/d/data-center/review-puppet-vs-chef-vs-ansible-vs-salt-231308

    The proliferation of virtualization coupled with the increasing power of industry-standard servers and the availability of cloud computing has led to a significant uptick in the number of servers that need to be managed within and without an organization. Where we once made do with racks of physical servers that we could access in the data center down the hall, we now have to manage many more servers that could be spread all over the globe.

    This is where data center orchestration and configuration management tools come into play. In many cases, we’re managing groups of identical servers, running identical applications and services. They’re deployed on virtualization frameworks within the organization, or they’re running as cloud or hosted instances in remote data centers.

    Puppet, Chef, Ansible, and Salt were all built with that very goal in mind: to make it much easier to configure and maintain dozens, hundreds, or even thousands of servers. That’s not to say that smaller shops won’t benefit from these tools, as automation and orchestration generally make life easier in an infrastructure of any size.
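
    What the four tools share is a declarative, idempotent model: you describe the desired state, and re-running the configuration is safe. The toy Python sketch below illustrates only that idea; ensure_file is an invented helper, not the API of Puppet, Chef, Ansible or Salt.

        # Toy illustration of idempotent "desired state" configuration.

        import os

        def ensure_file(path, content):
            """Make sure `path` exists with `content`; do nothing if it already does."""
            current = None
            if os.path.exists(path):
                with open(path) as f:
                    current = f.read()
            if current == content:
                return "unchanged"           # already in the desired state: no action
            with open(path, "w") as f:       # converge to the desired state
                f.write(content)
            return "changed"

        if __name__ == "__main__":
            desired = "ntp_server: pool.ntp.org\n"
            print(ensure_file("/tmp/example.conf", desired))   # typically "changed"
            print(ensure_file("/tmp/example.conf", desired))   # "unchanged" thereafter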

    Reply
  49. Tomi Engdahl says:

    Why the Cloud Requires a Totally Different Data Center
    If there’s one mind-blowing statistic about Amazon Web Services, it’s the company’s scale.
    http://www.cio.com/article/743387/Why_the_Cloud_Requires_a_Totally_Different_Data_Center

    The cloud is a nascent technology, but AWS is already a multi-billion-dollar business and its cloud is reportedly five times bigger than its 14 top competitors combined, according to Gartner. Amazon’s Simple Storage Service (S3) stores more than a trillion files and processes 1.5 million requests per second. DynamoDB, the AWS-designed NoSQL database, is less than a year old and last month it already had more than 2 trillion input or output requests.

    Supplying all those services at that scale requires a lot of hardware. The cloud division is growing fast though, which means that AWS is continually adding more hardware to its data centers.

    How does AWS keep up with all that? The man who directs the strategy behind it, AWS Vice President and Distinguished Engineer James Hamilton, shared insights into this at the company’s re:Invent customer conference in Las Vegas last week. In a nutshell, “Scale is the enabler of everything,” he says.

    AWS has optimized its hardware for its specific use cases, he says. AWS has built custom compute, storage and networking servers, which allow the company to customize down to a granular level. Its storage servers are “far denser” than anything on the market and each weighs more than a ton, Hamilton says. Most recently AWS customized its networking gear, creating routers and protocol stacks that support high-performance workloads.

    AWS even customizes its power consumption processes. The company has negotiated bulk power purchase agreements with suppliers to get the energy needed to power its dozens of data centers across nine regions of the globe.

    Even with all the customization, AWS can’t always predict exactly how much of its resources will be used. If AWS can increase its utilization, its costs will be lower because it will get more bang for its buck from the hardware.

    There will still be under-utilization, but AWS has tried to turn that into an advantage. The introduction of spot instances, which let customers bid on excess capacity, enables this.
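
    The utilization argument is easy to make concrete with invented numbers (not AWS’s actual figures): the effective cost of each useful server-hour falls as utilization rises, and any spot revenue earned on otherwise idle hours lowers it further.

        # Illustrative arithmetic only -- HOURLY_COST and the spot figures are
        # invented, not AWS's real numbers.

        HOURLY_COST = 1.00   # hypothetical all-in cost of one server-hour

        def cost_per_utilized_hour(utilization):
            """Effective cost of each hour of useful work at a given utilization."""
            return HOURLY_COST / utilization

        for u in (0.3, 0.5, 0.8):
            print(f"utilization {u:.0%}: ${cost_per_utilized_hour(u):.2f} per utilized hour")

        # Selling 20% of otherwise idle time as spot capacity at $0.20/hour
        # recovers $0.04 per server-hour on top of that.
        print(f"spot recovery: ${0.20 * 0.2:.2f} per server-hour")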

    Reply
