Data center backbone design

The article Cells vs. packets: What’s best in the cloud computing data center? from a few years back argues that resource-constrained data centers cannot afford to waste anything on their way to efficiency. One important piece of this is choosing the right communications technology between the different parts of the data center.

In the late 1990s and early 2000s, proprietary switch fabrics were developed by multiple companies to serve the telecom market with features for lossless operation, guaranteed bandwidth, and fine-grained traffic management. During this same time, Ethernet fabrics were relegated to the LAN and enterprise, where latency was not important and quality of service (QoS) meant adding more bandwidth or dropping packets during congestion.

Over the past few years, 10Gb Ethernet switches have emerged with congestion management and QoS features that rival proprietary telecom fabrics. With the emergence of more feature-rich 10GbE switches, InfiniBand no longer has a monopoly on low-latency fabrics. It’s important to find the right 10GbE switch architecture that can function effectively in a 2-tier fat tree.
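The fat-tree point is easy to make concrete with a little arithmetic. Below is a minimal sizing sketch for a 2-tier (leaf-spine) fabric built from fixed-radix 10GbE switches; the port counts are illustrative assumptions, not figures from the article.

```python
# Sizing a 2-tier (leaf-spine) fat tree from k-port switches.
# Port counts below are illustrative, not from the article.

def leaf_spine(ports_per_switch, downlinks_per_leaf, num_leaves):
    """Return (hosts, uplinks per leaf, oversubscription ratio)."""
    uplinks = ports_per_switch - downlinks_per_leaf
    hosts = downlinks_per_leaf * num_leaves
    oversub = downlinks_per_leaf / uplinks  # downlink vs uplink bandwidth
    return hosts, uplinks, oversub

# 48-port 10GbE leaves: 36 server-facing ports, 12 uplinks to the spine
hosts, uplinks, oversub = leaf_spine(48, 36, num_leaves=12)
print(hosts, uplinks, oversub)  # 432 hosts, 12 uplinks per leaf, 3:1
```

With 36 of a 48-port switch’s ports facing servers and 12 facing the spine, each leaf is 3:1 oversubscribed; dedicating 24 ports each way would give a non-blocking (1:1) edge at the cost of host count.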

127 Comments

  1. Tomi Engdahl says:

    MOVE IT! 10 top tips for shifting your data centre
    From cable ties to rack size – sweat the small stuff
    http://www.theregister.co.uk/2015/03/18/moving_your_datacenter/

    The scenario’s a hauntingly familiar one. You’re the IT person who’s just been told by the boss: “We’re moving the kit to , now get on and do it.”

    Here are my ten steps to setting up your kit in a data centre.

    1. The connection
    Although you can theorise that it’s OK for things to be a little slower than they were with an on-premise installation, the reality is that the users won’t be convinced and so you have to be sensible about connectivity.

    2. Security
    First, make sure your boss has put you and whomever you’re taking with you (the only person who’ll thank you for installing rack-mounted servers single-handed is your osteopath as he hands you the bill) on the access list

    3. Your data centre toolkit
    Anyone who visits a data centre regularly has a standard kit of stuff to take with them. Some bits are obvious: cable ties and velcro loops, a variety of patch cords of different colours and lengths, power cords (with the right plugs and sockets), screwdrivers and a bag of rack nuts and bolts. Other bits are less obvious. The two most useful things I’ve ever owned were a pair of four-way adaptors. Both had four UK sockets; one had a US domestic plug and the other an IEC-13 (kettle-style) plug.

    4. Rack design
    Before you rock up to install the kit, design the rack layout. Doesn’t have to be a Dali-esque masterpiece in Visio – in fact I use a spreadsheet for all of mine – but it has to be sensible. Start with the keyboard/monitor shelf – they’ve got to be located at a height where you can actually use them. Next come the heavy servers: try to put them near the bottom
    Make sure the rack design includes every last detail, including (and especially) power and network cabling. Use the right length cables and plot the location of every device’s network and power connection. If your in-rack power strips are IEC-13 format, then beware of any gadgets that are powered from UK-style transformer plugs

    5. Cable management
    Cable management: the most tedious thing in the world, and the thing you’ll be thankful for if you do it properly. Don’t just cram all the kit in the rack: make sure that you put cable management units in so you can run the cabling in a fashion that lets you work with the wiring later.

    6. Move or replace?
    When you’re moving your stuff to the data centre, take the opportunity to consider replacing some of your kit. The average on-premise set-up has equipment of varying ages, and it’s definitely worth looking into replacing at least some of the older stuff

    7. Co-ordinating the move
    Moving kit to the data centre is a combination of diligence and care. For the really crucial stuff, give strong consideration to employing professional movers – or at the very least use proper padded crates that are designed for IT kit.

    8. Documentation
    Document the set-up to death, down to every last detail – such as power and network connections – and store a copy of the docs in the racks as well as keeping them electronically: there’s absolutely no harm having a copy taped somewhere convenient in the rack so long as it’s not interrupting airflow. Make sure every single device in the rack is labelled clearly and correctly

    9. Remote monitoring
    With your equipment located some way from the office, you’ll feel like someone’s cut off one of your limbs unless you compensate for your inability to simply wander into the server room and look at stuff. At the very least you need to run up some monitoring tools so you can keep a weather eye on the behaviour of the servers.

    10. Your stash of bits
    I mentioned earlier that alongside your own toolkit that you carry around, you should have a stash of useful stuff in the data centre. Some providers let you have a plastic box in the bottom of your rack (assuming there’s room) while others will rent you a small locker: whichever’s the case, do it. Keep a comprehensive (but not vast) stock of power and network cables of all lengths and colours you might need, and if you can you should also have some spare hard disks and power supplies for the key equipment as these are the two things that blow up most frequently

    What you’ve just read is just the start to working in data centres: although they look simple there’s a lot to know if you want to do it properly.

  2. Tomi Engdahl says:

    Sounding off on a quieter data center
    http://www.cablinginstall.com/articles/2015/03/cisco-data-center-quiet.html

    Look out below. Electrical conduits, data cabling and cooling were all routed beneath the data center’s raised floor – very conventional. As I entered one of the room’s cold aisles and glanced at the perforated tile under my feet, though, I was startled to see the basement floor about 15 feet below. The data center’s air handlers and power distribution units were down there, and with air flowing up through the tiles, designers saw no reason to install a true floor for the ground level. It’s not a design choice I would make – an open floor tile in the data hall poses a potential safety hazard – but it was certainly interesting to see.

    Quiet data centers still aren’t the norm these days. Check out the video below – no ear plugs required – for a stroll around the noisier locations in a data center, and a discussion of what can be done to lower the volume.

  3. Tomi Engdahl says:

    Data centre doesn’t like your face? That’s a good thing
    Winning strategies for selecting a new server centre
    http://www.theregister.co.uk/2015/04/09/security_on_your_brand_new_data_center_wont_let_you_in_thats_a_good_thing/

    Your company has decided, quite sensibly, that it wants to move its application infrastructure to a data centre rather than living with the risk of an on-premise approach. So how do you choose the data centre you should move to?
    Location

    Location is a compromise of locality versus suitability, but in my mind you should lean toward suitability. A “suitable” location is one that’s close enough to civilisation for the power supply to be appropriate (more about that in a bit) and for you to be able to get sensibly priced telecoms links from multiple reliable providers.

    What tier?

    Data centres are classified in “tiers”, which describe the level of resilience you can expect from each location.

    The tiers are numbered from one to four (higher is better): so while a Tier-1 data centre has single points of failure and an availability level that expects downtime of up to about a day a year, at the other end of the scale a Tier-4 data centre has multiple fault tolerance and an expected downtime of no more than about half an hour each year.
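    Those downtime numbers fall straight out of the availability percentages usually quoted for the tiers. A quick sketch, using the commonly cited Uptime Institute availability figures (an assumption on my part, not stated in the article):

```python
# Yearly downtime implied by an availability percentage.
# Availability figures are the commonly cited Uptime Institute numbers.

HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability_pct):
    """Hours of expected downtime per year at a given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for tier, avail in [("Tier 1", 99.671), ("Tier 2", 99.741),
                    ("Tier 3", 99.982), ("Tier 4", 99.995)]:
    print(f"{tier}: {downtime_hours(avail):5.1f} h/year")
```

    Tier 1 works out at roughly 29 hours a year (“up to about a day”) and Tier 4 at under half an hour, matching the figures above.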

    Usability

    The concept of usability for a data centre sounds a bit bonkers, but actually it’s a crucial consideration. I’ve come across data centres – particularly in big UK cities – that were dark, gloomy, hard to navigate and entirely devoid of equipment, as well as being a bloody nightmare to get to.

    A good data centre will be well lit. Although you should have a torch to hand to see into the darker recesses of your cabinets, for general work you should expect to be able to see what is going on.

    The aisles between the cabinets need to be sensibly sized

    Connectivity

    I mentioned the ready availability of suitable connectivity: this means both power and networking. It’s rare that a data centre provider is able to take power provision from more than one supplier, so what you should care about are its secondary and tertiary plans for power.

    You’re looking for N+1 UPS protection (N+1 means they can lose one core element from the UPS and still provide their total committed power) and generator protection (with guaranteed diesel stocks) backing that up.
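    The N+1 arithmetic is worth spelling out: lose any one module and the survivors must still carry the full committed load. A minimal check, with made-up module sizes:

```python
# N+1 check: with one UPS module lost, the remaining modules must still
# cover the committed load. Module sizes here are invented for illustration.

def is_n_plus_1(module_kw, modules, committed_kw):
    """True if the system survives the loss of one module."""
    return (modules - 1) * module_kw >= committed_kw

print(is_n_plus_1(module_kw=250, modules=5, committed_kw=1000))  # True: 4 x 250 kW covers 1 MW
print(is_n_plus_1(module_kw=250, modules=4, committed_kw=1000))  # False: that is N, not N+1
```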

    Network provision is an interesting one, and is something you’ll have to work at to get right. The reason for this is that the provider of your leased line or internet connection isn’t necessarily the same as the provider that puts in the physical link.

    Security and rules

    Security in data centres is a pain in the backside, and rightly so. If you’re being shown round a data centre by your prospective supplier and it’s easy to get in, you should walk away.

    Data centres should have CCTV inside and out, with the images easy to view, and on entry you should need to show photo ID and be listed on the provider’s formal access list.

    You should have access only to the areas where your cabinets are located (and the communal rest/refreshment/toilets areas, of course) and if your cabinets have keylocks (as opposed to combination locks, which are preferable) you should have to sign the keys in and out. The provider should be able to give monthly reports on entry and exit for every visitor.

    Certifications

    Certifications such as ISO27001 and ISO9001 are considered essential by many customers: they help tick the boxes when the auditors pay their annual visit to check that all is in order with governance and compliance.

    Reputation

    One of the most valuable aspects of your choice of data centre is what others think of that company and/or that site and how they’ve actually performed over the years.

    For example, I’ve said you ought to ensure you go for at least a Tier-3 installation, but even though they’re in theory allowed 90 minutes or so downtime per year, there are plenty out there that have had no downtime, ever.

    Summary

    Be diligent when you choose your data centre. You’ll be with that provider for a significant amount of time, so spend a decent amount of time understanding what you need, clarifying what they will do for you, asking your peers and their other users what they’re like and conferring with network providers to ensure that you can get the connectivity you need.

  4. Tomi Engdahl says:

    Forget the density, just unlock your cloud’s power sweet spot
    Watch your investments and build out gradually
    http://www.theregister.co.uk/2015/06/02/data_center_density_myth_power/

    Talk to a friend or colleague in IT, and pretty soon you’ll get on the topic of big data, the data explosion, and mega data centres.

    Bar stool logic — and prevailing wisdom — suggests there is no end to our collective hunger for data, and so this must be fed by ever-larger, ever-denser data centres designed, powered and chilled in ever more innovative ways. Hyper-scaling is fun and interesting to discuss, and so we all do it.

    Makers of servers, racking and networking equipment, power management and neat cooling doodads are keen to have us pursue this line of thought — to have us build out using the latest shiny, slim yet powerful gear that is filled with greater numbers of racks, packed with huge volumes of data and therefore demands ever more power. Well, maybe not.

    European multinational Schneider Electric is bucking the trend of trying to squeeze in more power. Rather, this equipment and services supplier has been looking for ways that IT managers might find the sweet spot in terms of the power density of their equipment.

    Schneider’s premise is simple enough — that rack power densities have a direct impact on the capital costs of any data centre.

    So, rather than try to pack in more, perhaps a sweet spot of average rack density exists where maximum savings can be made before the additional infrastructure investments required exceed the efficiencies gained?

    Wendy Torell, senior research analyst for Schneider Electric’s Data Center Science Center, explained: “We began research for a white paper because we were not convinced by all the forecasts claiming that densities would climb towards 20kW-30kW a rack and beyond. The only way to know for sure was to take a dive into why data centres in reality are still not at those densities, and what that might mean.”

    Also, most people overestimate the actual power consumption of their servers because they simply look at the boilerplate ratings rather than measuring them.

    So, instead of investigating how to implement the highest possible densities, Schneider decided to focus on optimising densities in the light of current technology trends, enduring mixed IT environments in our data centres, and the cost of purchasing and implementing infrastructure at varying densities.

    Specifically it looked at the cost per watt as density increases, and claimed that there is a steep decrease in cost per watt from 1kW per rack to 5kW per rack until the curve begins to level off.

    Based on these findings, Schneider now suggests the optimal average density in a practical data centre is between 5kW and 8kW per rack. Lower densities are too expensive per watt, while higher densities deliver poor returns for the added complexity.
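    The shape of the curve Schneider describes is easy to reproduce with a toy model: a fixed per-rack cost amortised over more watts, plus a per-watt cost that climbs once density demands heavier power distribution and cooling. Every coefficient below is invented for illustration; these are not Schneider’s figures.

```python
# Toy cost-per-watt model: fixed rack cost amortised over the rack's watts,
# plus a per-watt cost that rises beyond ~8 kW (bigger PDUs, extra cooling).
# All coefficients are made up for illustration.

def cost_per_watt(density_kw, fixed_per_rack=4000.0,
                  base_per_watt=6.0, penalty_per_kw=0.15):
    watts = density_kw * 1000
    variable = base_per_watt + penalty_per_kw * max(0, density_kw - 8)
    return fixed_per_rack / watts + variable

for kw in (1, 3, 5, 8, 12, 20):
    print(f"{kw:2d} kW/rack -> ${cost_per_watt(kw):.2f}/W")
```

    The output falls steeply from 1kW to 5kW, flattens through 8kW, then climbs again: the 5kW-8kW sweet spot in miniature.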

    “There are solid reasons for the curve. The cost of the actual racks can be a significant capital expenditure, so configuring them with very low densities or indeed anything below 5kW decreases the cost efficiency per rack.”

    “There is a sweet spot of efficiency from 5kW-8kW, but this again diminishes at higher densities due to the cost of wider racks, more powerful and expensive power distribution units, the costs of cooling, more intensive maintenance, and so on,”

    “While many data centre managers have the ability to manage one or two racks with very high density if required, they are very rarely able to reach those levels as an average density,”

    “Our findings continue to point to a target overall density of around 5kW per rack, allowing for specific pods housing higher density equipment with dedicated power distribution to limit over-provisioning,”

    Schneider’s overriding advice for data managers is to build out gradually, and not to make investment decisions across entire facilities too quickly.

    “Generally we all dislike lengthy planning projects, and we hate seeking repeat approvals at multiple stages,” explained Bunger.

    “It’s a major reason why companies want to build all in one go, but this approach doesn’t provide flexibility or allow room for future insight and adaptation.”

    Schneider recommends setting and enforcing policies on how equipment is deployed across the whole business, and not simply leaving it to project managers.

  5. Tomi Engdahl says:

    Ten extreme data centres. OK…nine
    If you have nothing to hide, you have nothing to fear
    http://www.theregister.co.uk/2015/08/24/ten_extreme_data_centres/

    Data centre technology moves at a glacial pace and hasn’t always been considered the sexiest technology in the world. Recently, however, thanks to the cloud and Edward Snowden – the patron saint of the data centre – data centres have become a lot more extreme. So here’s just a taste of the data centres at the edge of technology and, in some cases, the edge of the world.

  6. Tomi Engdahl says:

    How to build a server room: Back to basics
    Reg reader compiles handy checklist for SMEs
    http://www.theregister.co.uk/2015/09/22/how_to_build_a_server_room_sme_advice/

    the following still needs to be pointed out far too often:

    1. A cloak room is not a server room, even if you put servers inside.

    2. You cannot power 100A worth of equipment from a 16A wall socket. Not even if there are 2 of them.

    3. You cannot cool the above by opening a window and putting two fans in front of the computers. A domestic AirCon unit won’t do much good either. Also, don’t put drippy things above sparky things.

    4. Ground lines are not for decoration. They need to be used and tested regularly for safety.

    5. DIY plugs cannot be wired any which way. Not even in countries that allow Line and Neutral to be swapped.

    6. The circuit breakers at the end of your circuits are part of your installation.

    7. You cannot protect a rack of equipment with a UPS from PC World. If you really need this, you’re going to have to buy something which is very big, expensive and very very heavy. And the batteries are only good for 3 to 5 years.

    8. Buildings have structural limits. You cannot put several tonnes of densely packed metal just anywhere. Know your point and rolling loads and then check the building.

    9. Electrical fires are nasty. Chemical fires are worse. You need stuff to protect the installation.

    10. If you want 24h service, you’ll need 24h staffing. A guy that “does computers for us” won’t do.

    11. A 1 Gbps uplink cannot feed 48 non-blocking 1 Gbps ports.

    12. Metered and managed PDUs will save your bacon one day. Buy them.

    13. Label all the cables BEFORE installing them. (No, you cannot just “follow the cable” afterwards)

    14. My favourite: Don’t blow your entire equipment grant on computers. All the stuff above costs money.
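    Point 11 is simple arithmetic, but it catches people out often enough to be worth a sketch (hypothetical port counts and speeds):

```python
# Worst-case oversubscription of an uplink: total access bandwidth
# divided by uplink bandwidth. Port counts here are hypothetical.

def oversubscription(access_ports, access_gbps, uplink_gbps):
    return (access_ports * access_gbps) / uplink_gbps

print(oversubscription(48, 1, 1))   # 48.0 -> 48:1, nowhere near non-blocking
print(oversubscription(48, 1, 10))  # 4.8  -> a 10G uplink is far more defensible
```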

  7. Tomi Engdahl says:

    Which data centre network topology’s best? Depends on what you want to break
    Boffins beat up DCell and BCube to see what breaks
    http://www.theregister.co.uk/2015/10/13/which_data_centre_topology_is_best_depends_on_what_you_want_to_break/

    Which data centre topology is better is an arcane, vexed and vital question: after all, as any cloud user knows while they’re thumping the table/monitor/keyboard/whatever, we’ve long gone beyond a world where outages can be regarded as trivial.

    Researchers from France and Brazil reckon one important question is where you expect failures – in switches, or in data links.

    In this paper at Arxiv, the researchers, led by Rodrigo de Souza Couto of the Universidade Federal do Rio de Janeiro, compared how Fat-Tree, DCell and BCube architectures behave when things go wrong, relative to the traditional three-layer edge/aggregation/core model.

    The paper explains that these three topologies were chosen because they have this in common: they’re designed for the modern data centre design that combines modular infrastructure and low-cost equipment.

    Their conclusion is that if a link fails, BCube recovers better, but if a whole switch goes dark, DCell is better.

    Reliability and Survivability Analysis of Data Center Network Topologies
    http://arxiv.org/abs/1510.02735

  8. Tomi Engdahl says:

    Modularity for all! The data centres you actually want to build
    Democratising the build out of racks
    http://www.theregister.co.uk/2015/10/14/modular_datacenter/

    Portability and modularity in the world of data centres aren’t new: for years, they’ve been something unique to the military and others operating in either temporary or hostile environments.

    You put your data center gear in a ruggedised and self-supporting unit of some kind and walk away, managing it remotely. Increasingly, however, modularity is becoming something for those of us in the mainstream – at least, those of us still building data centers.

    Data centres are an expensive proposition; a 10,000 sq ft facility designed to last 15 or 20 years will cost about $33m. And, unlike in years past, the returns are not guaranteed.

    Firms are less interested in running their own tin and are shipping out the compute to the public cloud. Service providers, meanwhile, are struggling to make a profit against the likes of Amazon.

    Increasingly, it makes less sense to initiate an old-school blanket data centre rollout. Increasingly, such projects are the preserve of the web-tier super league, such as Facebook and Microsoft.

    Rather, modularity is the new approach.

    The value of the pre-fabricated, modular data centre market is calculated to grow at a CAGR of 30.1 per cent by 2018, up from $1.5bn last year, according to 451 Research. Last year also saw a bout of activity that included Schneider buying AST Modular and UK specialist Bladeroom entering the US market.

  9. Tomi Engdahl says:

    Are the rules different for hyperscale data centers?
    http://www.cablinginstall.com/articles/pt/2015/11/are-the-rules-different-for-hyperscale-data-centers.html?cmpid=EnlCIMCablingNewsNovember232015&eid=289644432&bid=1240370

    “inside hyperscale data centers there is an insatiable appetite for the fastest equipment and connections possible — which means literal forklift updates every three years. It also means ordering equipment in such quantities that common market dynamics no longer apply. Unconventional technologies, flexible Ethernet and on-board optics, for example, become attractive.”

    “Hyperscale data centers are huge, measuring in multiples of sports arenas, so finding space for another rack is not an issue. The problem is the lack of enough real estate on server faceplates. Vendors are working feverishly to make interconnect smaller and denser, but it’s a tough slog, and they still need to leave space for airflow vents.”

    Rules Change for Hyperscale Data Centers
    http://www.lightreading.com/data-center/data-center-infrastructure/rules-change-for-hyperscale-data-centers/d/d-id/719188

    Oh, and there seems to be little interest in white boxes at the hyperscale level.

    Ever try to get a tour of a hyperscale data center? It’s easier to get a prom date with Mila Kunis

    Microsoft Azure currently runs about 100 data centers globally, equipped with over 1.4 million servers (and counting), mostly running 10G and 40G, looking to ramp to 25G and 100G, Booth said.

    Cloud computing is growing at such eye-popping rates that Microsoft Azure will install the fastest equipment and connectivity it can get the moment it can get it, Booth explained.

    “We’re planning to go 50G to each server. Our core will be 100G. That leaves little difference between data center and core. That’s why we’re looking at 400G,” he said.

    He noted that the IEEE committee developing the standard for 400G recently announced a ten-month slip in its schedule, so that the standard is now due in December of 2017.

    “People ask me, are you interested in 400G? Yeah, I’d buy it today. 1.6 terabits? I’d buy it today.”

    Ordinarily a vendor brings a product to the market and sales are slow at first as first adopters test it out. If successful, the sales chart will get the classic hockey stick appearance — relatively flat and then turning sharply upward.

    When Microsoft Azure makes a major upgrade, it happens immediately. “We walk in and say we need tens of thousands of these things this week. It changes the economics,”

    Microsoft is a member of the Consortium for On-Board Optics (COBO), along with Broadcom, Juniper, Cisco, Finisar, Intel and others. The idea is to just move the interconnect, which also shortens the distance between connections, which means cheaper copper cable remains practical.

    With standard interconnect, if there’s a problem you just swap out the connector. With OBO, though, if something goes wrong the problem is inaccessible.

    To get to 40G, it is possible to combine four 10Gs. But there still isn’t support for freely mixing and matching any connection you want to get to any increment you want.

    “I don’t want to be constrained to 100G,” Booth said. “You go 80 kilometers, 100 kilometers? That gets expensive. If I can get 1G more out of them, it’s worth it.”

    Equinix specialises in interconnect and colocation. The company has 105 data centers, and it intends to have 150. Tarazi boasted of over 1,000 peering agreements, including the most connections with the biggest hyperscale data center companies.

    “We connect with AWS, Azure, Cisco, IBM. You can send 80% of your traffic on one connection.”

    The company is planning to have 12 to 20 switches in every data center. In order to scale up, the company went fully SDN, using Tail-f (now part of Cisco).

  10. Tomi Engdahl says:

    Russian nuke plant operator to build on-site data centre
    ‘Status: Green’ may not be what you want to hear if you put data in Kalinin
    http://www.theregister.co.uk/2015/11/27/russian_nuke_plant_operator_to_build_onsite_data_centre/

    Russia’s sole nuclear power plant operator, Rosenergoatom, has reportedly hit on the idea of building a data centre next to one of its power plants.

    Telecom Daily reports that the Kalinin nuclear power plant will gain a ten-thousand-rack, 80-megawatt data centre as a near neighbour.

    Co-locating power sources and data centres is nothing new – the US state of Oregon’s hydro-electric facilities have attracted numerous bit barns. Nuclear plants, however, raise emotional arguments that hydro plants do not.

    Rosenergoatom apparently hopes to cash in on Russia’s decree that its citizens’ personal data must be stored on its own soil.

  11. Tomi Engdahl says:

    The data center up in four weeks

    Swedish bitcoin miner KnCMiner needed only four weeks to build an entire data center.

    “Most of the time went to signing the agreement,” said president and CEO Sam Cole.

    On Friday the company announced a new 20-megawatt data center in Boden, near the technology hub of Luleå. The same area also hosts Facebook’s new energy-efficient facility.

    This is already the second data center the company has started building within half a year, the news agency IDG News Service reports.

    KnC says that in virtual currency mining, speed is everything.

    Worldwide bitcoin mining runs at about 633 petahashes per second – KnC manages about 5 per cent of that

    Source: http://www.tivi.fi/Kaikki_uutiset/datakeskus-pystyyn-neljassa-viikossa-6237958

  12. Tomi Engdahl says:

    The Data Center Density Debate: Generational Change Brings Higher Densities
    http://it.slashdot.org/story/15/12/20/053243/the-data-center-density-debate-generational-change-brings-higher-densities

    Over the past decade, there have been repeated predictions of the imminent arrival of higher rack power densities. Yet extreme densities have remained focused in high performance computing. Now data center providers are beginning to adapt their designs for higher densities.

    The Density Debate: Is Cooling Door Adoption a Sign of Coming Shift?
    http://datacenterfrontier.com/data-center-density-debate-colovore/

    Density is coming to the data center. But thus far, it’s been taking its time.

    Over the past decade, there have been numerous predictions of the imminent arrival of higher rack power densities. Yet extreme densities remain limited, primarily seen in high performance computing (HPC) and specialty processing such as bitcoin mining.

    The team from Colovore believes data centers will be denser, and the shift will accelerate over the next few years. The colocation specialist believes it is on the front edge of a broader move to denser server cabinets, driven in part by a generational change in IT teams.

    Traditional data hall designs will struggle to cool these higher densities. That’s why Colovore is filling its data center in Santa Clara with high-density racks featuring water-chilled rear-door cooling units.

    “When we came to market last year, people weren’t buying the density yet,”

    Colovore is not alone in adopting rear-door heat exchangers at scale. LinkedIn is implementing a new data center design featuring rear-door cooling units for its new facility near Portland, Oregon. The company said its next-generation design will use “cabinet-level heat rejection” that will double the cabinet densities from its previous data center builds. The new data center will be hosted by Infomart, where LinkedIn has reportedly leased 8 megawatts of space.

    As the rear-door cooling unit gains traction in both the colocation and hyperscale markets, some see it as one of several portents that rack power densities are finally starting to edge higher. It’s a trend that offers both challenges and opportunities for data center operators, both of which are beginning to drive new data center designs like those at Colovore and LinkedIn.

    How long has the data center industry been talking about the arrival of higher densities? My first story on the topic dates to 2002, when cooling vendors demonstrated water-cooled cabinets at a meeting of the 7×24 Exchange, and predicted a “new paradigm” in cooling.

    If density is a long-awaited problem, it’s also one that data center customers have been bracing for in their capacity planning, seeking “headroom” for denser workloads and often provisioning more cooling than they are likely to need.

    “The reality is that everyone says they want 200 to 250 watts per square foot, but almost nobody’s using it,” said Jeff Burges, President and founder of colocation specialist DataSite. “There will be some high density users, but also a lot of low density users.”

    The typical enterprise data center user is probably running densities of 3kW to 5kW per rack, according to Shawn Conaway, Director of Cloud Services at FIS. “What I see going on more often is pockets of high-density workloads, especially in internal private cloud, where you can see 10 to 15 kW,” said Conaway. “I think we’ll see more of this.”

    Conaway said his firm, which specializes in IT solutions for the financial services industry, runs its own racks at 15kW to 30kW a cabinet. But there are those who are testing the boundaries of even higher densities.

    “We are pushing 50kW a rack,” said Richard Donaldson, the Director of Infrastructure Management and operations at eBay.

    eBay was among the first companies to use water-chilled rear door cooling units at scale.

    “I would argue that given where the technology is headed, we’re going to be seeing more density,” said Donaldson. “We’re now seeing densities shift from 1kW per rack to 5 kW a rack. That trend is coming. We’re already seeing it in Equinix and Digital Realty.”

    The cooling doors allow several significant changes in data center design. They attach to the back of a cabinet, and use the server fans within the rack to provide airflow through the unit, pushing hot air through the door-based coil that cools the air and returns it to the room at close to the same temperature as the air entering the rack.

    These units can cool higher densities than air cooling – up to 35kW per rack – and eliminate the need to place CRACs (computer room air conditioners) around the perimeter of the room, making more room for cabinets. They also allow users to run the data hall at a warmer temperature, in this case just below 80 degrees.
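    As a quick sanity check on those numbers, the airflow a rear-door unit (and the server fans feeding it) must move can be estimated from the sensible-heat relation Q = m·cp·ΔT. The delta-T and air properties below are illustrative assumptions, not figures from the article:

    ```python
    def required_airflow_cfm(rack_kw, delta_t_c=11.0):
        """Airflow (in CFM) needed to carry rack_kw of heat at a given air delta-T."""
        cp_air = 1005.0   # J/(kg*K), specific heat of air at constant pressure
        rho_air = 1.2     # kg/m^3, approximate air density near sea level
        m_dot = rack_kw * 1000.0 / (cp_air * delta_t_c)  # mass flow of air, kg/s
        return (m_dot / rho_air) * 2118.88               # m^3/s -> cubic ft/min

    # A 35 kW rack (the upper limit quoted above) at an assumed 11 C air rise:
    print(round(required_airflow_cfm(35.0)))  # several thousand CFM
    ```

    The result illustrates why high-density doors lean on the servers’ own fans: pushing that much air through a rack is a substantial share of the cooling work.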

    “If you’re used to your data center being a meat locker, it’s odd to be in a new experience,” said Holzknecht. “In our facility, there’s no differential between the hot and cold aisle.”

    Generational Shift Could Accelerate Density

    An interesting wrinkle is that Colovore believes the presence of younger engineers in data center teams is changing attitudes about density and water cooling.

    “It’s a generational thing in a lot of ways,” said Holzknecht. “Our selling cycle is about engineers. We’re seeing a lot of 200KW to 1 MW deals, and internal infrastructure is often the requirement. At some of these companies, everything is virtualized and cloudified. Instead of having lazy servers being underutilized, they have grids.”

    “A lot of them grew up with PC gaming and water cooling right in their living room,”

    Reply
  13. Tomi Engdahl says:

    Is a modular data center the right option?
    Modular data centers can meet the needs of building owners that need a flexible data center quickly and with less upfront cost.
    Bill Kosik, PE, CEM, LEED AP BD+C, BEMP, Hewlett-Packard Co., Chicago
    12/22/2015
    http://www.csemag.com/single-article/is-a-modular-data-center-the-right-option/251a7ed5ae7e1faccb1565d086633d16.html

    The data center market has expanded dramatically in the past few years, and it doesn’t show signs of slowing down. Many clients and building owners are requesting modular data centers, which can be placed anywhere data capacity is needed. Modular data centers can help cash-strapped building owners add a new data center (or more capacity) to their site, and can assist facilities with unplanned outages, such as disruptions due to storms. Owners look to modular data centers to accelerate the “floor ready” date as compared with a traditional brick-and-mortar facility. Modular data centers are not for everyone; however, this Q&A will explore whether it’s appropriate for your next project.

    What is the range of modular data center design approaches?

    At one extreme you’ll find the typical monolithic “brick-and-mortar” data center. This type of data center is usually custom-built on-site. It can be costly and not very scalable. It often requires a long deployment time. Its design has one goal in mind: build it now for all future eventualities.

    At the other end of the spectrum are containerized data centers that can vary greatly in information technology (IT) capacity and type of power/cooling systems. This solution takes a minimalist approach, with racks of servers preinstalled in an industrial-type container. An excellent choice when the speed of deployment is important, the containerized data center works best for small-scale data center environments or emergency situations. The container solution enables very rapid deployment of IT assets when the capabilities of a more permanent facility aren’t required.

    Is there a modular data center approach that is somewhere in the middle of these two extremes?

    There is another option: an industrialized, comprehensive, turnkey solution with a modular architecture, and built using tilt-up, precast, or prefabricated construction techniques. Typical characteristics of this data center type include a menu-driven selection of mechanical and electrical components, and cooling systems that take advantage of local climate to significantly reduce energy costs.

    How does the cost of an industrialized data center compare to traditional, brick-and-mortar solutions?

    The modular design and construction of this type of data center can significantly improve time-to-commissioning. In fact, from concept to commissioning, you can occupy the data center within a year. Cost is another advantage. Because of a number of factors, generating meaningful comparisons of actual construction costs for data centers is difficult. However, based on a midlevel estimate of capital costs for a traditional data center at about $15 million per megawatt, building a 6-MW data center appropriate for enterprise use would require an outlay of $90 million, compared with a median estimate of around $9 million/MW for a modular design.

    What about ongoing energy costs?

    It’s clear that on an annual basis the flexible data center will use less power than the conventional data center. Moreover, power-usage effectiveness (PUE) for the modular data center is also lower (1.19 versus 1.34), indicating its superior efficiency as compared with the monolithic structure
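    Putting the article’s capital figures ($15M/MW traditional vs. roughly $9M/MW modular) together with the quoted PUE values (1.34 vs. 1.19) gives a rough cost sketch; the electricity price and 24×7 full-load operation below are my assumptions, not figures from the article:

    ```python
    def capex_usd(size_mw, usd_per_mw):
        """Build cost at the quoted per-megawatt rate."""
        return size_mw * usd_per_mw

    def annual_energy_cost(it_load_mw, pue, usd_per_kwh=0.08):
        """Yearly utility bill: IT load scaled by PUE, running 24x7 (8760 h)."""
        return it_load_mw * 1000.0 * pue * 8760 * usd_per_kwh

    size_mw = 6  # the article's example facility
    capex_saved = capex_usd(size_mw, 15e6) - capex_usd(size_mw, 9e6)
    energy_saved = annual_energy_cost(size_mw, 1.34) - annual_energy_cost(size_mw, 1.19)
    print(capex_saved, round(energy_saved))
    ```

    Even under these rough assumptions, the capital saving dominates, while the PUE gap compounds into a meaningful operating saving every year.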

    Reply
  14. Tomi Engdahl says:

    Service Provider Builds National Network of Unmanned Data Centers
    http://hardware.slashdot.org/story/16/01/14/2146215/service-provider-builds-national-network-of-unmanned-data-centers

    Colocation and content delivery specialist EdgeConneX is operating unmanned “lights out” data centers in 20 markets across the United States, marking the most ambitious use to date of automation to streamline data center operations. While some companies have operated prototypes of “lights out” unmanned facilities (including AOL) or deployed unmanned containers with server gear, EdgeConneX built its broader deployment strategy around a lean operations model.

    Scaling Up the Lights Out Data Center
    http://datacenterfrontier.com/lights-out-data-center-edgeconnex/

    The “lights out” server farm has been living large in the imaginations of data center futurists. It’s been 10 years since HP first made headlines with its vision of unmanned data centers, filled with computers that monitor and manage themselves. Even Dilbert has had sport with the notion.

    But the list of those who have successfully implemented lights out data centers is much shorter. HP still has humans staffing its consolidated data centers, although it has used automation to expand their reach (each HP admin now manages 200 servers, compared to an initial 1-to-15 ratio). In 2011, AOL announced that it had implemented a small unmanned data center, but that doesn’t appear to have progressed beyond a pilot project.

    EdgeConneX is changing that. The company has pursued a lights out operations model in building out its network of 24 data centers across the United States and Europe. EdgeConneX, which specializes in content distribution in second-tier markets, designs its facilities to operate without full-time staff on site, using sophisticated monitoring and remote hands when on-site service is needed.

    The EdgeConneX design is perhaps the most ambitious example yet of the use of automation to streamline data center operations, and using design as a tool to alter the economics of a business model.

    The Deployment Template as Secret Sauce

    The key to this approach is an advanced design and operations template that allows EdgeConneX to rapidly retrofit existing buildings into data centers with Tier III redundancy that can support high-density workloads of more than 20kW per cabinet. This allowed the company to deploy 18 new data centers in 2014.

    A lean operations model was baked into the equation from the beginning

    “Our primary build is a 2 to 4 megawatt data center and about 10,000 square feet,” said Lawson-Shanks. ” We always build with a view that we’ll have to expand. We always have an anchor tenant before we go to market.”

    That anchor is usually a cable multi-system operator (MSO) like Comcast or Liberty Global,

    Solving the Netflix Dilemma

    “We’re helping the cable companies solve a problem: to get Netflix and YouTube off their backbones,” said Lawson-Shanks. “The network is being overwhelmed with content, especially rich media. The edge is growing faster than you can possibly imagine.”

    Data center site selection is extremely important in the EdgeConneX model. In each new market, the company does extensive research of local network and telecom infrastructure, seeking to identify an existing building that can support its deployment template.

    “This is a patented operations management system and pricing model that makes every Edge Data Center a consistent experience for our customers nationwide,”

    Managing Infrastructure from Afar

    The lynchpin of the lights out approach is data center infrastructure management (DCIM) software. EdgeConneX uses a patented data center operating system called EdgeOS to monitor and manage its facilities. The company has the ability to remotely control the generators and UPS systems at each data center.

    EdgeConneX facilities are managed from a central network operations center in Santa Clara, with backup provided by INOC

    Currently 20 of the 24 EdgeConneX data centers are unmanned. Each facility has a multi-stage security system that uses biometrics, PIN and keycard access, with secured corridors (“mantraps”) and video surveillance.

    EdgeConneX expects to be building data centers for some time to come. Demand for edge-based content caching is growing fast

    “The user experiences and devices are changing,” he said. “But fundamentally, it’s latency, latency, latency.”

    Much of this technology wasn’t in the mix in 2005 when the first visions emerged of an unmanned data center. But as we see edge data centers proliferate, the EdgeConneX model has demonstrated the possibility of using automation to approach these facilities differently. This approach won’t be appropriate for many types of workloads, as most data centers in second-tier and third-tier markets will serve local businesses with compliance mandates that require high-touch service from trained staff.

    But one thing is certain: The unmanned “lights out” data center is no longer a science project or flight of fancy. In 20 cities across America, it’s delivering Netflix and YouTube videos to your devices.

    Reply
  15. Tomi Engdahl says:

    Think Big-Picture ‘Hyperscale’ Cooling
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328719&

    When you have megawatts to dissipate, cooling is about a lot more than heat sinks and other localized heat-removal approaches.

    Most electronic, mechanical, and thermal engineers are concerned with keeping the temperature of their IC or printed circuit board below some maximum allowable value. Others are more worried about the overall enclosure, which can range from a self-contained package such as a DVR to a standard rack of boards and power supplies.

    Basic techniques for getting heat from an IC, board, or enclosure involve one or more of heat sinks, heat spreaders (PC-board copper), heat pipes, cold plates, and fans; it can sometimes move up to more-active cooling approaches including air conditioning or embedded pipes with liquid flow. That’s all well and good, but obviously not good enough for the megawatts of a “hyperscale” data center. (If you are not sure what a hyperscale data center is, there’s a good explanation here). While there is no apparent formal standard on the minimum power dissipation to be considered hyperscale, you can be sure it’s in the hundreds-of-kilowatts to megawatt range.

    But where does all that heat go? Where is the “away” to which the heat is sent? If you’re cooling a large data center, that “away” is hard to get to, and doesn’t necessarily want to take all that heat you are dissipating.

    A recent market study from BSRIA offered some insight into hyperscale data-center cooling options and trends. I saw a story on the report in the November issue of Cabling Installation & Maintenance, a publication which gives great real-world perspective into the nasty details of actually running all those network cables, building codes, cabling standards, and more. (After looking through this magazine you’ll never casually say “it’s no big deal, it’s just an RJ-45 connector” again.)

    BSRIA summarized their report and used a four-quadrant graph (below) of techniques versus data-center temperatures to clarify what is feasible and what is coming on strong. Among the options are reducing dissipation via variable-speed drives and modular DC supplies, cooling techniques from liquid cooling to adiabatic evaporative cooling, or allowing a rise in server-inlet temperature. The graph also shows the growth potential versus investment level required for each approach; apparently, adiabatic/evaporative cooling is the “rising star.”

    Reply
  16. Tomi Engdahl says:

    Data centers’ intricate design
    http://www.csemag.com/single-article/data-centers-intricate-design/7ff170677500e930389dc7de73b495a5.html

    Data centers are important structures that hold vital information for businesses, schools, public agencies, and private individuals. If these mission critical facilities aren’t properly designed and equipped, the gear inside and the data the servers handle are at risk.

    CSE: What’s the No. 1 trend you see today in data center design?

    Tim Chadwick: Scalability would be the top trend we have been seeing for the past 3 or more years, and it continues today. The challenge with designing for data centers is creating a facility designed to last for 15 to 20 years whose technologies will refresh or be changed out every 3 to 5 years. We are guessing at what the latest in server and storage design technologies will hold 4 to 5 years into the future.

    Barton Hogge: That is removing as many infrastructure dependencies as possible—and seeing water as a critical utility to be treated as seriously as backup power; clients are requesting designs that have little or no dependency on water usage. Smaller-scaled and lower-density sites can achieve this with reasonable ease, while sites with high-density and HPC applications are continuing to rely on the efficiencies of water as a heat-rejection source, but are investing more often in local storage systems.

    Bill Kosik: Cloud computing has really reshaped how data centers traditionally were realized. In most instances, cloud computing moves the computer power out of the customer’s facilities and into cloud computing providers. However, sensitive business-critical applications will typically remain in the customer’s facilities. In certain circumstances, the power and cooling demands in a customer’s data center facility could be reduced or filled in by other types of computer requirements.

    Keith Lane: I see energy efficiency as the No. 1 trend. On the electrical side, we are seeing more efficient uninterruptible power supply (UPS) systems, 400/230 V system transformers, and topologies that allow for more efficient loading of the electrical components. On the mechanical side, we are seeing increased cold-aisle temperatures, increased delta T, outside-air economizers, and hot-aisle containment. On the information technology (IT) side, the 230 V electrical systems also increase the efficiency of the servers. UPS battery technology is also improving. We are seeing absorbed-glass-mat and pure-lead batteries as well as advances in battery-monitoring systems.

    Reply
  17. Tomi Engdahl says:

    A snapshot of today’s data center interconnect
    http://www.cablinginstall.com/articles/2016/06/lightwave-dci-article.html?cmpid=Enl_CIM_DataCenters_June282016&eid=289644432&bid=1445640

    Now that “DCI” has become a “thing,” a well-known acronym with a wide range of connotations, “data center interconnect” has gone from being unknown and somewhat misunderstood to overused and overhyped. But what exactly the term DCI constitutes is changing all the time. What it meant last month might not be what it means in a few weeks’ time. With that in mind, let’s try to take a snapshot of this rapidly evolving beast. Let’s attempt to pin down what DCI currently is and where it’s headed…

    …At the risk of oversimplifying DCI, it involves merely the interconnection of the routers/switches within data centers over distances longer than client optics can achieve and at spectral densities greater than a single grey client per fiber pair (see figure below). This is traditionally achieved by connecting said client with a WDM system that transponds, multiplexes, and amplifies that signal. However, at data rates of 100 Gbps and above, there is a performance/price gap that continues to grow as data rates climb.

    The 100 Gbps and above club is currently served by coherent optics that can achieve distances over 4,000 km with over 256 signals per fiber pair. The best a 100G client can achieve is 40 km, with a signal count of one.
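    In raw numbers, the per-fiber-pair gap described above is enormous; a minimal sketch using only the figures quoted in the article:

    ```python
    def fiber_pair_capacity_gbps(signals_per_pair, gbps_per_signal=100):
        """Aggregate capacity of one fiber pair."""
        return signals_per_pair * gbps_per_signal

    coherent_gbps = fiber_pair_capacity_gbps(256)  # coherent WDM: 256 x 100G signals
    grey_gbps = fiber_pair_capacity_gbps(1)        # single grey client: 1 x 100G
    print(coherent_gbps, grey_gbps)  # a 256x capacity gap per fiber pair
    ```

    Add the reach difference (4,000 km vs. 40 km) and it is clear why transponded WDM remains the default for DCI despite the price gap.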

    Reply
  18. Tomi Engdahl says:

    Planning and designing resilient, efficient data centers
    http://www.csemag.com/single-article/planning-and-designing-resilient-efficient-data-centers/dfa51b4182d8867e80f5bce2f2e5a2d0.html?OCVALIDATE&ocid=101781

    Evolving technologies have developed into best practices to create secure, reliable, highly available, and adaptable data center spaces.

    Designing increasingly efficient and reliable data centers continues to be a high priority for consulting engineers. From the continuity of business and government operations to the recent rise in new cloud services and outsourcing, the increasing demands on Internet service continually places strains on design, energy consumption, and the operators who make these facilities run. While designing to incorporate state-of-the-art systems and equipment, we must not forget the functional needs of the data center operators and facilities staff.

    Energy efficiency is important, but it does not complete the picture of design. Facilities also must provoke a human response. The perception of beauty, proportion, and style that inspires emotion is critical. Design is more than energy and performance. Integrated design—done well—produces an emotional response. We experience this when looking at a stylish car. The balance of form and function, combined with the place the facility shares in the environment, illustrate integrated design. Efficiency is only a part of the equation, but is the key to operational effectiveness and energy cost control over the life of the facility, just as engine performance and gas mileage are to a stylish ride. Here, the engineer can have a great impact on the economic and environmental concerns that support the business of data center operations.

    Although PUE does not capture the IT hardware deployment efficiency (i.e., percentage virtualized, percentage used, etc.), it does normalize the result to reveal how well the electrical and largely the mechanical engineering response maintains the data center environment while lowering its impact on the natural environment.

    PUE is only a measurement method. Many codes and standards have emerged over the last 10 years to specifically address data centers.

    Electrical system efficiency strives to minimize voltage and current conversion losses. The impedance in transformers, uninterruptible power supplies (UPSs), power supplies, lighting, mechanical equipment, and the wiring plant—combined with controls—affect electrical efficiency opportunities. Higher voltages to the rack, UPS bypass or interactive modes, and switch-mode power supplies form the heart of electrical energy advances. The use of transformers optimized to achieve efficient low-loss performance at lower loads (30% and above) have emerged as a mainstay. Increasingly, these transformers also deliver higher voltages (240 Vac) to the rack, which lowers IT equipment switch-mode power supply energy losses. Perhaps UPS systems have seen the most attention with improved conversion technologies and even line interactive operation mode. In the past, line interactive mode would have been considered risky.

    To address the most common and often most physically destructive fault condition, ground faults, and simultaneously maintain the highest degree of availability, engineers must consider pushing ground fault interruption further into the distribution. By using ground fault detection and interruption to isolate individual main distribution segments, main breakers can be engaged at different fault conditions. Avoiding main breaker ground fault interruption should be a priority. Main switchgear provided with optic fault detection and current reduction circuitry—a relative newcomer to selective coordination—can isolate faults to switchgear compartments.

    Electrical engineers also must pay close attention to the site’s soil conductivity when significant power conductors are located underground or under slabs. Energy losses from continuous high load factors require careful analysis to accurately size these underground feeders for the heating effects unique to the data center’s continuous loads.

    New cloud-based data centers have pressed ever-higher power densities and load factors, which create a strong undertow for efficiency. To achieve 10 to 30 kW (or more) per rack load, designs may require the addition of cooling liquids to the rack, closely coupled redundant cooling, and thermal storage systems.

    Mechanical systems designed for data centers strive to manage machine efficiency, thermal transfer, controls, and air/water flow losses to achieve greater efficiency. Today’s best practice strategies must focus on airflow management and economizer operations.

    IT equipment manufacturers and data center operators play critical roles in energy management. This type of equipment consumes the most energy.

    Data centers will likely see 10 to 30 generations of varying IT equipment over the life of the facility. Informed by technology’s history and a vision of the future, we create efficient, adaptable environments that last long into the future. Energy efficiency will be one part of the conversation.

    Site-specific space considerations

    A data center’s proximity to ample electrical capacity, telecommunications carriers, and water utilities is a major priority. However, several other considerations play critical roles.

    The data center site’s relationship to neighboring properties and roadways plays an important role in assessing and establishing appropriate levels of security.

    Business operations and continuity

    Even with a redundant design in place, designers can continue to improve service continuity by striving to identify and mitigate the risks that cause downtime. A history of these causes shows ways that designers can help operations improve continuity and reduce costs. Studies (Ponemon Institute, 2010, 2013, 2016) show the cost of an outage ($5,000/minute in 2010 to $9,800/minute in 2016) grows as society’s reliance on data center information increases. Exponential growth and reliance necessitate continual action to manage the future’s higher expectations.
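    As an aside, the Ponemon figures quoted above imply a steady compounding of outage cost; a quick check (the per-minute costs are from the cited studies, the growth-rate framing is mine):

    ```python
    def implied_cagr(start, end, years):
        """Compound annual growth rate between two values over `years` years."""
        return (end / start) ** (1.0 / years) - 1.0

    # $5,000/minute (2010) -> $9,800/minute (2016), six years apart:
    growth = implied_cagr(5000.0, 9800.0, 6)
    print(f"{growth:.1%} per year")  # roughly 12% annual growth
    ```

    At that pace the cost of a minute of downtime roughly doubles every six years, which is the "higher expectations" trend the designers above are planning against.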

    By providing data center facility managers with the tools and the information graphics necessary to operate and test their infrastructure, designers can help create even greater business continuity. More intuitive, responsive, and manageable systems improve knowledge and judgment when the operators must respond in a moment’s notice.

    Mission critical reliability/availability, people, and intuition

    The fundamentally important issues pushing design and operations of data center and mission critical facilities remain relatively unchanged. Highly available, efficient, and durable facilities that balance the client’s capital and operating cost requirements are the goal. These spaces are designed for tomorrow’s IT systems and for the people who will operate them. This is because of the increasing demand for the information these facilities generate and the data they store. Society now depends on data center services more than at any other time in history, and that dependence continues to accelerate. Consequently, the need for professionals who can artfully integrate IT equipment, and for those who can operate these facilities without downtime, continues to increase.

    Reliable design in all infrastructure elements is key. Provide the simplest, most reliable series of delivery paths to serve the load redundantly. Paths should be isolated from each other and designed to allow the operators to service each element in that path without taking the data center down.

    The operator’s mean time to repair (MTTR) captures this critical time to evaluate, diagnose, assemble parts, repair, inspect, and return the system to service. This is a critical point. Mean time between failure (MTBF), the key factor in component reliability, does not speak to the critical challenges that operators face.

    AI = MTBF/(MTBF + MTTR)

    Where mean time between failure (MTBF) = uptime/number of system failures

    MTTR = corrective maintenance downtime/number of system failures

    Looking at the available calculations, one should seem strikingly similar to PUE—a number that should be familiar to all designers. As with PUE, driving one term toward its ideal limit improves the result: by reducing the repair time (MTTR) in the denominator to as near zero as possible, the best availability (of 1) can be achieved.
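    A small worked example of that availability formula; the MTBF and MTTR figures below are illustrative assumptions, chosen only to show how shrinking repair time moves availability toward 1:

    ```python
    def availability(mtbf_hours, mttr_hours):
        """AI = MTBF / (MTBF + MTTR)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    def downtime_minutes_per_year(avail):
        """Expected unavailable minutes in a year (8760 h)."""
        return (1.0 - avail) * 8760 * 60

    # Same component MTBF, two different repair capabilities:
    for mttr in (4.0, 0.5):  # hours to evaluate, diagnose, repair, return to service
        a = availability(50_000.0, mttr)
        print(round(a, 6), round(downtime_minutes_per_year(a), 1))
    ```

    Cutting MTTR from four hours to thirty minutes shrinks expected annual downtime roughly eightfold, even though the component MTBF never changed—which is exactly the operator-centric point the article is making.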

    Data center characteristics and the future

    The characteristic of any data center can be derived from three factors:

    1. The IT mix and proportion used to deliver services
    2. The place and infrastructure that support that technology mix
    3. The vision of the people that support both.

    Data center infrastructure design has always tried to intelligently predict and adapt to the future IT equipment that must be supported. Rapid, continual change is expected. The recent emergence of sufficient, low-cost bandwidth to homes and businesses has created greater opportunities to serve customers.

    Reply
  19. Tomi Engdahl says:

    Slideshow:
    Data Center Fire Protection
    http://www.securityinfowatch.com/article/12232840/data-center-fire-protection

    Although there is a low probability of fire in data centers, a small fire in a single piece of electronic equipment can result in huge damage and costly interruption of IT operations and services.

    Lost revenue is a direct result of unplanned downtime. The Aberdeen Group estimates the average loss at $138,000 per hour. Indirect costs — such as lost employee productivity, the cost of time to recreate lost work, damage to a firm’s reputation or brand, and losing customers — are more difficult to quantify but perhaps just as significant. Therefore, very early warning fire detection, which can be provided by aspiration systems, is critical for identifying such small fire events as precursor smoldering or overheating equipment.

    As data center criteria evolve, fire protection technologies and strategies continue to acclimate to provide integrated solutions.

    Reply
  20. Tomi Engdahl says:

    How to choose a modular data center
    Modular data centers can be cost-effective, scalable options. There are several variables to consider, however, when comparing them to brick-and-mortar facilities.
    http://www.csemag.com/single-article/how-to-choose-a-modular-data-center/ed01a400e7a905e417fd43100aae3e0d.html

    The lack of data center capacity, low efficiency, flexibility and scalability, time to market, and limited capital are some of the major issues today’s building owners and clients have to address with their data centers. Modular data centers (MDCs) are well-suited to address these issues. Owners are also looking for “plug-and-play” installations and are turning to MDCs for the solution. And why not? MDCs can be up and running in a very short time frame and with minimal investment-while also meeting corporate criteria for sustainability. They have been used successfully since 2009 (and earlier) by Internet giants, such as Microsoft and Google, and other institutions like Purdue University.

    With that said, Microsoft has recently indicated the company is abandoning the use of their version of the MDC known as information technology pre-assembled components (IT-PACs) because they couldn’t expand the data center’s capacity fast enough. So which is it? Are MDCs the modern alternative to traditional brick-and-mortar data centers? This contradictory information may have some owners concerned and confused as they ask if MDCs are right for their building.

    MDCs versus traditional data centers

    There are many terms used to describe MDCs: containerized, self-contained, prefabricated, portable, mobile, skid, performance-optimized data center (POD), and many others. An MDC is a pre-engineered, factory-built and integrated, tested assembly that is mounted on a skid or in an enclosure with systems that are traditionally installed onsite by one or more contractors. An MDC uses standard components in a repeatable and scalable design, allowing for rapid deployment. A containerized data center incorporates the necessary power and/or cooling infrastructure, along with the information technology (IT) hardware, in a container that is built in accordance with International Organization for Standardization (ISO) specifications for shipping containers. A modular data center is not the same thing as a containerized data center; however, a containerized data center may be a component of a modular data center.

    The IT capacity of an MDC can vary significantly. Networking MDCs are typically 50 kW or less, standalone MDCs with power, mechanical, and IT systems can range up to 750 kW, and blade-packed PODs connected to redundant utilities may be 1 MW or greater.

    There is a lot of disagreement in the data center industry in regards to the performance, cost-effectiveness, time efficiency, and standardization of MDCs, and their ability to outperform traditional brick-and-mortar data centers that use a standard, repeatable, and scalable design.

    According to 451 Research, the MDC market is expected to reach $4 billion by 2018, up from $1.5 billion in 2014.

    MDCs can reduce capital investment, construction, and schedule, provide for faster deployment, and offer flexibility for changing IT technologies. MDCs also reduce the risk associated with design, such as the technical risk of the design not adhering to the requirements, the schedule risk of the design not being completed on time, and the cost risk of the final product exceeding the budget.

    Reply
  21. Tomi Engdahl says:

    42U rack segment likely to dominate global data center market til 2022: Analyst
    http://www.cablinginstall.com/articles/pt/2017/04/42u-rack-segment-likely-to-dominate-global-data-center-market-til-2022-analyst.html?cmpid=enl_cim_cimdatacenternewsletter_2017-05-09

    Beige Market Intelligence (Bangalore, India) reports that, while the market share of the 42U rack segment is likely to decline moderately from 2017 onwards, the segment nonetheless is expected to account for more than 50% market share by 2022. “The demand for taller racks will be high with steady growth in revenue,” adds the analyst.

    According to the analyst, the 42U rack segment comprised of almost 62% of the global data center rack market in 2016, followed by the 36U racks segment.

    Many data centers, which were built five to seven years ago, are now being upgraded through the installation of taller racks. In certain cases, this process involves the addition of taller racks alongside existing 36U or 42U racks. The number of servers being deployed in data centers is also projected as likely to increase, prompting data center operators to opt for taller racks. 47U, 48U, and 51U racks are expected to increase at a CAGR of more than 20% in 2018-2022, finds Beige.
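    For reference, a greater-than-20% CAGR over 2018-2022 roughly doubles the tall-rack segment; a one-line compound-growth sketch (the index base of 100 is a placeholder, not a figure from the report):

    ```python
    def project(start, cagr, years):
        """Compound growth: value after `years` at annual rate `cagr`."""
        return start * (1.0 + cagr) ** years

    # Indexing 2018 revenue at 100, four years of exactly 20% CAGR gives:
    print(round(project(100.0, 0.20, 4), 1))
    ```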

    “The market for 36U racks is likely to decline substantially on a year-over-year basis as there are few customers for these products,” continues the report’s summary. “The dominance of 42U racks will persist during the forecast period; however, some of its market share will likely be lost to taller rack products,”

    Reply
  22. Tomi Engdahl says:

    The evolution of data center infrastructure in North America
    http://www.controleng.com/single-article/the-evolution-of-data-center-infrastructure-in-north-america/9c061a12c0e8b05dee9dd7280032c95c.html

    Data centers have become increasingly important under the Industrial Internet of Things revolution. Physical and cybersecurity have to be assessed and continuously improved. What are the most crucial considerations for the IT infrastructure of a data center?

    Is the data center secure enough?

    With rising cybersecurity concerns, protecting servers and information assets in data centers is critical. Security, both physical and cyber, has to be assessed and continuously improved, and new systems may need to be put in place to strengthen the security posture of this sector. IT operations are a crucial aspect of most organizational operations around the world.

    How to cool the data centers down?

    A number of data center hosts are selecting geographic areas that take advantage of a cold climate to mitigate the extensive costs of cooling their server infrastructure. As data centers pack in more computing power, managing the significant heat that the semiconductors generate is consuming more and more of a data center’s operating costs; data center power use accounts for approximately 2% of total U.S. power consumption.

    Public, private or hybrid: What’s best for your data?

    For companies that continue to own and operate their own data center, their servers run the Internet and intranet services needed by internal users within the organization, e.g., e-mail servers, proxy servers, and domain name system (DNS) servers. Network security elements should be deployed: firewalls, virtual private network (VPN) gateways, situational awareness platforms, intrusion detection systems, etc. An on-site monitoring system for the network and applications also should be deployed to provide insight into hardware health, multi-vendor device support, automated network device discovery and quick deployment. In addition, off-site monitoring systems can be implemented to provide a holistic view of LAN and WAN performance.
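    The simplest building block of such an on-site monitoring system is a periodic reachability check against each managed device. A minimal sketch using only the standard library; the hostnames and management ports in the inventory are placeholders, not real devices:

    ```python
    import socket

    def check_reachable(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical device inventory: (hostname, management port).
    # Real DCIM/NMS tools would discover these automatically via SNMP or LLDP.
    devices = [
        ("core-sw1.example.net", 22),   # switch management SSH
        ("fw1.example.net", 443),       # firewall web UI
    ]

    status = {f"{host}:{port}": check_reachable(host, port)
              for host, port in devices}
    ```

    A production system would layer alerting, hardware-health polling (e.g. via SNMP), and trend storage on top of this kind of probe; the TCP connect check is only the liveness primitive.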

    Data center infrastructure management

    Data center infrastructure management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize monitoring, management and intelligent capacity planning of a data center’s critical systems.

    Data center boom in North America

    The combination of cheap power and cold weather puts Canada and upper regions of the United States in a similar class with Sweden and Finland, which host huge data centers for Facebook and Google.

    Reply
  23. Tomi Engdahl says:

    Article series compares and contrasts TIA-942 and Uptime Institute data center specifications
    http://www.cablinginstall.com/articles/2017/07/edward-van-leent-tia-942-vs-uptime-institute.html?cmpid=enl_cim_cimdatacenternewsletter_2017-07-11

    “started spotting a clear trend that there is a lot of misperception about data center facilities benchmarking in relation to ANSI/TIA-942 vs. Uptime. Some of those misperceptions are based on outdated information, as some customers didn’t keep up with the developments in that space, as well as deception created by some parties not representing the facts truthfully, either by ignorance or intentionally for commercial reasons [emphasis added].”

    Several of the articles prompted lively commenting on subjects including the claim that the original TIA-942 standard was based on Uptime Institute specifications; the role of personal and corporate agenda in standards development; and certifying to the TIA-942 standard. Commentary on that topic—certification to TIA-942—remains an ongoing dialogue in the “easy/difficult” article, which van Leent posted on July 4.

    Reply
