<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Software-Defined Data Centers</title>
	<atom:link href="http://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/</link>
	<description>All about electronics and circuit design</description>
	<lastBuildDate>Sat, 04 Apr 2026 21:59:57 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.14</generator>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/comment-page-1/#comment-1535467</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Wed, 01 Feb 2017 08:39:31 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20438#comment-1535467</guid>
		<description><![CDATA[Could &#039;software-defined power&#039; unlock hidden data center capacities?
http://www.cablinginstall.com/articles/pt/2017/01/could-software-defined-power-unlock-hidden-data-center-capacities.html?cmpid=enl_cim_cimdatacenternewsletter_2017-01-31

Booming demand for cloud computing and data services will only accelerate as the number of conventional computer-based users is rapidly dwarfed by the multitude of connected “things” that the Internet of Things (IoT) threatens to bring about.  

What’s needed here is a method to even out power-supply loads, perhaps by redistributing processing tasks to other servers or by pausing non-time-critical tasks or rescheduling them to quieter times of day. Other methods can address demand fluctuations by using battery-power storage to meet peak demands without impacting the load presented to the utility supply.

What characterizes all these potential solutions is the need for greater intelligence, not just in the management of the data center’s processing operations, but in the way power is managed. One potential solution is Software Defined Power (SDP), which might unlock the underutilized power capacity available within existing systems.]]></description>
		<content:encoded><![CDATA[<p>Could &#8216;software-defined power&#8217; unlock hidden data center capacities?<br />
<a href="http://www.cablinginstall.com/articles/pt/2017/01/could-software-defined-power-unlock-hidden-data-center-capacities.html?cmpid=enl_cim_cimdatacenternewsletter_2017-01-31" rel="nofollow">http://www.cablinginstall.com/articles/pt/2017/01/could-software-defined-power-unlock-hidden-data-center-capacities.html?cmpid=enl_cim_cimdatacenternewsletter_2017-01-31</a></p>
<p>Booming demand for cloud computing and data services will only accelerate as the number of conventional computer-based users is rapidly dwarfed by the multitude of connected “things” that the Internet of Things (IoT) threatens to bring about.  </p>
<p>What’s needed here is a method to even out power-supply loads, perhaps by redistributing processing tasks to other servers or by pausing non-time-critical tasks or rescheduling them to quieter times of day. Other methods can address demand fluctuations by using battery-power storage to meet peak demands without impacting the load presented to the utility supply.</p>
<p>What characterizes all these potential solutions is the need for greater intelligence, not just in the management of the data center’s processing operations, but in the way power is managed. One potential solution is Software Defined Power (SDP), which might unlock the underutilized power capacity available within existing systems.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/comment-page-1/#comment-1490541</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 16 May 2016 10:48:46 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20438#comment-1490541</guid>
		<description><![CDATA[Security the key to software-defined datacentre takeup
http://www.cloudpro.co.uk/saas/5997/security-the-key-to-software-defined-datacentre-takeup

94 per cent of executives think security is more important than cost savings

A report by HyTrust has revealed security is the key factor that will make more executives take up Software-Defined Data Centre (SDDC) services, ranking higher than cost savings, agility and performance enhancements.

A total 94 per cent of the executives questioned said better security would help companies realise the benefits of the technology. Additionally, 93 per cent agreed that the benefits of migration to virtualisation and the cloud are undeniable and quantifiable, suggesting there will be a faster drive towards SDDC infrastructure in the future.

A further 88 per cent of respondents think optimal SDDC strategies and deployment will drive up virtualisation ratios and server optimisation, while also improving finances in the organisation.

“It’s always been hard to deny the potential benefits of SDDC infrastructure, but in the past the obvious advantages have sometimes been overshadowed by concerns over security and compliance,” said Eric Chiu, president of HyTrust.

Almost all (94 per cent) think current security levels on SDDC platforms and strategies meet their organisation&#039;s needs &#039;very well&#039; or &#039;somewhat well&#039;, with only four per cent saying they don&#039;t address the needs of the company.

“What we’re seeing now is clear progress in this exciting arena, as technology solutions that balance high-quality workload security with effortless automation push back those fears,&quot; Chiu added.

&quot;The focus is now exactly where it should be: ensuring that the virtualized or cloud infrastructure enables tremendous cost savings with unparalleled agility and flexibility.”]]></description>
		<content:encoded><![CDATA[<p>Security the key to software-defined datacentre takeup<br />
<a href="http://www.cloudpro.co.uk/saas/5997/security-the-key-to-software-defined-datacentre-takeup" rel="nofollow">http://www.cloudpro.co.uk/saas/5997/security-the-key-to-software-defined-datacentre-takeup</a></p>
<p>94 per cent of executives think security is more important than cost savings</p>
<p>A report by HyTrust has revealed security is the key factor that will make more executives take up Software-Defined Data Centre (SDDC) services, ranking higher than cost savings, agility and performance enhancements.</p>
<p>A total 94 per cent of the executives questioned said better security would help companies realise the benefits of the technology. Additionally, 93 per cent agreed that the benefits of migration to virtualisation and the cloud are undeniable and quantifiable, suggesting there will be a faster drive towards SDDC infrastructure in the future.</p>
<p>A further 88 per cent of respondents think optimal SDDC strategies and deployment will drive up virtualisation ratios and server optimisation, while also improving finances in the organisation.</p>
<p>“It’s always been hard to deny the potential benefits of SDDC infrastructure, but in the past the obvious advantages have sometimes been overshadowed by concerns over security and compliance,” said Eric Chiu, president of HyTrust.</p>
<p>Almost all (94 per cent) think current security levels on SDDC platforms and strategies meet their organisation&#8217;s needs &#8216;very well&#8217; or &#8216;somewhat well&#8217;, with only four per cent saying they don&#8217;t address the needs of the company.</p>
<p>“What we’re seeing now is clear progress in this exciting arena, as technology solutions that balance high-quality workload security with effortless automation push back those fears,&#8221; Chiu added.</p>
<p>&#8220;The focus is now exactly where it should be: ensuring that the virtualized or cloud infrastructure enables tremendous cost savings with unparalleled agility and flexibility.”</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/comment-page-1/#comment-1464424</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 05 Jan 2016 11:34:39 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20438#comment-1464424</guid>
		<description><![CDATA[The Register guide to software-defined infrastructure
Our very own Trevor Pott does his best to cut through the marketing fluff
http://www.theregister.co.uk/2016/01/04/software_defined_infrastructure_explainer/

Software-Defined Infrastructure (SDI) has, in a very short time, become a completely overused term.

As the individual components of SDI have started to become automated the marketing usage of the term has approached “cloud” or “X as a Service” levels of abstracted pointlessness.

Understanding what different groups mean when they use the term “software-defined” means cutting through a lot of fluff to find it. Ultimately, this is why I chose to eventually use the term Infrastructure Endgame Machine to describe what I see as the ultimate evolution of SDI: the marketing bullshit has run so far ahead of the technical realities that describing theoretical concepts can only be done using ridiculous absolutist terminology like “endgame machine”.

I don’t think even tech marketers are willing to go there quite yet.

SDI wars: WTF is software defined infrastructure?
This time we play for ALL the marbles
http://www.theregister.co.uk/2014/10/17/sdi_wars_what_is_software_defined_infrastructure

In order to understand the problem with “software-defined” anything, let’s start the discussion with the most overused subterm of all: Software-Defined Storage (SDS).

All storage is software-defined.

SDS vendors want you to become locked into their software instead of being locked in to EMC’s combination of software and hardware. Pure and simple.

Software-Defined Networking (SDN) is another often confused term. It comes in two flavours: virtual and physical, and is often lumped together with Network Functions Virtualisation (NFV) which also comes in two flavours: telco and everyone else.

The two flavours of SDN are not mutually incompatible. Indeed, a hybrid between the two is starting to emerge as the most likely candidate, once everyone is done stabbing Cisco to death with the shiv of cutthroat margins.

Software-defined really means developer-controlled

But the thing to notice here is the bit about the “API-fiddling developer”. When you strip all of the blither, marketing speak, infighting, politics, lies, damned lies and the pestilent reek of desperation away what you have is Amazon envy. “Software-defined” means nothing more than “be as good as – or better than – Amazon at making the lives of developers easy”.

That’s it, right there, ladies and gentlemen. The holy grail of modern tech CxO thinking. It’s been nearly 10 years since AWS launched and the movers and shakers in our industry still can’t come up with anything better. Software-defined X, the Docker/containerisation love affair, the “rise of the API”, keynotes about the irrelevance of open source and the replacement of it with “open standards” … all of it is nothing more than the perpetual, frenetic and frenzied attempt to be like Amazon.

Developers are not engineers

Where it all goes wrong – and it has – is that while many engineers are developers, not all developers are engineers. In the “bad old days”, we had a separation of powers. In a well-balanced IT department no one idiot could ruin everything for everyone else.

A virtual admin with a burning idea would need to get the network, storage, OS, application and security guys to all sign off on it.

The new way is to dispense with all of that and let the devs run the asylum. Hell, most software teams have almost entirely done away with testing and quality assurance. It’s common practice for even the mightiest software houses to throw beta software out as “release” and let the customers beat through the bugs in production.

It’s a rare company that – like Netflix – invests in building a chaos monkey. Rarer still are those still building software using proper engineering principles.

Software-defined change management

With the exception of a handful of Israeli startups run by terrifying ex-Mossad InfoSec types, these are the sorts of questions and discussions that make software-defined X startups very, very angry. They really don’t want to talk about things like rate limiting change requests from a given authentication key, how one might implement mitigation via segmentation or automated incident response.

There’s money to be made and any concerns about privacy, security or data sovereignty are to be viciously stamped out. The hell of it is … they’re not wrong.

Change management is seen as a problematic impediment by pretty much anyone who isn’t a traditional infrastructure nerd or a security specialist. Developers, sales, marketing and most executives want what they want and they want it now. If IT can’t deliver, they’ll go do their thing in Amazon. Every time that happens that is money those startups – or even the staid old guard – aren’t getting.

Eventually, the software-defined crew will realise that if they are going to be around for more than a single refresh cycle they need to put a truly unholy amount of time and effort into idiot-proofing their offerings. Those that don’t won’t be around long.

When someone talks about “software-defined”, that’s what they’re trying to be. Or, at least, they’re trying to be some small piece of that puzzle. If they do talk about “software-defined”, however, take the time to ask them hard questions about security, privacy and data sovereignty. After all, in a “software-defined” world, those sorts of considerations are now automated. Welcome to the future.]]></description>
		<content:encoded><![CDATA[<p>The Register guide to software-defined infrastructure<br />
Our very own Trevor Pott does his best to cut through the marketing fluff<br />
<a href="http://www.theregister.co.uk/2016/01/04/software_defined_infrastructure_explainer/" rel="nofollow">http://www.theregister.co.uk/2016/01/04/software_defined_infrastructure_explainer/</a></p>
<p>Software-Defined Infrastructure (SDI) has, in a very short time, become a completely overused term.</p>
<p>As the individual components of SDI have started to become automated the marketing usage of the term has approached “cloud” or “X as a Service” levels of abstracted pointlessness.</p>
<p>Understanding what different groups mean when they use the term “software-defined” means cutting through a lot of fluff to find it. Ultimately, this is why I chose to eventually use the term Infrastructure Endgame Machine to describe what I see as the ultimate evolution of SDI: the marketing bullshit has run so far ahead of the technical realities that describing theoretical concepts can only be done using ridiculous absolutist terminology like “endgame machine”.</p>
<p>I don’t think even tech marketers are willing to go there quite yet.</p>
<p>SDI wars: WTF is software defined infrastructure?<br />
This time we play for ALL the marbles<br />
<a href="http://www.theregister.co.uk/2014/10/17/sdi_wars_what_is_software_defined_infrastructure" rel="nofollow">http://www.theregister.co.uk/2014/10/17/sdi_wars_what_is_software_defined_infrastructure</a></p>
<p>In order to understand the problem with “software-defined” anything, let’s start the discussion with the most overused subterm of all: Software-Defined Storage (SDS).</p>
<p>All storage is software-defined.</p>
<p>SDS vendors want you to become locked into their software instead of being locked in to EMC’s combination of software and hardware. Pure and simple.</p>
<p>Software-Defined Networking (SDN) is another often confused term. It comes in two flavours: virtual and physical, and is often lumped together with Network Functions Virtualisation (NFV) which also comes in two flavours: telco and everyone else.</p>
<p>The two flavours of SDN are not mutually incompatible. Indeed, a hybrid between the two is starting to emerge as the most likely candidate, once everyone is done stabbing Cisco to death with the shiv of cutthroat margins.</p>
<p>Software-defined really means developer-controlled</p>
<p>But the thing to notice here is the bit about the “API-fiddling developer”. When you strip all of the blither, marketing speak, infighting, politics, lies, damned lies and the pestilent reek of desperation away what you have is Amazon envy. “Software-defined” means nothing more than “be as good as – or better than – Amazon at making the lives of developers easy”.</p>
<p>That’s it, right there, ladies and gentlemen. The holy grail of modern tech CxO thinking. It’s been nearly 10 years since AWS launched and the movers and shakers in our industry still can’t come up with anything better. Software-defined X, the Docker/containerisation love affair, the “rise of the API”, keynotes about the irrelevance of open source and the replacement of it with “open standards” … all of it is nothing more than the perpetual, frenetic and frenzied attempt to be like Amazon.</p>
<p>Developers are not engineers</p>
<p>Where it all goes wrong – and it has – is that while many engineers are developers, not all developers are engineers. In the “bad old days”, we had a separation of powers. In a well-balanced IT department no one idiot could ruin everything for everyone else.</p>
<p>A virtual admin with a burning idea would need to get the network, storage, OS, application and security guys to all sign off on it.</p>
<p>The new way is to dispense with all of that and let the devs run the asylum. Hell, most software teams have almost entirely done away with testing and quality assurance. It’s common practice for even the mightiest software houses to throw beta software out as “release” and let the customers beat through the bugs in production.</p>
<p>It’s a rare company that – like Netflix – invests in building a chaos monkey. Rarer still are those still building software using proper engineering principles.</p>
<p>Software-defined change management</p>
<p>With the exception of a handful of Israeli startups run by terrifying ex-Mossad InfoSec types, these are the sorts of questions and discussions that make software-defined X startups very, very angry. They really don’t want to talk about things like rate limiting change requests from a given authentication key, how one might implement mitigation via segmentation or automated incident response.</p>
<p>There’s money to be made and any concerns about privacy, security or data sovereignty are to be viciously stamped out. The hell of it is … they’re not wrong.</p>
<p>Change management is seen as a problematic impediment by pretty much anyone who isn’t a traditional infrastructure nerd or a security specialist. Developers, sales, marketing and most executives want what they want and they want it now. If IT can’t deliver, they’ll go do their thing in Amazon. Every time that happens that is money those startups – or even the staid old guard – aren’t getting.</p>
<p>Eventually, the software-defined crew will realise that if they are going to be around for more than a single refresh cycle they need to put a truly unholy amount of time and effort into idiot-proofing their offerings. Those that don’t won’t be around long.</p>
<p>When someone talks about “software-defined”, that’s what they’re trying to be. Or, at least, they’re trying to be some small piece of that puzzle. If they do talk about “software-defined”, however, take the time to ask them hard questions about security, privacy and data sovereignty. After all, in a “software-defined” world, those sorts of considerations are now automated. Welcome to the future.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/comment-page-1/#comment-1464422</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 05 Jan 2016 11:29:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20438#comment-1464422</guid>
		<description><![CDATA[The Register guide to software-defined infrastructure
Our very own Trevor Pott does his best to cut through the marketing fluff
http://www.theregister.co.uk/2016/01/04/software_defined_infrastructure_explainer/

Software-Defined Infrastructure (SDI) has, in a very short time, become a completely overused term.

As the individual components of SDI have started to become automated the marketing usage of the term has approached &quot;cloud&quot; or &quot;X as a Service&quot; levels of abstracted pointlessness.

Understanding what different groups mean when they use the term &quot;software-defined&quot; means cutting through a lot of fluff to find it. Ultimately, this is why I chose to eventually use the term Infrastructure Endgame Machine to describe what I see as the ultimate evolution of SDI: the marketing bullshit has run so far ahead of the technical realities that describing theoretical concepts can only be done using ridiculous absolutist terminology like &quot;endgame machine&quot;.

I don&#039;t think even tech marketers are willing to go there quite yet.

SDI wars: WTF is software defined infrastructure?
This time we play for ALL the marbles
http://www.theregister.co.uk/2014/10/17/sdi_wars_what_is_software_defined_infrastructure]]></description>
		<content:encoded><![CDATA[<p>The Register guide to software-defined infrastructure<br />
Our very own Trevor Pott does his best to cut through the marketing fluff<br />
<a href="http://www.theregister.co.uk/2016/01/04/software_defined_infrastructure_explainer/" rel="nofollow">http://www.theregister.co.uk/2016/01/04/software_defined_infrastructure_explainer/</a></p>
<p>Software-Defined Infrastructure (SDI) has, in a very short time, become a completely overused term.</p>
<p>As the individual components of SDI have started to become automated the marketing usage of the term has approached &#8220;cloud&#8221; or &#8220;X as a Service&#8221; levels of abstracted pointlessness.</p>
<p>Understanding what different groups mean when they use the term &#8220;software-defined&#8221; means cutting through a lot of fluff to find it. Ultimately, this is why I chose to eventually use the term Infrastructure Endgame Machine to describe what I see as the ultimate evolution of SDI: the marketing bullshit has run so far ahead of the technical realities that describing theoretical concepts can only be done using ridiculous absolutist terminology like &#8220;endgame machine&#8221;.</p>
<p>I don&#8217;t think even tech marketers are willing to go there quite yet.</p>
<p>SDI wars: WTF is software defined infrastructure?<br />
This time we play for ALL the marbles<br />
<a href="http://www.theregister.co.uk/2014/10/17/sdi_wars_what_is_software_defined_infrastructure" rel="nofollow">http://www.theregister.co.uk/2014/10/17/sdi_wars_what_is_software_defined_infrastructure</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/comment-page-1/#comment-1464199</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 04 Jan 2016 12:39:14 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20438#comment-1464199</guid>
		<description><![CDATA[OPENCORES: Tools
http://opencores.org/opencores,tools

There are plenty of good open-source EDA tools available. The use of such tools makes it easier to collaborate at the opencores site. An IP that has readily available scripts for an open-source HDL simulator makes it easier for another person to verify and possibly update that particular core. A test environment that is built for a commercial simulator that only a limited number of people have access to makes verification more complicated.]]></description>
		<content:encoded><![CDATA[<p>OPENCORES: Tools<br />
<a href="http://opencores.org/opencores,tools" rel="nofollow">http://opencores.org/opencores,tools</a></p>
<p>There are plenty of good open-source EDA tools available. The use of such tools makes it easier to collaborate at the opencores site. An IP that has readily available scripts for an open-source HDL simulator makes it easier for another person to verify and possibly update that particular core. A test environment that is built for a commercial simulator that only a limited number of people have access to makes verification more complicated.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/comment-page-1/#comment-1464197</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 04 Jan 2016 12:26:09 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20438#comment-1464197</guid>
		<description><![CDATA[Data center power can be software defined too
http://www.datacenterdynamics.com/critical-environment/data-center-power-can-be-software-defined-too/84915.fullarticle

More and more data is being collected, stored and transacted today thanks to the internet, social networking, smartphones and credit cards. All this activity takes place in real time, so application availability is more important than ever and reliability requirements are increasingly stringent. Much application downtime today is caused by power problems, either in a data center’s power delivery network or the utility distribution grid. This is likely to become even more so as reliability of the electrical grid continues to deteriorate.

Part of the reason power is such a frequent cause of application downtime is the effort to abstract IT hardware from applications through virtualization and “software defined data center” technologies. While abstracting servers, storage and networking, the concept of software defined infrastructure has ignored power.

It is a purely IT-centric view of the data center. Standing separately are facilities staff who operate building management systems and other infrastructure components. If you want an integrated management environment for this infrastructure you get what is called data center infrastructure management (DCIM) software.

Software defined data center technologies and DCIM software are valuable tools for their respective purposes, but neither addresses power-related downtime. This problem is generally addressed by setting up multiple geographically dispersed, often fully redundant data centers, configured for either hot or cold backup and failover. But automated failover and recovery is still very often plagued by problems.

Application failover to another site requires manual intervention nearly 80% of the time. A study by Symantec found that 25% of disaster recovery failover tests fail completely even before getting to the manual part.

Today, software defined data center and DCIM solutions do not address the relationship between applications and power. Power should be the next resource to become software defined. While you can use software to allocate IT resources, the same is not yet possible with power. You cannot dynamically adjust the amount of power going to a rack or an outlet, but you can dynamically change the amount of power consumed by IT gear plugged into an outlet by shifting the workload. Software defined power involves adjusting server capacity to accommodate workloads and thereby indirectly managing the power consumed.

The approach could combine power capacity management with disaster recovery procedures and other functions, such as participation in utility demand response programs. 

Because load shifting does not occur until availability of the destination has been verified, the process is risk free, and when disaster does strike, the chances of smooth transition are dramatically improved. 

Implementation of software defined power brings together application monitoring, IT management, DCIM, power monitoring, enterprise-scale automation, analytics and energy market intelligence.]]></description>
		<content:encoded><![CDATA[<p>Data center power can be software defined too<br />
<a href="http://www.datacenterdynamics.com/critical-environment/data-center-power-can-be-software-defined-too/84915.fullarticle" rel="nofollow">http://www.datacenterdynamics.com/critical-environment/data-center-power-can-be-software-defined-too/84915.fullarticle</a></p>
<p>More and more data is being collected, stored and transacted today thanks to the internet, social networking, smartphones and credit cards. All this activity takes place in real time, so application availability is more important than ever and reliability requirements are increasingly stringent. Much application downtime today is caused by power problems, either in a data center’s power delivery network or the utility distribution grid. This is likely to become even more so as reliability of the electrical grid continues to deteriorate.</p>
<p>Part of the reason power is such a frequent cause of application downtime is the effort to abstract IT hardware from applications through virtualization and “software defined data center” technologies. While abstracting servers, storage and networking, the concept of software defined infrastructure has ignored power.</p>
<p>It is a purely IT-centric view of the data center. Standing separately are facilities staff who operate building management systems and other infrastructure components. If you want an integrated management environment for this infrastructure you get what is called data center infrastructure management (DCIM) software.</p>
<p>Software defined data center technologies and DCIM software are valuable tools for their respective purposes, but neither addresses power-related downtime. This problem is generally addressed by setting up multiple geographically dispersed, often fully redundant data centers, configured for either hot or cold backup and failover. But automated failover and recovery is still very often plagued by problems.</p>
<p>Application failover to another site requires manual intervention nearly 80% of the time. A study by Symantec found that 25% of disaster recovery failover tests fail completely even before getting to the manual part.</p>
<p>Today, software defined data center and DCIM solutions do not address the relationship between applications and power. Power should be the next resource to become software defined. While you can use software to allocate IT resources, the same is not yet possible with power. You cannot dynamically adjust the amount of power going to a rack or an outlet, but you can dynamically change the amount of power consumed by IT gear plugged into an outlet by shifting the workload. Software defined power involves adjusting server capacity to accommodate workloads and thereby indirectly managing the power consumed.</p>
<p>The approach could combine power capacity management with disaster recovery procedures and other functions, such as participation in utility demand response programs. </p>
<p>Because load shifting does not occur until availability of the destination has been verified, the process is risk free, and when disaster does strike, the chances of smooth transition are dramatically improved. </p>
<p>Implementation of software defined power brings together application monitoring, IT management, DCIM, power monitoring, enterprise-scale automation, analytics and energy market intelligence.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/comment-page-1/#comment-1446708</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 26 Oct 2015 12:37:09 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20438#comment-1446708</guid>
		<description><![CDATA[&#039;Composable infrastructure&#039;: new servers give software more to define
Cisco, Intel, IBM and HP are bridging virtualisation and the software-defined data centre
http://www.theregister.co.uk/2015/10/26/composable_infrastructure_servers_that_give_software_more_to_define/

“Composable infrastructure” is a term you&#039;re about to start hearing a lot more, and the good news is that while it is marketing jargon, behind the shine are pleasing advances in server design that will push server virtualisation and private clouds forward.

The new term has its roots in server virtualisation, which is of course an eminently sensible idea that anyone sensible uses whenever possible. Intel and AMD both gave server virtualisation a mighty shunt forward with their respective virtualisation extensions that equipped their CPUs with the smarts to help multiple virtual machines to do their thing at once.

Servers have changed shape and components in the years since server virtualisation boomed. But now they&#039;re changing more profoundly.

Exhibit A is the M-series of Cisco&#039;s UCS servers, which offer shared storage, networking, cooling and power to “cartridges” that contain RAM and CPU. Cisco&#039;s idea is that instead of having blade servers with dedicated resources, the M-series allows users to assemble components into servers with their preferred configurations, with less overhead than is required to operate virtual machines that span different boxes or touch a SAN for resources.

In a composable infrastructure world, APIs make it possible for code to whip up the servers it wants. That&#039;s important, because composable infrastructure is seen as a bridge between server virtualisation and the software-defined data centre. The thinking is that infrastructure that allows itself to be configured gives software more to define, which is probably a good thing.

HP has announced it plans to get into the composable caper and like Cisco uses the “composable infrastructure” moniker.]]></description>
		<content:encoded><![CDATA[<p>&#8216;Composable infrastructure&#8217;: new servers give software more to define<br />
Cisco, Intel, IBM and HP are bridging virtualisation and the software-defined data centre<br />
<a href="http://www.theregister.co.uk/2015/10/26/composable_infrastructure_servers_that_give_software_more_to_define/" rel="nofollow">http://www.theregister.co.uk/2015/10/26/composable_infrastructure_servers_that_give_software_more_to_define/</a></p>
<p>“Composable infrastructure” is a term you&#8217;re about to start hearing a lot more, and the good news is that while it is marketing jargon, behind the shine are pleasing advances in server design that will push server virtualisation and private clouds forward.</p>
<p>The new term has its roots in server virtualisation, which is of course an eminently sensible idea that anyone sensible uses whenever possible. Intel and AMD both gave server virtualisation a mighty shunt forward with their respective virtualisation extensions that equipped their CPUs with the smarts to help multiple virtual machines to do their thing at once.</p>
<p>Servers have changed shape and components in the years since server virtualisation boomed. But now they&#8217;re changing more profoundly.</p>
<p>Exhibit A is the M-series of Cisco&#8217;s UCS servers, which offer shared storage, networking, cooling and power to “cartridges” that contain RAM and CPU. Cisco&#8217;s idea is that instead of having blade servers with dedicated resources, the M-series allows users to assemble components into servers with their preferred configurations, with less overhead than is required to operate virtual machines that span different boxes or touch a SAN for resources.</p>
<p>In a composable infrastructure world, APIs make it possible for code to whip up the servers it wants. That&#8217;s important, because composable infrastructure is seen as a bridge between server virtualisation and the software-defined data centre. The thinking is that infrastructure that allows itself to be configured gives software more to define, which is probably a good thing.</p>
<p>HP has announced it plans to get into the composable caper and like Cisco uses the “composable infrastructure” moniker.</p>
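<p>The "APIs make it possible for code to whip up the servers it wants" idea can be sketched as follows. This is a hypothetical illustration only; the pool structure and function names are invented and do not correspond to Cisco UCS, HP or any real composable-infrastructure API.</p>

```python
# Hypothetical illustration of composable infrastructure: code assembles a
# server from pooled CPU/RAM components instead of claiming a fixed blade,
# while storage, networking, cooling and power stay shared.
import json

def compose_server(pool, cpus, ram_gb):
    """Claim CPU and RAM from a shared pool; return a composed server spec."""
    if pool["cpus"] < cpus or pool["ram_gb"] < ram_gb:
        raise RuntimeError("pool exhausted")
    pool["cpus"] -= cpus
    pool["ram_gb"] -= ram_gb
    return {"cpus": cpus, "ram_gb": ram_gb,
            "storage": "shared", "network": "shared"}

pool = {"cpus": 64, "ram_gb": 1024}
web = compose_server(pool, cpus=8, ram_gb=64)    # a small web-tier server
db = compose_server(pool, cpus=16, ram_gb=256)   # a larger database server
print(json.dumps(pool))  # remaining pool capacity
```

<p>The point of the sketch is the bridge the article describes: because the hardware exposes itself as a pool, orchestration software gains one more layer to define.</p>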
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/comment-page-1/#comment-1391066</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Fri, 15 May 2015 10:50:14 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20438#comment-1391066</guid>
		<description><![CDATA[Software-defined storage
http://en.wikipedia.org/wiki/Software-defined_storage

Software-defined storage (SDS) is an evolving concept for computer data storage software to manage policy-based provisioning and management of data storage independent of hardware. Software-defined storage definitions typically include a form of storage virtualization to separate the storage hardware from the software that manages the storage infrastructure. The software enabling a software-defined storage environment may also provide policy management for feature options such as deduplication, replication, thin provisioning, snapshots and backup. SDS definitions are sometimes compared with those of Software-based Storage.

By consensus and early advocacy,[1] SDS software is separate from the hardware it is managing. That hardware may or may not have abstraction, pooling, or automation software embedded. This philosophical span has made software-defined storage difficult to categorize. When implemented as software only in conjunction with commodity servers with internal disks, it may suggest software such as a virtual or global file system. If it is software layered over sophisticated large storage arrays, it suggests software such as storage virtualization or storage resource management, categories of products that address separate and different problems.

Based on concepts similar to those of software-defined networking (SDN),[4] interest in SDS rose after VMware acquired Nicira (known for &quot;software-defined networking&quot;) for over a billion dollars in 2012.[5][6]

SDS - software-defined storage
http://www.webopedia.com/TERM/S/software-defined_storage_sds.html

Storage infrastructure that is managed and automated by intelligent software as opposed to by the storage hardware itself. In this way, the pooled storage infrastructure resources in a software-defined storage (SDS) environment can be automatically and efficiently allocated to match the application needs of an enterprise.

Separating the Storage Hardware from the Software

By separating the storage hardware from the software that manages the storage infrastructure, software-defined storage enables enterprises to purchase heterogeneous storage hardware without having to worry as much about issues such as interoperability, under- or over-utilization of specific storage resources, and manual oversight of storage resources.

The software that enables a software-defined storage environment can provide functionality such as deduplication, replication, thin provisioning, snapshots and other backup and restore capabilities across a wide range of server hardware components. The key benefits of software-defined storage over traditional storage are increased flexibility, automated management and cost efficiency.]]></description>
		<content:encoded><![CDATA[<p>Software-defined storage<br />
<a href="http://en.wikipedia.org/wiki/Software-defined_storage" rel="nofollow">http://en.wikipedia.org/wiki/Software-defined_storage</a></p>
<p>Software-defined storage (SDS) is an evolving concept for computer data storage software to manage policy-based provisioning and management of data storage independent of hardware. Software-defined storage definitions typically include a form of storage virtualization to separate the storage hardware from the software that manages the storage infrastructure. The software enabling a software-defined storage environment may also provide policy management for feature options such as deduplication, replication, thin provisioning, snapshots and backup. SDS definitions are sometimes compared with those of Software-based Storage.</p>
<p>By consensus and early advocacy,[1] SDS software is separate from the hardware it is managing. That hardware may or may not have abstraction, pooling, or automation software embedded. This philosophical span has made software-defined storage difficult to categorize. When implemented as software only in conjunction with commodity servers with internal disks, it may suggest software such as a virtual or global file system. If it is software layered over sophisticated large storage arrays, it suggests software such as storage virtualization or storage resource management, categories of products that address separate and different problems.</p>
<p>Based on concepts similar to those of software-defined networking (SDN),[4] interest in SDS rose after VMware acquired Nicira (known for &#8220;software-defined networking&#8221;) for over a billion dollars in 2012.[5][6]</p>
<p>SDS &#8211; software-defined storage<br />
<a href="http://www.webopedia.com/TERM/S/software-defined_storage_sds.html" rel="nofollow">http://www.webopedia.com/TERM/S/software-defined_storage_sds.html</a></p>
<p>Storage infrastructure that is managed and automated by intelligent software as opposed to by the storage hardware itself. In this way, the pooled storage infrastructure resources in a software-defined storage (SDS) environment can be automatically and efficiently allocated to match the application needs of an enterprise.</p>
<p>Separating the Storage Hardware from the Software</p>
<p>By separating the storage hardware from the software that manages the storage infrastructure, software-defined storage enables enterprises to purchase heterogeneous storage hardware without having to worry as much about issues such as interoperability, under- or over-utilization of specific storage resources, and manual oversight of storage resources.</p>
<p>The software that enables a software-defined storage environment can provide functionality such as deduplication, replication, thin provisioning, snapshots and other backup and restore capabilities across a wide range of server hardware components. The key benefits of software-defined storage over traditional storage are increased flexibility, automated management and cost efficiency.</p>
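<p>Policy-based provisioning, the core SDS idea both excerpts describe, can be sketched as follows. The policy names, feature flags and backend records below are invented for illustration; the point is only that the policy, not the hardware, decides which features (replication, thin provisioning, snapshots) a volume gets, and that heterogeneous backends are interchangeable.</p>

```python
# Hypothetical sketch of policy-based provisioning in software-defined storage:
# volume features come from a named policy, and placement works across
# heterogeneous hardware backends without the caller caring which is which.
POLICIES = {
    "gold":   {"replicas": 3, "thin": True, "snapshots": True},
    "bronze": {"replicas": 1, "thin": True, "snapshots": False},
}

def provision(name, size_gb, policy, backends):
    """Place a volume on any backend with free capacity; apply policy features."""
    spec = POLICIES[policy]
    for b in backends:
        if b["free_gb"] >= size_gb:
            b["free_gb"] -= size_gb
            return {"name": name, "size_gb": size_gb,
                    "backend": b["id"], **spec}
    raise RuntimeError("no capacity in any backend")

backends = [{"id": "commodity-1", "free_gb": 500},
            {"id": "array-1", "free_gb": 2000}]
vol = provision("logs", 600, "bronze", backends)
```

<p>Here the 600 GB volume skips the full commodity box and lands on the array, yet the caller only named a policy; that separation of the management software from the hardware is what the definitions above are getting at.</p>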
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/comment-page-1/#comment-1384963</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 04 May 2015 07:45:16 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20438#comment-1384963</guid>
		<description><![CDATA[Powering Converged Infrastructure
http://powerquality.eaton.com/About-Us/Markets/Converged-Infrastructure/Default.asp
http://www.google.fi/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0CB8QFjAA&amp;url=http%3A%2F%2Flit.powerware.com%2Fll_download.asp%3Ffile%3DWP_PoweringConvergedInfrastructures.pdf&amp;ei=dSJHVdzFFoH7sgGPmIGYAQ&amp;usg=AFQjCNFCqqOXpbmrlYiMNs5ookIS64iggw&amp;bvm=bv.92291466,d.bGg

Converged infrastructures utilize virtualization and automation to achieve high levels of availability in a cost-effective manner. In fact, converged infrastructures are so resilient that some IT managers believe they can be safely and reliably operated without the assistance of uninterruptible power systems (UPSs), power distribution units (PDUs) and other power protection technologies. In truth, however, such beliefs are dangerously mistaken.

What is converged infrastructure?
Simply put, converged infrastructures are pre-integrated hardware and software bundles designed to reduce the cost and complexity of deploying and maintaining virtualized solutions. Most converged infrastructure products include these four elements:
1. Server hardware
2. Storage hardware
3. Networking hardware
4. Software (including a hypervisor, operating system, automated management tools and sometimes email systems, collaboration tools or other applications)

Why use converged infrastructure?
According to analyst firm IDC, the worldwide market for converged infrastructure solutions will expand at a compound annual growth rate of 40 percent between 2012 and 2016, rising from $4.6 billion to $17.8 billion. Sales of non-converged server, storage and networking hardware, by contrast, will increase at a CAGR of just a little over two percent over the same period. Benefits like the following help explain why adoption of converged infrastructures is rising so sharply:
Faster, simpler deployment. Converged infrastructures are pre-integrated and tested, so they take far less time to install and configure. According to a study from analyst firm IDC, in fact, Hewlett-Packard converged infrastructures typically enable businesses to cut application provisioning time by 75 percent.
Lower costs. Converged infrastructure products usually sell for less than the combined cost of their individual components, enabling businesses to conserve capital when rolling out new solutions. Furthermore, the automated management software included with most converged infrastructure offerings decreases operating expenses by simplifying system administration. Indeed, the HP converged infrastructure users studied by IDC shifted over 50 percent of their IT resources from maintenance to innovation on average.
Enhanced agility. Thanks to their ease of deployment, affordability and scalability, converged infrastructures enable companies to add new IT capabilities or augment existing ones more quickly and cost-effectively.

Power protection equipment plays a key role in automatically triggering virtual machine migration processes during utility outages. Converged infrastructures execute automated failover routines only when informed that there’s a reason to do so. During utility failures, network-connected UPSs can provide that information by notifying downstream devices that power is no longer available. At companies without UPSs, technicians must initiate the virtual machine transfer processes manually, which is far slower and less reliable.

A converged infrastructure’s failover features can’t function without electrical power.

Converged infrastructures are vulnerable to power spikes and other electrical disturbances.

The fifth element of converged infrastructures: Intelligent power protection
Power distribution units suitable for use with converged infrastructures do more than simply distribute power.

Management software
Most converged infrastructure solutions come with built-in system management software that helps make them highly resilient. Adding VM-centric power management software increases resilience even further by enabling technicians to do the following:
Manage all of their converged IT and power protection assets through a single console.]]></description>
		<content:encoded><![CDATA[<p>Powering Converged Infrastructure<br />
<a href="http://powerquality.eaton.com/About-Us/Markets/Converged-Infrastructure/Default.asp" rel="nofollow">http://powerquality.eaton.com/About-Us/Markets/Converged-Infrastructure/Default.asp</a><br />
<a href="http://www.google.fi/url?sa=t&#038;rct=j&#038;q=&#038;esrc=s&#038;source=web&#038;cd=1&#038;cad=rja&#038;uact=8&#038;ved=0CB8QFjAA&#038;url=http%3A%2F%2Flit.powerware.com%2Fll_download.asp%3Ffile%3DWP_PoweringConvergedInfrastructures.pdf&#038;ei=dSJHVdzFFoH7sgGPmIGYAQ&#038;usg=AFQjCNFCqqOXpbmrlYiMNs5ookIS64iggw&#038;bvm=bv.92291466,d.bGg" rel="nofollow">http://www.google.fi/url?sa=t&#038;rct=j&#038;q=&#038;esrc=s&#038;source=web&#038;cd=1&#038;cad=rja&#038;uact=8&#038;ved=0CB8QFjAA&#038;url=http%3A%2F%2Flit.powerware.com%2Fll_download.asp%3Ffile%3DWP_PoweringConvergedInfrastructures.pdf&#038;ei=dSJHVdzFFoH7sgGPmIGYAQ&#038;usg=AFQjCNFCqqOXpbmrlYiMNs5ookIS64iggw&#038;bvm=bv.92291466,d.bGg</a></p>
<p>Converged infrastructures utilize virtualization and automation to achieve high levels of availability in a cost-effective manner. In fact, converged infrastructures are so resilient that some IT managers believe they can be safely and reliably operated without the assistance of uninterruptible power systems (UPSs), power distribution units (PDUs) and other power protection technologies. In truth, however, such beliefs are dangerously mistaken.</p>
<p>What is converged infrastructure?<br />
Simply put, converged infrastructures are pre-integrated hardware and software bundles designed to reduce the cost and complexity of deploying and maintaining virtualized solutions. Most converged infrastructure products include these four elements:<br />
1. Server hardware<br />
2. Storage hardware<br />
3. Networking hardware<br />
4. Software (including a hypervisor, operating system, automated management tools and sometimes email systems, collaboration tools or other applications)</p>
<p>Why use converged infrastructure?<br />
According to analyst firm IDC, the worldwide market for converged infrastructure solutions will expand at a compound annual growth rate of 40 percent between 2012 and 2016, rising from $4.6 billion to $17.8 billion. Sales of non-converged server, storage and networking hardware, by contrast, will increase at a CAGR of just a little over two percent over the same period. Benefits like the following help explain why adoption of converged infrastructures is rising so sharply:<br />
Faster, simpler deployment. Converged infrastructures are pre-integrated and tested, so they take far less time to install and configure. According to a study from analyst firm IDC, in fact, Hewlett-Packard converged infrastructures typically enable businesses to cut application provisioning time by 75 percent.<br />
Lower costs. Converged infrastructure products usually sell for less than the combined cost of their individual components, enabling businesses to conserve capital when rolling out new solutions. Furthermore, the automated management software included with most converged infrastructure offerings decreases operating expenses by simplifying system administration. Indeed, the HP converged infrastructure users studied by IDC shifted over 50 percent of their IT resources from maintenance to innovation on average.<br />
Enhanced agility. Thanks to their ease of deployment, affordability and scalability, converged infrastructures enable companies to add new IT capabilities or augment existing ones more quickly and cost-effectively.</p>
<p>Power protection equipment plays a key role in automatically triggering virtual machine migration processes during utility outages. Converged infrastructures execute automated failover routines only when informed that there’s a reason to do so. During utility failures, network-connected UPSs can provide that information by notifying downstream devices that power is no longer available. At companies without UPSs, technicians must initiate the virtual machine transfer processes manually, which is far slower and less reliable.</p>
<p>A converged infrastructure’s failover features can’t function without electrical power.</p>
<p>Converged infrastructures are vulnerable to power spikes and other electrical disturbances.</p>
<p>The fifth element of converged infrastructures: Intelligent power protection<br />
Power distribution units suitable for use with converged infrastructures do more than simply distribute power.</p>
<p>Management software<br />
Most converged infrastructure solutions come with built-in system management software that helps make them highly resilient. Adding VM-centric power management software increases resilience even further by enabling technicians to do the following:<br />
Manage all of their converged IT and power protection assets through a single console.</p>
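<p>The UPS-triggered failover the article describes can be sketched as an event handler. This is a hypothetical illustration: the event fields, the 15-minute runtime threshold and the site name are all invented, and real VM-centric power management software would use its platform's actual migration API.</p>

```python
# Hypothetical sketch: a network-connected UPS reports a utility outage, and
# management software drains the site by migrating VMs while battery remains.
def on_ups_event(event, vms, migrate):
    """Dispatch on a UPS status event; `migrate(vm, site)` moves one VM."""
    migrated = []
    # Act only when on battery with limited runtime left (threshold invented).
    if event["status"] == "on_battery" and event["runtime_min"] < 15:
        for vm in vms:
            migrate(vm, "backup-site")
            migrated.append(vm)
    return migrated

log = []
moved = on_ups_event(
    {"status": "on_battery", "runtime_min": 9},   # UPS notification
    ["web-01", "db-01"],                          # VMs on the affected host
    lambda vm, site: log.append((vm, site)),      # stand-in migration call
)
```

<p>Without the UPS notification there is no event to dispatch on, which is the article's point: the automated failover routine only runs when something tells it power is failing.</p>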
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/24/software-defined-data-centers/comment-page-1/#comment-1384944</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 04 May 2015 06:30:04 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20438#comment-1384944</guid>
		<description><![CDATA[Meet Evolving Business Demands with Software-Defined Storage
https://webinar.informationweek.com/19710?keycode=IKWE02

The data storage landscape is becoming increasingly complex. Firms are looking for effective and quick ways to upgrade their storage infrastructure so that they can respond with services that win and retain customers – yet the status quo of storage provisioning continues to be slow and clumsy. Businesses are beginning to discover the advantages of a software-defined storage approach – one that accelerates the delivery of storage resources in today’s complex and dynamic infrastructures.

Why 55% of technology decision makers are expressing interest in or starting to implement software-defined storage

How distributed systems technology transforms commodity server infrastructure into a scalable, resilient, self-service storage platform]]></description>
		<content:encoded><![CDATA[<p>Meet Evolving Business Demands with Software-Defined Storage<br />
<a href="https://webinar.informationweek.com/19710?keycode=IKWE02" rel="nofollow">https://webinar.informationweek.com/19710?keycode=IKWE02</a></p>
<p>The data storage landscape is becoming increasingly complex. Firms are looking for effective and quick ways to upgrade their storage infrastructure so that they can respond with services that win and retain customers – yet the status quo of storage provisioning continues to be slow and clumsy. Businesses are beginning to discover the advantages of a software-defined storage approach – one that accelerates the delivery of storage resources in today’s complex and dynamic infrastructures.</p>
<p>Why 55% of technology decision makers are expressing interest in or starting to implement software-defined storage</p>
<p>How distributed systems technology transforms commodity server infrastructure into a scalable, resilient, self-service storage platform</p>
]]></content:encoded>
	</item>
</channel>
</rss>
