<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Petabytes on budget 2.0</title>
	<atom:link href="http://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/</link>
	<description>All about electronics and circuit design</description>
	<lastBuildDate>Wed, 29 Apr 2026 06:53:58 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.14</generator>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/comment-page-1/#comment-1317017</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Wed, 17 Dec 2014 22:16:14 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=15050#comment-1317017</guid>
		<description><![CDATA[Backblaze&#039;s 6 TB Hard Drive Face-Off
http://hardware.slashdot.org/story/14/12/17/1644205/backblazes-6-tb-hard-drive-face-off

Backblaze is transitioning from using 4 TB hard drives to 6 TB hard drives in the Storage Pods they will be deploying over the coming months. 

Our 6 TB Hard Drive Face-Off
https://www.backblaze.com/blog/6-tb-hard-drive-face-off/]]></description>
		<content:encoded><![CDATA[<p>Backblaze&#8217;s 6 TB Hard Drive Face-Off<br />
<a href="http://hardware.slashdot.org/story/14/12/17/1644205/backblazes-6-tb-hard-drive-face-off" rel="nofollow">http://hardware.slashdot.org/story/14/12/17/1644205/backblazes-6-tb-hard-drive-face-off</a></p>
<p>Backblaze is transitioning from using 4 TB hard drives to 6 TB hard drives in the Storage Pods they will be deploying over the coming months. </p>
<p>Our 6 TB Hard Drive Face-Off<br />
<a href="https://www.backblaze.com/blog/6-tb-hard-drive-face-off/" rel="nofollow">https://www.backblaze.com/blog/6-tb-hard-drive-face-off/</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/comment-page-1/#comment-25848</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 12 Nov 2013 11:22:39 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=15050#comment-25848</guid>
		<description><![CDATA[Server, server in the rack, when&#039;s my disk drive going to crack?
Backblaze&#039;s 25,000-drive study scries the future of your storage
http://www.theregister.co.uk/2013/11/12/server_server_in_the_rack_whens_my_disk_drive_going_to_crack/

Cloud backup outfit Backblaze has cobbled together all the data it&#039;s gathered from the 25,000 or so disk drives it keeps spinning and drawn some conclusions about just how long you can expect disks to survive in an array.

The study&#039;s not the best of guides to data centre performance, because Backblaze happily makes do with consumer-grade drives. As even those drives routinely offer mean time between failure (MTBF) in the hundreds of thousands of hours – decades of operation – or the storage industry&#039;s preferred longevity metric of annualised failure rates (AFR) of under one per cent per year, the study tests those claims as well as any other.

Backblaze&#039;s study finds that both AFR and MTBF are bunk. The document finds that disks follow the predicted “bathtub” curve of failure: lots of early failures due to manufacturing errors, a slow decline in failure rates to a shallow bottom and then a steep increase in failure rates as drives age.

The study then looked at when drives fail and found a drive that survives the 5.1 per cent AFR of its first 18 months under load will then only fail 1.4 per cent of the time in the next year and a half. After that, things get nasty: in year three a surviving disk has an 11.8 per cent AFR. That still leaves over 80 per cent of drives alive and whirring after four years, a decent outcome.]]></description>
		<content:encoded><![CDATA[<p>Server, server in the rack, when&#8217;s my disk drive going to crack?<br />
Backblaze&#8217;s 25,000-drive study scries the future of your storage<br />
<a href="http://www.theregister.co.uk/2013/11/12/server_server_in_the_rack_whens_my_disk_drive_going_to_crack/" rel="nofollow">http://www.theregister.co.uk/2013/11/12/server_server_in_the_rack_whens_my_disk_drive_going_to_crack/</a></p>
<p>Cloud backup outfit Backblaze has cobbled together all the data it&#8217;s gathered from the 25,000 or so disk drives it keeps spinning and drawn some conclusions about just how long you can expect disks to survive in an array.</p>
<p>The study&#8217;s not the best of guides to data centre performance, because Backblaze happily makes do with consumer-grade drives. As even those drives routinely offer mean time between failure (MTBF) in the hundreds of thousands of hours – decades of operation – or the storage industry&#8217;s preferred longevity metric of annualised failure rates (AFR) of under one per cent per year, the study tests those claims as well as any other.</p>
<p>Backblaze&#8217;s study finds that both AFR and MTBF are bunk. The document finds that disks follow the predicted “bathtub” curve of failure: lots of early failures due to manufacturing errors, a slow decline in failure rates to a shallow bottom and then a steep increase in failure rates as drives age.</p>
<p>The study then looked at when drives fail and found a drive that survives the 5.1 per cent AFR of its first 18 months under load will then only fail 1.4 per cent of the time in the next year and a half. After that, things get nasty: in year three a surviving disk has an 11.8 per cent AFR. That still leaves over 80 per cent of drives alive and whirring after four years, a decent outcome.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: CrashPlan backup software and service &#171; Tomi Engdahl&#8217;s ePanorama blog</title>
		<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/comment-page-1/#comment-25847</link>
		<dc:creator><![CDATA[CrashPlan backup software and service &#171; Tomi Engdahl&#8217;s ePanorama blog]]></dc:creator>
		<pubDate>Fri, 24 May 2013 07:19:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=15050#comment-25847</guid>
		<description><![CDATA[[...] that give space enough to back up all your data from your PC for a few dollars/euros per month (for example Backblaze and [...] ]]></description>
		<content:encoded><![CDATA[<p>[...] that give space enough to back up all your data from your PC for a few dollars/euros per month (for example Backblaze and [...] </p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/comment-page-1/#comment-25846</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Sun, 03 Mar 2013 06:36:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=15050#comment-25846</guid>
		<description><![CDATA[For smaller storage needs:

Hacked together NAS in a box
http://hackaday.com/2012/12/18/hacked-together-nas-in-a-box/]]></description>
		<content:encoded><![CDATA[<p>For smaller storage needs:</p>
<p>Hacked together NAS in a box<br />
<a href="http://hackaday.com/2012/12/18/hacked-together-nas-in-a-box/" rel="nofollow">http://hackaday.com/2012/12/18/hacked-together-nas-in-a-box/</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: dressesforbest</title>
		<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/comment-page-1/#comment-25845</link>
		<dc:creator><![CDATA[dressesforbest]]></dc:creator>
		<pubDate>Sat, 23 Feb 2013 06:01:14 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=15050#comment-25845</guid>
		<description><![CDATA[The structure for the weblog is a bit off in Epiphany. Nevertheless I like your blog. I may have to use a normal web browser just to enjoy it.]]></description>
		<content:encoded><![CDATA[<p>The structure for the weblog is a bit off in Epiphany. Nevertheless I like your blog. I may have to use a normal web browser just to enjoy it.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/comment-page-1/#comment-25844</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 21 Feb 2013 13:14:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=15050#comment-25844</guid>
		<description><![CDATA[180TB of Good Vibrations – Storage Pod 3.0
http://blog.backblaze.com/2013/02/20/180tb-of-good-vibrations-storage-pod-3-0/]]></description>
		<content:encoded><![CDATA[<p>180TB of Good Vibrations – Storage Pod 3.0<br />
<a href="http://blog.backblaze.com/2013/02/20/180tb-of-good-vibrations-storage-pod-3-0/" rel="nofollow">http://blog.backblaze.com/2013/02/20/180tb-of-good-vibrations-storage-pod-3-0/</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/comment-page-1/#comment-25843</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 21 Feb 2013 13:14:07 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=15050#comment-25843</guid>
		<description><![CDATA[Backblaze shares third-gen storage server design
http://news.cnet.com/8301-11386_3-57570245-76/backblaze-shares-third-gen-storage-server-design/

Want to launch your own high-capacity networked storage infrastructure? Backblaze just shared its new 180-terabyte Storage Pod design.

It was something of a PR stunt when the company shared its first-gen Storage Pod design back in 2009.

Netflix was inspired to share its Open Connect Appliance Hardware design, and Backblaze also showed its Storage Pod 2.0 design, which could accommodate 135TB of data.

Now the Storage Pod 3.0 design is out, too. Backblaze uses 450 pods to hold more than 50 petabytes of customer data, it said.

The third-generation pods use 4TB drives -- up to 45 of them -- which increases the total capacity. Instead of being held in place by a band, they&#039;re now squeezed by an anti-vibration panel that also shaves an hour off storage pod assembly time and makes it easier to replace failed drives.

The new design also switched to a Supermicro MBD-X9SCL-F motherboard, upgraded to a second-generation, lower-power Intel Core i3-2100 processor, and improved airflow to keep components cool. The total cost for the pod -- without drives -- dropped to $1,942.59, $37.41 less than the second-gen Storage Pod.

Indeed, the cost per gigabyte is somewhat higher for the new 4TB drives than for the earlier 3TB models it&#039;s been using. But lower costs for power consumption, rack space, and installation mean the 4TB drives work out to be about the same cost.]]></description>
		<content:encoded><![CDATA[<p>Backblaze shares third-gen storage server design<br />
<a href="http://news.cnet.com/8301-11386_3-57570245-76/backblaze-shares-third-gen-storage-server-design/" rel="nofollow">http://news.cnet.com/8301-11386_3-57570245-76/backblaze-shares-third-gen-storage-server-design/</a></p>
<p>Want to launch your own high-capacity networked storage infrastructure? Backblaze just shared its new 180-terabyte Storage Pod design.</p>
<p>It was something of a PR stunt when the company shared its first-gen Storage Pod design back in 2009.</p>
<p>Netflix was inspired to share its Open Connect Appliance Hardware design, and Backblaze also showed its Storage Pod 2.0 design, which could accommodate 135TB of data.</p>
<p>Now the Storage Pod 3.0 design is out, too. Backblaze uses 450 pods to hold more than 50 petabytes of customer data, it said.</p>
<p>The third-generation pods use 4TB drives &#8212; up to 45 of them &#8212; which increases the total capacity. Instead of being held in place by a band, they&#8217;re now squeezed by an anti-vibration panel that also shaves an hour off storage pod assembly time and makes it easier to replace failed drives.</p>
<p>The new design also switched to a Supermicro MBD-X9SCL-F motherboard, upgraded to a second-generation, lower-power Intel Core i3-2100 processor, and improved airflow to keep components cool. The total cost for the pod &#8212; without drives &#8212; dropped to $1,942.59, $37.41 less than the second-gen Storage Pod.</p>
<p>Indeed, the cost per gigabyte is somewhat higher for the new 4TB drives than for the earlier 3TB models it&#8217;s been using. But lower costs for power consumption, rack space, and installation mean the 4TB drives work out to be about the same cost.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: tomi</title>
		<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/comment-page-1/#comment-25842</link>
		<dc:creator><![CDATA[tomi]]></dc:creator>
		<pubDate>Thu, 27 Dec 2012 19:16:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=15050#comment-25842</guid>
		<description><![CDATA[Good points. Thank you for your feedback.]]></description>
		<content:encoded><![CDATA[<p>Good points. Thank you for your feedback.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: EllisGL</title>
		<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/comment-page-1/#comment-25841</link>
		<dc:creator><![CDATA[EllisGL]]></dc:creator>
		<pubDate>Sat, 22 Dec 2012 04:47:40 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=15050#comment-25841</guid>
		<description><![CDATA[The redundancy can be done easily externally via distributed file systems like XtreemFS (http://www.xtreemfs.org) and Gluster (http://www.gluster.org/). Using RAID 6 as they state they do, this gives even more error correction and allows you to fix issues before they really go downhill. I guess you could do LVM and RAID 10 on top of the RAID 6 to make it more resilient and maybe a little speedier on reads, since RAID 6 will really kill your write speeds.]]></description>
		<content:encoded><![CDATA[<p>The redundancy can be done easily externally via distributed file systems like XtreemFS (<a href="http://www.xtreemfs.org" rel="nofollow">http://www.xtreemfs.org</a>) and Gluster (<a href="http://www.gluster.org/" rel="nofollow">http://www.gluster.org/</a>). Using RAID 6 as they state they do, this gives even more error correction and allows you to fix issues before they really go downhill. I guess you could do LVM and RAID 10 on top of the RAID 6 to make it more resilient and maybe a little speedier on reads, since RAID 6 will really kill your write speeds.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: golf putting</title>
		<link>https://www.epanorama.net/blog/2012/12/09/petabytes-on-budget-2-0/comment-page-1/#comment-25840</link>
		<dc:creator><![CDATA[golf putting]]></dc:creator>
		<pubDate>Tue, 11 Dec 2012 12:09:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=15050#comment-25840</guid>
		<description><![CDATA[Magnificent website. A lot of useful info here. I am sending it to several friends and additionally sharing on Delicious. And naturally, thank you for your sweat!]]></description>
		<content:encoded><![CDATA[<p>Magnificent website. A lot of useful info here. I am sending it to several friends and additionally sharing on Delicious. And naturally, thank you for your sweat!</p>
]]></content:encoded>
	</item>
</channel>
</rss>
