<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Lightning protection</title>
	<atom:link href="http://www.epanorama.net/blog/2013/06/25/lightning-protection/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/</link>
	<description>All about electronics and circuit design</description>
	<lastBuildDate>Sun, 05 Apr 2026 18:35:45 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.14</generator>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/comment-page-1/#comment-1601226</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Sat, 25 Aug 2018 08:13:15 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20554#comment-1601226</guid>
		<description><![CDATA[What caused this outlet strip&#039;s catastrophic failure?
https://www.edn.com/electronics-blogs/living-analog/4461014/What-caused-this-outlet-strips-catastrophic-failure-?utm_source=Aspencore&amp;utm_medium=EDN&amp;utm_campaign=social

Failed outlet strip

When this happened, the circuit breaker in the basement shut off but, quite clearly, there were some very dangerous pyrotechnics going on until that basement circuit breaker finally responded.]]></description>
		<content:encoded><![CDATA[<p>What caused this outlet strip&#8217;s catastrophic failure?<br />
<a href="https://www.edn.com/electronics-blogs/living-analog/4461014/What-caused-this-outlet-strips-catastrophic-failure-?utm_source=Aspencore&#038;utm_medium=EDN&#038;utm_campaign=social" rel="nofollow">https://www.edn.com/electronics-blogs/living-analog/4461014/What-caused-this-outlet-strips-catastrophic-failure-?utm_source=Aspencore&#038;utm_medium=EDN&#038;utm_campaign=social</a></p>
<p>Failed outlet strip</p>
<p>When this happened, the circuit breaker in the basement shut off but, quite clearly, there were some very dangerous pyrotechnics going on until that basement circuit breaker finally responded.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/comment-page-1/#comment-1600357</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 16 Aug 2018 12:10:37 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20554#comment-1600357</guid>
		<description><![CDATA[Effective Surge And Lightning Strike Protection For AC And DC Power Line Applications 
https://www.powerelectronics.com/circuit-protection-ics/effective-surge-and-lightning-strike-protection-ac-and-dc-power-line-applicat?PK=UM_Classics0818&amp;utm_rid=CPG05000002750211&amp;utm_campaign=19028&amp;utm_medium=email&amp;elq2=3df46cf795774ab1aa1f03f9cfd71022

Advancements in power TVS diodes have made them an effective alternative to metal oxide varistors (MOVs) in protecting AC and DC power supplies from power line surges and indirect lightning strikes. These devices not only provide improved reliability and increased durability against repetitive surges compared to MOVs in AC and DC power line applications, but their surface-mount packaging also delivers an enhanced surge response due to lower lead inductance.

An important element of the circuit protection design is to limit the peak surge voltage to an acceptable level without short-circuiting the line for an extended period of time. Advancements in power TVS diodes have made them an optimal solution for meeting the demands of these applications, and a more effective alternative to metal oxide varistors (MOVs).

Power TVS diodes have become an effective alternative to metal oxide varistors (MOVs) in protecting AC and DC power supplies from power line surges and indirect lightning strikes, due to their improved reliability and increased durability against repetitive surges in AC and DC power line applications.]]></description>
		<content:encoded><![CDATA[<p>Effective Surge And Lightning Strike Protection For AC And DC Power Line Applications<br />
<a href="https://www.powerelectronics.com/circuit-protection-ics/effective-surge-and-lightning-strike-protection-ac-and-dc-power-line-applicat?PK=UM_Classics0818&#038;utm_rid=CPG05000002750211&#038;utm_campaign=19028&#038;utm_medium=email&#038;elq2=3df46cf795774ab1aa1f03f9cfd71022" rel="nofollow">https://www.powerelectronics.com/circuit-protection-ics/effective-surge-and-lightning-strike-protection-ac-and-dc-power-line-applicat?PK=UM_Classics0818&#038;utm_rid=CPG05000002750211&#038;utm_campaign=19028&#038;utm_medium=email&#038;elq2=3df46cf795774ab1aa1f03f9cfd71022</a></p>
<p>Advancements in power TVS diodes have made them an effective alternative to metal oxide varistors (MOVs) in protecting AC and DC power supplies from power line surges and indirect lightning strikes. These devices not only provide improved reliability and increased durability against repetitive surges compared to MOVs in AC and DC power line applications, but their surface-mount packaging also delivers an enhanced surge response due to lower lead inductance.</p>
<p>An important element of the circuit protection design is to limit the peak surge voltage to an acceptable level without short-circuiting the line for an extended period of time. Advancements in power TVS diodes have made them an optimal solution for meeting the demands of these applications, and a more effective alternative to metal oxide varistors (MOVs).</p>
<p>Power TVS diodes have become an effective alternative to metal oxide varistors (MOVs) in protecting AC and DC power supplies from power line surges and indirect lightning strikes, due to their improved reliability and increased durability against repetitive surges in AC and DC power line applications.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/comment-page-1/#comment-1595224</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 25 Jun 2018 11:18:20 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20554#comment-1595224</guid>
		<description><![CDATA[How to connect Varistor
https://www.youtube.com/watch?v=mN4-l-lWlI4]]></description>
		<content:encoded><![CDATA[<p>How to connect Varistor<br />
<a href="https://www.youtube.com/watch?v=mN4-l-lWlI4" rel="nofollow">https://www.youtube.com/watch?v=mN4-l-lWlI4</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/comment-page-1/#comment-1583739</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 05 Mar 2018 17:05:03 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20554#comment-1583739</guid>
		<description><![CDATA[Understanding the Pros and Cons of Overvoltage Protection
http://www.electronicdesign.com/power/understanding-pros-and-cons-overvoltage-protection?code=UM_AT17TIPwr4&amp;utm_rid=CPG05000002750211&amp;utm_campaign=14067&amp;utm_medium=email&amp;elq2=c086625e804c4f8b88578de0f4f5b654

Understanding how overvoltage protection (OVP) works and when it may falsely trip or miss an overvoltage helps pinpoint the right OVP method to protect your device under test, based on what may happen in the test environment.

When testing your devices, it may become apparent that the device needs protection against an overvoltage condition. Most power supplies offer some form of overvoltage-protect (OVP) circuit. The OVP circuit’s purpose is to detect and then quickly pull down the overvoltage condition to prevent damage to your device under test (DUT). However, it’s important to understand how your power supply’s OVP works to maximize its benefits.

What Causes Overvoltage?

The power supply itself could be the source of the overvoltage. A failure inside the power supply may force an unexpected and uncontrolled high voltage across the DUT. It’s also possible that the overvoltage is not due to a power-supply failure, but to user error, where the user programs the power supply higher than the DUT can tolerate.

The overvoltage condition could come from outside of the power supply. The DUT can be subjected to overvoltage because wires inside a connector or wiring harness short together, placing high voltage on the DUT. 

How Does OVP Work?

OVP circuits can be fixed or tracking and local or remote. A fixed OVP makes it possible to set a fixed voltage threshold, either manually or programmed remotely. It’s a fixed value such that when the power-supply output voltage exceeds this value, the OVP circuit trips and the power supply tries to pull down the overvoltage on its output. The power supply output voltage can be changed, and the OVP threshold stays the same.

A tracking OVP allows you to set a threshold value that varies with the output voltage. For example, the tracking OVP might be set to 0.5 V, or 10%, over the programmed output voltage.

False Trips vs. Undetected Real Overvoltage Conditions

It’s desirable to have overvoltage protection, but if the OVP can be falsely tripped, it quickly becomes a nuisance. On the other hand, if the OVP can miss a real overvoltage condition, that becomes hazardous.

Protecting your DUT always involves a tradeoff between the highest level of protection and false trips of an OVP circuit.]]></description>
		<content:encoded><![CDATA[<p>Understanding the Pros and Cons of Overvoltage Protection<br />
<a href="http://www.electronicdesign.com/power/understanding-pros-and-cons-overvoltage-protection?code=UM_AT17TIPwr4&#038;utm_rid=CPG05000002750211&#038;utm_campaign=14067&#038;utm_medium=email&#038;elq2=c086625e804c4f8b88578de0f4f5b654" rel="nofollow">http://www.electronicdesign.com/power/understanding-pros-and-cons-overvoltage-protection?code=UM_AT17TIPwr4&#038;utm_rid=CPG05000002750211&#038;utm_campaign=14067&#038;utm_medium=email&#038;elq2=c086625e804c4f8b88578de0f4f5b654</a></p>
<p>Understanding how overvoltage protection (OVP) works and when it may falsely trip or miss an overvoltage helps pinpoint the right OVP method to protect your device under test, based on what may happen in the test environment.</p>
<p>When testing your devices, it may become apparent that the device needs protection against an overvoltage condition. Most power supplies offer some form of overvoltage-protect (OVP) circuit. The OVP circuit’s purpose is to detect and then quickly pull down the overvoltage condition to prevent damage to your device under test (DUT). However, it’s important to understand how your power supply’s OVP works to maximize its benefits.</p>
<p>What Causes Overvoltage?</p>
<p>The power supply itself could be the source of the overvoltage. A failure inside the power supply may force an unexpected and uncontrolled high voltage across the DUT. It’s also possible that the overvoltage is not due to a power-supply failure, but to user error, where the user programs the power supply higher than the DUT can tolerate.</p>
<p>The overvoltage condition could come from outside of the power supply. The DUT can be subjected to overvoltage because wires inside a connector or wiring harness short together, placing high voltage on the DUT. </p>
<p>How Does OVP Work?</p>
<p>OVP circuits can be fixed or tracking and local or remote. A fixed OVP makes it possible to set a fixed voltage threshold, either manually or programmed remotely. It’s a fixed value such that when the power-supply output voltage exceeds this value, the OVP circuit trips and the power supply tries to pull down the overvoltage on its output. The power supply output voltage can be changed, and the OVP threshold stays the same.</p>
<p>A tracking OVP allows you to set a threshold value that varies with the output voltage. For example, the tracking OVP might be set to 0.5 V, or 10%, over the programmed output voltage.</p>
<p>False Trips vs. Undetected Real Overvoltage Conditions</p>
<p>It’s desirable to have overvoltage protection, but if the OVP can be falsely tripped, it quickly becomes a nuisance. On the other hand, if the OVP can miss a real overvoltage condition, that becomes hazardous.</p>
<p>Protecting your DUT always involves a tradeoff between the highest level of protection and false trips of an OVP circuit.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/comment-page-1/#comment-1493337</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Sun, 05 Jun 2016 13:02:03 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20554#comment-1493337</guid>
		<description><![CDATA[Pre-exploded surge protection strip. What the heck???
https://www.youtube.com/watch?v=pnJml7Rz3sQ

On a plus note I get to talk about MOV (Metal Oxide Varistor) surge suppressors and then explain why it was probably just as well it went on fire in the factory anyway.]]></description>
		<content:encoded><![CDATA[<p>Pre-exploded surge protection strip. What the heck???<br />
<a href="https://www.youtube.com/watch?v=pnJml7Rz3sQ" rel="nofollow">https://www.youtube.com/watch?v=pnJml7Rz3sQ</a></p>
<p>On a plus note I get to talk about MOV (Metal Oxide Varistor) surge suppressors and then explain why it was probably just as well it went on fire in the factory anyway.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/comment-page-1/#comment-1476964</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 03 Mar 2016 10:22:25 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20554#comment-1476964</guid>
		<description><![CDATA[Analyzing the demise of a network adapter
http://www.edn.com/electronics-blogs/brians-brain/4441501/Analyzing-the-demise-of-a-network-adapter?_mc=NL_EDN_EDT_EDN_today_20160302&amp;cid=NL_EDN_EDT_EDN_today_20160302&amp;elqTrackId=1a8afcb8badc44c580e77eda7b4fcfd5&amp;elq=ac839723036b4856b36e1de97c4385ca&amp;elqaid=31129&amp;elqat=1&amp;elqCampaignId=27211

 A recent teardown dissected one of this year&#039;s victims, a MoCA network adapter.

You&#039;ll note that in last year&#039;s teardown, I was unable to find any visible damage that would point to a particular failure mechanism. Symptomatically, I instead suggested that the Ethernet controller might have gotten zapped, the result of a coiled strand of Cat5e that acted as an antenna. This time around, however, the breakdown point was immediately evident:

That&#039;s the Realtek RTL8211CL single-port Ethernet controller. And, in case it&#039;s not already evident to you, the package isn&#039;t supposed to have a hole blown out of it ;-) The damage is reminiscent of another 2014 lightning-strike victim, a D-Link GO-SW-8GE eight-port GbE switch, whose Ethernet controller IC suffered similar indignity

This commonality is causing me to potentially reconsider the root cause of the HDHomeRun Prime&#039;s demise this time. I&#039;d previously suspected that the EMP coupled to the hardware via coax cable running around the residence exterior, since the coax-connected MoCA adapter had also died. But I&#039;m now once again suspecting that the strand of Ethernet cable running between the HDHomeRun Prime and an eight-port GbE switch was the culprit, which would also explain why the switch was rendered &quot;confused&quot; (only temporarily in this particular case, thankfully).]]></description>
		<content:encoded><![CDATA[<p>Analyzing the demise of a network adapter<br />
<a href="http://www.edn.com/electronics-blogs/brians-brain/4441501/Analyzing-the-demise-of-a-network-adapter?_mc=NL_EDN_EDT_EDN_today_20160302&#038;cid=NL_EDN_EDT_EDN_today_20160302&#038;elqTrackId=1a8afcb8badc44c580e77eda7b4fcfd5&#038;elq=ac839723036b4856b36e1de97c4385ca&#038;elqaid=31129&#038;elqat=1&#038;elqCampaignId=27211" rel="nofollow">http://www.edn.com/electronics-blogs/brians-brain/4441501/Analyzing-the-demise-of-a-network-adapter?_mc=NL_EDN_EDT_EDN_today_20160302&#038;cid=NL_EDN_EDT_EDN_today_20160302&#038;elqTrackId=1a8afcb8badc44c580e77eda7b4fcfd5&#038;elq=ac839723036b4856b36e1de97c4385ca&#038;elqaid=31129&#038;elqat=1&#038;elqCampaignId=27211</a></p>
<p> A recent teardown dissected one of this year&#8217;s victims, a MoCA network adapter.</p>
<p>You&#8217;ll note that in last year&#8217;s teardown, I was unable to find any visible damage that would point to a particular failure mechanism. Symptomatically, I instead suggested that the Ethernet controller might have gotten zapped, the result of a coiled strand of Cat5e that acted as an antenna. This time around, however, the breakdown point was immediately evident:</p>
<p>That&#8217;s the Realtek RTL8211CL single-port Ethernet controller. And, in case it&#8217;s not already evident to you, the package isn&#8217;t supposed to have a hole blown out of it <img src="http://www.epanorama.net/blog/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley" />  The damage is reminiscent of another 2014 lightning-strike victim, a D-Link GO-SW-8GE eight-port GbE switch, whose Ethernet controller IC suffered similar indignity</p>
<p>This commonality is causing me to potentially reconsider the root cause of the HDHomeRun Prime&#8217;s demise this time. I&#8217;d previously suspected that the EMP coupled to the hardware via coax cable running around the residence exterior, since the coax-connected MoCA adapter had also died. But I&#8217;m now once again suspecting that the strand of Ethernet cable running between the HDHomeRun Prime and an eight-port GbE switch was the culprit, which would also explain why the switch was rendered &#8220;confused&#8221; (only temporarily in this particular case, thankfully).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/comment-page-1/#comment-1460424</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 15 Dec 2015 10:42:55 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20554#comment-1460424</guid>
		<description><![CDATA[Devices fall victim to lightning strike, again
http://www.edn.com/electronics-blogs/brians-brain/4440938/Devices-fall-victim-to-lightning-strike--again?_mc=NL_EDN_EDT_EDN_review_20151211&amp;cid=NL_EDN_EDT_EDN_review_20151211&amp;elq=bd7234d9123f430f9310d77b079ebb5f&amp;elqCampaignId=26073&amp;elqaid=29722&amp;elqat=1&amp;elqTrackId=91861cb23d324ca58834a980a05699ec

I can&#039;t believe it&#039;s happened again. Last October, I told you about a nearby lightning strike that took out the digital board in my plasma TV in mid-August, along with two GbE switches and a CableCARD tuner (I ended up fixing the TV, and tearing down the latter three devices). Well, this October, pretty much the exact same thing happened. Although I didn&#039;t see the bolt itself, therefore where it hit, the deafening crack of the accompanying thunder directly overhead was impossible to miss. And although the residence&#039;s abundance of electronics at first seemed to have survived unscathed (my laptop remained on and online via the router-sourced 5 GHz Wi-Fi beacon, for example), I relatively quickly realized the delusion of my initial over-optimistic diagnosis.

Lightning strike becomes EMP weapon
http://www.edn.com/electronics-blogs/brians-brain/4435969/Lightning-strike-becomes-EMP-weapon-

One of the perks (generally speaking, as you&#039;ll soon see) of living in the Colorado Rocky Mountains is the multi-sensory experience of the thunderstorms that churn through them nearly every summer afternoon. About a week ago, I returned home one evening to find an excited friend awaiting me, who&#039;d seen and heard a lightning bolt hit only a few dozen yards (he claimed) away from my home&#039;s southeast corner. 

Nonetheless, I considered myself lucky that it hadn&#039;t scored a bullseye on my property, and assumed I&#039;d dodged damage.

That was until I realized that I couldn&#039;t get online. Eventually, I discovered that not one but two multiport GbE switches (a LG-Ericsson ES-1105G and a D-Link GO-SW-8GE), both located in the southeast quadrant of the residence, no longer would power up. And I later realized, after several successive days&#039; worth of unsuccessful television program recordings, that a seemingly otherwise functional (judging by front panel LED illumination, although one of them was now red, not green) SiliconDust HDHomeRun Prime CableCARD TV tuner would no longer go online, either.

A power surge might tidily explain the switches&#039; failures, but it doesn&#039;t account quite as neatly for the TV tuner&#039;s offline-but-otherwise-still-alive status. All three devices, along with others (both powered on and off at the time) were connected to premises power through high quality surge protectors, in some cases also in combination with UPS backup batteries. And the remainder of the gear seemed (fingers crossed) to survive the near miss unscathed. Why, then, did these particular products expire?

The culprit, I suspect after a bit of pondering, is a two-fold combination: the failed gear&#039;s locations in the residence, coupled with their Ethernet interconnect. As the lightning bolt headed to the ground in the open space behind my house, it radiated an abundance of broad-spectrum electronic interference; in effect, it was an EMP weapon delivered by Mother Nature. The several dozen feet of Ethernet cable connecting the two switches to each other, and connecting one of them to the router (which bafflingly seems to have survived unscathed), acted as an antenna for receiving that EMP. And, zap.]]></description>
		<content:encoded><![CDATA[<p>Devices fall victim to lightning strike, again<br />
<a href="http://www.edn.com/electronics-blogs/brians-brain/4440938/Devices-fall-victim-to-lightning-strike--again?_mc=NL_EDN_EDT_EDN_review_20151211&#038;cid=NL_EDN_EDT_EDN_review_20151211&#038;elq=bd7234d9123f430f9310d77b079ebb5f&#038;elqCampaignId=26073&#038;elqaid=29722&#038;elqat=1&#038;elqTrackId=91861cb23d324ca58834a980a05699ec" rel="nofollow">http://www.edn.com/electronics-blogs/brians-brain/4440938/Devices-fall-victim-to-lightning-strike&#8211;again?_mc=NL_EDN_EDT_EDN_review_20151211&#038;cid=NL_EDN_EDT_EDN_review_20151211&#038;elq=bd7234d9123f430f9310d77b079ebb5f&#038;elqCampaignId=26073&#038;elqaid=29722&#038;elqat=1&#038;elqTrackId=91861cb23d324ca58834a980a05699ec</a></p>
<p>I can&#8217;t believe it&#8217;s happened again. Last October, I told you about a nearby lightning strike that took out the digital board in my plasma TV in mid-August, along with two GbE switches and a CableCARD tuner (I ended up fixing the TV, and tearing down the latter three devices). Well, this October, pretty much the exact same thing happened. Although I didn&#8217;t see the bolt itself, therefore where it hit, the deafening crack of the accompanying thunder directly overhead was impossible to miss. And although the residence&#8217;s abundance of electronics at first seemed to have survived unscathed (my laptop remained on and online via the router-sourced 5 GHz Wi-Fi beacon, for example), I relatively quickly realized the delusion of my initial over-optimistic diagnosis.</p>
<p>Lightning strike becomes EMP weapon<br />
<a href="http://www.edn.com/electronics-blogs/brians-brain/4435969/Lightning-strike-becomes-EMP-weapon-" rel="nofollow">http://www.edn.com/electronics-blogs/brians-brain/4435969/Lightning-strike-becomes-EMP-weapon-</a></p>
<p>One of the perks (generally speaking, as you&#8217;ll soon see) of living in the Colorado Rocky Mountains is the multi-sensory experience of the thunderstorms that churn through them nearly every summer afternoon. About a week ago, I returned home one evening to find an excited friend awaiting me, who&#8217;d seen and heard a lightning bolt hit only a few dozen yards (he claimed) away from my home&#8217;s southeast corner. </p>
<p>Nonetheless, I considered myself lucky that it hadn&#8217;t scored a bullseye on my property, and assumed I&#8217;d dodged damage.</p>
<p>That was until I realized that I couldn&#8217;t get online. Eventually, I discovered that not one but two multiport GbE switches (a LG-Ericsson ES-1105G and a D-Link GO-SW-8GE), both located in the southeast quadrant of the residence, no longer would power up. And I later realized, after several successive days&#8217; worth of unsuccessful television program recordings, that a seemingly otherwise functional (judging by front panel LED illumination, although one of them was now red, not green) SiliconDust HDHomeRun Prime CableCARD TV tuner would no longer go online, either.</p>
<p>A power surge might tidily explain the switches&#8217; failures, but it doesn&#8217;t account quite as neatly for the TV tuner&#8217;s offline-but-otherwise-still-alive status. All three devices, along with others (both powered on and off at the time) were connected to premises power through high quality surge protectors, in some cases also in combination with UPS backup batteries. And the remainder of the gear seemed (fingers crossed) to survive the near miss unscathed. Why, then, did these particular products expire?</p>
<p>The culprit, I suspect after a bit of pondering, is a two-fold combination: the failed gear&#8217;s locations in the residence, coupled with their Ethernet interconnect. As the lightning bolt headed to the ground in the open space behind my house, it radiated an abundance of broad-spectrum electronic interference; in effect, it was an EMP weapon delivered by Mother Nature. The several dozen feet of Ethernet cable connecting the two switches to each other, and connecting one of them to the router (which bafflingly seems to have survived unscathed), acted as an antenna for receiving that EMP. And, zap.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/comment-page-1/#comment-1428120</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Fri, 21 Aug 2015 07:42:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20554#comment-1428120</guid>
		<description><![CDATA[Google loses data as lightning strikes
http://www.bbc.com/news/technology-33989384

Google says data has been wiped from discs at one of its data centres in Belgium – after it was struck by lightning four times.

Some people have permanently lost access to their files as a result.

A number of disks damaged following the lightning strikes did, however, later become accessible.

Generally, data centres require more lightning protection than most other buildings.

While four successive strikes might sound highly unlikely, lightning does not need to repeatedly strike a building in exactly the same spot to cause additional damage.

Justin Gale, project manager for the lightning protection service Orion, said lightning could strike power or telecommunications cables connected to a building at a distance and still cause disruptions.

“The cabling alone can be struck anything up to a kilometre away, bring [the shock] back to the data centre and fuse everything that’s in it,” he said.

In an online statement, Google said that data on just 0.000001% of disk space was permanently affected.

“Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain,” it said.

The company added it would continue to upgrade hardware and improve its response procedures to make future losses less likely.

Google Compute Engine Incident #15056
https://status.cloud.google.com/incident/compute/15056#5719570367119360

Google Compute Engine Persistent Disk issue in europe-west1-b

From Thursday 13 August 2015 to Monday 17 August 2015, errors occurred on a small proportion of Google Compute Engine persistent disks in the europe-west1-b zone. The affected disks sporadically returned I/O errors to their attached GCE instances, and also typically returned errors for management operations such as snapshot creation. In a very small fraction of cases (less than 0.000001% of PD space in europe-west1-b), there was permanent data loss.

ROOT CAUSE:

At 09:19 PDT on Thursday 13 August 2015, four successive lightning strikes on the local utilities grid that powers our European datacenter caused a brief loss of power to storage systems which host disk capacity for GCE instances in the europe-west1-b zone. Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain. In almost all cases the data was successfully committed to stable storage, although manual intervention was required in order to restore the systems to their normal serving state. However, in a very few cases, recent writes were unrecoverable, leading to permanent data loss on the Persistent Disk.

This outage is wholly Google’s responsibility.]]></description>
		<content:encoded><![CDATA[<p>Google loses data as lightning strikes<br />
<a href="http://www.bbc.com/news/technology-33989384" rel="nofollow">http://www.bbc.com/news/technology-33989384</a></p>
<p>Google says data has been wiped from discs at one of its data centres in Belgium – after it was struck by lightning four times.</p>
<p>Some people have permanently lost access to their files as a result.</p>
<p>A number of disks damaged following the lightning strikes did, however, later become accessible.</p>
<p>Generally, data centres require more lightning protection than most other buildings.</p>
<p>While four successive strikes might sound highly unlikely, lightning does not need to repeatedly strike a building in exactly the same spot to cause additional damage.</p>
<p>Justin Gale, project manager for the lightning protection service Orion, said lightning could strike power or telecommunications cables connected to a building at a distance and still cause disruptions.</p>
<p>“The cabling alone can be struck anything up to a kilometre away, bring [the shock] back to the data centre and fuse everything that’s in it,” he said.</p>
<p>In an online statement, Google said that data on just 0.000001% of disk space was permanently affected.</p>
<p>“Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain,” it said.</p>
<p>The company added it would continue to upgrade hardware and improve its response procedures to make future losses less likely.</p>
<p>Google Compute Engine Incident #15056<br />
<a href="https://status.cloud.google.com/incident/compute/15056#5719570367119360" rel="nofollow">https://status.cloud.google.com/incident/compute/15056#5719570367119360</a></p>
<p>Google Compute Engine Persistent Disk issue in europe-west1-b</p>
<p>From Thursday 13 August 2015 to Monday 17 August 2015, errors occurred on a small proportion of Google Compute Engine persistent disks in the europe-west1-b zone. The affected disks sporadically returned I/O errors to their attached GCE instances, and also typically returned errors for management operations such as snapshot creation. In a very small fraction of cases (less than 0.000001% of PD space in europe-west1-b), there was permanent data loss.</p>
<p>ROOT CAUSE:</p>
<p>At 09:19 PDT on Thursday 13 August 2015, four successive lightning strikes on the local utilities grid that powers our European datacenter caused a brief loss of power to storage systems which host disk capacity for GCE instances in the europe-west1-b zone. Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain. In almost all cases the data was successfully committed to stable storage, although manual intervention was required in order to restore the systems to their normal serving state. However, in a very few cases, recent writes were unrecoverable, leading to permanent data loss on the Persistent Disk.</p>
<p>This outage is wholly Google’s responsibility.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/comment-page-1/#comment-1428119</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Fri, 21 Aug 2015 07:41:42 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20554#comment-1428119</guid>
		<description><![CDATA[Google Loses Data: Who Says Lightning Never Strikes Twice?
http://www.eetimes.com/document.asp?doc_id=1327474

Google experienced high read/write error rates and a small data loss at its Google Compute Engine data center in Ghislain, Belgium, Aug. 13-17 following a storm that delivered four lightning strikes on or near the data center.

Data centers, like other commercial buildings, can be protected from lightning, and Google offered no details as to how its persistent-state disk equipment had been affected by the strikes, other than to say they caused power supply lapses. Emergency power kicked in as planned, but in some cases the battery backup to the disk systems did not perform as expected.

According to a summary of the incident by the Google cloud operations team posted to its Google Cloud Status page: &quot;Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain.&quot; 

The Cloud Status summary doesn&#039;t say whether the repeated strikes led to multiple failures of the power supply to the disks.

The summary also did not say the data center was struck four times, as a BBC report on the incident noted. Rather, Google said only that there were &quot;four successive strikes on the electrical systems of a European data center.&quot;

Google Loses Data: Who Says Lightning Never Strikes Twice? 
http://www.informationweek.com/cloud/cloud-storage/google-loses-data-who-says-lightning-never-strikes-twice-/d/d-id/1321836

In a four-strike incident, power to Google Compute Cloud disks in Ghislain, Belgium, gets interrupted and data writes are lost.

Google experienced high read/write error rates and a small data loss at its Google Compute Engine data center in Ghislain, Belgium, Aug. 13-17 following a storm that delivered four lightning strikes on or near the data center.

According to a summary of the incident by the Google cloud operations team posted to its Google Cloud Status page: &quot;Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain.&quot; 

The Cloud Status summary doesn&#039;t say whether the repeated strikes led to multiple failures of the power supply to the disks.

In Google&#039;s situation, its summary report said: &quot;In almost all cases the data was successfully committed to stable storage, although manual intervention was required in order to restore the systems to their normal serving state. However, in a very few cases, recent writes were unrecoverable, leading to permanent data loss on the Persistent Disk.&quot;

Any loss of data is a serious incident for a cloud service provider, and they take extraordinary measures to prevent it. Data sets are routinely copied three times, so that a hardware failure will still leave two intact copies. But the power interruption in Ghislain caused some data writes to disk to be lost, and it was those write incidents that created the lost data.

As a way of minimizing the loss, the Google summary cited a statistic that represented the amount of persistent disk space that had been affected out of the total available in Ghislain -- &quot;less than 0.000001%.&quot; That was a meaningless figure to those customers who happened to be doing frequent read/writes with their systems at the time. A more meaningful figure would have been simply the total amount of data lost in kilobytes, megabytes, or terabytes or the percentage of writes lost.

Google loses data as lightning strikes
http://www.bbc.com/news/technology-33989384]]></description>
		<content:encoded><![CDATA[<p>Google Loses Data: Who Says Lightning Never Strikes Twice?<br />
<a href="http://www.eetimes.com/document.asp?doc_id=1327474" rel="nofollow">http://www.eetimes.com/document.asp?doc_id=1327474</a></p>
<p>Google experienced high read/write error rates and a small data loss at its Google Compute Engine data center in Ghislain, Belgium, Aug. 13-17 following a storm that delivered four lightning strikes on or near the data center.</p>
<p>Data centers, like other commercial buildings, can be protected from lightning, and Google offered no details as to how its persistent-state disk equipment had been affected by the strikes, other than to say they caused power supply lapses. Emergency power kicked in as planned, but in some cases the battery backup to the disk systems did not perform as expected.</p>
<p>According to a summary of the incident by the Google cloud operations team posted to its Google Cloud Status page: &#8220;Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain.&#8221; </p>
<p>The Cloud Status summary doesn&#8217;t say whether the repeated strikes led to multiple failures of the power supply to the disks.</p>
<p>The summary also did not say the data center was struck four times, as a BBC report on the incident noted. Rather, Google said only that there were &#8220;four successive strikes on the electrical systems of a European data center.&#8221;</p>
<p>Google Loses Data: Who Says Lightning Never Strikes Twice?<br />
<a href="http://www.informationweek.com/cloud/cloud-storage/google-loses-data-who-says-lightning-never-strikes-twice-/d/d-id/1321836" rel="nofollow">http://www.informationweek.com/cloud/cloud-storage/google-loses-data-who-says-lightning-never-strikes-twice-/d/d-id/1321836</a></p>
<p>In a four-strike incident, power to Google Compute Cloud disks in Ghislain, Belgium, gets interrupted and data writes are lost.</p>
<p>Google experienced high read/write error rates and a small data loss at its Google Compute Engine data center in Ghislain, Belgium, Aug. 13-17 following a storm that delivered four lightning strikes on or near the data center.</p>
<p>According to a summary of the incident by the Google cloud operations team posted to its Google Cloud Status page: &#8220;Although automatic auxiliary systems restored power quickly, and the storage systems are designed with battery backup, some recently written data was located on storage systems which were more susceptible to power failure from extended or repeated battery drain.&#8221; </p>
<p>The Cloud Status summary doesn&#8217;t say whether the repeated strikes led to multiple failures of the power supply to the disks.</p>
<p>In Google&#8217;s situation, its summary report said: &#8220;In almost all cases the data was successfully committed to stable storage, although manual intervention was required in order to restore the systems to their normal serving state. However, in a very few cases, recent writes were unrecoverable, leading to permanent data loss on the Persistent Disk.&#8221;</p>
<p>Any loss of data is a serious incident for a cloud service provider, and they take extraordinary measures to prevent it. Data sets are routinely copied three times, so that a hardware failure will still leave two intact copies. But the power interruption in Ghislain caused some data writes to disk to be lost, and it was those write incidents that created the lost data.</p>
<p>As a way of minimizing the loss, the Google summary cited a statistic that represented the amount of persistent disk space that had been affected out of the total available in Ghislain &#8212; &#8220;less than 0.000001%.&#8221; That was a meaningless figure to those customers who happened to be doing frequent read/writes with their systems at the time. A more meaningful figure would have been simply the total amount of data lost in kilobytes, megabytes, or terabytes or the percentage of writes lost.</p>
<p>Google loses data as lightning strikes<br />
<a href="http://www.bbc.com/news/technology-33989384" rel="nofollow">http://www.bbc.com/news/technology-33989384</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2013/06/25/lightning-protection/comment-page-1/#comment-1331065</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 20 Jan 2015 11:22:22 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/blog/?p=20554#comment-1331065</guid>
		<description><![CDATA[THREE vans and FIVE people: that&#039;s what Telstra needs to fix one fault
Surprise: lightning and copper don&#039;t mix
http://www.theregister.co.uk/2015/01/19/how_many_telstra_vans_does_it_take_to_restore_one_service/

The impact of weather on Telstra&#039;s ailing copper network has hit the headlines, with some parts of Canberra told they&#039;ll suffer outages well into February.

The Fairfax Media reports that storms in early December led to 700 faults in the ACT and another 900 being logged in surrounding areas.

Just seven months after being connected to the Internet of Trees, I once again had the chance to see at close hand the impact of a decent lightning strike on Telstra&#039;s copper infrastructure.

It starts with the tree that was struck, which is about ten metres from the nearest Telstra pit. The tree itself is now shorter by about 15 metres.

Even after around 75 metres, the current in the copper still packed enough punch to destroy the RJ45 that terminated the last twisted pair in the bundle.

Because the cable run is more than 25 years old, record-keeping created a challenge for the Telstra techs.

Crack Telstra Cabling Squad™ goes all Tarzan to restore internet
No conduit? No worries! We&#039;ll build an Internet of Trees!
http://www.theregister.co.uk/2014/06/09/laying_cables_telstrastyle/

Some time ago, this Vulture South hack had a not-uncommon experience: loss of broadband during a storm.

Telstra, to its credit, despatched a Crack Telstra Cabling Squad™ to perform the unenviable task of burying a new cable, unless an alternative could be found.]]></description>
		<content:encoded><![CDATA[<p>THREE vans and FIVE people: that&#8217;s what Telstra needs to fix one fault<br />
Surprise: lightning and copper don&#8217;t mix<br />
<a href="http://www.theregister.co.uk/2015/01/19/how_many_telstra_vans_does_it_take_to_restore_one_service/" rel="nofollow">http://www.theregister.co.uk/2015/01/19/how_many_telstra_vans_does_it_take_to_restore_one_service/</a></p>
<p>The impact of weather on Telstra&#8217;s ailing copper network has hit the headlines, with some parts of Canberra told they&#8217;ll suffer outages well into February.</p>
<p>The Fairfax Media reports that storms in early December led to 700 faults in the ACT and another 900 being logged in surrounding areas.</p>
<p>Just seven months after being connected to the Internet of Trees, I once again had the chance to see at close hand the impact of a decent lightning strike on Telstra&#8217;s copper infrastructure.</p>
<p>It starts with the tree that was struck, which is about ten metres from the nearest Telstra pit. The tree itself is now shorter by about 15 metres.</p>
<p>Even after around 75 metres, the current in the copper still packed enough punch to destroy the RJ45 that terminated the last twisted pair in the bundle.</p>
<p>Because the cable run is more than 25 years old, record-keeping created a challenge for the Telstra techs.</p>
<p>Crack Telstra Cabling Squad™ goes all Tarzan to restore internet<br />
No conduit? No worries! We&#8217;ll build an Internet of Trees!<br />
<a href="http://www.theregister.co.uk/2014/06/09/laying_cables_telstrastyle/" rel="nofollow">http://www.theregister.co.uk/2014/06/09/laying_cables_telstrastyle/</a></p>
<p>Some time ago, this Vulture South hack had a not-uncommon experience: loss of broadband during a storm.</p>
<p>Telstra, to its credit, despatched a Crack Telstra Cabling Squad™ to perform the unenviable task of burying a new cable, unless an alternative could be found.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
