<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Stephen Hawking talking technology</title>
	<atom:link href="http://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/</link>
	<description>All about electronics and circuit design</description>
	<lastBuildDate>Sat, 18 Apr 2026 22:36:12 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.14</generator>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/comment-page-1/#comment-1466656</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Wed, 13 Jan 2016 09:52:53 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/newepa/?p=28725#comment-1466656</guid>
		<description><![CDATA[Stephen Hawking reckons he&#039;s cracked the black hole paradox
Hawking says &#039;soft hair&#039; explains everything. Not, repeat not, on cats
http://www.theregister.co.uk/2016/01/13/stephen_hawking_reckons_hes_cracked_the_black_hole_paradox/

Last August, Stephen Hawking tantalised the world by saying he&#039;d worked out a solution to the “black hole paradox”.

He&#039;s now dropped the first detailed discussion of his hypothesis for the world to pore over, here at ArXiv in a paper entitled Soft hair on black holes.

The black hole paradox wouldn&#039;t have arisen if not for his own work, in a now 40-year-old paper that proposed “Hawking radiation”. That paper created a problem because it proposed a mechanism by which information is lost to the universe forever.

Physicists don&#039;t like information destruction any more than they like singularities. Physical laws let us use the present to predict the future, but black holes destroying information also destroys the determinism we rely on.]]></description>
		<content:encoded><![CDATA[<p>Stephen Hawking reckons he&#8217;s cracked the black hole paradox<br />
Hawking says &#8216;soft hair&#8217; explains everything. Not, repeat not, on cats<br />
<a href="http://www.theregister.co.uk/2016/01/13/stephen_hawking_reckons_hes_cracked_the_black_hole_paradox/" rel="nofollow">http://www.theregister.co.uk/2016/01/13/stephen_hawking_reckons_hes_cracked_the_black_hole_paradox/</a></p>
<p>Last August, Stephen Hawking tantalised the world by saying he&#8217;d worked out a solution to the “black hole paradox”.</p>
<p>He&#8217;s now dropped the first detailed discussion of his hypothesis for the world to pore over, here at ArXiv in a paper entitled Soft hair on black holes.</p>
<p>The black hole paradox wouldn&#8217;t have arisen if not for his own work, in a now 40-year-old paper that proposed “Hawking radiation”. That paper created a problem because it proposed a mechanism by which information is lost to the universe forever.</p>
<p>Physicists don&#8217;t like information destruction any more than they like singularities. Physical laws let us use the present to predict the future, but black holes destroying information also destroys the determinism we rely on.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/comment-page-1/#comment-1457301</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 03 Dec 2015 16:09:03 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/newepa/?p=28725#comment-1457301</guid>
		<description><![CDATA[Is AI Development Moving In the Wrong Direction? 
http://search.slashdot.org/story/15/12/03/043239/is-ai-development-moving-in-the-wrong-direction

Artificial Intelligence is always just around the corner, right? We often see reports that promise near breakthroughs, but time and again they don&#039;t come to fruition. The cause of this may be that we&#039;re trying to solve the wrong problem.

efforts like IBM&#039;s Watson and Google&#039;s Inceptionism. His conclusion is that we haven&#039;t actually been trying to solve &quot;intelligence&quot;

A Short History of AI, and Why It’s Heading in the Wrong Direction
http://hackaday.com/2015/12/01/a-short-history-of-ai-and-why-its-heading-in-the-wrong-direction/]]></description>
		<content:encoded><![CDATA[<p>Is AI Development Moving In the Wrong Direction?<br />
<a href="http://search.slashdot.org/story/15/12/03/043239/is-ai-development-moving-in-the-wrong-direction" rel="nofollow">http://search.slashdot.org/story/15/12/03/043239/is-ai-development-moving-in-the-wrong-direction</a></p>
<p>Artificial Intelligence is always just around the corner, right? We often see reports that promise near breakthroughs, but time and again they don&#8217;t come to fruition. The cause of this may be that we&#8217;re trying to solve the wrong problem.</p>
<p>efforts like IBM&#8217;s Watson and Google&#8217;s Inceptionism. His conclusion is that we haven&#8217;t actually been trying to solve &#8220;intelligence&#8221;</p>
<p>A Short History of AI, and Why It’s Heading in the Wrong Direction<br />
<a href="http://hackaday.com/2015/12/01/a-short-history-of-ai-and-why-its-heading-in-the-wrong-direction/" rel="nofollow">http://hackaday.com/2015/12/01/a-short-history-of-ai-and-why-its-heading-in-the-wrong-direction/</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/comment-page-1/#comment-1449299</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 05 Nov 2015 11:09:19 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/newepa/?p=28725#comment-1449299</guid>
		<description><![CDATA[Emerging technologies and the future of humanity
http://bos.sagepub.com/content/71/6/29.full

Emerging technologies are not the danger. Failure of human imagination, optimism, energy, and creativity is the danger.

Why the future doesn’t need us: Our most powerful 21st-century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species. —Bill Joy, co-founder and at the time chief scientist, Sun Microsystems, 2000

Although it was not clear at the time, Bill Joy’s article warning of the dangers of emerging technologies was to spawn a veritable “dystopia industry.” More recent contributions have tended to focus on artificial intelligence, or AI; electric car and space technology entrepreneur Elon Musk has warned that AI is “summoning the demon” (Mack, 2015), while physicist Stephen Hawking has argued that “the development of full artificial intelligence could spell the end of the human race” (Cellan-Jones, 2014). The Future of Life Institute (2015) recently released an open letter signed by many scientific and research notables urging a ban on “offensive autonomous weapons beyond meaningful human control.” Meanwhile, the UN holds conferences and European activists mount campaigns against what they characterize as “killer robots” (see, e.g., Human Rights Watch, 2012). Headlines reinforce a sense of existential crisis; in the military and security domain, cyber conflict runs rampant, with hackers accessing millions of US personnel records, including sensitive security clearance documents. Technologies such as uncrewed aerial vehicles, commonly referred to as “drones,” are highly contentious in both civil and conflict environments, for many different reasons. A recent US Army Research Laboratory report foresees genetically and technologically enhanced soldiers networked with their battlespace robotic partners and remarks that “the presence of super humans on the battlefield in the 2050 timeframe is highly likely because the various components needed to enable this development already exist and are undergoing rapid evolution” (Kott et al., 2015: 19).]]></description>
		<content:encoded><![CDATA[<p>Emerging technologies and the future of humanity<br />
<a href="http://bos.sagepub.com/content/71/6/29.full" rel="nofollow">http://bos.sagepub.com/content/71/6/29.full</a></p>
<p>Emerging technologies are not the danger. Failure of human imagination, optimism, energy, and creativity is the danger.</p>
<p>Why the future doesn’t need us: Our most powerful 21st-century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species. —Bill Joy, co-founder and at the time chief scientist, Sun Microsystems, 2000</p>
<p>Although it was not clear at the time, Bill Joy’s article warning of the dangers of emerging technologies was to spawn a veritable “dystopia industry.” More recent contributions have tended to focus on artificial intelligence, or AI; electric car and space technology entrepreneur Elon Musk has warned that AI is “summoning the demon” (Mack, 2015), while physicist Stephen Hawking has argued that “the development of full artificial intelligence could spell the end of the human race” (Cellan-Jones, 2014). The Future of Life Institute (2015) recently released an open letter signed by many scientific and research notables urging a ban on “offensive autonomous weapons beyond meaningful human control.” Meanwhile, the UN holds conferences and European activists mount campaigns against what they characterize as “killer robots” (see, e.g., Human Rights Watch, 2012). Headlines reinforce a sense of existential crisis; in the military and security domain, cyber conflict runs rampant, with hackers accessing millions of US personnel records, including sensitive security clearance documents. Technologies such as uncrewed aerial vehicles, commonly referred to as “drones,” are highly contentious in both civil and conflict environments, for many different reasons. A recent US Army Research Laboratory report foresees genetically and technologically enhanced soldiers networked with their battlespace robotic partners and remarks that “the presence of super humans on the battlefield in the 2050 timeframe is highly likely because the various components needed to enable this development already exist and are undergoing rapid evolution” (Kott et al., 2015: 19).</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/comment-page-1/#comment-1449109</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Wed, 04 Nov 2015 15:34:32 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/newepa/?p=28725#comment-1449109</guid>
		<description><![CDATA[The real danger of artificial intelligence
http://www.edn.com/electronics-blogs/embedded-insights/4440725/The-real-danger-of-artificial-intelligence?_mc=NL_EDN_EDT_EDN_today_20151103&amp;cid=NL_EDN_EDT_EDN_today_20151103&amp;elq=cfbbcc28d82a471c9be5bc13c555c987&amp;elqCampaignId=25533&amp;elqaid=29048&amp;elqat=1&amp;elqTrackId=aa251888c88c4d53a543018d6bfde91a

In the beginning of this year several respected scientists issued a letter warning about the dangers of artificial intelligence (AI). In particular, they were concerned that we would create an AI that was able to adapt and evolve on its own, and would do so at such an accelerated rate that it would move beyond human ability to understand or control. And that, they warned, could spell the end of mankind. But I think the real danger of AI is much closer to us than that undefined and likely distant future.

For one thing, I have serious doubts about the whole AI apocalypse scenario. We are an awfully long way from creating any kind of computing system with the complexity embodied in the human brain. In addition, we don&#039;t really know what intelligence is, what&#039;s necessary for it to exist, and how it arises in the first place. Complexity alone clearly isn&#039;t enough. We humans all have brains, but intelligence varies widely. I don&#039;t see how we can artificially create an intelligence when we don&#039;t really have a specification to follow.

What we do have is a hazy description of what intelligent behavior looks like, and so far all our AI efforts have concentrated on mimicking some elements of that behavior. The results so far have offered some impressive results, but only in narrow application areas. 

And even were we able to create something that was truly intelligent, who&#039;s to say that such an entity will be malevolent?

I think the dangers of AI are real and will manifest in the near future, however. But they won&#039;t arise because of how intelligent the machines are. They&#039;ll arise because the machines won&#039;t be intelligent enough, yet we will give control over to them anyway and in so doing, lose the ability to take control ourselves.

This handoff and skill loss is already starting to happen in the airline industry, according to this New Yorker article. Autopilots are good enough to handle the vast majority of situations without human intervention, so the pilot&#039;s attention wanders and when a situation arises that the autopilot cannot properly handle, there is an increased chance that the human pilot&#039;s startled reaction will be the wrong one.

Then there is the GIGO factor (GIGO = garbage in, garbage out). If the AI system is getting incorrect information, it is highly likely to make an improper decision with potentially disastrous consequences. Humans are able to take in information from a variety of sources, integrate them all, compare that against experience, and use the result to identify faulty information sources.]]></description>
		<content:encoded><![CDATA[<p>The real danger of artificial intelligence<br />
<a href="http://www.edn.com/electronics-blogs/embedded-insights/4440725/The-real-danger-of-artificial-intelligence?_mc=NL_EDN_EDT_EDN_today_20151103&#038;cid=NL_EDN_EDT_EDN_today_20151103&#038;elq=cfbbcc28d82a471c9be5bc13c555c987&#038;elqCampaignId=25533&#038;elqaid=29048&#038;elqat=1&#038;elqTrackId=aa251888c88c4d53a543018d6bfde91a" rel="nofollow">http://www.edn.com/electronics-blogs/embedded-insights/4440725/The-real-danger-of-artificial-intelligence?_mc=NL_EDN_EDT_EDN_today_20151103&#038;cid=NL_EDN_EDT_EDN_today_20151103&#038;elq=cfbbcc28d82a471c9be5bc13c555c987&#038;elqCampaignId=25533&#038;elqaid=29048&#038;elqat=1&#038;elqTrackId=aa251888c88c4d53a543018d6bfde91a</a></p>
<p>In the beginning of this year several respected scientists issued a letter warning about the dangers of artificial intelligence (AI). In particular, they were concerned that we would create an AI that was able to adapt and evolve on its own, and would do so at such an accelerated rate that it would move beyond human ability to understand or control. And that, they warned, could spell the end of mankind. But I think the real danger of AI is much closer to us than that undefined and likely distant future.</p>
<p>For one thing, I have serious doubts about the whole AI apocalypse scenario. We are an awfully long way from creating any kind of computing system with the complexity embodied in the human brain. In addition, we don&#8217;t really know what intelligence is, what&#8217;s necessary for it to exist, and how it arises in the first place. Complexity alone clearly isn&#8217;t enough. We humans all have brains, but intelligence varies widely. I don&#8217;t see how we can artificially create an intelligence when we don&#8217;t really have a specification to follow.</p>
<p>What we do have is a hazy description of what intelligent behavior looks like, and so far all our AI efforts have concentrated on mimicking some elements of that behavior. The results so far have offered some impressive results, but only in narrow application areas. </p>
<p>And even were we able to create something that was truly intelligent, who&#8217;s to say that such an entity will be malevolent?</p>
<p>I think the dangers of AI are real and will manifest in the near future, however. But they won&#8217;t arise because of how intelligent the machines are. They&#8217;ll arise because the machines won&#8217;t be intelligent enough, yet we will give control over to them anyway and in so doing, lose the ability to take control ourselves.</p>
<p>This handoff and skill loss is already starting to happen in the airline industry, according to this New Yorker article. Autopilots are good enough to handle the vast majority of situations without human intervention, so the pilot&#8217;s attention wanders and when a situation arises that the autopilot cannot properly handle, there is an increased chance that the human pilot&#8217;s startled reaction will be the wrong one.</p>
<p>Then there is the GIGO factor (GIGO = garbage in, garbage out). If the AI system is getting incorrect information, it is highly likely to make an improper decision with potentially disastrous consequences. Humans are able to take in information from a variety of sources, integrate them all, compare that against experience, and use the result to identify faulty information sources.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/comment-page-1/#comment-1444657</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 19 Oct 2015 09:09:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/newepa/?p=28725#comment-1444657</guid>
		<description><![CDATA[We&#039;re so predictable
An algorithm can predict human behavior better than humans
http://qz.com/527008/an-algorithm-can-predict-human-behavior-better-than-humans/

You might presume, or at least hope, that humans are better at understanding fellow humans than machines are. But a new MIT study suggests an algorithm can predict someone’s behavior faster and more reliably than humans can.

Max Kanter, a master’s student in computer science at MIT, and his advisor, Kalyan Veeramachaneni, a research scientist at MIT’s computer science and artificial intelligence laboratory, created the Data Science Machine to search for patterns and choose which variables are the most relevant.

It’s fairly common for machines to analyze data, but humans are typically required to choose which data points are relevant for analysis. In three competitions with human teams, a machine made more accurate predictions than 615 of 906 human teams. And while humans worked on their predictive algorithms for months, the machine took two to 12 hours to produce each of its competition entries.

For example, when one competition asked teams to predict whether a student would drop out during the next ten days, based on student interactions with resources on an online course, there were many possible factors to consider. 

The Data Science Machine performed well in this competition. It was also successful in two other competitions, one in which participants had to predict whether a crowd-funded project would be considered “exciting” and another if a customer would become a repeat buyer.]]></description>
		<content:encoded><![CDATA[<p>We&#8217;re so predictable<br />
An algorithm can predict human behavior better than humans<br />
<a href="http://qz.com/527008/an-algorithm-can-predict-human-behavior-better-than-humans/" rel="nofollow">http://qz.com/527008/an-algorithm-can-predict-human-behavior-better-than-humans/</a></p>
<p>You might presume, or at least hope, that humans are better at understanding fellow humans than machines are. But a new MIT study suggests an algorithm can predict someone’s behavior faster and more reliably than humans can.</p>
<p>Max Kanter, a master’s student in computer science at MIT, and his advisor, Kalyan Veeramachaneni, a research scientist at MIT’s computer science and artificial intelligence laboratory, created the Data Science Machine to search for patterns and choose which variables are the most relevant.</p>
<p>It’s fairly common for machines to analyze data, but humans are typically required to choose which data points are relevant for analysis. In three competitions with human teams, a machine made more accurate predictions than 615 of 906 human teams. And while humans worked on their predictive algorithms for months, the machine took two to 12 hours to produce each of its competition entries.</p>
<p>For example, when one competition asked teams to predict whether a student would drop out during the next ten days, based on student interactions with resources on an online course, there were many possible factors to consider. </p>
<p>The Data Science Machine performed well in this competition. It was also successful in two other competitions, one in which participants had to predict whether a crowd-funded project would be considered “exciting” and another if a customer would become a repeat buyer.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/comment-page-1/#comment-1443168</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 13 Oct 2015 08:55:30 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/newepa/?p=28725#comment-1443168</guid>
		<description><![CDATA[Stephen Hawking Says We Should Really Be Scared Of Capitalism, Not Robots
http://www.epanorama.net/newepa/2015/10/09/stephen-hawking-says-we-should-really-be-scared-of-capitalism-not-robots/]]></description>
		<content:encoded><![CDATA[<p>Stephen Hawking Says We Should Really Be Scared Of Capitalism, Not Robots<br />
<a href="http://www.epanorama.net/newepa/2015/10/09/stephen-hawking-says-we-should-really-be-scared-of-capitalism-not-robots/" rel="nofollow">http://www.epanorama.net/newepa/2015/10/09/stephen-hawking-says-we-should-really-be-scared-of-capitalism-not-robots/</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/comment-page-1/#comment-1443164</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 13 Oct 2015 08:51:47 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/newepa/?p=28725#comment-1443164</guid>
		<description><![CDATA[Andy Rubin: AI Is The Future Of Computing, Mobility
http://www.eetimes.com/author.asp?section_id=36&amp;doc_id=1327971

Andy Rubin, the man behind Google&#039;s Android operating system, thinks artificial intelligence will define computing in the future.

&quot;There is a point in time -- I have no idea when it is, it won&#039;t be in the next 10 years, or 20 years -- where there is some form of AI, for lack of a better term, that will be the next computing platform,&quot; said Rubin onstage at the Code/Mobile conference.

More specifically, Rubin believes Internet-connected devices (smartphones, tablets, thermostats, smoke detectors, and cars, for example) will create massive amounts of data that will be analyzed by deep-learning technologies. This process will be the foundation of the first artificial intelligence networks. They will be able to tell people, for instance, what their thermostat is set to, when it&#039;s time to hit the gym, and whether or not your pool has too much chlorine.

Context is important. &quot;The thing that&#039;s gonna be new is the part of the cloud that&#039;s forming the intelligence from all the information that&#039;s coming,&quot; said Rubin. 

Andy Rubin: AI Is The Future Of Computing, Mobility
http://www.informationweek.com/mobile/mobile-applications/andy-rubin-ai-is-the-future-of-computing-mobility/a/d-id/1322556

Andy Rubin, the man behind Google&#039;s Android operating system, thinks artificial intelligence will define computing in the future.]]></description>
		<content:encoded><![CDATA[<p>Andy Rubin: AI Is The Future Of Computing, Mobility<br />
<a href="http://www.eetimes.com/author.asp?section_id=36&#038;doc_id=1327971&#038;amp" rel="nofollow">http://www.eetimes.com/author.asp?section_id=36&#038;doc_id=1327971&#038;amp</a>;</p>
<p>Andy Rubin, the man behind Google&#8217;s Android operating system, thinks artificial intelligence will define computing in the future.</p>
<p>&#8220;There is a point in time &#8212; I have no idea when it is, it won&#8217;t be in the next 10 years, or 20 years &#8212; where there is some form of AI, for lack of a better term, that will be the next computing platform,&#8221; said Rubin onstage at the Code/Mobile conference.</p>
<p>More specifically, Rubin believes Internet-connected devices (smartphones, tablets, thermostats, smoke detectors, and cars, for example) will create massive amounts of data that will be analyzed by deep-learning technologies. This process will be the foundation of the first artificial intelligence networks. They will be able to tell people, for instance, what their thermostat is set to, when it&#8217;s time to hit the gym, and whether or not your pool has too much chlorine.</p>
<p>Context is important. &#8220;The thing that&#8217;s gonna be new is the part of the cloud that&#8217;s forming the intelligence from all the information that&#8217;s coming,&#8221; said Rubin. </p>
<p>Andy Rubin: AI Is The Future Of Computing, Mobility<br />
<a href="http://www.informationweek.com/mobile/mobile-applications/andy-rubin-ai-is-the-future-of-computing-mobility/a/d-id/1322556" rel="nofollow">http://www.informationweek.com/mobile/mobile-applications/andy-rubin-ai-is-the-future-of-computing-mobility/a/d-id/1322556</a></p>
<p>Andy Rubin, the man behind Google&#8217;s Android operating system, thinks artificial intelligence will define computing in the future.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/comment-page-1/#comment-1436115</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Thu, 17 Sep 2015 11:50:35 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/newepa/?p=28725#comment-1436115</guid>
		<description><![CDATA[Giraffe: Using Deep Reinforcement Learning to Play Chess
http://arxiv.org/pdf/1509.01549v1.pdf

This report presents Giraffe, a chess engine that uses self-play to discover all its domain-specific knowledge, with minimal hand-crafted knowledge given by the programmer. Unlike previous attempts using machine learning only to perform parameter-tuning on hand-crafted evaluation functions, Giraffe&#039;s learning system also performs automatic feature extraction and pattern recognition.

With the move evaluator guiding a probability-based search using the learned evaluator, Giraffe plays at approximately the level of an FIDE International Master (top 2.2% of tournament chess players with an official rating)]]></description>
		<content:encoded><![CDATA[<p>Giraffe: Using Deep Reinforcement Learning to Play Chess<br />
<a href="http://arxiv.org/pdf/1509.01549v1.pdf" rel="nofollow">http://arxiv.org/pdf/1509.01549v1.pdf</a></p>
<p>This report presents Giraffe, a chess engine that uses self-play to discover all its domain-specific knowledge, with minimal hand-crafted knowledge given by the programmer. Unlike previous attempts using machine learning only to perform parameter-tuning on hand-crafted evaluation functions, Giraffe&#8217;s learning system also performs automatic feature extraction and pattern recognition.</p>
<p>With the move evaluator guiding a probability-based search using the learned evaluator, Giraffe plays at approximately the level of an FIDE International Master (top 2.2% of tournament chess players with an official rating)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2015/01/09/stephen-hawking-talking-technology/comment-page-1/#comment-1435792</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Wed, 16 Sep 2015 07:19:15 +0000</pubDate>
		<guid isPermaLink="false">http://www.epanorama.net/newepa/?p=28725#comment-1435792</guid>
		<description><![CDATA[Google, Robotics &amp; AI: A Systems Approach 
http://www.eetimes.com/author.asp?section_id=36&amp;doc_id=1327692
Robotics &amp; AI: A Systems Approach 
http://www.ebnonline.com/author.asp?section_id=3737&amp;doc_id=278609]]></description>
		<content:encoded><![CDATA[<p>Google, Robotics &amp; AI: A Systems Approach<br />
<a href="http://www.eetimes.com/author.asp?section_id=36&#038;doc_id=1327692&#038;amp" rel="nofollow">http://www.eetimes.com/author.asp?section_id=36&#038;doc_id=1327692&#038;amp</a>;<br />
Robotics &amp; AI: A Systems Approach<br />
<a href="http://www.ebnonline.com/author.asp?section_id=3737&#038;doc_id=278609" rel="nofollow">http://www.ebnonline.com/author.asp?section_id=3737&#038;doc_id=278609</a></p>
]]></content:encoded>
	</item>
</channel>
</rss>
