Stephen Hawking talking technology

I just read yesterday that the famous English theoretical physicist, cosmologist, and author Stephen Hawking celebrated his 73rd birthday. So this is a good time to post some interesting links on Stephen Hawking.

Just last month there were articles reporting that Stephen Hawking’s speech rate doubled with Intel’s new system ACAT (Assistive Context Aware Toolkit). ACAT has doubled Hawking’s typing rate and brought a tenfold improvement in common tasks. Thanks to technology integrated from SwiftKey, he has to type 20 percent fewer characters overall. The article Stephen Hawking: How He Speaks & Spells tells about the technology that helped resurrect the life of Stephen Hawking after the physicist was stricken by Lou Gehrig’s disease. Stephen Hawking’s new speech system is free and open-source.

How does the speech system work in practice? Here is something worth watching, and also some funny stuff for Friday: Last Week Tonight with John Oliver: Stephen Hawking Interview (HBO).

One of the highlights of the brilliant video involves Oliver asking if there is a universe where he is the intellectual superior, to which Hawking calmly ripostes, “Yes, and also a universe where you’re funny.”

Here are some things to think about: Professor Stephen Hawking has given his new voice box a workout by once again predicting that artificial intelligence will spell humanity’s doom. The article Stephen Hawking again warns AI will supersede humans tells that in a chat with the BBC, Hawking said “the primitive forms of artificial intelligence we already have have proved very useful, but I think the development of true artificial intelligence could spell the end of the human race.” The article Stephen Hawking: Humans evolve slowly, AI could stomp us out says that in the latest of his pessimistic thoughts on the future, the famed physicist warns yet again of the end of the human race. Hawking argues that “once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate.” “Humans limited by slow biological evolution cannot compete and will be superseded.” Other pieces warn that AI could be a ‘real danger’ and that AI could be the end of humanity.

Hawking is not alone in these thoughts. The article Elon Musk: artificial intelligence is our biggest existential threat tells that the AI investor says humanity risks ‘summoning a demon’ and calls for more regulatory oversight. Elon Musk has spoken out against artificial intelligence (AI), declaring it the most serious threat to the survival of the human race. “I think we should be very careful about artificial intelligence.” He recently described his investments in AI research as “keeping an eye on what’s going on”, rather than as a viable return on capital. Musk is one of the high-profile investors, alongside Facebook’s Mark Zuckerberg and the actor Ashton Kutcher, in Vicarious, a company aiming to build a computer that can think like a person, with a neural network capable of replicating the part of the brain that controls vision, body movement and language.

30 Comments

  1. Tomi Engdahl says:

    Experts pledge to rein in AI research
    12 January 2015
    http://www.bbc.com/news/technology-30777834

    Scientists including Stephen Hawking and Elon Musk have signed a letter pledging to ensure artificial intelligence research benefits mankind.

    The promise of AI to solve human problems had to be matched with safeguards on how it was used, it said.

    The letter was drafted by the Future of Life Institute, which seeks to head off risks that could wipe out humanity.

    The letter comes soon after Prof Hawking warned that AI could “supersede” humans.

    Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter
    http://futureoflife.org/misc/open_letter

  2. Tomi Engdahl says:

    An Open Letter To Everyone Tricked Into Fearing AI
    http://tech.slashdot.org/story/15/01/15/2215241/an-open-letter-to-everyone-tricked-into-fearing-ai

If you’re into robots and AI, you’ve probably read about the open letter on AI safety. But do you realize how blatantly the media is misinterpreting its purpose and its message?

    An Open Letter To Everyone Tricked Into Fearing Artificial Intelligence
    Don’t believe the hype about artificial intelligence, or the horror
    http://www.popsci.com/open-letter-everyone-tricked-fearing-ai

    Earlier this week, an organization called the Future of Life Institute issued an open letter on the subject of building safety measures into artificial intelligence systems (AI). The letter, and the research document that accompanies it, present a remarkably even-handed look at how AI researchers can maximize the potential of this technology.

    The letter’s own wording, and the headlines written about it:
    “Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”
    “Artificial intelligence experts sign open letter to protect mankind from machines”
    “Experts pledge to rein in AI research”

    I’d like to think that this is rock bottom. Journalists can’t possibly be any more clueless, or callously traffic-baiting, when it comes to robots and AI. And readers have to get tired, at some point, of clicking on the same shrill headlines that quote the same non-AI researchers—Elon Musk and Stephen Hawking, to be specific—making the same doomsday proclamations.

    Fear-mongering always loses its edge over time, and even the most toxic media coverage has an inherent half-life. But it never stops.

    Forget about the risk that machines pose to us in the decades ahead. The more pertinent question, in 2015, is whether anyone is going to protect mankind from its willfully ignorant journalists.

    “Affixing a headline that conjures visions of skeletal androids stomping human skulls underfoot turns complex, transformative technology into a carnival sideshow.”

    The speedier and more dramatic course of action is to provide what looks like context, but is really just Elon Musk and Stephen Hawking talking about a subject that is neither of their specialties. I’m mentioning them, in particular, because they’ve become the collective voice of AI panic. They believe that machine superintelligence could lead to our extinction. And their comments to that effect have the ring of truth, because they come from brilliant minds with a blessed lack of media filters. If time is money, then the endlessly recycled quotes from Musk and Hawking are a goldmine for harried reporters and editors. What more context do you need than a pair of geniuses publicly fretting about the fall of humankind?

    The story behind the open letter is, in some ways, more interesting than the letter itself. On January 2, roughly 70 researchers met at a hotel in San Juan, Puerto Rico, for a three-day conference on AI safety. This was a genuinely secretive event. The Future of Life Institute (FLI) hadn’t alerted the media in advance, or invited any reporters to attend, despite having planned the meeting at least six months in advance. Even now, the event’s organizers won’t provide a complete list of attendees. FLI wanted researchers to speak candidly, and without worry of attribution, during the weekend-long schedule of formal and informal discussions.

    Those headlines from BBC News and CNET would have been perfectly at home on the movie screen, signaling the global response to a legitimately terrifying announcement.

    In fact, the open letter from FLI is a pretty bloodless affair. The title alone—Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter—should manage the reader’s expectations. The letter references advances in machine learning, neuroscience, and other research areas that, in combination, are yielding promising results for AI systems. As for doom and gloom, the only relevant statements are the aforementioned sentence about “potential pitfalls,” and this one:

    “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

    That, in all honesty, is as dark as this open letter gets.

    The truth is, there are researchers within the AI community who are extremely concerned about the question of artificial superintelligence, which is why FLI included a section in the letter’s companion document about those fears.

    The history of AI research is full of theoretical benchmarks and milestones whose only barrier appeared to be a lack of computing resources. And yet, even as processor and storage technology has raced ahead of researchers’ expectations, the deadlines for AI’s most promising (or terrifying, depending on your agenda) applications remain stuck somewhere in the next 10 or 20 years.

    The key mistake, one researcher quoted in the article says, is in confusing principle with execution, and assuming that throwing more resources at a given system will trigger an explosive increase in capability. “People in computer science are very much aware that, even if you can do something in principle, if you had unlimited resources, you might still not be able to do it,” he says, “because unlimited resources don’t mean an exponential scaling up.”

  3. Tomi Engdahl says:

    Humanity can defeat SkyNet with BOOKS, says IT think tank
    If CompSci kiddies read Neal Stephenson and Dave Eggers, our species will endure
    http://www.theregister.co.uk/2015/01/27/teach_ai_ethics_with_scifi_says_nicta/

    A group of researchers working for National ICT Australia reckons computer science courses need to look at artificial intelligence from an ethical point of view – and the popularity of sci-fi among comp.sci students makes that a good place to start.

    As the research team, which included NICTA’s Nicholas Mattei, the University of Kentucky’s Judy Goldsmith and Centre College’s Emanuelle Burton, explain in their paper, ethical questions arise in a variety of AI environments: the “mechanics of the modern military” and the “slow creep of a mechanized workforce”, for example.

    “We have real, present ethics violations and challenges arising from current AI techniques and implementations, in the form of systematic decreases in privacy; increasing reliance on AI for our safety, and the ongoing job losses due to mechanization and automatic control of work processes,” the paper states.

    Computer science courses, they reckon, fall short in the ethical debate, even though “AI professionals come up against ethical issues on a regular basis”.

    Such things have, they note, been argued in sci-fi for decades.

  4. Tomi Engdahl says:

    An AI apocalypse? Really?
    http://www.edn.com/electronics-blogs/embedded-insights/4438559/An-AI-apocalypse–Really-?_mc=NL_EDN_EDT_EDN_today_20150204&cid=NL_EDN_EDT_EDN_today_20150204&elq=d3b21b0ae93a40e49e97a511db26589e&elqCampaignId=21480

    The press has reported dire warnings coming from some prominent scientists and technologists about the potential catastrophes the development of artificial intelligence (AI) may bring. Phrases like summoning the demon and end of the human race are being bandied about. Really? Don’t you think this is a bit over the top?

    As embedded systems developers we are used to calling our designs “smart,” but the truth is that they are nothing more than machines following an algorithmic program. Even the most powerful systems we have created only produce results that simulate intelligent behavior. The program “Eugene,” which is said to have passed the Turing Test, simply responds to typed user questions. It is not capable of independently composing a question, much less planning the takeover of mankind.

    So far, the closest we have come to creating a machine that exhibited the self-adaptive characteristics of true intelligence is what has been called uploading a worm’s mind into a robot. The resulting machine’s behavior mimicked in some ways the obstacle avoidance and food seeking behaviors of roundworms, but not other behaviors. Even the scientists who created this device said they were only 20%-30% of the way to replicating the worm’s full repertoire of behaviors. And that’s still just simulation, not thought.

    The best AI we can do is a quarter of a worm, yet luminaries such as physicist Stephen Hawking, Tesla’s Elon Musk, and (most recently), Microsoft’s Bill Gates have all gone on record expressing their beliefs in the dangers of AI. The warnings are dire indeed, and because they are coming from individuals that appear to be authorities it seems like they should be taken at face value. But I’m not at all convinced.

    Perhaps this is simply the instinctive human fear of the unknown for the threats it may contain. The new always contains opportunity but also an element of risk.

    And on the whole, we instinctively avoid risk unless circumstances force us to seek opportunity. Perhaps it’s no wonder that most people view invention as vaguely threatening. But to fear AI research as a potential path to extinction borders on paranoia. We are nowhere near to even understanding what intelligence is, much less how to create it.

    The recent release of an open letter on AI research priorities suggests a more sensible approach. Signed by Hawking and Musk among many others, it provides a much more reasonable look at the issue.

  5. Tomi Engdahl says:

    The Poem That Passed the Turing Test
    http://tech.slashdot.org/story/15/02/05/2030201/the-poem-that-passed-the-turing-tes

    In 2011, the editors of one of the nation’s oldest student-run literary journals selected a short poem called “For the Bristlecone Snag” for publication in its Fall issue.

    It’s unremarkable, mostly, except for one other thing: It was written by a computer algorithm, and nobody could tell.

    The Poem That Passed the Turing Test
    http://motherboard.vice.com/read/the-poem-that-passed-the-turing-test

    Zackary Scholl, then an undergrad at Duke University, had modified a program that utilized a context-free grammar system to spit out full-length, auto-generated poems. “It works by having the poem dissected into smaller components: stanzas, lines, phrases, then verbs, adjectives, and nouns,” Scholl explained. “When a call to create a poem is made, then it randomly selects components of the poem and recursively generates each of those.”
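
    To make the recursive expansion Scholl describes concrete, here is a minimal sketch in Python. The toy grammar and vocabulary are invented for illustration; this is not his actual program.

        import random

        # Each nonterminal maps to a list of possible expansions; an
        # expansion is a sequence of nonterminals and/or terminal words.
        GRAMMAR = {
            "POEM": [["LINE", "LINE", "LINE"]],
            "LINE": [["the", "ADJ", "NOUN", "VERB"]],
            "ADJ":  [["silent"], ["golden"], ["hollow"]],
            "NOUN": [["snag"], ["river"], ["moon"]],
            "VERB": [["waits"], ["burns"], ["sings"]],
        }

        def generate(symbol="POEM"):
            # Terminal words are returned as-is; nonterminals pick a
            # random expansion and recurse into each of its components.
            if symbol not in GRAMMAR:
                return symbol
            expansion = random.choice(GRAMMAR[symbol])
            return " ".join(generate(s) for s in expansion)

        print(generate())  # e.g. "the hollow moon sings the golden snag waits ..."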

    Scholl’s work forms part of a small but burgeoning canon of algorithmically abetted poetry and prose—from bots that mine Twitter to build sonnets in iambic pentameter, to poem drones that scrawl lines on sidewalks, to automated novel-generators, the gap between man and machine-made art has, ever so slightly, begun to close.

    In 2010, Scholl began submitting the output to online poetry websites, in order to gauge reader reaction, which he says was “overwhelmingly positive.” The year after that, he sent his auto-generated poems to literary magazines, where they were rejected from the likes of Memoir Journal and First Writer Poetry. Scholl then submitted a battery of poems written by his algorithm to the Duke literary journal, The Archive. One was accepted.

    He never told the editors that the poem was ‘written’ by what he considers to be an artificial intelligence. “I didn’t want to embarrass anybody,” Scholl told me.

    Four years later, Scholl, now a PhD candidate in computational biology, published a blog post revealing his stunt, “Turing Test: Passed, Using Computer-Generated Poetry.”

    “This AI can create poetry indistinguishable from real poets”

    Scholl contends his poetry generator satisfies some version of the test. “This AI can create poetry indistinguishable from real poets,” he wrote. “The real Turing Test of this AI was to get it accepted to a literary journal, which was accomplished—this poetry was successfully accepted into a literary journal at a prestigious university.”

    Of course, AI scholars would likely be skeptical—after all, last year, when the much more sophisticated chatbot Eugene Goostman “passed” the Turing Test by posing as a Russian teenager who tapped out answers to human questions in broken English, many in the AI community cried foul. Sneaking a robot-generated poem into an undergraduate literary journal is a similarly insufficient standard for proving the creep of artificial intelligence; poetry is often ambiguous and bizarre.

    “I think that’s why we published this poem—because it was intriguing. It was not trite. And this was the most coherent one.”

    Scholl had sent in 26 poems, one for each letter of the alphabet, and “Bristlecone” was the only one that was published.

    So, if this is to be considered a milestone—a marker on the road to autonomous robot artistry—it’s a vanishingly small one. Still, it’s an interesting little milestone; none of the poets or coders I’ve spoken to knew of another machine-generated poem that was accepted for publication and published as if authored by a human.

    But Scholl isn’t as interested in the novelty alone. “I do consider it just another way of doing poetry,”

    “This program works on the basis that every word in the English language is either ‘positive’ or ‘negative,’”

    “A ‘poem’ is a group of sentences that are structured in a way to have +1, -1 or 0 in terms of the positivity/negativity. A ‘mushy poem’ is strictly positive.”

    Scholl acknowledges that the program is very basic: “The only thing it does is store information about poetic words. The reasoning is very simple.”
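
    A rough sketch of that scoring rule in Python (the tiny word list here is invented; the real program stores sentiment information for a large poetic vocabulary):

        # Every known word carries +1 (positive) or -1 (negative);
        # unknown words count as 0.
        LEXICON = {"love": 1, "bright": 1, "bloom": 1, "dark": -1, "decay": -1}

        def sentence_score(sentence):
            return sum(LEXICON.get(word, 0) for word in sentence.lower().split())

        def is_mushy(poem):
            # A 'mushy poem' is strictly positive: every sentence scores > 0.
            sentences = [s for s in poem.split(".") if s.strip()]
            return all(sentence_score(s) > 0 for s in sentences)

        print(is_mushy("Bright love will bloom."))          # True
        print(is_mushy("Dark decay. Bright love blooms."))  # False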

    “Maybe it is an AI,” he added, “but a simpler one than speech recognition.”

    Turing Test: Passed, using computer-generated poetry
    https://rpiai.wordpress.com/2015/01/24/turing-test-passed-using-computer-generated-poetry/

  6. Tomi Engdahl says:

    No, A ‘Supercomputer’ Did NOT Pass The Turing Test For The First Time And Everyone Should Know Better
    from the what-a-waste-of-time dept
    https://www.techdirt.com/articles/20140609/07284327524/no-computer-did-not-pass-turing-test-first-time-everyone-should-know-better.shtml

  7. Tomi Engdahl says:

    Google’s artificial intelligence breakthrough may have a huge impact on self-driving cars and much more
    http://www.washingtonpost.com/blogs/innovations/wp/2015/02/25/googles-artificial-intelligence-breakthrough-may-have-a-huge-impact-on-self-driving-cars-and-much-more/

    Google researchers have created an algorithm that has a human-like ability to learn, marking a significant breakthrough in the field of artificial intelligence. In a paper published in Nature this week, the researchers demonstrated that the algorithm could master many Atari video games better than humans, simply through playing the game and learning from experience.

    “We can go all the way from pixels to actions as we call it and actually it can work on a challenging task that even humans find difficult,” said Demis Hassabis, one of the authors of the paper. “We know now we’re on the first rung of the ladder and it’s a baby step, but I think it’s an important one.”

    The researchers provided the general-purpose algorithm with only its score on each game and the visual feed of the game, leaving it to figure out how to win. It dominated Video Pinball, Boxing and Breakout, but struggled with Montezuma’s Revenge and Asteroids.

    The algorithm is designed to tackle any sequential decision-making problem.
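
    The underlying technique is reinforcement learning. Here is a minimal sketch of the core update rule, using tabular Q-learning; DeepMind’s agent replaces the table with a deep neural network fed raw pixels, so this shows only the skeleton of the idea, with invented states and actions.

        import random
        from collections import defaultdict

        Q = defaultdict(float)          # Q[(state, action)] -> expected return
        ACTIONS = ["left", "right", "fire"]
        ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1

        def choose_action(state):
            # Epsilon-greedy: mostly exploit the best known action,
            # occasionally explore a random one.
            if random.random() < EPSILON:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q[(state, a)])

        def update(state, action, reward, next_state):
            # Nudge the estimate toward reward + discounted best future value.
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            target = reward + GAMMA * best_next
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])

        update("s0", "fire", 1.0, "s1")
        print(choose_action("s0"))      # usually "fire" after the reward

    The only feedback the agent needs is the score change (the reward), which matches the article’s description of learning from the score and the screen alone.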

    “In the future I think what we’re most psyched about is using this type of AI to help do science and help with things like climate science, disease, all these areas which have huge complexity in terms of the data that the human scientists are having to deal with,” Hassabis said.

    Another potential use case might be telling your phone to plan a trip to Europe, and it would book your hotels and flights.

  8. Tomi Engdahl says:

    Hawking’s ACAT Technology Now Open-Source
    http://www.medicaldesignbriefs.com/component/content/article/1104-mdb/news/21702

    The Assistive Context Aware Toolkit (ACAT) technology, used by famous physicist Stephen Hawking, is now open-source.

    Designed to respond to one’s facial movements, ACAT allows Hawking to communicate through speech as well as access his computer and deliver lectures.

    Hawking’s cheek sensor detects an infrared switch mounted to his glasses and helps him select a character on the computer. Integrating software from British language technology company SwiftKey helps the system predict Hawking’s next characters and words. The information is then sent to an existing speech synthesizer to enable communication.
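
    A highly simplified sketch of that input loop in Python (the row layout, vocabulary, and sensor hook are all invented for illustration; Intel’s actual ACAT code is far more elaborate):

        import itertools

        ROWS = ["abcdef", "ghijkl", "mnopqr", "stuvwx", "yz ,._"]
        VOCAB = ["the", "theory", "there", "universe", "black", "hole"]

        def predict(prefix, k=3):
            # Word prediction: offer completions so fewer characters are typed.
            return [w for w in VOCAB if w.startswith(prefix)][:k]

        def scan(items, switch_fired):
            # Highlight items one at a time; the single switch (in Hawking's
            # case, a cheek movement seen by the infrared sensor) selects one.
            for item in itertools.cycle(items):
                if switch_fired(item):
                    return item

        # Demo with a fake "sensor" that fires on a chosen target:
        fake_switch = lambda target: (lambda item: item == target)
        row = scan(ROWS, fake_switch("stuvwx"))   # first pass selects a row
        char = scan(row, fake_switch("t"))        # second pass selects a letter
        print(char, predict("th"))                # -> t ['the', 'theory', 'there']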

    ACAT is the result of years of collaboration between Intel and Hawking.

  9. Tomi Engdahl says:

    AI and IoT merger could signal the end of civilisation, says John Lewis IT head
    Expresses concerns over handling of data with IoT in retail
    http://www.theinquirer.net/inquirer/news/2399659/ai-and-iot-merger-could-signal-the-end-of-civilisation-says-john-lewis-it-head

    THE BLENDING OF artificial intelligence (AI) and the Internet of Things (IoT) in the future could signal the end of civilisation as we know it, John Lewis’ IT chief has warned.

    Paul Coby, speaking at the IoT Summit in London, cited Stephen Hawking, Bill Gates and Elon Musk, all of whom have warned of the dangers associated with developing computers that can think for themselves.

    “When [Hawking, Gates and Musk] all agree on something, it’s worth paying attention,” he said.

    “If you think about putting the IoT and connectivity in almost everything with AI, is it going to be like Einstein and the splitting of the atom?” he asked.

    Coby also noted that John Lewis is concerned about the ambiguity of data attached with the rise of the IoT in retail, for instance the rise of wearables and connected home appliances.

    He highlighted two aspects. The first is spotting the right data in a data-saturated society, for example coping with all that information and still being able to pick out the data that matters, and acting on it in a way that saves or helps customers.

    The second aspect is dealing with customers’ concerns about giving away their home data to this “thing” that many do not understand, as well as not knowing who owns it.

  10. Tomi Engdahl says:

    Steve Wozniak: ‘Computers are going to take over from humans’
    http://uk.businessinsider.com/steve-wozniak-artificial-intelligence-interview-humans-family-pets-2015-3

    Apple cofounder Steve Wozniak has revealed that he’s increasingly worried about the threat that Artificial Intelligence (AI) poses to humanity.

    “Computers are going to take over from humans,” the 64-year-old engineer told the Australian Financial Review. “No question.”

    Increasing numbers of prominent figures in the tech world have begun to speak up about the potential risks of AI. While truly intelligent machines (if actually theoretically possible) could be a boon to industry, they could also prove dangerous if they decided to turn on their creators.


  11. Tomi Engdahl says:

    Meet the man who inspired Elon Musk’s fear of the robot uprising
    Nick Bostrom explains his AI prophecies of doom to El Reg
    http://www.theregister.co.uk/2015/05/03/ai_expert_nick_bostrom_talks_to_el_reg/

    Exclusive Interview Swedish philosopher Nick Bostrom is quite a guy. The University of Oxford professor is known for his work on existential risk, human enhancement ethics, superintelligence risks and transhumanism. He also reckons the probability that we are all living in a Matrix-esque computer simulation is quite high.

    But he’s perhaps most famous these days for his book, Superintelligence: Paths, Dangers, Strategies, particularly since it was referenced by billionaire space rocket baron Elon Musk in one of his many tweets on the terrifying possibilities of artificial intelligence.

    Prophecies of AI-fuelled doom from the likes of Musk, Stephen Hawking and Bill Gates hit the headlines earlier this year. They all fretted that allowing the creation of machine intelligence would lead to the extinction or dystopian enslavement of the human race.

  12. Tomi Engdahl says:

    Benjamin Wallace-Wells / New York Magazine:
    Four years after ‘Jeopardy’ win, IBM’s Watson program has seen applications in 75 industries including finance, healthcare, molecular biology

    http://nymag.com/daily/intelligencer/2015/05/jeopardy-robot-watson.html

    Watson was just 4 years old when it beat the best human contestants on Jeopardy! As it grows up and goes out into the world, the question becomes: How afraid of it should we be?

  13. Tomi Engdahl says:

    Gartner Says Smarter Machines Will Challenge the Human Desire for Control
    http://www.gartner.com/newsroom/id/3072717

    CIOs Should Maintain and Promote an Objective Understanding of the Real Capabilities of Smart Machines

    The growth of sensor-based data, combined with advanced algorithms and artificial intelligence (AI), is enabling smart machines to make increasingly significant business decisions over which humans have decreasing control, according to Gartner, Inc.

    “As smart machines become increasingly capable, they will become viable alternatives to human workers under certain circumstances, which will lead to significant repercussions for the business and thus for CIOs,” said Stephen Prentice, vice president and Gartner Fellow. “In the 2015 Gartner CEO and business leader survey, opinions were equally divided on this issue and indicate that business leaders are starting to take notice of the advances being made and more readily acknowledge that the threat to knowledge work is real.”

    Already the growing capabilities of automation and robotics have led to their increasing deployment in a wide range of industrial and business environments, which has prompted debate as to their impact on existing jobs in sectors such as manufacturing. “As smart machines become more capable, and more affordable, they will be more widely deployed in multiple roles across many industries, replacing some human workers. This is nothing new. The deployment of new technology has eliminated millions of jobs over the course of history,” said Mr. Prentice. “At the same time, entirely new industries have been developed by those technologies, almost always creating millions of new jobs. Organizations must balance the necessity to exploit the significant advances being made in the capabilities of various smart machines with the perceived negative impact of resulting job losses.”

    During the next five years, Gartner predicts that smart machines will inevitably be relied on to make more decisions that are of growing significance to the business, raising the fear that they may become “unstoppable” or run out of control.

    “The fear among many individuals is that the machines will ‘take over,’ start making decisions on their own and run out of control, posing a threat to individuals, society and even humanity itself,” explained Mr. Prentice. “However, within the confines of currently known technology, the idea of machines attaining some level of ‘self-awareness,’ ‘consciousness’ or ‘sentience’ is still the stuff of science fiction. Even with the coming generation of smart machines, which actively ‘learn’ and will be able to adapt their actions to optimize their progress toward a goal, humans can choose to remain in control.”

  14. Tomi Engdahl says:

    The Future of AI: a Non-Alarmist Viewpoint
    http://tech.slashdot.org/story/15/06/17/0324258/the-future-of-ai-a-non-alarmist-viewpoint

    There has been a lot of discussion recently about the dangers posed by building truly intelligent machines. A lot of well-educated and smart people, including Bill Gates and Stephen Hawking, have stated they are fearful about the dangers that sentient Artificial Intelligence (AI) poses to humanity. But maybe it makes more sense to focus on the societal challenges that advances in AI will pose in the near future (Dice link), rather than worrying about what will happen when we eventually solve the titanic problem of building an artificial general intelligence that actually works.

    Don’t worry about a hypothetical SkyNet, in other words; the bigger issue is what a (dumber) AI will do to your profession over the next several years.

    The Future of AI: A Non-Alarmist Viewpoint
    http://insights.dice.com/2015/06/16/future-of-ai-a-non-alarmist-viewpoint/

  15. Tomi Engdahl says:

    Woz Wows with Humanitarianism
    http://www.eetimes.com/document.asp?doc_id=1326971&

    When asked if robots were going to take over the world, he admitted that he had once worried about that, but no more.

    “If you go into an airport and use the kiosk, that machine is taking over a job the same as they are doing at factories. So what happens when computers achieve consciousness?” Woz asked us. “Now I think it will be hundreds of years before computers will even be smart enough to take over, but by then they will understand that nature has to be preserved and man is a part of nature, so I’m not worried.”

  16. Tomi Engdahl says:

    The future of artificial intelligence: Myths, realities and aspirations
    http://blogs.microsoft.com/next/2015/07/06/the-future-of-artificial-intelligence-myths-realities-and-aspirations/

    Only a few years ago, it would have seemed improbable to assume that a piece of technology could quickly and accurately understand most of what you say – let alone translate it into another language.

    A new wave of artificial intelligence breakthroughs is making it possible for technology to do all sorts of things we at first can’t believe and then quickly take for granted.

    And yet, although these systems can do some individual tasks as well as or even better than humans, technology still cannot approach the complex thinking that humans have.

    “It’s a long way from general intelligence,” Bishop said.

    The latest breakthroughs in artificial intelligence are the result of core advances in AI, including developments in machine learning, reasoning and perception, on a stage set by advances in multiple areas of computer science.

    Computing power has increased dramatically and has scaled to the cloud. Meanwhile, the growth of the Web has provided opportunities to collect, store and share large amounts of data.

    There also have been great strides in probabilistic modeling

    The new capabilities also are coming from advances in specific technologies, such as machine learning methods called neural networks, which can be trained from massive data sets to recognize objects in images or to understand spoken words.

    Another promising effort is “integrative AI,” in which competencies including vision, speech, natural language, machine learning and planning are brought together to create more capable systems, such as one that can see, understand and converse with people.

    “We see more and more of these successes in daily life,” Horvitz said. “We quickly grow accustomed to them and come to expect them.”

    That, in turn, means that big technology companies are growing more dependent on building successful artificial intelligence-based systems.

    “AI has become more central to the competitive landscape for these companies,” Horvitz said.

    In the long run, Horvitz sees vast potential for artificial intelligence to enhance people’s quality of life in areas including education, transportation and healthcare.

    Despite the recent breakthroughs in artificial intelligence research, many experts believe some of the biggest advances in artificial intelligence are years, if not decades, away. As these systems improve, Horvitz said researchers are creating safeguards to ensure that AI systems will perform safely even in unforeseen situations.

    “We have to stay vigilant, be proactive and make good decisions, especially as we build more powerful intelligences, including systems that might be able to outthink us or rethink things in ways that weren’t planned by the creators,” he said.

    Researchers, scientific societies and industry experts are building in tools, controls and constraints to prevent unexpected consequences.

    They also are constantly evaluating ethical and legal concerns

  17. Tomi Engdahl says:

    Linux founder says you must be ‘on drugs’ if you’re scared of AI
    Torvalds dismisses claims from Elon Musk and Steve Wozniak
    http://www.theinquirer.net/inquirer/news/2416607/linux-founder-says-you-must-be-on-drugs-if-youre-scared-of-ai

    LINUX FOUNDER Linus Torvalds has said that artificial intelligence (AI) is nothing to fear, dismissing remarks from the likes of Elon Musk, Stephen Hawking and Steve Wozniak.

    Torvalds made his views on AI plain when a Slashdot user quizzed him as to whether he thinks it will be a “great gift” to mankind or a potential danger.

    Debunking Wozniak’s recent claims that humans will become the pets of robots when they take over the world, Torvalds said: “We’ll get AI, and it will almost certainly be through something very much like recurrent neural networks.

    “And the thing is, since that kind of AI will need training, it won’t be ‘reliable’ in the traditional computer sense. It’s not the old rule-based prolog days, when people thought they’d ‘understand’ what the actual decisions were in an AI.”

    “And that all makes it very interesting, of course, but it also makes it hard to productise. Which will very much limit where you’ll actually find those neural networks, and what kinds of network sizes and inputs and outputs they’ll have.”

    “So I’d expect just more of (and much fancier) targeted AI, rather than anything human-like at all. Language recognition, pattern recognition, things like that.”

    “The whole ‘singularity’ kind of event? Yeah, it’s science fiction, and not very good sci-fi at that, in my opinion. Unending exponential growth? What drugs are those people on? I mean, really.”
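
    For reference, the kind of network Torvalds mentions: a single step of a vanilla recurrent neural network, where a hidden state carries context forward from earlier inputs. This is a bare sketch with random weights, not any production system; a real network’s behavior comes from training, which is exactly why Torvalds says it won’t be “reliable” in the old rule-based sense.

        import math, random

        def rnn_step(x, h, Wx, Wh, b):
            # h' = tanh(Wx @ x + Wh @ h + b), computed per hidden unit.
            return [
                math.tanh(
                    sum(Wx[i][j] * x[j] for j in range(len(x)))
                    + sum(Wh[i][k] * h[k] for k in range(len(h)))
                    + b[i]
                )
                for i in range(len(h))
            ]

        n_in, n_hidden = 3, 4
        Wx = [[random.gauss(0, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
        Wh = [[random.gauss(0, 0.5) for _ in range(n_hidden)] for _ in range(n_hidden)]
        b = [0.0] * n_hidden

        h = [0.0] * n_hidden
        for x in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):  # a tiny input sequence
            h = rnn_step(x, h, Wx, Wh, b)            # state accumulates context
        print(h)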

  18. Tomi Engdahl says:

    Will our Deep Learning Machines Love Us or Loath Us?
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1327589&

    Whose judgment will AI systems use as a basis for their evaluations? Yours, mine, or someone we don’t know who may have poor judgment skills?

    The recent and unfortunate death of a German factory worker at the “hands” of a robot and the homophobic responses of the Russian version of Siri made for attention-grabbing headlines that played to our existential angst over robots in general, and artificial intelligence in particular. For those who suffer from such angst, life is going to get very difficult in the coming months and years as deep-learning research and algorithms start to bear fruit.

    And for those who don’t suffer such angst… maybe they should. Robots have already made life difficult for many by performing labor-intensive tasks that were previously suited only to humans due to our capacity for pattern recognition, exception handling, and decision making. Combined with the steadily increasing cost of labor, the outsourcing tide has changed and caused a geographic shift in manufacturing out of what were once cheap-labor regions and closer to the designers and developers of these products, in many cases.

    This shift back to manufacturing in countries like the US, for example, has been greeted with excitement, given the outflow of such jobs to Asia over the past 30 years.

  19. Tomi Engdahl says:

    You Can Now Use Stephen Hawking’s Speech Software for Free
    http://www.wired.com/2015/08/stephen-hawking-software-open-source/

    Software created by Intel was instrumental in giving Stephen Hawking a voice. Now, the company has released this same software under a free software license.

    The development of the platform, called ACAT for “assistive context-aware toolkit,” was detailed in a WIRED story earlier this year. It’s a system that makes computers more accessible to people with disabilities. And now that the source code for this toolkit is open, it means you can build a system very similar to the one Professor Hawking uses to input text, send commands to applications, and communicate with the world.

    So why aren’t you? Well, there are a few things you should be aware of before you go ahead and download ACAT. For starters, it’s PC-only. You will need a PC running at least Windows XP.

    If you have a PC, though, the rest of the hardware requirements are pretty easy to meet. ACAT uses visual cues in the user’s face to understand commands.

    To use it, your computer simply needs to have a webcam. However, for users who might want or need more from ACAT, there are possibilities for other types of input down the road.

    Of course, ACAT isn’t really meant for the average user to download and play with—at least, not yet.

    “The goal of open sourcing this is to enable developers to create solutions in the assistive space with ease, and have them leverage what we have invested years of effort in,” says Nachman. “Our vision is to enable any developer or researcher who can bring in value in sensing, UI, word prediction, context awareness, etc. to build on top of this, and not have to reinvent the wheel since it is a large effort to do this.”

  20. Tomi Engdahl says:

    Software Takes On School Science Tests In Search For Common Sense
    http://news.slashdot.org/story/15/09/09/1922253/software-takes-on-school-science-tests-in-search-for-common-sense

    Making software take school tests designed for human kids can help the quest for machines with common sense, say researchers at the Allen Institute for Artificial Intelligence. They’ve made software called Aristo that scores 75 percent on the multiple choice questions that make up most of New York State’s 4th grade science exam.

    AI Software Goes Up Against Fourth Graders on Science Tests
    http://www.technologyreview.com/news/541001/ai-software-goes-up-against-fourth-graders-on-science-tests/

    Making AI software take real school exams might accelerate progress toward machines with common sense.

  21. Tomi Engdahl says:

    Giraffe: Using Deep Reinforcement Learning to Play Chess
    http://arxiv.org/pdf/1509.01549v1.pdf

    This report presents Giraffe, a chess engine that uses self-play to discover all its domain-specific knowledge, with minimal hand-crafted knowledge given by the programmer. Unlike previous attempts using machine learning only to perform parameter-tuning on hand-crafted evaluation functions, Giraffe’s learning system also performs automatic feature extraction and pattern recognition.

    With the move evaluator guiding a probability-based search using the learned evaluator, Giraffe plays at approximately the level of an FIDE International Master (top 2.2% of tournament chess players with an official rating)
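
    The probability-based search the paper describes can be sketched like this in Python. Everything here is a toy stand-in for Giraffe’s learned networks: the trivial “position” class, the evaluator, and the move-probability function are invented for illustration.

        class ToyPosition:
            # A trivial stand-in "game": a counter that ends at zero.
            def __init__(self, n): self.n = n
            def is_terminal(self): return self.n == 0
            def play(self, move): return ToyPosition(self.n - 1)

        def evaluate(pos):       # stand-in for the learned evaluator network
            return 0.1 * pos.n

        def move_probs(pos):     # stand-in for the learned move evaluator:
            return [("a", 0.7), ("b", 0.3)]  # p ~ chance the move is best

        def search(position, prob, threshold=0.05):
            # Instead of a fixed depth, a node is expanded only while its
            # cumulative probability stays above a threshold; leaves are
            # scored by the evaluator (negamax sign flip between players).
            if prob < threshold or position.is_terminal():
                return evaluate(position)
            best = float("-inf")
            for move, p in move_probs(position):
                score = -search(position.play(move), prob * p, threshold)
                best = max(best, score)
            return best

        print(search(ToyPosition(5), 1.0))

    Promising lines are therefore searched deeply while unlikely ones are cut off early, so the engine spends its effort where the learned move evaluator expects the best move to be.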

  22. Tomi Engdahl says:

    Andy Rubin: AI Is The Future Of Computing, Mobility
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1327971&

    Andy Rubin, the man behind Google’s Android operating system, thinks artificial intelligence will define computing in the future.

    “There is a point in time — I have no idea when it is, it won’t be in the next 10 years, or 20 years — where there is some form of AI, for lack of a better term, that will be the next computing platform,” said Rubin onstage at the Code/Mobile conference.

    More specifically, Rubin believes Internet-connected devices (smartphones, tablets, thermostats, smoke detectors, and cars, for example) will create massive amounts of data that will be analyzed by deep-learning technologies. This process will be the foundation of the first artificial intelligence networks. They will be able to tell people, for instance, what their thermostat is set to, when it’s time to hit the gym, and whether or not their pool has too much chlorine.

    Context is important. “The thing that’s gonna be new is the part of the cloud that’s forming the intelligence from all the information that’s coming,” said Rubin.

    Andy Rubin: AI Is The Future Of Computing, Mobility
    http://www.informationweek.com/mobile/mobile-applications/andy-rubin-ai-is-the-future-of-computing-mobility/a/d-id/1322556?

  23. Tomi Engdahl says:

    We’re so predictable
    An algorithm can predict human behavior better than humans
    http://qz.com/527008/an-algorithm-can-predict-human-behavior-better-than-humans/

    You might presume, or at least hope, that humans are better at understanding fellow humans than machines are. But a new MIT study suggests an algorithm can predict someone’s behavior faster and more reliably than humans can.

    Max Kanter, a master’s student in computer science at MIT, and his advisor, Kalyan Veeramachaneni, a research scientist at MIT’s computer science and artificial intelligence laboratory, created the Data Science Machine to search for patterns and choose which variables are the most relevant.

    It’s fairly common for machines to analyze data, but humans are typically required to choose which data points are relevant for analysis. In three competitions with human teams, a machine made more accurate predictions than 615 of 906 human teams. And while humans worked on their predictive algorithms for months, the machine took two to 12 hours to produce each of its competition entries.

    For example, when one competition asked teams to predict whether a student would drop out during the next ten days, based on student interactions with resources on an online course, there were many possible factors to consider.

    The Data Science Machine performed well in this competition. It was also successful in two other competitions: one in which participants had to predict whether a crowd-funded project would be considered “exciting”, and another in which they had to predict whether a customer would become a repeat buyer.
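
    The core trick, automatically generating and ranking candidate features, can be illustrated in a few lines of Python. This is a toy version; the Data Science Machine’s actual feature synthesis composes far richer features across related tables.

        import statistics

        def synthesize(events):
            # Mechanically apply standard aggregations to raw per-entity
            # values, producing candidate features with no human input.
            return {
                "count": len(events),
                "mean": statistics.mean(events),
                "max": max(events),
                "min": min(events),
            }

        # e.g. minutes each student spent on course resources per session
        students = {"alice": [3, 5, 9], "bob": [1, 1]}
        features = {name: synthesize(ev) for name, ev in students.items()}
        print(features["alice"])  # {'count': 3, 'mean': 5.66..., 'max': 9, 'min': 3}

    A second stage would then keep whichever synthesized features best predict the label, such as dropping out within the next ten days.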

  24. Tomi Engdahl says:

    The real danger of artificial intelligence
    http://www.edn.com/electronics-blogs/embedded-insights/4440725/The-real-danger-of-artificial-intelligence?_mc=NL_EDN_EDT_EDN_today_20151103&cid=NL_EDN_EDT_EDN_today_20151103&elq=cfbbcc28d82a471c9be5bc13c555c987&elqCampaignId=25533&elqaid=29048&elqat=1&elqTrackId=aa251888c88c4d53a543018d6bfde91a

    At the beginning of this year several respected scientists issued a letter warning about the dangers of artificial intelligence (AI). In particular, they were concerned that we would create an AI that was able to adapt and evolve on its own, and would do so at such an accelerated rate that it would move beyond human ability to understand or control. And that, they warned, could spell the end of mankind. But I think the real danger of AI is much closer to us than that undefined and likely distant future.

    For one thing, I have serious doubts about the whole AI apocalypse scenario. We are an awfully long way from creating any kind of computing system with the complexity embodied in the human brain. In addition, we don’t really know what intelligence is, what’s necessary for it to exist, and how it arises in the first place. Complexity alone clearly isn’t enough. We humans all have brains, but intelligence varies widely. I don’t see how we can artificially create an intelligence when we don’t really have a specification to follow.

    What we do have is a hazy description of what intelligent behavior looks like, and so far all our AI efforts have concentrated on mimicking some elements of that behavior. Those efforts have produced some impressive results, but only in narrow application areas.

    And even were we able to create something that was truly intelligent, who’s to say that such an entity will be malevolent?

    I think the dangers of AI are real and will manifest in the near future, however. But they won’t arise because of how intelligent the machines are. They’ll arise because the machines won’t be intelligent enough, yet we will give control over to them anyway and in so doing, lose the ability to take control ourselves.

    This handoff and skill loss is already starting to happen in the airline industry, according to this New Yorker article. Autopilots are good enough to handle the vast majority of situations without human intervention, so the pilot’s attention wanders and when a situation arises that the autopilot cannot properly handle, there is an increased chance that the human pilot’s startled reaction will be the wrong one.

    Then there is the GIGO factor (GIGO = garbage in, garbage out). If the AI system is getting incorrect information, it is highly likely to make an improper decision with potentially disastrous consequences. Humans are able to take in information from a variety of sources, integrate them all, compare that against experience, and use the result to identify faulty information sources.
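
    A standard defense against exactly this failure mode is input redundancy, sketched here in Python (the readings and tolerance are illustrative):

        import statistics

        def vote(readings, tolerance=2.0):
            # With three or more independent sensors, take the median as the
            # consensus and flag any reading that strays too far from it.
            consensus = statistics.median(readings)
            suspects = [i for i, r in enumerate(readings)
                        if abs(r - consensus) > tolerance]
            return consensus, suspects

        print(vote([20.1, 19.8, 55.0]))  # (20.1, [2]): third sensor is suspect

    The system acts on the consensus value and can report the suspect sensor, a crude machine version of the source-checking humans do instinctively.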

  25. Tomi Engdahl says:

    Emerging technologies and the future of humanity
    http://bos.sagepub.com/content/71/6/29.full

    Emerging technologies are not the danger. Failure of human imagination, optimism, energy, and creativity is the danger.

    Why the future doesn’t need us: Our most powerful 21st-century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species. —Bill Joy, co-founder and at the time chief scientist, Sun Microsystems, 20001

    Although it was not clear at the time, Bill Joy’s article warning of the dangers of emerging technologies was to spawn a veritable “dystopia industry.” More recent contributions have tended to focus on artificial intelligence, or AI; electric car and space technology entrepreneur Elon Musk has warned that AI is “summoning the demon” (Mack, 2015), while physicist Stephen Hawking has argued that “the development of full artificial intelligence could spell the end of the human race” (Cellan-Jones, 2014). The Future of Life Institute (2015) recently released an open letter signed by many scientific and research notables urging a ban on “offensive autonomous weapons beyond meaningful human control.” Meanwhile, the UN holds conferences and European activists mount campaigns against what they characterize as “killer robots” (see, e.g., Human Rights Watch, 2012). Headlines reinforce a sense of existential crisis; in the military and security domain, cyber conflict runs rampant, with hackers accessing millions of US personnel records, including sensitive security clearance documents. Technologies such as uncrewed aerial vehicles, commonly referred to as “drones,” are highly contentious in both civil and conflict environments, for many different reasons. A recent US Army Research Laboratory report foresees genetically and technologically enhanced soldiers networked with their battlespace robotic partners and remarks that “the presence of super humans on the battlefield in the 2050 timeframe is highly likely because the various components needed to enable this development already exist and are undergoing rapid evolution” (Kott et al., 2015: 19).

  26. Tomi Engdahl says:

    Is AI Development Moving In the Wrong Direction?
    http://search.slashdot.org/story/15/12/03/043239/is-ai-development-moving-in-the-wrong-direction

    Artificial Intelligence is always just around the corner, right? We often see reports that promise near breakthroughs, but time and again they don’t come to fruition. The cause of this may be that we’re trying to solve the wrong problem.

    The linked article looks at efforts like IBM’s Watson and Google’s Inceptionism. Its conclusion is that we haven’t actually been trying to solve “intelligence” at all.

    A Short History of AI, and Why It’s Heading in the Wrong Direction
    http://hackaday.com/2015/12/01/a-short-history-of-ai-and-why-its-heading-in-the-wrong-direction/

  27. Tomi Engdahl says:

    Stephen Hawking reckons he’s cracked the black hole paradox
    Hawking says ‘soft hair’ explains everything. Not, repeat not, on cats
    http://www.theregister.co.uk/2016/01/13/stephen_hawking_reckons_hes_cracked_the_black_hole_paradox/

    Last August, Stephen Hawking tantalised the world by saying he’d worked out a solution to the “black hole paradox”.

    He’s now dropped the first detailed discussion of his hypothesis for the world to pore over, here at ArXiv in a paper entitled Soft hair on black holes.

    The black hole paradox wouldn’t have arisen if not for his own work, in a now 40-year-old paper that proposed “Hawking radiation”. That paper created a problem because it proposed a mechanism by which information is lost to the universe forever.

    Physicists don’t like information destruction any more than they like singularities. Physical laws let us use the present to predict the future, but black holes destroying information also destroys the determinism we rely on.
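
    For context, the standard result from Hawking’s original work (not from the Register article): a black hole of mass M radiates thermally at the temperature

        T_H = \frac{\hbar c^3}{8 \pi G M k_B}

    In the original calculation this radiation is purely thermal, carrying no imprint of what fell in; that featurelessness is the information loss the “soft hair” proposal tries to cure.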

