3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

5,218 Comments

  1. Tomi Engdahl says:

    This App Can Detect Cancer Better Than Doctors Can
    http://www.iflscience.com/health-and-medicine/artificial-intelligence-can-now-detect-skin-cancer-better-than-humans/

    Artificial intelligence beats experienced dermatologists when it comes to skin cancer diagnosis, according to a study published in the journal Annals of Oncology.

    Researchers trained a deep learning convolutional neural network (CNN) to distinguish malignant melanomas from benign moles using more than 100,000 photographs. Then, they compared its success rate against those of 58 dermatologists from 17 countries.

    And it’s bad news for derms.
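
    The setup described is a standard binary image classifier (the published study fine-tuned an existing CNN architecture rather than training a small network from scratch). A minimal Keras sketch of that kind of pipeline, with an invented directory layout, might look like this:

    # Minimal sketch of a binary skin-lesion classifier in Keras.
    # Hypothetical layout: data/train/{malignant,benign}/*.jpg
    from tensorflow.keras import layers, models
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    train = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
        "data/train", target_size=(224, 224), class_mode="binary")

    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(lesion is malignant)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(train, epochs=5)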

  2. Tomi Engdahl says:

    How AI Will Help Pave the Way to Autonomous Driving
    http://www.electronicdesign.com/test-measurement/how-ai-will-help-pave-way-autonomous-driving

    Artificial intelligence will enable vehicles to manage, make sense of, and respond quickly to real-world data inputs from hundreds of different sensors, but it’s going to take some time.

  3. Tomi Engdahl says:

    Startup Raises $12 Million to Make Most of Embedded Hardware
    http://www.electronicdesign.com/embedded-revolution/startup-raises-12-million-make-most-embedded-hardware

    Founded by former scientists at the Allen Institute for Artificial Intelligence, XNOR is trying to trim the fat from machine learning models so that they run on hardware as simple and low cost as the Raspberry Pi. That puts it directly in the path of companies creating custom chips that can accelerate neural networks and could cost much more than existing chips.

    XNOR is also trying to develop a software platform that allows anyone to integrate state-of-the-art inference models into security cameras, drones and other devices. The toolkit is scheduled to be released before the end of the year, and the company has partnered with semiconductor companies, including Ambarella, to make the algorithms compatible with their products.

    “Our ‘A.I. everywhere for everyone’ technology eliminates the need for internet connectivity, runs on inexpensive hardware platforms and eliminates latency inherent in traditional cloud based A.I. systems,” said Ali Farhadi, founder and chief executive of XNOR, which previously raised $2.6 million in seed funding.
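
    The trick XNOR is named for is weight binarization: approximate a weight tensor by a single scale factor times ±1 values, so multiply-accumulates collapse into XNOR-and-popcount operations on cheap hardware. A toy NumPy sketch of the approximation (the scaling trick from the XNOR-Net paper, not the company’s actual code):

    import numpy as np

    def binarize(w):
        # Approximate w by alpha * sign(w), where alpha is the mean
        # absolute value of the weights (XNOR-Net's scaling factor).
        alpha = np.abs(w).mean()
        return alpha, np.sign(w)

    rng = np.random.default_rng(0)
    w = rng.normal(size=256)        # full-precision weights
    x = rng.normal(size=256)        # input activations
    alpha, wb = binarize(w)

    exact = w @ x                   # 32-bit float dot product
    approx = alpha * (wb @ x)       # binary-weight approximation
    print(exact, approx)            # close, at 1/32 of the weight storage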

  4. Tomi Engdahl says:

    Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program
    https://gizmodo.com/google-plans-not-to-renew-its-contract-for-project-mave-1826488620

    Google will not seek another contract for its controversial work providing artificial intelligence to the U.S. Department of Defense for analyzing drone footage after its current contract expires.

  5. Tomi Engdahl says:

    The Current Limitations and Future Potential of AI in Cybersecurity
    https://www.securityweek.com/current-limitations-and-future-potential-ai-cybersecurity

    A recent NIST study shows the current limitations and future potential of machine learning in cybersecurity.

    Published Tuesday in the Proceedings of the National Academy of Sciences, the study focused on facial recognition and tested the accuracy of a group of 184 humans and the accuracy of four of the latest facial recognition algorithms. The humans comprised 87 trained professionals, 13 so-called ‘super recognizers’ (who simply have an exceptional natural ability), and a control group of 84 untrained individuals.

    “Our data show that the best results come from a single facial examiner working with a single top-performing algorithm,” commented NIST electronic engineer P. Jonathon Phillips. “While combining two human examiners does improve accuracy, it’s not as good as combining one examiner and the best algorithm.”

    Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms
    http://www.pnas.org/content/early/2018/05/22/1721355115
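
    The fusion Phillips describes is simple at its core: combine the examiner’s same-or-different rating with the algorithm’s similarity score before thresholding. A toy NumPy sketch with invented numbers (the study itself worked with rating scales and AUC rather than raw accuracy):

    import numpy as np

    # Hypothetical similarity scores for six face pairs, scaled to [0, 1].
    human = np.array([0.9, 0.4, 0.7, 0.2, 0.6, 0.1])
    algorithm = np.array([0.8, 0.3, 0.9, 0.1, 0.4, 0.2])
    truth = np.array([1, 0, 1, 0, 1, 0])   # 1 = same person

    fused = (human + algorithm) / 2        # score-level fusion
    decisions = fused > 0.5
    print((decisions == truth).mean())     # accuracy of the fused rating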

  6. Tomi Engdahl says:

    AI Chip Tests Binary Approach
    Imec’s Lenna may explore in-memory compute
    https://www.eetimes.com/document.asp?doc_id=1333321

    Imec said at its annual event here that it is prototyping a deep-learning inference chip using single-bit precision. The research institute hopes to gather data over the next year on the effectiveness for client devices of the novel data type and architecture–either a processor-in-memory (PIM) or an analog memory fabric.

    The PIM architecture, explored by academics for decades, is gaining popularity for data-intensive machine-learning algorithms. Startup Mythic and IBM Research are designing two of the most prominent efforts in the field.

    Many academics are experimenting with 1- to 4-bit data types to trim the heavy memory requirements for deep learning. So far, commercial designs for AI accelerators from Arm and others are focusing on 8-bit and larger data types, in part because programming tools such as Google’s TensorFlow lack support for the smaller data types.
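
    For a sense of what the 8-bit path looks like in practice, today’s TensorFlow Lite (an API newer than the tooling discussed above) can quantize a trained model’s weights to 8 bits after training. A minimal sketch with a stand-in Keras model:

    import tensorflow as tf

    # Stand-in for a real trained network.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, input_shape=(4,), activation="softmax")])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # 8-bit weight quantization
    open("model_int8.tflite", "wb").write(converter.convert())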

  7. Tomi Engdahl says:

    MIT Scientists Create Norman, The World’s First “Psychopathic” AI
    http://www.iflscience.com/technology/mit-scientists-create-norman-the-worlds-first-psychopathic-ai/

A team of scientists at the Massachusetts Institute of Technology (MIT) has built a psychopathic AI using images pulled from Reddit. Oh, and they’ve named it Norman after Alfred Hitchcock’s Norman Bates. This is how our very own Terminator starts…

The purpose of the experiment was to test how the data fed into an algorithm affects its “outlook”: specifically, what happens when an algorithm is trained on some of the darkest elements of the web.

Norman is a particular type of AI program that can “look at” and “understand” pictures, and then describe what it sees in writing. So, after being trained on some particularly gruesome images, it took the Rorschach test, the series of inkblots psychologists use to analyze the mental health and emotional state of their patients. Norman’s responses were then compared to those of a second AI trained on more family-friendly images of birds, cats, and people. The differences between the two are stark.

  8. Tomi Engdahl says:

    Medical Imaging AI Software Is Vulnerable to Covert Attacks
    https://spectrum.ieee.org/the-human-os/biomedical/imaging/medical-imaging-ai-software-vulnerable-to-covert-attacks

    Artificial intelligence systems meant to analyze medical images are vulnerable to attacks designed to fool them in ways that are imperceptible to humans, a new study warns.

    There may be enormous incentives to carry out such attacks for healthcare fraud and other nefarious ends, the researchers say.

“The most striking thing to me as a researcher crafting these attacks was probably how easy they were to carry out,” one of the researchers said.

  9. Tomi Engdahl says:

    Phil Stewart / Reuters:
    Sources detail US military attempts to use AI to locate hostile nuclear missiles; the Trump administration proposed tripling funding for one program to $83M

    Deep in the Pentagon, a secret AI program to find hidden nuclear missiles
    https://www.reuters.com/article/us-usa-pentagon-missiles-ai-insight/deep-in-the-pentagon-a-secret-ai-program-to-find-hidden-nuclear-missiles-idUSKCN1J114J

    The U.S. military is increasing spending on a secret research effort to use artificial intelligence to help anticipate the launch of a nuclear-capable missile, as well as track and target mobile launchers in North Korea and elsewhere.

    U.S. officials familiar with the research told Reuters there are multiple classified programs now under way to explore how to develop AI-driven systems to better protect the United States against a potential nuclear missile strike.

  10. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Israel-based Hailo, which is building deep learning chips for embedded devices, raises $12.5M Series A from crowdfunding platform OurCrowd and others

    Hailo raises a $12.5M Series A round for its deep learning chips
    https://techcrunch.com/2018/06/05/hailo-raises-a-12-5m-series-a-round-for-its-deep-learning-chips/

For the longest time, chips were a little bit boring. But the revolution in deep learning has now opened the market for startups that build specialty chips to accelerate deep learning and model evaluation. Among those is Israel-based Hailo, which is building deep learning chips for embedded devices. The company today announced that it has raised a $12.5 million Series A round.

    https://www.hailotech.com/

  11. Tomi Engdahl says:

    Embedded AI: A Designer’s Guide
    https://www.eetimes.com/author.asp?section_id=36&doc_id=1333348

    Plenty of resources are becoming available to help engineers explore how to harness the new world of deep learning in their power-constrained designs.

    https://www.electronicproducts.com/Robotics/AI/Engineer_s_guide_to_embedded_AI.aspx

  12. Tomi Engdahl says:

    Kenneth Falck
    AI & ML in the Cloud: Managed Services 2018
    https://www.amazon.com/AI-ML-Cloud-Managed-Services-ebook/dp/B07D6RN7F4/

This book offers an overview of the currently available Artificial Intelligence and Machine Learning services in the cloud. It covers the four large cloud platforms – Amazon AWS, Google Cloud, IBM Cloud …

  13. Tomi Engdahl says:

    Automation won’t take your job until the next recession threatens it
    Economics boffin says we’re just playing with AI now and the payoff is years away
    https://www.theregister.co.uk/2018/06/07/automation_wont_take_your_job_until_the_next_recession_threatens_it/

Good news! Automation capable of erasing white-collar jobs is coming, but not for a decade or more.

And that’s also the bad news: interest in automation accelerates during economic downturns, so by the time tech that can take your job arrives, you will already have lived through another period of economic turmoil that may itself have cost you that job.

  15. Tomi Engdahl says:

    IBM Takes AI In Different Directions
    https://semiengineering.com/ibm-takes-ai-in-different-directions/

    What AI and deep learning are good for, what they’re not good for, and why accuracy sometimes works against these systems.

    SE: What’s changing in AI and why?

Welser: The most interesting thing in AI right now is that we’ve moved from narrow AI, where we’ve proven you can use deep learning neural nets to do really good image recognition or natural language processing—basically point tasks—to rival what humans can do in many cases. In image recognition, in particular, neural nets now can do better than humans. That’s great, but it’s really narrow. We’re moving into what we would call broader AI, where now we’re going to take those interesting point solutions and figure out how you integrate them into something that will help somebody do their job, or an actual task beyond, ‘I want to recognize cats on the Internet.’ Recognizing cats is an interesting demonstration, but you don’t get any business value out of it.

    SE: What are the next steps to make that happen?

    Welser: We’re focused on the problems in industry or in enterprises where AI could help a person in their role. In the health care area, there are ways that AI can help a radiologist read through the images that they’re seeing. But is there a way that it could help them understand, for a set of symptoms, what the potential diagnosis would be?

    SE: So what you’re looking for is deeper context.

    Welser: Exactly.

  16. Tomi Engdahl says:

    Machine Learning’s Limits
    https://semiengineering.com/machine-learnings-limits/

    Experts at the Table, part 1: Why machine learning works in some cases and not in others.

    SE: Where are we with machine learning? What problems still have to be resolved?

Aitken: We’re in a state where things are changing so rapidly that it’s really hard to keep up with where we are at any given instance. We’ve seen that machine learning has been able to take some of the things we used to think were very complicated and render them simple to do. But simple can be deceiving. It’s not just a case of, ‘I’ve downloaded TensorFlow and magically it worked for me, and now all the problems I used to have 100 people do are much simpler.’ The problems move to a different space. For example, we took a look at what it would take to do machine learning for verification test generation for processors. What we found is that machine learning is very good at picking, from a set of random test programs, the ones that are more likely to be useful test vectors than others. That rendered a complicated task simpler, but it moved the problem to a new space. How do you convert test data into something that a machine learning algorithm can optimize? And then, how do you take what it told you and bring it back to the realm of processor testing? So we find a lot of moving the problem around, in addition to clever solutions for problems that we had trouble with before.
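
    Aitken’s example reduces to a ranking problem once each random test program is featurized. A hedged scikit-learn sketch (the features, labels, and sizes here are invented purely for illustration):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    # Pretend featurization: each random test program becomes a vector
    # (instruction-mix counts, branch density, and so on).
    X = rng.random((1000, 16))
    # Label: did running the program hit new coverage? (simulated here)
    y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

    clf = RandomForestClassifier(n_estimators=100).fit(X[:800], y[:800])

    # Rank fresh candidates; run only the most promising ones.
    candidates = rng.random((200, 16))
    ranked = np.argsort(-clf.predict_proba(candidates)[:, 1])
    print(ranked[:10])   # indices of the ten most promising test programs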

  17. Tomi Engdahl says:

    Google Won’t Use Artificial Intelligence for Weapons
    https://www.securityweek.com/google-wont-use-artificial-intelligence-weapons

    Google announced Thursday it would not use artificial intelligence for weapons or to “cause or directly facilitate injury to people,” as it unveiled a set of principles for these technologies.

    Chief executive Sundar Pichai, in a blog post outlining the company’s artificial intelligence policies, noted that even though Google won’t use AI for weapons, “we will continue our work with governments and the military in many other areas” including cybersecurity, training, and search and rescue.

    The news comes with Google facing pressure from employees and others over a contract with the US military, which the California tech giant said last week would not be renewed.

    Pichai set out seven principles for Google’s application of artificial intelligence, or advanced computing that can simulate intelligent human behavior.

  18. Tomi Engdahl says:

    Wally Rhines: Deep Learning Will Drive Next Wave of Chip Growth
    https://www.eetimes.com/document.asp?doc_id=1333369

    Count Wally Rhines, semiconductor industry veteran and long-time CEO of Mentor Graphics, among the many who believe that deep-learning hardware will drive the next wave of growth for the semiconductor industry.

    Speaking at the GSA European Executive Forum here this week, Rhines added that memory will continue to be a key driver of the chip industry going forward. Despite the volatility of the semiconductor industry, R&D investment continues to be around 14% of revenue as it has been for the last 36 years, Rhines said, dismissing arguments put forth by some that there isn’t enough being ploughed back into R&D to maintain sustained growth.

  19. Tomi Engdahl says:

    AI Comes to ASICs in Data Centers
    eSilicon helped Nervana design its first-gen AI ASIC
    https://www.eetimes.com/document.asp?doc_id=1333358

    Three years ago, when AI chip startup Nervana ventured into the uncharted territory of designing custom AI accelerators, the company’s move was less perilous than it might have been, thanks to an ASIC expert that Nervana — now owned by Intel — sought for help.

    That ASIC expert was eSilicon.

    Two industry sources independently told EE Times that eSilicon worked on Nervana’s AI ASIC and delivered it to Intel after the startup was sold. eSilicon, however, declined to comment on its customer.

    Nervana’s first-generation AI ASIC, called Lake Crest, was one of the most-watched custom designs for AI accelerators.

  20. Tomi Engdahl says:

    Google Promises Its AI Will Not Be Used For Weapons
    https://hardware.slashdot.org/story/18/06/07/2020200/google-promises-its-ai-will-not-be-used-for-weapons?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

Google, reeling from an employee protest over the use of artificial intelligence for military purposes, said Thursday that it would not use A.I. for weapons or for surveillance that violates human rights.

    But it will continue to work with governments and the military. The new rules were part of a set of principles Google unveiled relating to the use of artificial intelligence. In a company blog post, Sundar Pichai, the chief executive, laid out seven objectives for its A.I. technology, including “avoid creating or reinforcing unfair bias” and “be socially beneficial.”

    AI at Google: our principles
    https://blog.google/topics/ai/ai-principles/

  21. Tomi Engdahl says:

    Josh Constine / TechCrunch:
    Panda, an app letting users send 10-second videos with AR effects based on spoken words, launches on iOS, has raised $850K from Social Capital and others

    Speech recognition triggers fun AR stickers in Panda’s video app
    https://techcrunch.com/2018/06/07/panda-app/

    Say “Want to get pizza?” and a 3D pizza slice hovers by your mouth. Say “I wear my sunglasses at night” and suddenly you’re wearing AR shades with a moon hung above your head.

Panda is surprising and delightful. It’s also a bit janky, created by a five-person team with under $1 million in funding. Building a video chat app user base from scratch amidst all the competition will be a struggle. But even if Panda isn’t the app to popularize the idea, it’s invented a smart way to enhance visual communication that blends into our natural behavior.

  22. Tomi Engdahl says:

    GDPR panic may spur data and AI innovation
    https://techcrunch.com/2018/06/07/gdpr-panic-may-spur-data-and-ai-innovation/?sr_share=facebook&utm_source=tcfbpage

    Coincidentally, the barriers to GDPR compliance are also bottlenecks of widespread AI adoption. Despite the hype, enterprise AI is still nascent: Companies may own petabytes of data that can be used for AI, but fully digitizing that data, knowing what the data tables actually contain and understanding who, where and how to access that data remains a herculean coordination effort for even the most empowered internal champion. It’s no wonder that many scrappy AI startups find themselves bogged down by customer data cleanup and custom integrations.

  23. Tomi Engdahl says:

    Thomas Fox-Brewster / Forbes:
    A look at how Amazon Rekognition can be used to build facial recognition tools at incredibly low cost by anyone with a computer

    We Built A Powerful Amazon Facial Recognition Tool For Under $10
    https://www.forbes.com/sites/thomasbrewster/2018/06/06/amazon-facial-recognition-cost-just-10-and-was-worryingly-good/#334e028e51db

    The democratization of mass surveillance is upon us. Insanely cheap tools with the power to track individuals en masse are now available for anyone to use, as exemplified by a Forbes test of an Amazon facial recognition product, Rekognition, that made headlines last month.

    Jeff Bezos’ behemoth of a business is seen by most as a consumer-driven business, not a provider of easy-to-use spy tech. But as revealed by the American Civil Liberties Union (ACLU) last week, Amazon Web Services (AWS) is shipping Rekognition to various U.S. police departments.

    And because Rekognition is open to all, Forbes decided to try out the service.

We discovered it took just a few hours, some loose change, and a little technical knowledge to establish a super-accurate facial recognition operation.

    A recipe for facial recognition

To get things started with Rekognition, we enlisted the help of independent researcher Matt Svensson. He set up an AWS storage bucket (known as an S3 bucket) into which we poured a mix of stock photos and Forbes staff mugshots. As Amazon didn’t have a straightforward tool to visualize a face match and simply sent back results in text form, Svensson quickly coded up a program that put a red square around our “targets” and a green one around “innocents,” giving the system an air of professional surveillance.

Our video teams in Jersey City and London took some simple footage mimicking CCTV: shots either still or pivoting slightly.

In every case where a Forbes employee was both in the database and in the footage, a successful match was made, as shown by the little red squares drawn around their faces.
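
    Forbes doesn’t publish Svensson’s code, but the heart of such a setup is a couple of AWS calls. A hedged boto3 sketch (bucket and file names are placeholders):

    import boto3

    rekognition = boto3.client("rekognition")

    # Compare a known target photo in S3 against one frame of footage.
    with open("frame.jpg", "rb") as f:
        resp = rekognition.compare_faces(
            SourceImage={"S3Object": {"Bucket": "my-mugshots",
                                      "Name": "target.jpg"}},
            TargetImage={"Bytes": f.read()},
            SimilarityThreshold=80,
        )

    # Rekognition returns text, not pictures: bounding boxes come back as
    # ratios of frame size, which is why Svensson drew the squares himself.
    for match in resp["FaceMatches"]:
        print(match["Similarity"], match["Face"]["BoundingBox"])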

    Cheap and cheerful facial recognition

    This small-scale test was essentially free, largely thanks to Svensson not charging. In a professional deployment the cost would still be minuscule. “Even if we include costs of testing, figuring out AWS and actually running the facial recognition on our scenario, it’s going to be under $10,” Svensson added.

Law enforcement agencies are already enjoying the low cost: the ACLU found the Orlando Police Department spent just $30.99 to process 30,989 images.

    Compared to other facial recognition projects currently being run by the federal government, the Amazon service is staggeringly cheap.

    Amazon isn’t the only consumer tech giant with uber scale dabbling in surveillance. Both Google and Facebook have their own facial recognition arms, though there’s no evidence they’ve sold such services to the U.S. government or local law enforcement agencies.

    “Real world matching would be the same, requiring multiple angles of someone’s face to be able to match well. For Facebook and Google, they have this in spades,” Svensson said. Google, of course, already has facial recognition software inside its Nest Hello doorbell.

    Amazon better than open source?

    While open source facial recognition tools are available, Amazon’s platform is different. As with its other products, it has the scale and quality of service to deliver facial recognition at incredibly low cost and to anyone with a computer.

Svensson found Amazon was faster than one of the more popular open source tools found on GitHub.

    So cheap, simple and speedy is Rekognition that it “will likely transform the way we view our privacy online and in the ‘real world,’” Svensson said.

    The world’s simplest facial recognition api for Python and the command line
    https://github.com/ageitgey/face_recognition
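
    The open source route is only a few lines as well; this sketch follows the linked library’s documented API (file names are placeholders):

    import face_recognition

    known = face_recognition.load_image_file("target.jpg")
    unknown = face_recognition.load_image_file("frame.jpg")

    target_encoding = face_recognition.face_encodings(known)[0]

    # True where a face in the frame matches the target person.
    for enc in face_recognition.face_encodings(unknown):
        print(face_recognition.compare_faces([target_encoding], enc))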

  24. Tomi Engdahl says:

    Accenture wants to beat unfair AI with a professional toolkit
    https://techcrunch.com/2018/06/09/accenture-wants-to-beat-unfair-ai-with-a-professional-toolkit/?sr_share=facebook&utm_source=tcfbpage

    The “AI fairness tool”, as it’s being described, is one piece of a wider package the consultancy firm has recently started offering its customers around transparency and ethics for machine learning deployments — while still pushing businesses to adopt and deploy AI.

  25. Tomi Engdahl says:

    ‘The Business of War’: Google Employees Protest Work for the Pentagon
    https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html

    Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company’s involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes.

    The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes.

  27. Tomi Engdahl says:

    It’s Time for AI in PCB Design
    https://www.eetimes.com/author.asp?section_id=36&doc_id=1333372

AI placement in PCB design is possible, and it could be a road that brings designers into a new era of innovation.

Artificial intelligence has been available in most EDA tools, including PCB layout, for some time now. Though the potential for machine learning exists in EDA, PCB designers have been slow to adopt a technology that currently auto-places and auto-routes for silicon. Most PCB designers manually route and design their boards, a time-consuming and intricate process.

As early as the 1980s, neural networking was an established theoretical concept in EDA. By the 1990s, there were tools in place that could use the concepts.

    Neuroroute – a product based on neural networks – was given 50 to 60 human-made PCB designs. These designs were relayed to an AI routing engine that used supervised machine learning to create an auto-router that makes decisions like a human.

    Neuroroute paved the way for modern topological techniques.

  28. Tomi Engdahl says:

    Open source image recognition with Luminoth
    https://opensource.com/article/18/5/getting-started-luminoth?sc_cid=7016000000127ECAAY

    Luminoth helps computers identify what’s in a photograph. The latest update offers new models and pre-trained checkpoints.

Computer vision is a way to use artificial intelligence to automate image recognition—that is, to use computers to identify what’s in a photograph, video, or another image type. The latest version of Luminoth (v0.1), an open source computer vision toolkit built in Python using TensorFlow and Sonnet, offers several improvements over its predecessor.

    https://github.com/tryolabs/luminoth
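
    The v0.1 release also added a small Python API alongside the command-line tool; per the project’s README at the time, basic usage looks roughly like this (the image name is a placeholder; “accurate” is one of the pre-trained checkpoints):

    from luminoth import Detector, read_image, vis_objects

    image = read_image("street.jpg")
    detector = Detector(checkpoint="accurate")   # pre-trained checkpoint
    objects = detector.predict(image)            # labels, boxes, probabilities
    print(objects)
    vis_objects(image, objects).save("street-detected.jpg")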

  29. Tomi Engdahl says:

    Spencer Soper / Bloomberg:
Sources detail how Amazon is using AI to make crucial decisions, such as choosing inventory and managing retail operations, replacing white-collar workers

    Amazon’s Clever Machines Are Moving From the Warehouse to Headquarters
    https://www.bloomberg.com/news/articles/2018-06-13/amazon-s-clever-machines-are-moving-from-the-warehouse-to-headquarters

    In a major reorganization, the retail veterans who once decided what to sell on the site have lost out to the marketplace data scientists.

    Amazon.com Inc. has long used robots to help humans move merchandise around its warehouses. Now automation is transforming Amazon’s white-collar workforce, too.

    The people who command six-figure salaries to negotiate multimillion-dollar deals with major brands are being replaced by software that predicts what shoppers want and how much to charge for it.

    Machines are beating people at the critical inventory decisions that separate the winners and losers in retail.

  30. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    Google says it will open an AI research center in Ghana later this year, its first such center in Africa
    https://venturebeat.com/2018/06/13/google-will-open-an-ai-center-in-ghana-later-this-year-its-first-in-africa/

Tech giants are pouring money into artificial intelligence. Baidu and Google spent between $20 billion and $30 billion on AI in 2016 alone, according to research from McKinsey. In Google’s case, a portion of that investment went to AI centers in China and France, and the Mountain View company shows no signs of slowing down. Today, Google announced its next AI research center will be in Accra, Ghana.

  31. Tomi Engdahl says:

    Ron Miller / TechCrunch:
    Tableau says it has acquired AI startup Empirical Systems, born from MIT’s Probabilistic Computing Project, which analyzes modular data, such as spreadsheets

    Tableau gets AI shot in the arm with Empirical Systems acquisition
    https://techcrunch.com/2018/06/13/tableau-gets-ai-shot-in-the-arm-with-empirical-systems-acquisition/

    When Tableau was founded back in 2003, not many people were thinking about artificial intelligence to drive analytics and visualization, but over the years the world has changed and the company recognized that it needed talent to keep up with new trends. Today, it announced it was acquiring Empirical Systems, an early stage startup with AI roots.

    The startup was born just two years ago from research on automated statistics at the MIT Probabilistic Computing Project. According to the company website, “Empirical is an analytics engine that automatically models structured, tabular data (such as spreadsheets, tables, or csv files) and allows those models to be queried to uncover statistical insights in data.”

    http://probcomp.csail.mit.edu/

  32. Tomi Engdahl says:

    Syntiant: Analog Deep Learning Chips
    Intel Capital funds startup to put AI in low-power mobile devices.
    https://semiengineering.com/syntiant-analog-deep-learning-chips/

Startup Syntiant Corp. is an Irvine, Calif., semiconductor company led by former top Broadcom engineers with experience both in innovative design and in producing chips in volumes of billions, according to company CEO Kurt Busch.

The chip they’ll be building is a deep-learning inference engine, which Busch said will eventually run deep learning applications 50% faster than a typical GPU, with 50% better power efficiency, mostly on portable devices that depend on battery power.

    There are plenty of other ways to accelerate the inference portion of a deep-learning application, and the power cost is lower with FPGAs or custom ASICs than with traditional GPUs. But all of those approaches are power-hungry enough to keep mobile users close to charging stations.

“Most machine learning happens in the cloud and there’s no real solution for battery-powered devices at the edge,” Busch said.

  34. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    European Commission names 52 experts, including researchers from Google and IBM, to its High Level Group on AI, an advisory body to draft AI ethics guidelines

    European Commission names 52 experts to its AI advisory board
    https://venturebeat.com/2018/06/14/european-commission-names-52-experts-to-its-ai-advisory-board/

    The European Commission today named 52 experts to its High Level Group on Artificial Intelligence (AI HLG), an advisory body tasked with drafting AI ethics guidelines, anticipating challenges and opportunities in AI, and steering the course of Europe’s machine learning investments.

    The 52 new members — 30 men and 22 women — were selected from an applicant pool of 500 and come from titans of industry like Bosch, BMW, Bayer, and AXA, in addition to AI research leaders that include Google, IBM, Nokia Bell Labs, STMicroelectronics, Telenor, Zalando, Element AI, Orange, SAP, Sigfox, and Santander. Among the recruits are Jakob Uszkoreit, an AI Researcher in the Google Brain team, and Jaan Tallinn, a founding engineer of Kazaa and Skype and an early investor in Google subsidiary DeepMind.

  35. Tomi Engdahl says:

    Amazon starts shipping its $249 DeepLens AI camera for developers
    https://techcrunch.com/2018/06/13/amazon-starts-shipping-its-249-deeplens-ai-camera-for-developers/?utm_source=tcfbpage&sr_share=facebook

    Back at its re:Invent conference in November, AWS announced its $249 DeepLens, a camera that’s specifically geared toward developers who want to build and prototype vision-centric machine learning models.

  36. Tomi Engdahl says:

    Apple introduces the AI phone
    https://techcrunch.com/2018/06/07/apple-introduces-the-a-i-phone/?sr_share=facebook&utm_source=tcfbpage

    At Apple’s WWDC 2018 — an event some said would be boring this year with its software-only focus and lack of new MacBooks and iPads — the company announced what may be its most important operating system update to date with the introduction of iOS 12. Through a series of Siri enhancements and features, Apple is turning its iPhone into a highly personalized device, powered by its Siri AI.

  37. Tomi Engdahl says:

    MIT Researchers Teach AI To Walk Through Walls
    http://www.iflscience.com/technology/mit-researchers-teach-ai-to-walk-through-walls/

Scientists have already used AI to read minds and predict the future (sort of). Now, we can add X-ray vision to its growing list of superpowers. A team at MIT has built a tool that can see through solid objects and even identify people based on their gait alone.

The tool doesn’t use X-rays, which would come with the slightly problematic side effect of showering nearby people with radiation. Instead, it relies on radio waves and the same physics as Wi-Fi: wireless signals at Wi-Fi frequencies can pass through walls yet bounce off the human body. For this particular system, however, the team used radio waves thousands of times weaker than your typical Wi-Fi, reports Wired.

    PUBLIC RELEASE: 12-JUN-2018
    AI senses people’s pose through walls
    MASSACHUSETTS INSTITUTE OF TECHNOLOGY, CSAIL
    https://eurekalert.org/pub_releases/2018-06/miot-asp061118.php

  38. Tomi Engdahl says:

    Machines learn language better by using a deep understanding of words
    https://techcrunch.com/2018/06/15/machines-learn-language-better-by-using-a-deep-understanding-of-words/?sr_share=facebook&utm_source=tcfbpage

    Computer systems are getting quite good at understanding what people say, but they also have some major weak spots. Among them is the fact that they have trouble with words that have multiple or complex meanings. A new system called ELMo adds this critical context to words, producing better understanding across the board.

    https://allennlp.org/elmo
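
    The “critical context” is what separates ELMo from word2vec-style lookups: the same token gets a different vector in every sentence. A hedged sketch using AllenNLP’s ELMo wrapper from that era (module paths may have moved in later releases):

    from allennlp.commands.elmo import ElmoEmbedder
    from scipy.spatial.distance import cosine

    elmo = ElmoEmbedder()   # downloads pretrained weights on first use

    # embed_sentence returns one vector per token per layer.
    river = elmo.embed_sentence(["I", "sat", "on", "the", "river", "bank"])
    money = elmo.embed_sentence(["I", "deposited", "cash", "at", "the", "bank"])

    # Compare the top-layer vectors for "bank" (token index 5 in both).
    print(1 - cosine(river[2][5], money[2][5]))  # noticeably below 1.0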

  39. Tomi Engdahl says:

    The problem with ‘explainable AI’
    https://techcrunch.com/2018/06/14/the-problem-with-explainable-ai/?sr_share=facebook&utm_source=tcfbpage

The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Companies should disclose where and how they got the data they used to fuel their AI systems’ decisions. Consumers should own their data.

    Because data is the foundation for all AI, it is valid to want to know where the data comes from and how it might explain biases and counterintuitive decisions that AI systems make.

    On the algorithmic side, grandstanding by IBM and other tech giants around the idea of “explainable AI” is nothing but virtue signaling that has no basis in reality.

There are two issues with the idea of explainable AI. One is definitional: what do we mean by explainability? What do we want to know?

What these models are is also pretty transparent. In fact, one of the refreshing facets of the current AI wave is that most of the advancements are made in peer-reviewed papers — open and available to everyone.

Part of the advantage of some of the current approaches (most notably deep learning) is that the model identifies (some) relevant variables that are better than the ones we can define. Part of the reason their performance is better relates to that very complexity, which is hard to explain: the system identifies variables and relationships that humans have not identified or articulated. If we could, we would program it and call it software.

    The second overarching factor when considering explainable AI is assessing the trade-offs of “true explainable and transparent AI.” Currently there is a trade-off in some tasks between performance and explainability, in addition to business ramifications.

    Companies should be transparent about their data and offer an explanation about their AI systems to those who are interested, but we need to think about the societal implications of what that is, both in terms of what we can do and what business environment we create.

