3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.

Comments

  1. Tomi Engdahl says:

    Stef W. Kight / Axios:
    Gallup poll: 58% of Americans see AI, robotics, and automation as bigger threats to jobs than immigration and offshoring over the next ten years — More than half of Americans (58%) believe that artificial intelligence poses a greater threat to U.S. jobs over the next 10 years than immigration …
    Americans say AI poses greater job threat than immigration
    https://www.axios.com/americans-say-ai-poses-greater-job-threat-than-immigration-35bf5999-b061-491c-bb2a-553e7497da05.html

    More than half of Americans (58%) believe that artificial intelligence poses a greater threat to U.S. jobs over the next 10 years than immigration and offshoring (42%), according to a new Northeastern University/Gallup survey.

    Republicans were the only subgroup to think that immigration and offshoring (48%) pose a bigger threat than AI (52%), while Democrats chose AI as the higher threat (67%).

  2. Tomi Engdahl says:

    Artificial intelligence developer program launched by Intel to bring AI devices to market
    https://www.vision-systems.com/articles/2018/02/artificial-intelligence-developer-program-launched-by-intel-to-bring-ai-devices-to-market.html?cmpid=enl_vsd_vsd_newsletter_2018-03-12&pwhid=6b9badc08db25d04d04ee00b499089ffc280910702f8ef99951bdbdad3175f54dcae8b7ad9fa2c1f5697ffa19d05535df56b8dc1e6f75b7b6f6f8c7461ce0b24&eid=289644432&bid=2029287

    Intel has announced the launch of “AI: In Production,” a program that enables developers to bring their artificial intelligence (AI) prototypes to market.

    Intel has selected embedded and industrial computing company AAEON Technologies as the first Intel AI: In Production partner. As part of this, AAEON provides two streamlined production paths for developers integrating the low-power Intel Movidius Myriad 2 Vision Processing Unit (VPU) into their product designs, according to Intel.

    The first option is the new AI Core from AAEON’s UP Bridge the Gap, which is a mini-PCIe module that features an Intel Movidius Myriad 2 VPU designed to work with a wide range of x86 host platforms. The AI Core is compatible with the Intel Movidius Neural Compute Stick—which has gained a developer base in the tens of thousands, according to Intel—and delivers the low-power capabilities of the Movidius Myriad 2 VPU deep neural networks accelerator.

  3. Tomi Engdahl says:

    EU lawmakers seek coordinated hand-wringing over AI ethics
    Rules created in isolation will drive AI makers to operate in areas without restraint
    https://www.theregister.co.uk/2018/03/09/european_lawmakers_experts_ethics_ai/

    European policymakers have asked for help unravelling the “patchwork” of ethical and societal challenges as the use of artificial intelligence increases.

    The European Commission’s group on ethics in science and new technologies on Friday issued a statement (PDF) warning that existing efforts to develop solutions to the ethical, societal and legal challenges AI presents are a “patchwork of disparate initiatives.”

    It added that “uncoordinated, unbalanced approaches in the regulation of AI” risked “ethics shopping,” resulting in the “relocation of AI development and use to regions with lower ethical standards.”

    Instead, the group wants to start a process that will “pave the way towards a common, internationally recognized ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics, and ‘autonomous’ systems.”

    The Commission said in a separate statement that it wanted to kick off a “wide, open and inclusive discussion on how to use and develop artificial intelligence both successfully and ethically sound.”

  4. Tomi Engdahl says:

    The 10 Commandments of AI and its Potential for Malicious Use
    https://www.eetimes.com/author.asp?section_id=36&doc_id=1333047

    Deep thinkers in government, religion and technology are exploring the implications of artificial intelligence and the possibilities for it being used for no good.

  5. Tomi Engdahl says:

    With Windows ML, Intel AI to Invade Mobile PCs
    https://www.eetimes.com/document.asp?doc_id=1333045

    PARIS — It might not be too long before your average mobile PC will feature — on its motherboard — not just CPUs and GPUs but also an embedded AI inference chip, like the Intel/Movidius Vision Processor Unit (VPU).

    The first clue for this scenario unfolded in Microsoft Corp.’s launch announcement today, at its Windows Developer Day, of Windows ML, an open-standard framework for machine-learning tasks in the Windows OS. Microsoft said that it is extending Windows OS native support for the Intel/Movidius VPU. Implied in the message is that Intel/Movidius has taken a step closer to finding a home not just in embedded applications, such as drones and surveillance cameras, but also in Windows-based laptops and tablets.

    In a telephone interview with EE Times, Gary Brown, director of marketing at Movidius/Intel, confirmed, “Although today’s announcement isn’t about that [VPU integration on a mobile PC], yes, you will see VPU migrating into a PC motherboard.”
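
    Windows ML evaluates models in the ONNX interchange format, so the hand-off from training to a Windows app looks roughly like the sketch below: train anywhere, export to ONNX, and let Windows ML (and a VPU behind it) run inference. This is a minimal PyTorch export sketch; the model, shapes, and file name are placeholders, and the Windows-side loading code is omitted.

    import torch
    import torch.nn as nn

    # toy image classifier standing in for a real trained network
    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(8, 10))
    model.eval()
    dummy = torch.randn(1, 3, 224, 224)                  # example input shape
    torch.onnx.export(model, dummy, "classifier.onnx")   # file Windows ML can load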

  6. Tomi Engdahl says:

    Intelligence At The Edge Is Transforming Our World
    https://semiengineering.com/intelligence-at-the-edge-is-transforming-our-world/

    Machine learning already plays a part in everyday life, but efficient inference will keep it moving forward.

  7. Tomi Engdahl says:

    A Hippocratic Oath for artificial intelligence practitioners
    https://techcrunch.com/2018/03/14/a-hippocratic-oath-for-artificial-intelligence-practitioners/?utm_source=tcfbpage&sr_share=facebook

    In the foreword to Microsoft’s recent book, The Future Computed, executives Brad Smith and Harry Shum proposed that Artificial Intelligence (AI) practitioners highlight their ethical commitments by taking an oath analogous to the Hippocratic Oath sworn by doctors for generations. In the past, much power and responsibility over life and death was concentrated in the hands of doctors. Now, this ethical burden is increasingly shared by the builders of AI software.

    Of course, AI is not the first technology to confer great responsibility on its designers, not by a long shot. Cloud computing, smartphones, social media platforms, and Internet of Things devices have already transformed how we communicate, work, shop, and socialize. These technologies gather unprecedented data streams leading to formidable challenges around privacy, profiling, manipulation, and personal safety. It is these issues that AI, if not developed responsibly, will further amplify.

  8. Tomi Engdahl says:

    Developers Love Trendy New Languages, But Earn More With Functional Programming: Stack Overflow’s Annual Survey
    https://developers.slashdot.org/story/18/03/13/2015215/developers-love-trendy-new-languages-but-earn-more-with-functional-programming-stack-overflows-annual-survey?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    Stack Overflow has released the results of its annual survey of 100,000 developers, revealing the most-popular, top-earning, and preferred programming languages.

    Developers love trendy new languages but earn more with functional programming
    And most feel that AI morality is management’s problem.
    https://arstechnica.com/gadgets/2018/03/developers-love-trendy-new-languages-but-earn-more-with-functional-programming/

  9. Tomi Engdahl says:

    AI and the Future of Enterprise Mobility
    https://it.toolbox.com/article/ai-and-the-future-of-enterprise-mobility

    It seems as if artificial intelligence is creeping into every corner of our lives – including the mobility sector, where AI will play an increasingly important role in coming years.

    We are already becoming familiar with the way AI can support the use of mobile devices with its predictive qualities, but can it play a more valuable role for companies that want to maximize the utility of mobile devices?

    According to intelligence from CCS Insight, employees in the average firm are typically grappling with more than six apps in their daily work. But concerns are growing that workers are increasingly snowed under by the complexity and range of the apps they are asked to handle.

  10. Tomi Engdahl says:

    AI: The Next Big Thing
    https://semiengineering.com/ai-the-next-big-thing/

    And it’s going to push the limit on semiconductor design, manufacturing and packaging.

    At a conference this week, entitled “ASICs Unlock Deep Learning Innovation,” and sponsored by Samsung, Amkor, eSilicon, ArterisIP and Northwest Logic, the consensus was that a discontinuity is already at hand. The path forward likely will require a mix of technologies, new design strategies that trade off different types of memory, processing power, and extremely high bandwidth, and packaging approaches that emphasize massive speed and potentially pre-developed platforms that are hardened in silicon to slash development time.

    Whether that includes all ASICs, or a mix of 7/5/3nm ASICs coupled with eFPGAs, and possibly alongside DSPs and GPUs, isn’t entirely clear yet. ASICs are by far the fastest, but a purely ASIC approach can’t keep up with algorithm changes. All of this tilts the balance toward some type of advanced packaging, whether that is 2.5D, 3D-ICs, or fan-outs on substrate, because moving electrons through TSVs—whether that’s in an interposer or through the middle of stacked die—is much faster than driving them over long, thin copper wires. According to Samsung, yield on TSVs is somewhere in the 99% range these days.

  11. Tomi Engdahl says:

    Startup Runs Spiking Neural Network on Arm
    https://www.eetimes.com/document.asp?doc_id=1333080

    Eta Compute, a startup that demonstrated last summer at Hot Chips a very low power microcontroller using asynchronous technology, has come up with a new spin that it calls “the industry’s first neuromorphic platform.”

    In announcing the availability of its latest SoC platform IC based on TSMC’s 55nm ULP process, Paul Washkewicz, vice president of marketing and a co-founder of Eta Compute, Wednesday (March 15) pitched it as an ideal platform for “delivering neuromorphic computing based machine intelligence to mobile and edge devices.”

    But wait. When did Eta Compute’s 0.25V IoT chip, from Hot Chips last year, become a “neuromorphic computing” engine? Did the startup pivot slightly in strategy? Washkewicz explained that Eta ventured down the path of “machine intelligence,” when “customers started telling us that they want a little bit more intelligence on the edge.”

  12. Tomi Engdahl says:

    Silicon Valley companies are undermining the impact of artificial intelligence
    https://techcrunch.com/2018/03/15/silicon-valley-companies-are-undermining-the-impact-of-artificial-intelligence/?utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    Ryan Kottenstette is CEO and co-founder at Cape Analytics.

    Leveraging machine learning and artificial intelligence to glean information from large data sets is the greatest technology opportunity of a generation. After a decade of acquiring talent from startups and research universities, tech companies like Facebook, Google and Uber have amassed some of the best AI teams in the world.

    However, we are not seeing the impact we deserve beyond the tech sector. Unfortunately, progress in other industries has become collateral damage to the tech sector’s race for AI talent, and this issue has received little attention.

    Over the last five years, 90 percent of AI startups in Silicon Valley have been acquired by leading tech companies.

  13. Tomi Engdahl says:

    Microsoft reaches a historic milestone, using AI to match human performance in translating news from Chinese to English
    https://blogs.microsoft.com/ai/machine-translation-news-test-set-human-parity/

  14. Tomi Engdahl says:

    Microsoft says AI and machine learning driven by open source and the cloud
    http://www.zdnet.com/article/microsoft-says-ai-and-machine-learning-driven-by-open-source-and-the-cloud/

    Artificial intelligence and machine learning are rapidly gaining importance, and Mark Russinovich, Microsoft Azure chief technology officer, believes it’s because of open-source software and the cloud.

    Yes, Microsoft just announced that the next major edition of Windows 10 will support artificial intelligence (AI) and machine learning (ML). But, marketing hype aside, Microsoft knows darn well that the real heavy lifting for AI and ML happens on the cloud with open-source software.

  15. Tomi Engdahl says:

    Washington waking up to threats of AI with new task force
    https://techcrunch.com/2018/03/15/washington-ai/?utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    Elon Musk has been one of the few Silicon Valley luminaries to place intense attention on the potential dangers of AI, raising a billion dollars with Y Combinator’s Sam Altman to found OpenAI. Musk has continued the drumbeat on AI’s dangers, telling a crowd at SXSW this week that “A.I. is far more dangerous than nukes” and asking “So why do we have no regulatory oversight? This is insane.”

    Well, the wheels of Washington are turning, and DCers are starting to investigate the opportunities and challenges that AI poses to the nation. Today, the Center for a New American Security (CNAS), one of America’s top defense and foreign policy think tanks, announced the creation of a Task Force on Artificial Intelligence and National Security, as part of the organization’s Artificial Intelligence and Global Security Initiative.

  16. Tomi Engdahl says:

    Elon Musk: “AI is far more dangerous than nukes”
    http://www.huhmagazine.co.uk/14379/elon-musk-says-ai-is-far-more-dangerous-than-nukes

    “I’m close to AI and it scares the hell out of me,” said Musk. “It’s capable of vastly more than anyone knows, and the improvement is exponential.”

    Musk is well-known for his fear of AI and even admits to investing in AI research companies just so he can keep an eye on what they’re doing. “The danger of AI is much greater than the danger of nuclear warheads — by a lot,” he said. “Mark my words, AI is far more dangerous than nukes.”

  17. Tomi Engdahl says:

    Cracking Open the Black Box of AI with Cell Biology
    https://spectrum.ieee.org/the-human-os/biomedical/diagnostics/cracking-open-the-black-box-of-ai-with-cell-biology

    “We’re interested in a particular [neural network] structure that was optimized not by computer scientists, but by evolution.”
    —Trey Ideker, UC San Diego

    These days, black box AI systems are accomplishing remarkable things. They are, just for starters, sorting cat photos for the Internet, beating grandmasters at the ancient game of Go, and sending self-driving cars speeding down highways.

    Although they’re called neural networks, these systems are only very roughly inspired by human neural systems, explains Trey Ideker, a professor of bioengineering and medicine at UC San Diego.

  18. Tomi Engdahl says:

    Clever Machines Learn How to Be Curious
    By John Pavlus, September 19, 2017
    https://www.quantamagazine.org/clever-machines-learn-how-to-be-curious-20170919/

    Computer scientists are finding ways to code curiosity into intelligent machines.

    “You can think of curiosity as a kind of reward which the agent generates internally on its own, so that it can go explore more about its world,” Agrawal said. This internally generated reward signal is known in cognitive psychology as “intrinsic motivation.” The feeling you may have vicariously experienced while reading the game-play description above — an urge to reveal more of whatever’s waiting just out of sight, or just beyond your reach, just to see what happens — that’s intrinsic motivation.

    Humans also respond to extrinsic motivations, which originate in the environment. Examples of these include everything from the salary you receive at work to a demand delivered at gunpoint. Computer scientists apply a similar approach called reinforcement learning to train their algorithms: The software gets “points” when it performs a desired task, while penalties follow unwanted behavior.
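
    To make the split concrete, below is a minimal Python Q-learning sketch on a toy corridor world where the agent’s reward is the sum of an extrinsic task reward and an intrinsic novelty bonus. The corridor, constants, and count-based bonus are illustrative stand-ins; work like Agrawal’s derives the intrinsic signal from the prediction error of a learned forward model.

    import random
    from collections import defaultdict

    N_STATES, GOAL = 10, 9
    q = defaultdict(float)       # Q-values for (state, action) pairs
    visits = defaultdict(int)    # state visit counts feeding the novelty bonus

    for _ in range(200):                       # episodes
        s = 0
        for _ in range(100):                   # step cap keeps episodes finite
            if random.random() < 0.1:          # epsilon-greedy exploration
                a = random.choice([-1, 1])
            else:
                a = max([-1, 1], key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            visits[s2] += 1
            extrinsic = 1.0 if s2 == GOAL else 0.0   # reward for the task itself
            intrinsic = 0.5 / visits[s2] ** 0.5      # decaying bonus for novelty
            reward = extrinsic + intrinsic
            q[(s, a)] += 0.1 * (reward + 0.9 * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if s == GOAL:
                break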

  19. Tomi Engdahl says:

    Can A.I. Be Taught to Explain Itself?
    https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html

    As machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know — or how they know it.

  20. Tomi Engdahl says:

    Redditor Builds an AI Assistant with a Raspberry Pi, IBM Watson, and a Whole Lot More
    https://blog.hackster.io/redditor-builds-an-ai-assistant-with-a-raspberry-pi-ibm-watson-and-a-whole-lot-more-6aab56a9f6d0

    With popular AI assistants on the market from Apple, Microsoft, Google, and Amazon, you might wonder why someone would bother creating their own. It could be to develop something more capable and knowledgeable. But, more realistically, Redditor Ideanusx probably made Ada for good ol’ fashioned fun.

  21. Tomi Engdahl says:

    SXSW 2018: The Future of AI Assistants
    https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/sxsw-2018-the-future-of-ai-assistants.amp.html

    In the years to come, what will be the biggest improvement in AI-powered digital assistants? It’s likely to be the ability to accommodate a fundamental aspect of being human: The fact that we all have different personas, we show different facets of ourselves depending on where we are and who we are with, and our personas change over time. And different personas want different things from their AI assistants. Assistants that can understand your personal circumstances are less likely to remind you to pick up your rash prescription as you drive by the pharmacy if there are other people in the car, bug you about work email at home, or keep suggesting fun nightclubs if you’ve just had a baby.

    That was the message from Sunday’s panel on “Designing the Next Wave of Natural Language and AI” at the SXSW festival in Austin, Texas. The panel included Ben Brown from Google; Ed Doran from Microsoft; Karen Giefer from Frog; and Andrew Hill from Mercedes-Benz.

  22. Tomi Engdahl says:

    How The Brain Saves Energy By Doing Less
    https://semiengineering.com/how-the-brain-saves-energy-by-doing-less/

    No matter how efficient they become, neuromorphic computers are fundamentally different than human brains.

  23. Tomi Engdahl says:

    AI: The Next Big Thing
    https://semiengineering.com/ai-the-next-big-thing/

    And it’s going to push the limit on semiconductor design, manufacturing and packaging.

    The next big thing isn’t actually a thing. It’s a set of finely tuned statistical models. But developing, optimizing and utilizing those models, which collectively fit under the umbrella of artificial intelligence, will require some of the most advanced semiconductors ever developed.

    The demand for artificial intelligence is almost ubiquitous. As with all “next big things,” it is a horizontal technology that plays across many vertical market segments. Specialized chips are being developed for the cloud, for mid-range devices, and for edge devices in order to enable AI and its building blocks—machine learning, deep learning, neural networks. Many of these components are being designed for the advanced nodes using the most advanced manufacturing processes, which collectively will propel Moore’s Law, “More Than Moore,” and just about everything connected to semiconductors well into the future.

  24. Tomi Engdahl says:

    Neural networks help identify license plates for traffic control
    https://www.vision-systems.com/articles/print/volume-23/issue-2/features/neural-networks-help-identify-license-plates-for-traffic-control.html?cmpid=enl_vsd_vsd_newsletter_2018-03-19&pwhid=6b9badc08db25d04d04ee00b499089ffc280910702f8ef99951bdbdad3175f54dcae8b7ad9fa2c1f5697ffa19d05535df56b8dc1e6f75b7b6f6f8c7461ce0b24&eid=289644432&bid=2038140

    Combining off-the-shelf cameras and a PC running neural network software allows Singapore authorities to perform traffic monitoring and enforcement.

    Automatic number plate recognition (ANPR) or license plate recognition (LPR) is a challenging task to perform in real-time. This is due to a number of reasons including the different types of license plates that need to be recognized, the varying lighting conditions encountered, and the need to capture fast-moving objects at night with high-enough contrast.
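
    A minimal sketch of the plate-localization front end of such a pipeline is shown below, using OpenCV (and assuming OpenCV 4’s findContours signature); “traffic.jpg” is a placeholder input frame, and a real system would hand the candidate crops to a character-recognition stage.

    import cv2

    img = cv2.imread("traffic.jpg")                # hypothetical input frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)   # denoise but keep edges
    edges = cv2.Canny(gray, 30, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    plates = []
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:20]:
        x, y, w, h = cv2.boundingRect(c)
        if 2.0 < w / float(h) < 6.0 and w > 60:    # plate-like aspect ratio
            plates.append(img[y:y + h, x:x + w])   # crop for the OCR stage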

  25. Tomi Engdahl says:

    Xceler Systems: Graph Architecture
    https://semiengineering.com/xceler-systems-graph-architecture/

    Startup building AI chip like a brain, a few synapses at a time.

    Rather than try to build a computer that looks like a brain, Gautam Kavipurapu and Xceler Systems are building smaller bits that act like synapses. When the design is advanced enough and there are enough of them, they will create complex interconnections among themselves in ways other neuromorphic designs do not.

    In the meantime, the building blocks of the Xceler Graph Architecture can help pay their own way as purpose-built accelerators designed for specific jobs. Kavipurapu said modules will function like a “microcontroller on steroids,” providing non-AI apps with some of the depth of cognitive analytics but without the rigidity of their learning or decision-making structures.

    “The problem with neuromorphic designs is that you end up with something very far away from what the brain looks like and what it does,” Kavipurapu said. “Synapses aren’t all the same, and the ones that are similar don’t always behave the same. Neurons have different structures and functions, and are classified as such. Neurons in the medulla act differently than neurons in the frontal cortex. For processing a particular object, the processing location in the brain relates to how big an object is, or how far away, so there’s a spatial aspect that we tend to miss.”

  26. Tomi Engdahl says:

    What’s New in MATLAB for Deep Learning?
    https://se.mathworks.com/solutions/deep-learning/features.html

    MATLAB makes deep learning easy and accessible for everyone, even if you’re not an expert. Check out the latest features for designing and building your own models, network training and visualization, and deployment.

  27. Tomi Engdahl says:

    Turning Big Data analytics into actionable information
    http://www.controleng.com/single-article/turning-big-data-analytics-into-actionable-information/244bfc5e81c801b5b8b8766311628cb5.html

    To be effective, Big Data analytics must deliver on the end-user experience, and not on the hype associated with artificial intelligence and machine learning.

    Some might assume Big Data analytics is synonymous with machine learning (ML) or artificial intelligence (AI), but this is incorrect. Although some aspects of these technologies may be used in Big Data analytic applications, focusing only on them is sure to create confusion and inflate expectations. The hype around AI suggests that, applied to Big Data, it automatically generates insights with little or no effort from the end user. The data analytics user experience, however, is quite different.

    That a 10-year-old function can be repackaged, relabeled, and hyped as AI indicates confusion among the flavors of cognitive computing. Partly to blame is a lack of understanding about what constitutes AI, ML, deep learning, and other variations of “cognitive computing,” as well as the arguments about supervised and unsupervised variations.

    This kind of hype is in contrast to the bitter reality, which shows a large percentage of collected data doesn’t even get analyzed or leveraged for insights by the engineers and analysts who could use it to produce actionable information.

    Beyond the algorithm

    Cognitive computing algorithms are an important part of the solution for analytics in process manufacturing and the Industrial Internet of Things (IIoT) solutions, but only a part of it. Other aspects include data wrangling, which is the required data connectivity, cleansing, and contextualization to prepare data for use. The data analytics application’s focus must include these preparatory steps so results are accelerated from data connection to insight to distribution.

    Finally, data analytics applications should enable users to expand and extend analytics to whatever level is required. End users will keep expanding the use of data analytics applications and the need for specific algorithms. Therefore, data analytics applications should include extensibility to additional algorithms through features such as REST API, OData, and integration of algorithms into the user experience.
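
    As a hedged illustration of that extensibility, the sketch below posts a data window to a REST endpoint that runs a user-supplied algorithm. The endpoint, payload shape, and algorithm name are hypothetical, not any specific product’s API.

    import requests

    window = {"tag": "pump-7/vibration", "values": [0.12, 0.14, 0.55, 0.13]}
    resp = requests.post(
        "https://analytics.example.com/api/v1/algorithms/score",  # hypothetical
        json={"algorithm": "custom-anomaly-v2", "data": window},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())    # e.g. {"anomaly": true, "score": 0.93}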

    When data analytics applications allow users to focus on the problem rather than the technology, desired results can be obtained quickly, as these use case examples show.

  28. Tomi Engdahl says:

    New AI algorithm monitors sleep with radio waves (MIT & Mass General)
    Learning Sleep Stages from Radio Signals
    https://semiengineering.com/new-ai-algorithm-monitors-sleep-with-radio-waves-mit-mass-general/

    Monitoring sleep with AI
    To make it easier to diagnose and study sleep problems, researchers at MIT and Massachusetts General Hospital have devised a way to monitor sleep stages without sensors attached to the body. Their device employs an advanced artificial intelligence algorithm to analyze the radio signals around the person and translate those measurements into sleep stages: light, deep, or rapid eye movement (REM).

    http://sleep.csail.mit.edu/files/rfsleep-paper.pdf

  29. Tomi Engdahl says:

    Bring Deep Learning Algorithms To Your Security Cameras
    https://hackaday.com/2018/03/21/bring-deep-learning-algorithms-to-your-security-cameras/

    AI is quickly revolutionizing the security camera industry. Several manufacturers sell cameras which use deep learning to detect cars, people, and other events. These smart cameras are generally expensive though, compared to their “dumb” counterparts. [Martin] was able to bring these detection features to a standard camera with a Raspberry Pi, and a bit of ingenuity.

    [Martin’s] goal was to capture events of interest, such as a person on screen, or a car in the driveway. The data for the events would then be published to an MQTT topic, along with some metadata such as confidence level. OpenCV is generally how these pipelines start, but [Martin’s] camera wouldn’t send RTSP images over TCP the way OpenCV requires, only RTSP over UDP. To solve this, Martin captures the video stream with FFmpeg. The deep learning AI magic is handled by the darkflow library, which is itself based upon Google’s Tensorflow.

    Enhancing my ordinary IP security cameras with AI
    https://harizanov.com/2018/03/enhancing-my-ordinary-security-cameras-with-ai/
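
    A condensed sketch of that pipeline follows. It assumes darkflow is installed with YOLO weights downloaded, a camera at the placeholder RTSP URL, and a local MQTT broker; the frame size and topic name are illustrative.

    import json
    import subprocess
    import numpy as np
    import paho.mqtt.client as mqtt
    from darkflow.net.build import TFNet

    W, H = 640, 480                      # assumed camera frame size
    tfnet = TFNet({"model": "cfg/yolo.cfg", "load": "bin/yolo.weights",
                   "threshold": 0.4})    # darkflow YOLO detector
    client = mqtt.Client()               # paho-mqtt 1.x-style client
    client.connect("localhost")

    # FFmpeg handles the RTSP-over-UDP stream that OpenCV could not,
    # emitting raw BGR frames on stdout.
    ffmpeg = subprocess.Popen(
        ["ffmpeg", "-rtsp_transport", "udp", "-i", "rtsp://camera/stream",
         "-f", "rawvideo", "-pix_fmt", "bgr24", "-"],
        stdout=subprocess.PIPE)

    while True:
        raw = ffmpeg.stdout.read(W * H * 3)          # one raw frame
        if len(raw) < W * H * 3:
            break
        frame = np.frombuffer(raw, np.uint8).reshape(H, W, 3)
        for det in tfnet.return_predict(frame):      # list of detection dicts
            if det["label"] in ("person", "car"):
                client.publish("cameras/driveway/events", json.dumps(
                    {"label": det["label"],
                     "confidence": float(det["confidence"])}))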

  30. Tomi Engdahl says:

    The new semiconductor boom: AI
    http://gfxspeak.com/2018/03/21/the-semiconductor-boom/

    Looks like the first quarter of 2018 is shaping up to be big for the future of AI. Companies in the segment, including cloud companies, IP companies, and traditional semiconductor companies, all have major ambitions for the explosive market that AI is becoming.

    A battle is forming around processor platforms, with cloud companies such as Google favoring custom chips to augment classic semiconductors such as CPUs and GPUs. Semiconductor and IP companies are designing chips to enable efficient hardware and neural-net systems that can run faster and use less power. Intel, on the other hand, is proposing an open platform ecosystem built on Xeon, FPGAs, and specialized processors including Nervana and Saffron.

    Cloud companies opt in

    Google has recently announced the availability of its Tensor Processing Unit (TPU), which will be accessible via the Google Cloud. Google introduced its TPU at Google I/O last year. At that time the chip was being used by beta customers. The company’s high-profile beta customer at the time was Lyft, which was using AI to recognize surroundings, locations, street signs etc.

    Nvidia

    Nvidia has announced that its new Volta GPU, with 640 tensor cores delivering over 100 teraflops, has been adopted by all the leading cloud suppliers including Amazon Web Services, Microsoft Azure, Google Compute Platform, Oracle Cloud, Baidu Cloud, Alibaba Cloud, Tencent Cloud and others. On the OEM side, Dell EMC, Hewlett Packard Enterprise, Huawei, IBM and Lenovo have all announced Volta-based offerings for their customers.

    Microsoft and Intel

    Microsoft has teamed with Intel and is offering FPGAs for AI processing on Azure. FPGAs are currently being put to work on Microsoft’s in-house learning operations as well as providing Azure customers with accelerated networking and compute. Microsoft says it will provide customers the ability to run their own models on the FPGAs.

    With the acquisition of FPGA maker Altera in 2015, Intel admitted that the X86 architecture can’t do everything. Over the years, Intel has faced these facts of life over and over again, only to get it wrong and retreat to its X86-forever stance. But it’s different this time: the entire tech industry is coming to grips with the end of the PC era, and Intel CEO Krzanich represents the return of the engineer as an Intel leader. Intel sees FPGAs as the perfect companion to its processors for AI work.

    AI, in fact, has brought the FPGA back to life as a tool of innovation rather than a back-room development platform. Its very flexibility and adaptability fit in with AI tasks.

    Last August at Hot Chips it was revealed that Microsoft has partnered with Intel on Project Brainwave for Azure. Intel’s 14 nm Stratix 10 FPGAs accelerate Microsoft’s Azure-based deep learning platform. (Microsoft announced in 2016 it would use FPGAs for AI.) Microsoft claimed at Hot Chips that its use of “soft” Deep Neural Network (DNN) units synthesized onto the FPGAs, instead of hardwired DNN processing units (DPUs), is an advantage even though DPUs might be faster: Microsoft says it has defined highly customized, narrow-precision data types that increase performance without losing model accuracy, and Project Brainwave can scale across a range of data types by changing the soft DNN according to requirements.

    Amazon’s AWS also relies on off-the-shelf processors including Intel’s Xeons, Xilinx FPGAs, and GPUs. The company gives customers the ability to sign up for the best combination of processors for the job. The strength of AWS is its ability to package complexity and sell it. AWS has also been encouraging developers to initiate DIY machine learning projects using the company’s SageMaker machine learning system to build, train, and deploy machine learning models. Lately, the company has introduced the DeepLens camera with built-in deep learning algorithms for image recognition that developers can use to explore deep learning with SageMaker.

    Intel has beefed up its Xeon processors for AI and HPC, including features for performance and interconnectivity. The Advanced Vector Extension 512 (Intel AVX-512) improves performance and throughput for advanced analytics, HPC applications and data compression; QuickAssist Technology speeds up data compression and cryptography; and of course Intel continues to improve its hardware security.

    Apple

    The best known mobile AI processor is included with the latest Apple iPhone X, and so far it hasn’t blown anyone away with its brilliance, unless you consider talking emoji puppets (Animoji) that use face tracking to mimic your speech the sine qua non of intelligence. As a reminder, Apple’s A11 is a 64-bit ARMv8 six-core CPU with two high-performance 2.39 GHz cores called Monsoon and four energy-efficient cores called Mistral. The A11’s performance controller gives the chip access to all six cores simultaneously. The A11 also has a three-core GPU designed by Apple, the M11 motion coprocessor, an image processor supporting computational photography, and the new Neural Engine that comes into play for Face ID, Animoji and other machine learning tasks.

    Arm

    Arm has been threatening to join the AI party and has been putting together the pieces, starting with its 2016 acquisition of Apical for imaging technology. At the IEEE conference, Arm discussed its Trillium platform, which includes Machine Learning (ML) and Object Detection (OD) processors complementing Arm NN software, the existing Arm Compute Library, and the CMSIS-NN neural network kernels. Arm describes the software as providing a bridge between popular NN training frameworks and the Arm-based inferencing implementations.

    Amazon

    According to a story in The Information, Amazon is reportedly planning to build hardware-accelerated AI into Alexa, so she can respond faster and smarter. As it is now, Alexa has to pass everything to the cloud and back. If the Internet is down, she’s as dumb as a rock.

    Huawei

    Huawei has announced that its latest mobile processor, the Kirin 970, will feature a neural engine. The new chip will have an 8-core Arm-based CPU, a 12-core GPU (Mali), an image DSP, and the Kirin Neural Processor.

    Qualcomm

    Last December, at a company meeting for press and analysts, Gary Brotman, Qualcomm’s head of artificial intelligence and machine learning product management, introduced the Snapdragon 845, which would later get a big rollout at CES and Mobile World Congress in Barcelona. He said “the Snapdragon 845 is our third generation AI platform.” The chip provides AI features on the phone because, says Brotman, “you shouldn’t have to be connected to the Internet to take advantage of the intelligence in your device.”

  31. Tomi Engdahl says:

    IBM Speeds Up Machine Learning
    https://www.eetimes.com/document.asp?doc_id=1333097

    IBM wants to make machine learning as fast as snapping your fingers. At its IBM Think conference this week, IBM Research unveiled a newly published benchmark using an online advertising dataset of more than 4 billion training examples released by Criteo Labs. IBM was able to train a logistic regression classifier in 91.5 seconds — 46 times faster than the best result previously reported, which used TensorFlow on Google Cloud Platform to train the same model in 70 minutes.

    Known as IBM Snap Machine Learning (Snap ML) because it trains models “faster than you can snap your fingers,” the new library powered by artificial intelligence software provides high-speed training of popular machine learning models on modern CPU/GPU computing systems and can be used to train models to find new and interesting patterns, or to retrain existing models at wire-speed as new data becomes available. The resulting benefits include lower cloud costs for users, less energy, more agile development and a faster time to result.
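
    Snap ML itself isn’t shown here, but as a rough stand-in for the workload, the sketch below trains a logistic-regression classifier on a Criteo-style click-through extract using scikit-learn’s SGD implementation; the file name is a placeholder.

    from sklearn.linear_model import SGDClassifier
    from sklearn.datasets import load_svmlight_file

    # hypothetical svmlight-format extract of the Criteo click logs
    X, y = load_svmlight_file("criteo_sample.svm")

    # logistic regression trained by stochastic gradient descent
    clf = SGDClassifier(loss="log_loss", alpha=1e-6, max_iter=5)
    clf.fit(X, y)
    print("training accuracy: %.3f" % clf.score(X, y))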

  32. Tomi Engdahl says:

    EDA Chief Calls AI the New Driver
    https://www.eetimes.com/document.asp?doc_id=1333101

    It’s the age of AI, Moore’s Law is not dead, and technology is changing everything, according to Aart de Geus, co-chief executive of Synopsys, in a talk at the company’s annual user group conference here.

    The advent of AI is a mile marker on par with the invention of the printing press and the steam engine, De Geus said.

    “It will drive the semiconductor industry for the next few decades because Big Data needs machine learning and machine learning needs more computation, which generates more data. This will impact health, transportation and other vertical markets as they go digital,” he said.

  33. Tomi Engdahl says:

    To Speed Up AI, Mix Memory and Processing
    https://spectrum.ieee.org/computing/hardware/to-speed-up-ai-mix-memory-and-processing

    If John von Neumann were designing a computer today, there’s no way he would build a thick wall between processing and memory. At least, that’s what computer engineer Naresh Shanbhag of the University of Illinois at Urbana-Champaign believes. The eponymous von Neumann architecture was published in 1945. It enabled the first stored-program, reprogrammable computers—and it’s been the backbone of the industry ever since.

    Now, Shanbhag thinks it’s time to switch to a design that’s better suited for today’s data-intensive tasks. In February, at the International Solid-State Circuits Conference (ISSCC), in San Francisco, he and others made their case for a new architecture that brings computing and memory closer together. The idea is not to replace the processor altogether but to add new functions to the memory that will make devices smarter without requiring more power.
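
    A conceptual numpy sketch of the idea: the weight matrix stays resident in a (simulated) memory array, the multiply-accumulate happens along the bitlines, and analog noise plus a coarse ADC bound the precision. Sizes and noise levels are illustrative, not from any published design.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.integers(-7, 8, size=(64, 256))       # weights stored in the array
    x = rng.integers(0, 2, size=256)              # binary wordline activations

    exact = W @ x                                 # what a digital ALU would compute
    analog = exact + rng.normal(0, 2, size=64)    # noisy bitline current summation
    readout = np.round(analog / 4) * 4            # coarse ADC quantization

    print("mean absolute error:", np.mean(np.abs(readout - exact)))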

  34. Tomi Engdahl says:

    Colin Lecher / The Verge:
    A look at what happens when algorithms used to automate health assessments in Arkansas start cutting essential state-sponsored home care for disabled patients — For most of her life, Tammy Dobbs, who has cerebral palsy …

    What happens when an algorithm cuts your health care
    https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy

  35. Tomi Engdahl says:

    The rise of artificial intelligence is creating new variety in the chip market, and trouble for Intel
    https://www.economist.com/news/business/21717430-success-nvidia-and-its-new-computing-chip-signals-rapid-change-it-architecture?etear=sasexpectexceptional

    The success of Nvidia and its new computing chip signals rapid change in IT architecture

    “WE ALMOST went out of business several times.” Usually founders don’t talk about their company’s near-death experiences. But Jen-Hsun Huang, the boss of Nvidia, has no reason to be coy. His firm, which develops microprocessors and related software, is on a winning streak. In the past quarter its revenues increased by 55%, reaching $2.2bn, and in the past 12 months its share price has almost quadrupled.

    A big part of Nvidia’s success is that demand is growing quickly for its chips, called graphics processing units (GPUs), which turn personal computers into fast gaming devices. But the GPUs also have new destinations: notably data centres where artificial-intelligence (AI) programmes gobble up the vast quantities of computing power that they generate.

  36. Tomi Engdahl says:

    When AI Goes Awry
    https://semiengineering.com/when-ai-goes-awry/

    So far there are no tools and no clear methodology for eliminating bugs. That would require understanding what an AI bug actually is.

    The race is on to develop intelligent systems that can drive cars, diagnose and treat complex medical conditions, and even train other machines.

    The problem is that no one is quite sure how to diagnose latent or less-obvious flaws in these systems—or better yet, to prevent them from occurring in the first place. While machines can do some things very well, it’s still up to humans to devise programs to train them and observe them, and that system is far from perfect.

    “Debugging is an open area of research,” said Jeff Welser, vice president and lab director at IBM Research Almaden. “We don’t have a good answer yet.”

    He’s not alone. While artificial intelligence, deep learning and machine learning are being adopted across multiple industries, including semiconductor design and manufacturing, the focus has been on how to use this technology rather than what happens when something goes awry.

    “That problem is not solved,” said Norman Chang, chief technologist at ANSYS.

    “Debugging is based on understanding,” said Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus. “There’s a lot to learn about how the brain hones in, so it remains a challenge to debug in the classical sense because you need to understand when misclassification happens. We need to move more to an ‘I don’t know’ type of classification.”

  37. Tomi Engdahl says:

    Few Countries Will Benefit From the AI Revolution
    https://news.slashdot.org/story/18/03/26/2126253/few-countries-will-benefit-from-the-ai-revolution?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    According to Chinese venture capitalist and former Google China president Kai-Fu Lee, the list of countries well-positioned to embrace a future powered by artificial intelligence is exceedingly short: United States and China. “The countries that are not in good shape are the countries that have perhaps a large population, but no AI, no technologies, no Google, no Tencent, no Baidu, no Alibaba, no Facebook, no Amazon,” Lee says. “These people will basically be data points to countries whose software is dominant in their country. If a country in Africa uses largely Facebook and Google, they will be providing their data to help Facebook and Google make more money, but their jobs will still be replaced nevertheless.”

    The list of countries that will benefit from the AI revolution could be exceedingly short
    https://qz.com/1237457/kai-fu-lee-says-the-list-of-countries-that-will-benefit-from-the-ai-revolution-is-exceedingly-short/

    Here’s a comprehensive list of the countries that venture capitalist and former Google China president Kai-Fu Lee says are well-positioned to embrace a future powered by artificial intelligence:

    United States
    China

    Every other economy should brace for trouble ahead, Lee says in an interview with Edge.

    Lee is known for making big predictions about AI shaping the future of everything, suggesting that big banks will lose power under AI and arguing that an art degree will actually come in handy in an automated future. He’s also spoken about the US and China’s potential world dominance before, but gets a little more specific in the Edge interview.

  38. Tomi Engdahl says:

    Chris Mellor / The Register:
    Pure Storage and Nvidia unveil AIRI, a four-petaflop platform using Nvidia’s DGX-1 servers and Pure’s FlashBlade storage system for large-scale AI initiatives

    If you’ve got $1m+ to blow on AI, meet Pure, Nvidia’s AIRI fairy: A hyperconverged beast
    0.5 PFLOPS FP32, 0.5 PB of effective flash storage
    http://www.theregister.co.uk/2018/03/27/pure_nvidia_ai_airi/

    Pure Storage and Nvidia have produced a converged machine-learning system to train AI models using millions of data points.

    It’s called AIRI – AI-Ready Infrastructure – and combines a Pure FlashBlade all-flash array with four Nvidia DGX-1 GPU-accelerated boxes and a pair of 100GbitE switches from Arista.

    The system has been designed by Pure and Nvidia, and is said to be easier and simpler to buy, deploy, and operate than buying and integrating the components separately; the standard converged infrastructure pitch.

  39. Tomi Engdahl says:

    Evaluating AI’s applicability
    https://semiengineering.com/system-bits-march-27/

    As AI’s role in society continues to expand, J. B. Brown of the Kyoto University Graduate School of Medicine reports on a new evaluation method for the type of AI that predicts yes/positive/true or no/negative/false answers.

    Brown’s paper deconstructs the utilization of AI and analyzes the nature of the statistics used to report an AI program’s ability. The new technique also generates a probability of the performance level given evaluation data, answering questions such as: What is the probability of achieving accuracy greater than 90%?

    Reports of new AI applications appear in the news almost daily, including in society and science, finance, pharmaceuticals, medicine, and security, and while reported statistics seem impressive, research teams and those evaluating the results come across two problems, Brown said.

    For example, if an AI program is built to predict whether or not someone will win the lottery, it may always predict a loss. The program may achieve 99% accuracy, but interpretation is key in deciding whether that accuracy means the program is genuinely good at its task.

    Herein lies the problem, Brown said. In typical AI development, the evaluation can only be trusted if there is an equal number of positive and negative results. If the data is biased toward either value, the current system of evaluation will exaggerate the system’s ability.
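
    The lottery example is easy to reproduce. In the sketch below, an always-predict-loss “program” scores 99% accuracy while class-aware metrics expose it; the final line shows one hedged, Bayesian way to answer a question like “what is the probability accuracy exceeds 90%?” (Brown’s paper may use a different formulation).

    import numpy as np
    from scipy.stats import beta
    from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                                 matthews_corrcoef)

    y_true = np.array([1] * 10 + [0] * 990)   # 1% winners, 99% losers
    y_pred = np.zeros_like(y_true)            # the program always predicts "loss"

    print(accuracy_score(y_true, y_pred))            # 0.99, looks impressive
    print(balanced_accuracy_score(y_true, y_pred))   # 0.50, really a coin flip
    print(matthews_corrcoef(y_true, y_pred))         # 0.0, no skill at all

    # One way to ask "what is the probability accuracy exceeds 90%?":
    # a Beta posterior over accuracy given 950 correct out of 1000 trials.
    print(1 - beta.cdf(0.9, 950 + 1, 50 + 1))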

    How accurate is your AI?
    https://www.kyoto-u.ac.jp/en/research/research_results/2017/180214_2.html

    Brown’s paper was published in Molecular Informatics.

    “While reported statistics seem impressive, research teams and those evaluating the results come across two problems,” explains Brown. “First, to understand if the AI achieved its results by chance, and second, to interpret applicability from the reported performance statistics.”

    “AI can assist us in understanding many phenomena in the world, but for it to properly provide us direction, we must know how to ask the right questions. We must be careful not to overly focus on a single number as a measure of an AI’s reliability.”

  40. Tomi Engdahl says:

    Baidu Shows Off Its Instant Pocket Translator
    https://tech.slashdot.org/story/18/03/27/2148206/baidu-shows-off-its-instant-pocket-translator?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    Baidu showed off the speed of its pocket translator for the first time in the United States during an afternoon presentation at MIT Technology Review’s EmTech Digital conference in San Francisco. The Chinese Internet giant has made significant strides improving machine language translation since 2015, using an advanced form of artificial intelligence known as deep learning.

    On stage, the Internet-connected device was able to almost instantly translate a short conversation between Wu and senior editor Will Knight.

    Baidu shows off its instant pocket translator
    https://www.technologyreview.com/s/610623/baidu-shows-off-its-instant-pocket-translator/

    The Chinese internet giant says it’s made significant strides in machine translation thanks to neural networks.

  41. Tomi Engdahl says:

    Artificial intelligence in the industrial enterprise
    https://www.plantengineering.com/single-article/artificial-intelligence-in-the-industrial-enterprise/453681f5ceb571fad400c2d53ea815c2.html?OCVALIDATE&[email protected]&ocid=101781

    Analytics can deliver insight as to how things are going, but artificial intelligence (AI) doesn’t become a thing until you start using machine learning and semantics for insight.

    For the Industrial Internet of Things (IIoT), predictive maintenance of machinery and equipment is the first application demonstrating wide commercial acceptance. “This can be done with classic regression and predictive analytics. With artificial intelligence, however, you go beyond the structured deterministic to the fuzzier stochastic,” said Jeff Kavanaugh, vice president, senior partner, Infosys. “With machine learning based on input such as audio signatures, the computer learns as a human would, by first paying attention to how a machine sounds when it’s healthy and then understanding anomalies.”
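
    A minimal sketch of that learn-the-healthy-baseline approach, with toy spectral-band energies standing in for real audio features:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    healthy = rng.normal(1.0, 0.1, size=(500, 8))    # band energies when healthy
    model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

    new_sample = rng.normal(1.6, 0.1, size=(1, 8))   # a bearing starting to whine
    print(model.predict(new_sample))                 # [-1] flags an anomaly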

    Infosys recently conducted a global survey on the adoption of intelligent automation. The survey’s central point, that artificial intelligence technology is going mainstream, is a good one. A certain amount of skepticism is warranted, however, as to the specific figures.

    Sample set asymmetry

    A question often asked is whether companies have the data needed to enable machine learning, and whether the data is in a form suitable for such use. “People have more data than they think, but less than they hope,” said Kavanaugh. “While there are a lot of data stores that don’t lend themselves to machine learning, there are instances where great amounts of data simply aren’t needed. At other times, companies can build on the power of accumulated data. Industrial manufacturers do have deep troves of simple data which can be converted to use cases, where they can go deep.”

    Asked to compare the potential impact of today’s emerging technologies with those of the 1980s, when PLCs, DCSs, SCADA, CAD, and ERP were all introduced, Kavanaugh said, “The introduction of new technologies of the 1980s brought significant change, but it was basically the automation of rows and columns, applied to the plant floor and out in the field. Today, incorporating experience, a multi-attribute perspective of what actually happens, is a bigger part. We’re talking about things that are inherently cognitive, in other words fuzzy. While the earlier transformation was from full analog to computerized operations, the current one is more pervasive, more connected, more intelligent—and ultimately—more profound.”

  42. Tomi Engdahl says:

    The Linux Foundation announced the launch of the LF Deep Learning Foundation, “an umbrella organization focused on driving open source innovation in artificial intelligence, machine learning and deep learning”, with a goal of making those technologies available to data scientists and developers.

    https://www.linuxfoundation.org/projects/deep-learning/

    In addition, the Linux Foundation also debuted the Acumos AI project, an “open source framework that makes it easy to build, share, and deploy AI apps”.

    https://www.acumos.org/

