3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,200 Comments

  1. Tomi Engdahl says:

    Facebook’s new AI research is a real eye-opener
    https://techcrunch.com/2018/06/16/facebooks-new-ai-research-is-a-real-eye-opener/?utm_source=tcfbpage&sr_share=facebook

    There are plenty of ways to manipulate photos to make you look better, remove red eye or lens flare, and so on. But so far the blink has proven a tenacious opponent of good snapshots. That may change with research from Facebook that replaces closed eyes with open ones in a remarkably convincing manner.

    It’s far from the only example of intelligent “in-painting,” as the technique is called when a program fills in a space with what it thinks belongs there. Adobe in particular has made good use of it with its “context-aware fill,” allowing users to seamlessly replace undesired features

    Eye In-Painting with Exemplar Generative Adversarial Networks
    Computer Vision and Pattern Recognition (CVPR)
    https://research.fb.com/publications/eye-in-painting-with-exemplar-generative-adversarial-networks/

  2. Tomi Engdahl says:

    Pre-Collision Assist with Pedestrian Detection – TensorFlow
    https://www.hackster.io/asadzia/pre-collision-assist-with-pedestrian-detection-tensorflow-db91cb

    Real-time hazard classification and tracking with TensorFlow. Sensor fusion with radar to filter for false positives.

    Most ADAS combine a camera with RADAR or LIDAR. I, however, only had a camera to start with. It turned out I could do some basic tasks like lane detection and departure warning, but not much else, until the day the Walabot arrived. Cameras are great for classification and texture interpretation, but they struggle with 3D mapping and motion estimation. For automotive applications, the Walabot can be used as a short-range RADAR.
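
    The radar filtering step is easy to picture in code. Below is a hedged sketch of the fusion idea (my own illustration, not the project’s actual code): a TensorFlow camera detection is kept only if the radar also reports a target at a compatible bearing within a plausible range. All class names, fields, and thresholds here are assumptions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CameraDetection:
        label: str          # e.g. "pedestrian", from the TensorFlow classifier
        score: float        # classifier confidence, 0..1
        bearing_deg: float  # horizontal angle from the camera centerline

    @dataclass
    class RadarTarget:
        bearing_deg: float  # angle of the radar return
        range_m: float      # distance to the reflecting object

    def fuse(detections, targets, max_bearing_err=5.0, max_range=30.0, min_score=0.6):
        """Keep only camera detections corroborated by a radar target."""
        confirmed = []
        for det in detections:
            if det.score < min_score:
                continue  # weak classification: drop outright
            for tgt in targets:
                if (abs(det.bearing_deg - tgt.bearing_deg) <= max_bearing_err
                        and tgt.range_m <= max_range):
                    confirmed.append((det, tgt))
                    break  # one corroborating radar return is enough
        return confirmed

    # A detection with no radar return at its bearing is treated as a false positive.
    cams = [CameraDetection("pedestrian", 0.91, bearing_deg=2.0),
            CameraDetection("pedestrian", 0.87, bearing_deg=-20.0)]
    rads = [RadarTarget(bearing_deg=1.5, range_m=12.0)]
    print(fuse(cams, rads))  # only the corroborated detection survives
    ```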

  3. Tomi Engdahl says:

    AI could get 100 times more energy-efficient with IBM’s new artificial synapses
    https://www.technologyreview.com/s/611390/ai-could-get-100-times-more-energy-efficient-with-ibms-new-artificial-synapses/

    Copying the features of a neural network in silicon might make machine learning more usable on small devices like smartphones.

    Neural networks are the crown jewel of the AI boom. They gorge on data and do things like transcribe speech or describe images with near-perfect accuracy (see “10 breakthrough technologies 2013: Deep learning”).

    The catch is that neural nets, which are modeled loosely on the structure of the human brain, are typically constructed in software rather than hardware, and the software runs on conventional computer chips. That slows things down.

    IBM has now shown that building key features of a neural net directly in silicon can make it 100 times more efficient. Chips built this way might turbocharge machine learning in coming years.

    The IBM chip, like a neural net written in software, mimics the synapses that connect individual neurons in a brain.

    The IBM researchers demonstrate the microelectronic synapses in a research paper published in the journal Nature. Their approach takes inspiration from neuroscience by using two types of synapses: short-term ones for computation and long-term ones for memory. This method “addresses a few key issues,” most notably low accuracy, that have bedeviled previous efforts to build artificial neural networks in silicon.

    Equivalent-accuracy accelerated neural-network training using analogue memory
    https://www.nature.com/articles/s41586-018-0180-5

  4. Tomi Engdahl says:

    Winning the Cyber Arms Race with Machine Learning
    https://www.securityweek.com/winning-cyber-arms-race-machine-learning

    Despite advances in cybersecurity technology, the number of days to detect a breach has increased from an average of 201 days in 2016 to an average of 206 days just a year later, according to the 2017 Ponemon Cost of Data Breach Study. While organizations are getting increasingly better at discovering data breaches on their own, 53 percent of breaches were discovered by an external source in 2017, meaning organizations had no idea their data had been compromised.

    Part of the problem is that there is no easy way for many organizations to automatically correlate and analyze all of the data being collected by the various security solutions that have been deployed across the network. That problem is compounded by the fact that many of these tools operate in isolation.

    The result is that IT teams have to hand-correlate data collected from different sources, looking for a needle in the haystack. The opportunity for human error is high, and log files simply scroll by too quickly for anyone to be able to gather actionable information from them.

  5. Tomi Engdahl says:

    New York Times:
    IBM unveils IBM Debater, an AI-powered debating program six years in the making that can perform tightly structured debates with humans on around 100 topics

    IBM Unveils System That ‘Debates’ With Humans
    https://www.nytimes.com/2018/06/18/technology/ibm-debater-artificial-intelligence.html

    A match between an Israeli college debate champion and a loquacious IBM computer program demonstrated on Monday new gains in the quest for computers that can hold conversations with humans. It also led to an unlikely question for the tech industry’s deep thinkers: Can a machine talk too much?

  6. Tomi Engdahl says:

    AI Startup Wave Computing To Buy MIPS
    Wave to expand from AI training to AI inference
    https://www.eetimes.com/document.asp?doc_id=1333380

  7. Tomi Engdahl says:

    IBM Refines AI Efficiency in Visual Analysis
    IBM develops BlockDrop, Neuromorphic Stereo
    https://www.eetimes.com/document.asp?doc_id=1333393

    Despite a slew of artificial intelligence processors poised to reach the market — each boasting its own “breakthrough” — myriad problems continue to dog today’s AI community ranging from issues with the energy, speed, and size of AI hardware to AI algorithms that have yet to demonstrate improvements in robustness and performance.

    In computer vision, the biggest challenge is how to “make visual analysis more efficient,” Rogerio Feris, research manager for computer vision and multimedia at IBM Research, told EE Times.

  8. Tomi Engdahl says:

    AI Startup Seeks its Voice
    Syntiant to sample 20 TOPs/W chip this year
    https://www.eetimes.com/document.asp?doc_id=1333403

    Battery-powered devices will get a new option for hardware-accelerated speech interfaces next year if Kurt Busch makes his targets this year. The chief executive of Syntiant aims in 2018 to sample a novel machine learning chip and raise a Series B to make it in volume.

    The startup is designing a 20 tera-operations/watt chip using 4- to 8-bit precision to speed up AI operations initially for voice recognition. It uses an array of hundreds of thousands of NOR cells, computing TensorFlow neural network jobs in the analog domain.

    Syntiant will release a reference design pairing its sub-watt chip with an Infineon MEMS microphone. If it is successful, the two will collaborate on other designs. “We want to make it extremely easy to add voice control to any kind of device,” said Busch.
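
    For intuition about what computing at 4- to 8-bit precision means, here is a rough digital analogue (an illustration only; Syntiant’s chip performs the accumulation in analog NOR cells, not in numpy): quantize weights and activations to small signed integers, take the dot product in integer arithmetic, then rescale.

    ```python
    import numpy as np

    def quantize(x, bits=4):
        """Uniform symmetric quantization of a float vector to signed integers."""
        qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit
        scale = float(np.abs(x).max()) / qmax
        if scale == 0.0:
            scale = 1.0                       # avoid dividing by zero on all-zero input
        q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
        return q, scale

    rng = np.random.default_rng(0)
    weights = rng.standard_normal(256)
    inputs = rng.standard_normal(256)

    qw, sw = quantize(weights, bits=4)        # low-precision weights
    qi, si = quantize(inputs, bits=8)         # higher-precision activations

    approx = int(qw @ qi) * sw * si           # integer multiply-accumulate, rescaled
    exact = float(weights @ inputs)
    print(f"exact={exact:.3f} approx={approx:.3f}")
    ```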

  9. Tomi Engdahl says:

    An interview with the co-founder of Iris.ai – the world’s first Artificial Intelligence science assistant
    http://www.thesaint-online.com/2018/06/an-interview-with-the-developers-of-iris-ai-the-worlds-first-artificial-intelligence-science-assistant/

    Editor-in-chief Olivia Gavoyannis talks to the developers of Iris.ai about the software, and its implications for the future of research and development.

  10. Tomi Engdahl says:

    Deep Learning Helps Fight the Spread of Child Pornography
    https://blog.hackster.io/deep-learning-helps-fight-the-spread-of-child-pornography-aa30c387c98d

    Most social media websites and apps have some sort of automated system to identify and remove offensive, inappropriate, and illegal images. Instagram, for example, has 60 million new pictures uploaded every single day, and it would be impossible to have humans manually check each one.

    Haschek created the open source image hosting service PictShare, and was troubled when it was brought to his attention that someone had uploaded child pornography to the service.

    He instead contacted INTERPOL and was told to delete the offending image and report the IP address of the uploader.

    NSFW as a Service system

    The system uses a deep learning neural network that was trained on the Open NSFW model created by Yahoo to detect nudity. It runs on a Raspberry Pi in conjunction with an Intel Movidius Neural Compute Stick.
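
    In outline, such a check is one forward pass through a pretrained classifier. Here is a minimal sketch using Yahoo’s open-sourced Open NSFW Caffe model through OpenCV’s dnn module, which stands in for the Movidius pipeline; the file names, mean values, and threshold are assumptions, not taken from the project:

    ```python
    import cv2

    # Load the open-sourced Open NSFW Caffe model (file names assumed).
    net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "resnet_50_1by2_nsfw.caffemodel")

    img = cv2.imread("upload.jpg")
    # 224x224 input with per-channel mean subtraction, typical for Caffe ResNet-style nets
    blob = cv2.dnn.blobFromImage(img, scalefactor=1.0, size=(224, 224),
                                 mean=(104, 117, 123))
    net.setInput(blob)
    prob = net.forward()               # two class scores: [SFW, NSFW]
    nsfw_score = float(prob[0][1])
    print("NSFW probability:", nsfw_score)
    if nsfw_score > 0.8:               # arbitrary moderation threshold
        print("flag the upload for review")
    ```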

  11. Tomi Engdahl says:

    Clean Water AI
    https://www.hackster.io/Nyceane/clean-water-ai-e40806

    Using AI to detect dangerous bacteria and harmful particles in the water.

    Clean Water AI is an IoT device that classifies and detects dangerous bacteria and harmful particles. The system can run continuously in real time. Cities can install the devices across different water sources and monitor water quality, as well as contamination, continuously.

    We are going to focus specifically on computer vision and image classification in this sample. To do this, we will build a nevus, melanoma, and seborrheic keratosis image classifier using a deep learning algorithm, the Convolutional Neural Network (CNN), through the Caffe framework.

    In this article we will focus on supervised learning, which requires training on the server as well as deployment on the edge.

  12. Tomi Engdahl says:

    Build Your Own Google Neural Synthesizer
    https://spectrum.ieee.org/geek-life/hands-on/build-your-own-google-neural-synthesizer

    I stumbled on Google’s new neural music-synthesis project, NSynth (Neural Synthesizer). This, I thought, might be just the ticket to get my music-giddy son hooked on the amazing things possible with machine learning.

    NSynth uses a deep neural network to distill musical notes from various instruments down to their essentials. Google’s developers first created a digital archive of some 300,000 notes, including up to 88 examples from about 1,000 different instruments, all sampled at 16 kilohertz. They then input those data into a deep-learning model that can represent all those wildly different sounds far more compactly using what they call “embeddings.”

    NSynth: Neural Audio Synthesis
    https://magenta.tensorflow.org/nsynth

    NSynth (Neural Synthesizer), a novel approach to music synthesis designed to aid the creative process.

    Unlike a traditional synthesizer which generates audio from hand-designed components like oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples.

  13. Tomi Engdahl says:

    First-Ever Public Debate Between Robot And Human Ends In A Draw
    http://www.iflscience.com/technology/firstever-public-debate-between-robot-and-human-ends-in-a-draw/

    Robots are already besting us in a bunch of things, from board games to building cars. At least we still have the power of speech though, right?

    Uh, possibly not anymore. An artificial intelligence (AI) called Project Debater has just held its own against human counterparts in a debate.

  14. Tomi Engdahl says:

    Weeknote #568: Making AI do your work for you
    https://blog.nordkapp.fi/weeknote-568-making-ai-do-your-work-for-you-381d32bcf02c

    First, I had a question: “Can I use a Machine Learning (ML) algorithm to write a good weeknote for me?”

    I picked up some Markov chain libraries and some recurrent neural network (RNN) libraries and started experimenting.

    These tools are based on the same principle: you have a body of material that you use to teach your algorithm, and it will then learn to produce new material that is based on your original. The tools go over the material time after time, until they can create new material. To put it simply, if you tell your newly taught AI system that you want a sentence that begins with “When you want to make ice cream, you need…”, the system can predict that the next word is “cream” and not “pencils”. Of course, it all depends on the material you use for teaching.
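
    A word-level Markov chain captures this next-word idea in a few lines. The following is a minimal sketch of the principle, not one of the actual libraries used:

    ```python
    import random
    from collections import defaultdict

    def train(text, order=2):
        """Learn next-word choices from a body of material."""
        words = text.split()
        model = defaultdict(list)
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
        return model

    def generate(model, prompt, length=10, order=2):
        """Extend a prompt word by word using the learned choices."""
        out = prompt.split()
        for _ in range(length):
            choices = model.get(tuple(out[-order:]))
            if not choices:
                break  # the prompt wandered off the training material
            out.append(random.choice(choices))
        return " ".join(out)

    corpus = ("when you want to make ice cream you need cream and sugar "
              "when you want to make ice cream you need patience")
    model = train(corpus)
    print(generate(model, "you need"))  # e.g. "you need cream and sugar ..."
    ```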

  15. Tomi Engdahl says:

    The future of AI relies on a code of ethics
    https://techcrunch.com/2018/06/21/the-future-of-ai-relies-on-a-code-of-ethics/?sr_share=facebook&utm_source=tcfbpage

    Facebook has recently come under intense scrutiny for sharing the data of millions of users without their knowledge. We’ve also learned that Facebook is using AI to predict users’ future behavior and selling that data to advertisers.

  16. Tomi Engdahl says:

    Transforming Standard Video Into Slow Motion with AI
    June 18, 2018
    https://news.developer.nvidia.com/transforming-standard-video-into-slow-motion-with-ai/?ncid=–43539

    Researchers from NVIDIA developed a deep learning-based system that can produce high-quality slow-motion videos from a 30-frame-per-second video, outperforming various state-of-the-art methods that aim to do the same. The researchers will present their work at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah this week.

  17. Tomi Engdahl says:

    5 trending open source machine learning JavaScript frameworks
    https://opensource.com/article/18/5/machine-learning-javascript-frameworks?sc_cid=7016000000127ECAAY

    Whether you’re a JavaScript developer who wants to dive into machine learning or a machine learning expert who plans to use JavaScript, these open source frameworks may intrigue you.

  18. Tomi Engdahl says:

    ReRAM enhances edge AI
    https://www.edn.com/design/integrated-circuit-design/4460755/ReRAM-enhances-edge-AI?utm_source=Aspencore&utm_medium=EDN&utm_campaign=social

    Today, most training occurs in data centers, with some on the edge. Large companies like Google, Facebook, Amazon, Apple, and Microsoft have massive amounts of consumer data they can feed their server farms to perform industrial-scale training for AI and improve their algorithms. The training phase requires very fast processors, such as GPUs or Google Tensor Processing Units.

    Inference occurs when data is collected by an edge device – a photo of a building or a face, for example – then sent to an inference engine for classification. Cloud-based AI, with its inherent delay, would be unacceptable for many applications. A self-driving car that needs to make real-time decisions about objects it sees is not feasible with a cloud-based AI architecture.

    As AI capabilities move to the edge, they will drive more AI applications, and increasingly these applications will require ever more powerful analysis and intelligence to allow systems to make operational decisions locally, whether partly, or fully autonomously, such as in self-driving cars.

  19. Tomi Engdahl says:

    Benedict Evans:
    Machine learning may become a ubiquitous and fundamental enabling layer, similar to how relational databases impacted society in the eighties

    Ways to think about machine learning
    https://www.ben-evans.com/benedictevans/2018/06/22/ways-to-think-about-machine-learning-8nefy

    We’re now four or five years into the current explosion of machine learning, and pretty much everyone has heard of it. It’s not just that startups are forming every day or that the big tech platform companies are rebuilding themselves around it – everyone outside tech has read the Economist or BusinessWeek cover story, and many big companies have some projects underway. We know this is a Next Big Thing.

    Going a step further, we mostly understand what neural networks might be, in theory, and we get that this might be about patterns and data. Machine learning lets us find patterns or structures in data that are implicit and probabilistic (hence ‘inferred’) rather than explicit, that previously only people and not computers could find.

  20. Tomi Engdahl says:

    Indeed, I think one could propose a whole list of unhelpful ways of talking about current developments in machine learning. For example:

    Data is the new oil
    Google and China (or Facebook, or Amazon, or BAT) have all the data
    AI will take all the jobs
    And, of course, saying AI itself.

    More useful things to talk about, perhaps, might be:

    Automation
    Enabling technology layers
    Relational databases.

    Source: https://www.ben-evans.com/benedictevans/2018/06/22/ways-to-think-about-machine-learning-8nefy

  21. Tomi Engdahl says:

    Darrell M. West / Brookings:
    Survey of 2,021 US adult internet users: 61% are somewhat or very uncomfortable with robots; 52% say robots will perform most human activities within 30 years — Fifty-two percent of adult internet users believe within 30 years, robots will have advanced to the point where they can perform …

    Brookings survey finds 52 percent believe robots will perform most human activities in 30 years
    https://www.brookings.edu/blog/techtank/2018/06/21/brookings-survey-finds-52-percent-believe-robots-will-perform-most-human-activities-in-30-years/

  22. Tomi Engdahl says:

    Kevin Roose / New York Times:
    Facebook’s blend of AI and humans to screen ads continues to snare non-political ads and legitimate news, angering some publishers as the process gets refined

    A Day Care and a Dog Rescue Benefit: On Facebook, They Were Political Ads
    https://www.nytimes.com/2018/06/21/business/facebook-political-ads.html

    What do a day care center, a vegetarian restaurant, a hair salon, an outdoor clothing maker and an investigative news publisher have in common?

    To Facebook, they looked suspiciously like political activists.

    Facing a torrent of criticism over its failure to prevent foreign interference during the 2016 election, the giant social network recently adopted new rules to make its advertising service harder to exploit.

    Under the new rules, advertisers who want to buy political ads in the United States must first prove that they live in the country, and mark their ads with a “paid for by” disclaimer. Any ad Facebook deems to contain political content is stored in a searchable public database.

  23. Tomi Engdahl says:

    Julie Creswell / New York Times:
    Amid growing outcry from civil liberties groups, Orlando has ended its pilot project for police to use Amazon’s Rekognition facial recognition software for now — Amid a growing outcry about privacy concerns by civil liberties groups, officials in Orlando, Fla., said Monday …

    Orlando Pulls the Plug on Its Amazon Facial Recognition Program
    https://www.nytimes.com/2018/06/25/business/orlando-amazon-facial-recognition.html

  24. Tomi Engdahl says:

    Ava Kofman / The Intercept:
    Interpol rolls out new international voice ID database, four years in the making, with 192 law enforcement agencies participating in audio clip sharing

    Interpol Rolls Out International Voice Identification Database Using Samples From 192 Law Enforcement Agencies
    https://theintercept.com/2018/06/25/interpol-voice-identification-database/

    Last week, Interpol held a final project review of its speaker identification system, a four-year, 10 million euro project that has recently come to completion. The Speaker Identification Integrated Project, which they call SiiP, marks a major development in the international expansion of voice biometrics for law enforcement uses — and raises red flags when it comes to privacy.

    Speaker identification works by taking samples of a known voice, capturing its unique and behavioral features, and then turning these features into an algorithmic template that’s known as a voice print or voice model.

    SiiP will join Interpol’s existing fingerprint and face databases, and its key advantage will be to facilitate a quick identification process — say, of a kidnapper making a phone call — even in the absence of other identifiers.

    SiiP’s database will include samples from YouTube, Facebook, publicly recorded conversations, and other sources where individuals might not realize that their voices are being turned into biometric voice prints.
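
    Stripped to its essentials, a voice print is a fixed-length numeric template computed from audio. Here is a minimal sketch using MFCC features and cosine similarity (real systems like SiiP use far more sophisticated models; the file names are placeholders):

    ```python
    import numpy as np
    import librosa

    def voice_print(wav_path):
        """Crude voice print: average spectral (MFCC) features over the recording."""
        y, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # shape (20, frames)
        return mfcc.mean(axis=1)                            # fixed-length template

    def similarity(a, b):
        """Cosine similarity between two voice prints, 1.0 = identical direction."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    known = voice_print("suspect_sample.wav")        # placeholder file names
    candidate = voice_print("intercepted_call.wav")
    print("match score:", similarity(known, candidate))
    ```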

  25. Tomi Engdahl says:

    Apply “Ready-to-Use” Machine Learning to Improve Industrial Operations
    http://www.electronicdesign.com/industrial-automation/apply-ready-use-machine-learning-improve-industrial-operations?code=UM_NN8DS_005&utm_rid=CPG05000002750211&utm_campaign=18025&utm_medium=email&elq2=89094fcd668145a793c0a69037bb46a0

    Industrial practitioners can harness underutilized time-series data using machine learning to provide actionable insights that reduce downtime and improve throughput, operator safety, and product quality.

  26. Tomi Engdahl says:

    Advancement in Neuromorphic computing with a nanoscale device that acts like a neuron
    https://www.electropages.com/2018/06/advancement-in-neuromorphic-computing/?utm_campaign=&utm_source=newsletter&utm_medium=email&utm_term=article&utm_content=Advancement+in+Neuromorphic+computing+with+nanoscale+devices+that+acts+like+

    Neuromorphic computing, in which electronic devices imitate the processes of the human brain for faster, more energy-efficient computation, could have come one step closer with the development of a nanoscale device that acts like a neuron.

    Neurons in the brain are the fundamental unit that makes it work, sending signals among the 100 billion neurons that power a person’s intellect. The neuron can have different electrical states, which enable it to send signals. For this reason, a memristor has been compared to a neuron, but a memristor does not transmit anything. The key advance in the nanoscale neuromorphic device is its transmission of a soliton wave. This is a magnetic wave that only travels through magnetic material. While the general idea of electromagnetism is that it propagates everywhere, for example, the way a magnet attracts iron filings, a soliton does not travel beyond the magnetic material.

    “Neurons can have a different level of charge…and they communicate in some way by axons.”

    Soliton waves would transmit the signal through a circuit of magnetic material in a far more energy efficient way than today’s current carrying circuits.

  27. Tomi Engdahl says:

    Deep learning-enabled video camera launched by Amazon
    https://www.vision-systems.com/articles/2018/06/deep-learning-enabled-video-camera-launched-by-amazon.html?cmpid=enl_vsd_vsd_newsletter_2018-06-25&pwhid=6b9badc08db25d04d04ee00b499089ffc280910702f8ef99951bdbdad3175f54dcae8b7ad9fa2c1f5697ffa19d05535df56b8dc1e6f75b7b6f6f8c7461ce0b24&eid=289644432&bid=2151852

    First announced by Amazon at the re:Invent conference in November, the Amazon Web Services (AWS) Deep Lens video camera—which is designed to put deep learning technology in the hands of developers—is now shipping to customers.

    DeepLens runs deep learning models directly on the device and is designed to provide developers with hands-on artificial intelligence technology. The device features a 4 MPixel camera that captures 1080p video along with an Intel Atom processor that provides more than 100 GFLOPS of compute power, which AWS says is enough to run tens of frames of incoming video through on-board deep learning models every second.

    AWS DeepLens runs the Ubuntu 16.04 OS and is preloaded with AWS Greengrass Core, as well as a device-optimized version of MXNet, with the flexibility to use other frameworks such as TensorFlow and Caffe. Additionally, the Intel clDNN library provides a set of deep learning primitives for computer vision and other AI workloads, according to Amazon.

  28. Tomi Engdahl says:

    ReRAM enhances edge AI
    https://www.edn.com/design/integrated-circuit-design/4460755/ReRAM-enhances-edge-AI

    Today, most training occurs in data centers, with some on the edge. Large companies like Google, Facebook, Amazon, Apple, and Microsoft have massive amounts of consumer data they can feed their server farms to perform industrial-scale training for AI and improve their algorithms. The training phase requires very fast processors, such as GPUs or Google Tensor Processing Units.

    Removing the memory bottleneck

    All AI processors rely upon data sets, which represent models of the “learned” object classes (images, voices, etc.), to perform their recognition feats. Each object recognition and classification requires multiple memory accesses. The biggest challenge facing engineers today is overcoming memory speed and power bottlenecks in current architectures to get faster data access, while lowering the energy cost of that access.

    The greatest speed and energy efficiency can be gained by placing training data as close as possible to the AI processor core. But the storage architecture employed by today’s designs, created several years ago when there were no other practical solutions, is still the traditional combination of fast but small embedded SRAM with slower but large external DRAM. When trained models are stored this way, the frequent and massive movements of data between embedded SRAM, external DRAM, and the neural network increase energy consumption and add latency. Further, SRAM and DRAM are volatile memories, limiting the ability to achieve power savings during sleep periods.

    Much greater energy efficiencies and speeds can be achieved by storing the entire trained model directly on the AI processor die with low-power, non-volatile memory that is dense and fast. By enabling a new memory-centric architecture, the entire trained model or knowledge base could then be on-chip, connected directly to the neural network, with the potential for massive energy savings and performance improvements, resulting in greatly improved battery life and a better user experience. Today, several next-generation memory technologies are competing to accomplish this.

    ReRAM’s potential

    The ideal non-volatile embedded memory for AI applications would be very simple to manufacture, easy to integrate in the back-end-of-line of well-understood CMOS processes, easily scaled to advanced nodes, available in high volume, and able to deliver on the energy and speed requirements for these applications.

    Resistive RAM (ReRAM) has a much greater ability to scale than magnetic RAM (MRAM) or phase-change memory (PCM) alternatives, an important consideration when looking at 14, 12, and even 7 nm process nodes.

  29. Tomi Engdahl says:

    Apply Deep Learning to Building-Automation IoT Sensors
    http://www.electronicdesign.com/embedded/apply-deep-learning-building-automation-iot-sensors?code=UM_NN8DS_004&utm_rid=CPG05000002750211&utm_campaign=18024&utm_medium=email&elq2=1b37cf0cfbb6490194676b4ba7a783c2

    Real-time systems like smart sensors in commercial buildings are taking advantage of the richer computation level of deep-learning-based technology.

    In building automation, sensors such as motion detectors, photocells, temperature, and CO2 and smoke detectors are used primarily for energy savings and safety. Next-generation buildings, however, are intended to be significantly more intelligent, with the capability to analyze space utilization, monitor occupants’ comfort, and generate business intelligence.

    To support such robust features, building-automation infrastructure requires considerably richer information that details what’s happening across the building space. Since current sensing solutions are limited in their ability to address this need, a new generation of smart sensors is required to enhance the accuracy, reliability, flexibility, and granularity of the data they provide.

    Data Analytics at the Sensor Node
    In the new era of the Internet of Things (IoT), there arises the opportunity to introduce a new approach to building automation that decentralizes the architecture and pushes the analytics processing to the edge (the sensor unit) instead of the cloud or a central server. Commonly referred to as edge computing, or fog computing, this approach provides real-time intelligence and enhanced control agility while simultaneously offloading the heavy communications traffic.

    Rule-Based or Data-Driven?
    The challenges associated with rich data analysis can be addressed in different ways. Conventional rule-based systems are supposedly easier to analyze. However, this advantage is negated as the system evolves, with patches of rules being stacked upon each other to account for the proliferation of new rule exceptions, thus resulting in a hard-to-decipher tangle of coded rules.

    As the hard work of rule creation and modification is managed by human programmers, rule-based systems suffer from compromised performance.

    Once the features have been defined, the rules and/or formulas that use these features are learned automatically by the algorithm.

    When the rules are implemented within the sensor, it runs a two-stage, repeating process. In stage one, the human-defined features are extracted from the sensor data. In stage two, the learned rules are applied to perform the task at hand.

    Within the machine-learning domain, “deep learning” is emerging as a superior new approach that even alleviates engineers from the task of defining features. With deep learning, based on the numerous labeled samples, the algorithm determines for itself an end-to-end computation that extends from the raw sensor data all the way to the final output. The algorithm must discern the correct features and how best to compute them.
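
    Here is a minimal sketch of the two-stage approach with scikit-learn, where stage one extracts human-defined features and stage two applies rules learned from labeled examples; the occupancy task, features, and data are invented for illustration:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def extract_features(window):
        """Stage one: human-defined features from a raw sensor window."""
        return [window.mean(), window.std(), window.max() - window.min()]

    rng = np.random.default_rng(1)
    # Fake labeled data: 200 sensor windows, some with an "occupant" offset added
    occupied = rng.integers(0, 2, 200)
    windows = rng.standard_normal((200, 50)) + occupied[:, None]

    # The rules are learned automatically from the labeled examples
    X = np.array([extract_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, occupied)

    # Stage two, running on the sensor: featurize a new window, apply learned rules
    new_window = rng.standard_normal(50) + 1.0
    print("occupied:", bool(clf.predict([extract_features(new_window)])[0]))
    ```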

  30. Tomi Engdahl says:

    Richard Nieva / CNET:
    Google opens its human-sounding Duplex AI to a small group of “trusted testers” and businesses that have opted in to receiving calls from Duplex — Google is moving ahead with Duplex, the stunningly human-sounding artificial intelligence software behind its new automated system …

    Google opens its human-sounding Duplex AI to public testing
    https://www.cnet.com/news/google-opens-its-human-sounding-duplex-ai-to-public-testing/?ftag=COS-05-10aaa0b&linkId=53565993

    The search giant gives us a closer look at its controversial artificial intelligence software while it works to tamp down fears about the technology.

    Google is moving ahead with Duplex, the stunningly human-sounding artificial intelligence software behind its new automated system that places phone calls on your behalf with a natural-sounding voice instead of a robotic one.

    The search giant said Wednesday it’s beginning public testing of the software, which debuted in May and which is designed to make calls to businesses and book appointments. Duplex instantly raised questions over the ethics and privacy implications of using an AI assistant to hold lifelike conversations for you.

    Google says its plan is to start its public trial with a small group of “trusted testers” and businesses that have opted into receiving calls from Duplex. Over the “coming weeks,” the software will only call businesses to confirm business and holiday hours.

    Unlike the semi-robotic voice assistants we hear today — think Amazon’s Alexa, Apple’s Siri or the Google Assistant coming out of a Google Home smart speaker — Duplex sounds jaw-droppingly lifelike. It mimics human speech patterns, using verbal tics like “uh” and “um.” It pauses, elongates words and intones its phrases just like you or I would.

    Setting a standard

    How Google handles the release of Duplex is important because that will set the tone for how the rest of the industry treats commercial AI technology at a mass scale. Alphabet, Google’s parent, is one of the most influential companies in the world, and the policies it carves out now will not only set a precedent for other developers, but also set expectations for users.

    Duplex is the stuff of sci-fi lore, and now Google wants to make it part of our everyday life.

    “Hi, I’m the Google Assistant, calling to make a reservation for a client. This automated call will be recorded.”

  31. Tomi Engdahl says:

    A closer look at Google Duplex
    https://techcrunch.com/2018/06/27/a-closer-look-at-google-duplex/?sr_share=facebook&utm_source=tcfbpage

    Google’s appointment booking AI wowed the crowd and raised concern at I/O

  32. Tomi Engdahl says:

    Facebook partners on open source AI development tools ONNX and PyTorch 1.0
    https://opensource.com/article/18/6/open-source-tools-accelerate-ai-development?sc_cid=7016000000127ECAAY

    Learn about these open source tools to accelerate artificial intelligence development and interoperability.

  33. Tomi Engdahl says:

    Rough terrain? No problem for beaver-inspired autonomous robot
    http://www.buffalo.edu/news/releases/2018/06/017.html

  34. Tomi Engdahl says:

    Machine Learning on a mini-PCIe Board
    https://blog.hackster.io/machine-learning-on-a-mini-pcie-board-fd918584688b

    Initially controversial, the idea that deep learning is eating software is now accepted wisdom. Not now, not next year, but very soon, the way we approach software development will be fundamentally different.

  35. Tomi Engdahl says:

    Weeknote #568: Making AI do your work for you
    https://blog.nordkapp.fi/weeknote-568-making-ai-do-your-work-for-you-381d32bcf02c

    Conclusions
    Well, as a first experiment it was fun, and I did learn a lot, but I would not yet replace any of us with an AI. Maybe it would do better with some more training; I trained the models only for a few hours each. The Markov tool went through the material only a million times, and the RNN was trained for eight hours, each round of training taking around five minutes. The RNN model in particular would probably be a lot better if I let it run for a couple of days.

  36. Tomi Engdahl says:

    3 cool machine learning projects using TensorFlow and the Raspberry Pi
    https://opensource.com/article/17/2/machine-learning-projects-tensorflow-raspberry-pi?sc_cid=7016000000127ECAAY

    TensorFlow and the Raspberry Pi are working together in the city and on the farm. Learn about three recent, innovative projects.

  37. Tomi Engdahl says:

    Layoffs at Watson Health Reveal IBM’s Problem with AI
    https://spectrum.ieee.org/the-human-os/robotics/artificial-intelligence/layoffs-at-watson-health-reveal-ibms-problem-with-ai

    IBM, a venerable tech company on a mission to stay relevant, has staked much of its future on IBM Watson. The company has touted Watson, its flagship artificial intelligence, as the premier product for turning our data-rich but disorganized world into a smart and tidy planet.

    Just last month, IBM CEO Ginni Rometty told a convention audience that we’re at an inflection point in history. Putting AI into everything will enable businesses to improve on “an exponential curve,” she said—a phenomenon that might one day be referred to as “Watson’s Law.”

    But according to engineers swept up in a major round of layoffs within IBM’s Watson division last month

  38. Tomi Engdahl says:

    Facebook is using machine learning to self-tune its myriad of services
    https://techcrunch.com/2018/06/28/facebook-is-using-machine-learning-to-self-tune-its-myriad-of-services/?sr_share=facebook&utm_source=tcfbpage

    Regardless of what you may think of Facebook as a platform, they run a massive operation and when you reach their level of scale you have to get more creative in how you handle every aspect of your computing environment.

  39. Tomi Engdahl says:

    Disney Imagineering has created autonomous robot stunt doubles
    The robot acrobats can flip through the air and stick a landing every time
    https://techcrunch.com/2018/06/28/disney-imagineering-has-created-autonomous-robot-stunt-doubles/

    Honda Halts Asimo Development in Favor of More Useful Humanoid Robots
    https://spectrum.ieee.org/automaton/robotics/humanoids/honda-halts-asimo-development-in-favor-of-more-useful-humanoid-robots

  40. Tomi Engdahl says:

    Binary Neural Network Demonstration on Ultra96
    https://www.hackster.io/karl-nl/binary-neural-network-demonstration-on-ultra96-6b48e0

    FPGA-based Binary Neural Network acceleration used for Image Classification on the Avnet Ultra96 based on the Xilinx Zynq UltraScale+ MPSoC.
