3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,317 Comments

  1. Tomi Engdahl says:

    When doing machine learning in production, the choice of the model is just one of the many important criteria. Equally important: the definition of the problem, gathering high-quality data and the architecture of the machine learning pipeline.

    Learn how to architect a machine learning pipeline for multiclass text classification in 5 steps:

    1. Preprocess: preprocess the raw data to be used by fastText.
    2. Split: split the preprocessed data into train, validation and test data.
    3. Autotune: find the best parameters on the validation data.
    4. Train: train the final model with the best parameters on all the data.
    5. Test: get metrics and predictions on test data.

    You can now run the pipeline on the Valohai cloud with a few clicks.

    https://hubs.ly/H0mPH1W0

    https://blog.valohai.com/production-machine-learning-pipeline-text-classification-fasttext?utm_content=114495547&utm_medium=social&utm_source=facebook&hss_channel=fbp-1794976890716394&hsa_acc=70196702&hsa_cam=6149313708189&hsa_grp=6157378856989&hsa_ad=6157378879389&hsa_src=fb&hsa_net=facebook&hsa_ver=3
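
    As a rough sketch of steps 3–5 with the fastText Python package (file names and the autotune time budget below are placeholders; the Valohai post wires the same calls into a cloud pipeline):

    import fasttext  # pip install fasttext

    # 3. Autotune: search hyperparameters against the validation split.
    model = fasttext.train_supervised(
        input="train.txt",
        autotuneValidationFile="valid.txt",
        autotuneDuration=600,  # seconds spent searching
    )

    # 4. Train: in the full pipeline you would retrain on all the data with the
    #    best parameters found above; the autotuned model stands in here.

    # 5. Test: corpus-level precision/recall plus a sample prediction.
    n, precision, recall = model.test("test.txt")
    print(f"samples={n} P@1={precision:.3f} R@1={recall:.3f}")
    print(model.predict("raw text of a document to classify", k=3))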

    Reply
  2. Tomi Engdahl says:

    Intel Officially Axes Nervana
    https://www.eetimes.com/intel-officially-axes-nervana/

    Intel’s AI ASIC strategy will be based on Habana chips from now on

    In a move widely speculated to have been looming, Intel has axed Nervana’s NNP-T and NNP-I training and inference chips for the data center in favor of Gaudi and Goya chips from recent acquisition Habana Labs.

    A statement emailed to EETimes said that Intel will cease development on Nervana’s NNP-T AI training chip (Spring Crest) for the data center, while merely honoring existing customer commitments to the NNP-I inference chip (Spring Hill), following “customer feedback”.

    “After acquiring Habana Labs in December and with input from our customers, we are making strategic updates to the data center AI acceleration roadmap. We will leverage our combined AI talent and technology to build leadership AI products,” Intel’s statement said.

    Reply
  3. Tomi Engdahl says:

    Someone Used Neural Networks To Upscale An 1895 Film To 4K 60 FPS, And The Result Is Really Quite Astounding
    Digg, Feb 4, 2020 @ 09:33 AM
    https://www.digg.com/2020/arrival-train-la-ciotat-upscaled

    The Lumière Brothers’ 1895 short “Arrival of a Train at La Ciotat” is one of the most famous film clips in history — you’ve almost certainly seen the 50-second movie at some point in your life.

    YouTuber Denis Shiryaev wanted to update the look of the clip, so — with the help of several neural networks — he upscaled the clip to 4K resolution and 60 FPS.
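
    The single-frame super-resolution half of such a pipeline can be sketched with OpenCV's dnn_superres module and a pretrained EDSR model. These are not the exact tools Shiryaev used, and his workflow also adds frame interpolation to reach 60 FPS; this is just the general idea:

    import cv2  # needs opencv-contrib-python and a downloaded EDSR_x4.pb model file

    sr = cv2.dnn_superres.DnnSuperResImpl_create()
    sr.readModel("EDSR_x4.pb")   # pretrained 4x super-resolution weights
    sr.setModel("edsr", 4)

    frame = cv2.imread("la_ciotat_frame.png")    # one frame extracted from the film
    upscaled = sr.upsample(frame)                # same frame at 4x resolution
    cv2.imwrite("la_ciotat_frame_4x.png", upscaled)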

    Reply
  4. Tomi Engdahl says:

    Just imagine that we can now do this in real time.

    Combine this with the AI stuff (like NVIDIA is doing) and you’re there.

    https://www.skyandtelescope.com/astronomy-resources/how-to-process-planetary-images/

    Reply
  5. Tomi Engdahl says:

    An algorithm that can spot cause and effect could supercharge medical AI
    https://www.technologyreview.com/s/615141/an-algorithm-that-can-spot-cause-and-effect-could-supercharge-medical-ai/

    The technique, inspired by quantum cryptography, would allow large medical databases to be tapped for causal links

    Reply
  6. Tomi Engdahl says:

    Smart Black-Box Neural Networks Recreate Classic Guitar Amp Sounds in Real-Time
    https://www.hackster.io/news/smart-black-box-neural-networks-recreate-classic-guitar-amp-sounds-in-real-time-0c5d2156607f

    Using black box modeling, researchers have been able to create convincing simulations of classic tube amps — running in real time.

    Researchers at Aalto University and Neural DSP Technologies claim to have created a neural network capable of emulating any guitar amplifier with enough accuracy to be indistinguishable from the real deal in blind listening tests.

    “Deep neural networks for guitar distortion modeling has been tested before,” explains Professor Vesa Välimäki of the work, “but this is the first time where blind-test listeners couldn’t tell the difference between a recording and a fake distorted guitar sound! This is akin to when the computer first learned to play chess.”

    Previous best efforts in virtual analog modeling have relied upon traditional circuit modeling techniques, a labor-intensive process.
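
    The general black-box recipe can be sketched in a few lines of PyTorch: record paired clean/amplified audio, then fit a model that maps one waveform to the other. The tiny dilated-convolution stack below is only a stand-in for the published architecture, and the training data here is synthetic:

    import torch
    import torch.nn as nn

    class TinyAmpModel(nn.Module):
        def __init__(self, channels=16, layers=6):
            super().__init__()
            blocks, in_ch = [], 1
            for i in range(layers):
                blocks += [nn.Conv1d(in_ch, channels, kernel_size=3,
                                     dilation=2 ** i, padding=2 ** i),
                           nn.Tanh()]
                in_ch = channels
            blocks.append(nn.Conv1d(channels, 1, kernel_size=1))
            self.net = nn.Sequential(*blocks)

        def forward(self, x):              # x: (batch, 1, samples)
            return self.net(x)

    model = TinyAmpModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    clean = torch.randn(8, 1, 4096)        # stand-in for DI guitar recordings
    amped = torch.tanh(3.0 * clean)        # stand-in for the recorded amp output

    for step in range(200):                # real training uses hours of paired audio
        loss = nn.functional.mse_loss(model(clean), amped)
        opt.zero_grad()
        loss.backward()
        opt.step()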

    Reply
  7. Tomi Engdahl says:

    Reuters built a prototype for automated news videos using Deepfakes tech

    https://thenextweb.com/neural/2020/02/07/reuters-built-a-prototype-for-automated-news-videos-using-deepfakes-tech/

    Coming to you live from the inside of an artificial neural network

    Reply
  8. Tomi Engdahl says:

    Automated system can rewrite outdated sentences in Wikipedia articles
    http://news.mit.edu/2020/automated-rewrite-wikipedia-articles-0212

    Text-generating tool pinpoints and replaces specific information in sentences while retaining humanlike grammar and style.

    Reply
  9. Tomi Engdahl says:

    We’re still in the very early days of AI, but it’s not too early to start thinking about AI’s environmental impact.

    AI in the 2020s Must Get Greener—and Here’s How
    https://spectrum.ieee.org/energywise/artificial-intelligence/machine-learning/energy-efficient-green-ai-strategies

    The environmental impact of artificial intelligence (AI) has been a hot topic as of late—and I believe it will be a defining issue for AI this decade. The conversation began with a recent study from the Allen Institute for AI that argued for the prioritization of “Green AI” efforts that focus on the energy efficiency of AI systems.

    This study was motivated by the observation that many high-profile advances in AI have staggering carbon footprints. A 2018 blog post from OpenAI revealed that the amount of compute required for the largest AI training runs has increased by 300,000 times since 2012. And while that post didn’t calculate the carbon emissions of such training runs, others have done so. According to a paper by Emma Strubell and colleagues, an average American is responsible for about 36,000 pounds of CO2 emissions per year; training and developing one machine translation model that uses a technique called neural architecture search was responsible for an estimated 626,000 pounds of CO2.

    Red AI Isn’t All Bad
    Many of today’s Red AI projects are pushing science forward in natural language processing, computer vision, and other important areas of AI. While their carbon costs may be significant today, the potential for positive societal impact is also significant.

    As an analogy, consider the Human Genome Project (HGP)

    It’s critical to measure both the input and the output of Red AI projects. Many of the artifacts produced by Red AI experiments (for example, image representations for object recognition, or word embeddings in natural language processing) are enabling rapid advances in a wide range of applications.

    Reply
  10. Tomi Engdahl says:

    From models of galaxies to atoms, simple AI shortcuts speed up simulations by billions of times
    https://www.sciencemag.org/news/2020/02/models-galaxies-atoms-simple-ai-shortcuts-speed-simulations-billions-times

    Modeling immensely complex natural phenomena such as how subatomic particles interact or how atmospheric haze affects climate can take many hours on even the fastest supercomputers. Emulators, algorithms that quickly approximate these detailed simulations, offer a shortcut. Now, work posted online shows how artificial intelligence (AI) can easily produce accurate emulators that can accelerate simulations across all of science by billions of times.

    “This is a big deal,”
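
    The emulator recipe itself is simple enough to sketch: run the expensive simulation a limited number of times, then fit a cheap model to the resulting input/output pairs. The toy "simulator" below is a placeholder for a code that might take hours per run:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_simulation(params):
        # Stand-in for a physics code; imagine hours of supercomputer time here.
        x, y = params
        return np.sin(3 * x) * np.exp(-y ** 2) + 0.1 * x * y

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 2))            # sampled input parameters
    Y = np.array([expensive_simulation(p) for p in X])

    # The emulator: a small neural network fit to the simulator's behavior.
    emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, Y)

    # It now answers in microseconds instead of re-running the full simulation.
    print(emulator.predict([[0.3, -0.5]]), expensive_simulation((0.3, -0.5)))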

    Reply
  11. Tomi Engdahl says:

    Tegwyn Twmffat Turns to Deep Learning for a Smart Species-Identifying Bat Detector Build
    https://www.hackster.io/news/tegwyn-twmffat-turns-to-deep-learning-for-a-smart-species-identifying-bat-detector-build-8dac0273ee3f

    Using machine learning to recognize each species, the detector can note bat types and upload data to the cloud over LoRa
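
    A plausible front end for such a species classifier (an assumption about the build, not its documented internals) is to turn each recorded call into a log-mel spectrogram and feed that to a small CNN:

    import librosa

    # Bat calls are ultrasonic, so this assumes a high-sample-rate or
    # time-expanded recording; the file name and classifier are hypothetical.
    def call_to_features(wav_path, sr=96000):
        audio, sr = librosa.load(wav_path, sr=sr)
        mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64, fmax=sr // 2)
        return librosa.power_to_db(mel)   # 2D array ready for a CNN classifier

    # features = call_to_features("pipistrelle_call.wav")
    # species = cnn_model.predict(features[None, ..., None])   # hypothetical trained CNN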

    Reply
  12. Tomi Engdahl says:

    Removing people from complex backgrounds in real time with TensorFlow.js, running in the web browser in JavaScript.

    https://github.com/jasonmayes/Real-Time-Person-Removal

    Live webcam demo:
    https://disappearing-people.glitch.me/

    Reply
  13. Tomi Engdahl says:

    Getting AI ethics wrong could ‘annihilate technical progress’
    https://horizon-magazine.eu/article/getting-ai-ethics-wrong-could-annihilate-technical-progress.html#utm_source=Facebook&utm_medium=share&utm_campaign=AI

    ‘It’s very difficult to be an AI researcher now and not be aware of the ethical implications these algorithms have,’ said Professor Bernd Stahl, director of the Centre for Computing and Social Responsibility at De Montfort University in Leicester, UK.

    ‘We have to come to a better understanding of not just what these technologies can do, but how they will play out in society and the world at large.’

    ‘Our artist has built a water gun with a face recognition on it so it will only squirt water at women or it can be changed to recognise a single individual or people of a certain age,’ said Prof. Stahl. ‘The idea is to get people to think about what this sort of technology can do.’

    While squirting water at people might seem like harmless fun, the issues are anything but. AI is already used to identify faces on social media, respond to questions on digital home assistants like Alexa and Siri, and suggest products for consumers when they are shopping online.

    Reply
  14. Tomi Engdahl says:

    U.S. Army Researchers Boost Distributed Deep Learning Efficiency by Up to 70 Percent
    https://www.hackster.io/news/u-s-army-researchers-boost-distributed-deep-learning-efficiency-by-up-to-70-percent-5e88c518730b

    By communicating only when significant changes have been made to the model, a key bottleneck in distributed deep learning is overcome.

    “There has been an exponential growth in the amount of data collected and stored locally on individual smart devices,” says Dr. Jemin George, an Army scientist at the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “Numerous research efforts as well as businesses have focused on applying machine learning to extract value from such massive data to provide data-driven insights, decisions and predictions.”

    A key cost is communicating between nodes in the distributed network, an overhead the researchers claim to have reduced by up to 70 percent in ideal scenarios through a triggering system that tells nodes to communicate their model to neighbouring nodes only if there have been significant changes since the last transmission.
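
    The triggering idea reduces to a simple rule per node: keep training locally and broadcast the model to neighbours only when it has drifted past some threshold since the last transmission. The sketch below uses a parameter-norm threshold as a placeholder; it is not the laboratory's actual algorithm:

    import numpy as np

    class TriggeredNode:
        def __init__(self, dim, threshold=0.05):
            self.weights = np.zeros(dim)
            self.last_sent = self.weights.copy()
            self.threshold = threshold

        def local_update(self, gradient, lr=0.01):
            self.weights -= lr * gradient          # ordinary local training step

        def maybe_broadcast(self, send_fn):
            drift = np.linalg.norm(self.weights - self.last_sent)
            if drift > self.threshold:             # significant change: communicate
                send_fn(self.weights)
                self.last_sent = self.weights.copy()
                return True
            return False                           # otherwise stay silent, save bandwidth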

    Reply
  15. Tomi Engdahl says:

    Embedded processing specialist XMOS has announced a new crossover processor offering, it claims, enough power for edge AI in a part costing just $1.

    XMOS Unveils Xcore.ai, “The World’s Highest Processing Power for a Dollar”
    https://www.hackster.io/news/xmos-unveils-xcore-ai-the-world-s-highest-processing-power-for-a-dollar-429e7a2cc1a2?4cf0ed8641cfcbbf46784e620a0316fb

    First product demos due in June 2020, with registration now open for the chip’s alpha program.

    Reply
  16. Tomi Engdahl says:

    A Google AI tool that can recognize and label what’s in an image will no longer attach gender tags like “woman” or “man” to photos of people.

    Google’s Cloud Vision API is a service for developers that allows them to, among other things, attach labels to photos identifying the contents.

    The tool can detect faces, landmarks, brand logos, and even explicit content, and has a host of uses from retailers using visual search to researchers identifying animal species.

    In an email to developers on Thursday morning, seen by Business Insider, Google said it would no longer use “gendered labels” for its image tags. Instead, it will tag any images of people with “non-gendered” labels such as “person.”

    Google AI will no longer use gender labels like ‘woman’ or ‘man’ on images of people to avoid bias
    https://www.google.com/amp/s/www.businessinsider.com/google-cloud-vision-api-wont-tag-images-by-gender-2020-2%3famp
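
    A minimal label-detection call with the google-cloud-vision client library looks like the following (credentials setup omitted); after the change described above, photos of people should come back with labels such as "Person" rather than gendered ones:

    from google.cloud import vision  # pip install google-cloud-vision

    client = vision.ImageAnnotatorClient()
    with open("photo.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, round(label.score, 2))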

    Reply
  17. Tomi Engdahl says:

    Do AI startups have worse economics than SaaS shops?
    https://techcrunch.com/2020/02/21/do-ai-startups-have-worse-economics-than-saas-shops/?tpcc=ECFB2020

    A few days ago, Andreessen Horowitz’s Martin Casado and Matt Bornstein published an interesting piece digging into the world of artificial intelligence (AI) startups, and, more specifically, how those companies perform as businesses. Core to the argument presented is that while founders and investors are wagering “that AI businesses will resemble traditional software companies,” the well-known venture firm is “not so sure.”

    The Andreessen Horowitz (a16z) perspective is straightforward, arguing that AI-focused companies have lower gross margins than software companies due to cloud compute and human-input costs, endure issues stemming from “edge-cases” and enjoy less product differentiation from competing companies when compared to software concerns.

    Reply
  18. Tomi Engdahl says:

    “Technology is always neutral. It depends on what we make with it. And therefore we want the application of these new technologies to deserve the trust of our citizens. This is why we are promoting a responsible human-centric approach to artificial intelligence.”

    https://venturebeat.com/2020/02/19/eu-introduces-ai-strategy-to-build-ecosystem-of-trust/

    Reply
  19. Tomi Engdahl says:

    Technology is always neutral. It depends on what we make with it.

    Artificial intelligence is neutral UNLESS you give it a bias, and that’s down to whoever gives the robot its goals, even if the bias was unintended by the actual producer!

    Reply
  20. Tomi Engdahl says:

    Pentagon Adopts New Ethical Principles for Using AI in War
    https://www.securityweek.com/pentagon-adopts-new-ethical-principles-using-ai-war

    The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield.

    The new principles call for people to “exercise appropriate levels of judgment and care” when deploying and using AI systems, such as those that scan aerial imagery to look for targets.

    Reply
  21. Tomi Engdahl says:

    The Pentagon promises to use artificial intelligence for good, not evil
    https://www.militarytimes.com/news/your-military/2020/02/25/the-pentagon-promises-to-use-artificial-intelligence-for-good-not-evil/?utm_source=facebook.com&utm_campaign=Socialflow+MIL&utm_medium=social

    The military has its eye on artificial intelligence solutions to everything from data analysis to surveillance, maintenance and medical care, but before the Defense Department moves full steam ahead into an AI future, they’re laying out some ethical principles to live by.

    Reply
  22. Tomi Engdahl says:

    I built a DIY license plate reader with a Raspberry Pi and machine learning
    Machine learning is finally becoming accessible
    https://towardsdatascience.com/i-built-a-diy-license-plate-reader-with-a-raspberry-pi-and-machine-learning-7e428d3c7401

    Reply
  23. Tomi Engdahl says:

    Artificial intelligence can help develop better climate models but at what cost?

    AI can help us fight climate change. But it has an energy problem, too
    https://horizon-magazine.eu/article/ai-can-help-us-fight-climate-change-it-has-energy-problem-too.html

    Artificial intelligence (AI) technology can help us fight climate change – but it also comes at a cost to the planet. To truly benefit from the technology’s climate solutions, we also need a better understanding of AI’s growing carbon footprint, say researchers.

    Reply
  24. Tomi Engdahl says:

    Adversarial attacks can lead to completely bizarre and ridiculous (from a human perspective) behavior from AI systems, rendering them less safe, predictable, or reliable.

    How Adversarial Attacks Could Destabilize Military AI Systems
    https://spectrum.ieee.org/automaton/artificial-intelligence/embedded-ai/adversarial-attacks-and-ai-systems

    Adversarial attacks pose a tangible threat to the stability and safety of AI and robotic technologies. The exact conditions for such attacks are typically quite unintuitive for humans, so it is difficult to predict when and where the attacks could occur. And even if we could estimate the likelihood of an adversarial attack, the exact response of the AI system can be difficult to predict as well, leading to further surprises and less stable, less safe military engagements and interactions. Even overall assessments of reliability are difficult in the face of adversarial attacks.

    We might hope that adversarial attacks would be relatively rare in the everyday world, since “random noise” that targets image classification algorithms is actually far from random

    Adversarial attacks will thus not be destabilizing if we follow a straightforward policy recommendation: Keep humans in (or on) the loop for these technologies. If there is human-AI teaming, then people can (hopefully!) recognize that an adversarial attack has occurred, and guide the system to appropriate behaviors.

    This recommendation is attractive, but is also necessarily limited in scope to applications where a human can be directly involved.
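
    For concreteness, the classic fast gradient sign method shows how simple the perturbation behind such an attack can be: each input pixel is nudged by epsilon in the direction that most increases the classifier's loss. A minimal PyTorch sketch, assuming any differentiable image classifier as `model`:

    import torch

    def fgsm_attack(model, image, true_label, epsilon=0.01):
        # image: (batch, channels, H, W) in [0, 1]; true_label: class indices.
        image = image.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), true_label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()   # tiny, targeted nudge
        return adversarial.clamp(0, 1).detach()             # keep pixels valid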

    Reply
  25. Tomi Engdahl says:

    AI Helps Scientists Discover Powerful New Antibiotic
    https://spectrum.ieee.org/the-human-os/artificial-intelligence/medical-ai/ai-discover-powerful-new-antibiotic-mit-news

    Deep learning appears to be a powerful new tool in the war against antibiotic-resistant infections. One new algorithm discovered a drug that, in real-world lab tests, killed off a broad spectrum of deadly bacteria, including some antibiotic-resistant strains. The same algorithm has unearthed another eight candidates that show promise in computer-simulated tests.

    How does one make an antibiotics-discovering neural network?
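
    The MIT work used a graph neural network trained on growth-inhibition screening data. As a much simpler stand-in for the same idea, the sketch below encodes molecules as Morgan fingerprints and trains a classifier to rank a virtual library; the molecules and assay labels here are placeholders:

    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier

    def featurize(smiles):
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
        return np.array(fp)

    train_smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O"]   # placeholder molecules
    train_labels = [0, 1]                             # placeholder assay results

    model = RandomForestClassifier(n_estimators=200).fit(
        [featurize(s) for s in train_smiles], train_labels)

    # Score candidates from a large virtual library; top-ranked molecules would
    # then go to real-world lab testing, as halicin did.
    print(model.predict_proba([featurize("c1ccccc1O")])[:, 1])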

    Reply
  26. Tomi Engdahl says:

    A couple of years ago at seminars it was said that data is the new oil, but harmless to the climate. Now we know more.

    AI Is an Energy-Guzzler. We Need to Re-Think Its Design, and Soon
    https://singularityhub.com/2020/02/28/ai-is-an-energy-guzzler-we-need-to-re-think-its-design-and-soon/

    Reply
  27. Tomi Engdahl says:

    Google scientists built an adorable four-legged robot that taught itself to walk without human help
    https://www.businessinsider.com/google-researchers-robot-learning-walk-own-2020-3

    Reply
  28. Tomi Engdahl says:

    U.S. Department of Defense Unveils Principles of Ethics for Artificial Intelligence
    https://www.cyberpunks.com/u-s-department-of-defense-unveils-principles-of-ethics-for-artificial-intelligence/

    Defense Secretary Mark Esper signed off on a five-point AI ethics memorandum that will go into everything from the research and development of the technology, to the data used to explain how AI is implemented.

    The department’s AI ethical principles cover five major areas:

    Responsible. DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
    Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
    Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedures and documentation.
    Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
    Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

    Reply
  29. Tomi Engdahl says:

    Machine learning is an integral part of modern cancer research, says Professor Sampsa Hautaniemi. How is artificial intelligence used in research? And in treating cancer?

    Watch Sampsa Hautaniemi’s talk on AI and cancer on the Tiedekulma YouTube channel: https://youtu.be/Eq-AA8FQt0k

    Reply
  30. Tomi Engdahl says:

    Google releases quantum computing library
    https://techxplore.com/news/2020-03-google-quantum-library.html

    Google announced Monday that it is making available an open-source library for quantum machine-learning applications.

    TensorFlow Quantum, a free library of applications, is an add-on to the widely-used TensorFlow toolkit, which has helped to bring the world of machine learning to developers across the globe.

    “We hope this framework provides the necessary tools for the quantum computing and machine learning research communities to explore models of both natural and artificial quantum systems, and ultimately discover new quantum algorithms which could potentially yield a quantum advantage,” a report posted by members of Google’s X unit on the AI Blog states.

    https://ai.googleblog.com/2020/03/announcing-tensorflow-quantum-open.html?m=1
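
    A smallest-possible TensorFlow Quantum model, based on the library's documented PQC layer: one qubit, one trainable rotation, wrapped as a Keras layer so it trains with ordinary TensorFlow machinery:

    import cirq
    import sympy
    import tensorflow as tf
    import tensorflow_quantum as tfq

    qubit = cirq.GridQubit(0, 0)
    theta = sympy.Symbol("theta")
    model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))   # one trainable rotation

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(), dtype=tf.string),  # circuits arrive as string tensors
        tfq.layers.PQC(model_circuit, cirq.Z(qubit)),      # output = expectation of Z
    ])

    # Inputs are (here empty) data circuits converted to tensors.
    circuits_in = tfq.convert_to_tensor([cirq.Circuit()])
    print(model(circuits_in))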

    Reply
  31. Tomi Engdahl says:

    The predictive database
    https://aito.ai/

    Traditional databases show what was. The predictive database shows what will be. Sales estimates, demand forecasts, churn predictions, pre-filled forms, and whatever future data points you need.

    Reply
  32. Tomi Engdahl says:

    Image Sensor Doubles as a Neural Net
    https://spectrum.ieee.org/tech-talk/computing/hardware/image-neural

    A new ultra-fast machine-vision device can process images thousands of times faster than conventional techniques with an image sensor that is also an artificial neural network.

    Machine vision technology often experiences delays from how cameras have to scan pixels row by row, convert video frames to digital signals and transmit such data to computers for analysis. Lukas Mennel, an electrical engineer at TU Wien, and his colleagues sought to speed up machine vision by cutting out the middleman—they created an image sensor that itself constitutes an artificial neural network that can simultaneously acquire and analyze images.

    The sensor consists of an array of pixels that each represents a neuron. Each pixel in turn consists of a number of subpixels that each represents a synapse. Each photodiode is based on a layer of tungsten diselenide, a two-dimensional semiconductor with a tunable response to light. Such tunable photoresponsivity allowed each photodiode to remember and respond to light in a programmable way. The scientists then created a neural network based on links between these photodiodes that they could train to, for instance, classify images as either the letters “n,” “v,” or “z.”

    “Our image sensor does not consume any electrical power when it is operating,” Mennel says. “The sensed photons themselves provide the energy for the electric current.”
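
    Conceptually, the device behaves like a single linear layer whose weights are the programmed photoresponsivities and whose multiply-accumulate happens in the analog domain. A toy numerical analogue, with sizes and weights made up for illustration:

    import numpy as np

    n_pixels = 9 * 9                 # simplified sensor resolution
    n_classes = 3                    # the letters "n", "v", "z"

    # Programmable weights: in the real device these are the tuned
    # photoresponsivities of the tungsten diselenide photodiodes.
    responsivity = np.random.randn(n_classes, n_pixels) * 0.1
    letter_image = np.random.rand(n_pixels)        # incident light intensities

    photocurrents = responsivity @ letter_image    # summation occurs on-chip, in analog
    predicted = ["n", "v", "z"][int(np.argmax(photocurrents))]
    print(predicted)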

    Reply
  33. Tomi Engdahl says:

    Intel to Release Neuromorphic-Computing System
    Pohoiki Springs, an experimental system to be rolled out this month, mimics the way human brains work to do computations faster with less energy
    https://www.wsj.com/articles/intel-to-release-neuromorphic-computing-system-11584540000

    Reply
  34. Tomi Engdahl says:

    Researchers Build an Image Sensor Which Doubles as a Neural Network for Nanosecond Recognition
    https://www.hackster.io/news/researchers-build-an-image-sensor-which-doubles-as-a-neural-network-for-nanosecond-recognition-56f4e8007326

    Powered by the photons it’s imaging, the prototype sensor can perform its own image recognition tasks in just 50ns.

    Reply
  35. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    Google open sources SEED RL, a TensorFlow 2.0-based architecture for scaling AI model training to thousands of machines while reducing costs by up to 80%

    Google open-sources framework that reduces AI training costs by up to 80%
    https://venturebeat.com/2020/03/23/google-open-sources-framework-that-reduces-ai-training-costs-by-up-to-80/

    Google researchers recently published a paper describing a framework — SEED RL — that scales AI model training to thousands of machines. They say that it could facilitate training at millions of frames per second on a machine while reducing costs by up to 80%, potentially leveling the playing field for startups that couldn’t previously compete with large AI labs.

    Training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington’s Grover, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI racked up $256 per hour to train its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

    SEED RL, which is based on Google’s TensorFlow 2.0 framework, features an architecture that takes advantage of graphics cards and tensor processing units (TPUs) by centralizing model inference.

    SEED RL’s learner component can be scaled across thousands of cores (e.g., up to 2,048 on Cloud TPUs), and the number of actors

    Massively Scaling Reinforcement Learning with SEED RL
    https://ai.googleblog.com/2020/03/massively-scaling-reinforcement.html
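
    Stripped of gRPC, TPUs and the V-trace learner, the architectural shift is that actors only step environments and ship observations to a central learner, which runs batched inference on an accelerator and returns actions. A toy sketch of that division of labor:

    import numpy as np

    class CentralLearner:
        def __init__(self, n_actions, obs_dim):
            self.policy = np.random.randn(obs_dim, n_actions) * 0.01  # placeholder network

        def infer(self, batched_obs):
            # One batched forward pass for all actors at once, instead of each
            # actor running its own copy of the model.
            logits = batched_obs @ self.policy
            return logits.argmax(axis=1)            # greedy actions for the whole batch

    learner = CentralLearner(n_actions=4, obs_dim=8)
    observations = np.stack([np.random.rand(8) for _ in range(256)])  # from 256 actors
    actions = learner.infer(observations)           # sent back so the actors can step again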

    Reply
  36. Tomi Engdahl says:

    What Machine Learning Can Do In Fabs
    https://semiengineering.com/what-machine-learning-can-do-in-fabs/

    Experts at the Table: It’s not as accurate as simulation, but it’s a lot faster.

    Reply
