3 AI misconceptions IT leaders must dispel


 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.


  1. Tomi Engdahl says:

    When doing machine learning in production, the choice of model is just one of many important decisions. Equally important: defining the problem, gathering high-quality data, and architecting the machine learning pipeline.

    Learn how to architect a machine learning pipeline for multiclass text classification in 5 steps:

    1. Preprocess: preprocess the raw data to be used by fastText.
    2. Split: split the preprocessed data into train, validation and test data.
    3. Autotune: find the best parameters on the validation data.
    4. Train: train the final model with the best parameters on all the data.
    5. Test: get metrics and predictions on test data.

    You can now run the pipeline on the Valohai cloud with a few clicks.
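    The preprocessing and splitting steps above can be sketched in plain Python. This is a minimal illustration assuming fastText’s `__label__` input convention; the sample data, split ratios, and function names are invented for the example.

```python
import random

def preprocess(samples):
    """Step 1: convert (label, text) pairs into fastText's input format,
    one sample per line with the label prefixed by __label__."""
    lines = []
    for label, text in samples:
        text = text.lower().strip().replace("\n", " ")
        lines.append(f"__label__{label} {text}")
    return lines

def split(lines, train_frac=0.8, valid_frac=0.1, seed=42):
    """Step 2: shuffle and split into train / validation / test sets."""
    rng = random.Random(seed)
    shuffled = lines[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_valid = int(len(shuffled) * valid_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_valid],
            shuffled[n_train + n_valid:])

samples = [("positive", "Great product"), ("negative", "Broke after a day"),
           ("positive", "Works well"), ("negative", "Would not recommend"),
           ("neutral", "It does what it says")] * 4   # toy data, 20 samples
train_set, valid_set, test_set = split(preprocess(samples))
```

    Steps 3–5 would then write these sets to files and hand them to fastText itself (for example, its supervised training with autotuning on the validation file), which is outside this sketch.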



  2. Tomi Engdahl says:

    Intel Officially Axes Nervana

    Intel’s AI ASIC strategy will be based on Habana chips from now on

    In a move widely speculated to have been looming, Intel has axed Nervana’s NNP-T and NNP-I training and inference chips for the data center in favor of Gaudi and Goya chips from recent acquisition Habana Labs.

    A statement emailed to EETimes said that Intel will cease development on Nervana’s NNP-T AI training chip (Spring Crest) for the data center, while merely honoring existing customer commitments to the NNP-I inference chip (Spring Hill), following “customer feedback”.

    “After acquiring Habana Labs in December and with input from our customers, we are making strategic updates to the data center AI acceleration roadmap. We will leverage our combined AI talent and technology to build leadership AI products,” Intel’s statement said.

  3. Tomi Engdahl says:

    Someone Used Neural Networks To Upscale An 1895 Film To 4K 60 FPS, And The Result Is Really Quite Astounding
    Digg Feb 4, 2020 @09:33 AM

    The Lumière Brothers’ 1895 short “Arrival of a Train at La Ciotat” is one of the most famous film clips in history — you’ve almost certainly seen the 50-second movie at some point in your life.

    YouTuber Denis Shiryaev wanted to update the look of the clip, so — with the help of several neural networks — he upscaled the clip to 4K resolution and 60 FPS.
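    Frame-rate conversion of this kind is easier to grasp at toy scale. The sketch below doubles a clip’s frame rate by inserting a pixel-averaged frame between each pair; Shiryaev’s pipeline uses trained neural interpolation networks rather than this naive blend, and the frame data here is invented for illustration.

```python
def interpolate(frames):
    """Double the frame rate: after each pair of frames (given as flat
    lists of pixel intensities), insert their per-pixel average."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(pa + pb) / 2 for pa, pb in zip(a, b)])
    out.append(frames[-1])
    return out

clip = [[0, 0, 0], [10, 20, 30], [20, 40, 60]]   # three tiny "frames"
doubled = interpolate(clip)                       # five frames
```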

  4. Tomi Engdahl says:

    Just imagine that we can now do this in real time.

    Combine this with the AI stuff (like NVIDIA is doing) and you’re there.


  5. Tomi Engdahl says:

    An algorithm that can spot cause and effect could supercharge medical AI

    The technique, inspired by quantum cryptography, would allow large medical databases to be tapped for causal links

  6. Tomi Engdahl says:

    Smart Black-Box Neural Networks Recreate Classic Guitar Amp Sounds in Real-Time

    Using black box modeling, researchers have been able to create convincing simulations of classic tube amps — running in real time.

    Researchers at Aalto University and Neural DSP Technologies claim to have created a neural network capable of emulating any guitar amplifier with enough accuracy to be indistinguishable from the real deal in blind listening tests.

    “Deep neural networks for guitar distortion modeling has been tested before,” explains Professor Vesa Välimäki of the work, “but this is the first time where blind-test listeners couldn’t tell the difference between a recording and a fake distorted guitar sound! This is akin to when the computer first learned to play chess.”

    Previous best efforts in virtual analog modeling have relied on traditional circuit-modeling techniques, a labor-intensive process.
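    As a rough intuition for black-box modeling (not the researchers’ actual deep network), the toy sketch below fits a one-parameter waveshaper to input/output samples of an “unknown” tanh distortion stage by gradient descent; the hidden gain, learning rate, and sample grid are all invented for the example.

```python
import math

def target_amp(x):
    """Stand-in for an unknown amp: a tanh waveshaper with gain 3.0."""
    return math.tanh(3.0 * x)

# Record input/output pairs from the "real" amp.
inputs = [i / 50.0 - 1.0 for i in range(101)]     # samples in [-1, 1]
outputs = [target_amp(x) for x in inputs]

# Black-box fit: learn y = tanh(g * x) by gradient descent on squared error,
# using only the recorded pairs, never the amp's internals.
g = 1.0
lr = 0.5
for _ in range(3000):
    grad = 0.0
    for x, y in zip(inputs, outputs):
        pred = math.tanh(g * x)
        # d/dg of (pred - y)^2 = 2 * (pred - y) * (1 - pred^2) * x
        grad += 2.0 * (pred - y) * (1.0 - pred * pred) * x
    g -= lr * grad / len(inputs)
# g should now approximate the hidden gain of 3.0
```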

  7. Tomi Engdahl says:

    Reuters built a prototype for automated news videos using Deepfakes tech


    Coming to you live from the inside of an artificial neural network

  8. Tomi Engdahl says:

    Automated system can rewrite outdated sentences in Wikipedia articles

    Text-generating tool pinpoints and replaces specific information in sentences while retaining humanlike grammar and style.

  9. Tomi Engdahl says:

    We’re still in the very early days of AI, but it’s not too early to start thinking about AI’s environmental impact.

    AI in the 2020s Must Get Greener—and Here’s How

    The environmental impact of artificial intelligence (AI) has been a hot topic as of late—and I believe it will be a defining issue for AI this decade. The conversation began with a recent study from the Allen Institute for AI that argued for the prioritization of “Green AI” efforts that focus on the energy efficiency of AI systems.

    This study was motivated by the observation that many high-profile advances in AI have staggering carbon footprints. A 2018 blog post from OpenAI revealed that the amount of compute required for the largest AI training runs has increased by 300,000 times since 2012. And while that post didn’t calculate the carbon emissions of such training runs, others have done so. According to a paper by Emma Strubell and colleagues, an average American is responsible for about 36,000 pounds of CO2 emissions per year; training and developing one machine translation model that uses a technique called neural architecture search was responsible for an estimated 626,000 pounds of CO2.
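    The 300,000x compute figure implies a striking doubling time, which quick arithmetic can check (OpenAI’s own estimate was about 3.4 months; the six-year window used here is an approximation):

```python
import math

doublings = math.log2(300_000)      # ~18.2 doublings of training compute
months = 6 * 12                     # roughly 2012 to 2018
doubling_time = months / doublings  # implied months per doubling
print(f"{doublings:.1f} doublings, one every {doubling_time:.1f} months")
```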

    Red AI Isn’t All Bad
    Many of today’s Red AI projects (compute-hungry efforts that chase state-of-the-art results regardless of cost) are pushing science forward in natural language processing, computer vision, and other important areas of AI. While their carbon costs may be significant today, the potential for positive societal impact is also significant.

    As an analogy, consider the Human Genome Project (HGP)

    It’s critical to measure both the input and the output of Red AI projects. Many of the artifacts produced by Red AI experiments (for example, image representations for object recognition, or word embeddings in natural language processing) are enabling rapid advances in a wide range of applications.

  10. Tomi Engdahl says:

    From models of galaxies to atoms, simple AI shortcuts speed up simulations by billions of times

    Modeling immensely complex natural phenomena such as how subatomic particles interact or how atmospheric haze affects climate can take many hours on even the fastest supercomputers. Emulators, algorithms that quickly approximate these detailed simulations, offer a shortcut. Now, work posted online shows how artificial intelligence (AI) can easily produce accurate emulators that can accelerate simulations across all of science by billions of times.
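    At toy scale, the emulator idea looks like this: run the expensive simulation once on a coarse grid of inputs, then answer further queries from a cheap surrogate. The sketch below uses linear interpolation in place of the deep networks the researchers trained; the “simulation” function and grid are invented for illustration.

```python
import bisect
import math

def expensive_simulation(x):
    """Stand-in for a costly physics code: a smooth nonlinear response."""
    return math.sin(x) * math.exp(-0.1 * x)

# "Training": run the real simulation once, on a coarse grid of inputs.
grid = [i * 0.5 for i in range(21)]              # x from 0.0 to 10.0
values = [expensive_simulation(x) for x in grid]

def emulator(x):
    """Cheap surrogate: linear interpolation between precomputed runs."""
    i = min(max(bisect.bisect_right(grid, x) - 1, 0), len(grid) - 2)
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return values[i] * (1 - t) + values[i + 1] * t

# The emulator stays close to the simulation at inputs it never ran.
max_error = max(abs(emulator(k / 100) - expensive_simulation(k / 100))
                for k in range(1000))
```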

    “This is a big deal,”

  11. Tomi Engdahl says:

    Tegwyn Twmffat Turns to Deep Learning for a Smart Species-Identifying Bat Detector Build

    Using machine learning to recognize each species, the detector can note bat types and upload data to the cloud over LoRa

  12. Tomi Engdahl says:

    Removing people from complex backgrounds in real time, using TensorFlow.js running in the web browser.


    Live webcam demo:

  13. Tomi Engdahl says:

    Getting AI ethics wrong could ‘annihilate technical progress’

    ‘It’s very difficult to be an AI researcher now and not be aware of the ethical implications these algorithms have,’ said Professor Bernd Stahl, director of the Centre for Computing and Social Responsibility at De Montfort University in Leicester, UK.

    ‘We have to come to a better understanding of not just what these technologies can do, but how they will play out in society and the world at large.’

    ‘Our artist has built a water gun with a face recognition on it so it will only squirt water at women or it can be changed to recognise a single individual or people of a certain age,’ said Prof. Stahl. ‘The idea is to get people to think about what this sort of technology can do.’

    While squirting water at people might seem like harmless fun, the issues are anything but. AI is already used to identify faces on social media, respond to questions on digital home assistants like Alexa and Siri, and suggest products for consumers when they are shopping online.

