3 AI misconceptions IT leaders must dispel


 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”


  1. Tomi Engdahl says:

    Neural network reconstructs human thoughts from brain waves in real time

    Researchers from Russian corporation Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize a person’s brain activity as actual images mimicking what they observe in real time. This will enable new post-stroke rehabilitation devices controlled by brain signals. The team published its research as a preprint on bioRxiv and posted a video online showing their “mind-reading” system at work.

  2. Tomi Engdahl says:

    Finding Pre-Trained AI In A Modelzoo Using Python

    Training a machine learning model is not a task for mere mortals, as it takes a lot of time or computing power to do so. Fortunately there are pre-trained models out there that one can use, and [Max Bridgland] decided it would be a good idea to write a python module to find and view such models using the command line.
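    The general idea of such a lookup tool can be sketched in a few lines. Note that the registry contents and function name below are made-up illustrations, not [Max Bridgland]’s actual module:

```python
# Hypothetical sketch of a command-line model finder: query a small
# registry of pre-trained models and filter by keyword. The registry
# entries here are placeholders, not a real model index.
MODEL_ZOO = {
    "resnet50": "image classification",
    "yolov3": "object detection",
    "bert-base": "natural language processing",
}

def find_models(keyword):
    """Return model names whose description mentions the keyword."""
    return sorted(
        name for name, desc in MODEL_ZOO.items()
        if keyword.lower() in desc.lower()
    )
```

    A real tool would fetch the registry over the network and add commands for downloading and inspecting each model, but the filter-by-keyword core looks much the same.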


  3. Tomi Engdahl says:

    Spleeter is Deezer’s source separation library with pretrained models, written in Python and built on TensorFlow. It makes it easy to train source separation models (assuming you have a dataset of isolated sources) and provides already-trained state-of-the-art models for performing various flavours of separation:

    Vocals (singing voice) / accompaniment separation (2 stems)
    Vocals / drums / bass / other separation (4 stems)
    Vocals / drums / bass / piano / other separation (5 stems)
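    The masking idea behind such separators can be shown with a toy numpy sketch. This is an idealized frequency mask on synthetic sine tones, not Spleeter’s trained neural model, which predicts masks from data:

```python
import numpy as np

# Toy mask-based source separation: Spleeter predicts time-frequency
# masks with a neural net; here we cheat with an ideal frequency
# cutoff on synthetic signals to show the principle.
fs = 8000
t = np.arange(fs) / fs
vocals = np.sin(2 * np.pi * 440 * t)   # stand-in "vocals": 440 Hz tone
bass = np.sin(2 * np.pi * 55 * t)      # stand-in "bass": 55 Hz tone
mix = vocals + bass

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / fs)

# Binary mask: everything above 200 Hz goes to the "vocals" stem,
# everything below goes to the "bass" stem.
vocals_est = np.fft.irfft(np.where(freqs >= 200, spectrum, 0), n=len(mix))
bass_est = np.fft.irfft(np.where(freqs < 200, spectrum, 0), n=len(mix))
```

    Real music overlaps heavily in frequency, which is exactly why a learned mask is needed instead of a fixed cutoff.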


  4. Tomi Engdahl says:

    Google’s Teachable Machine Uses TensorFlow.js to Bring Code-Free Machine Learning to the Browser

    Aiming at everyone from hobbyists to educators, Teachable Machine requires no prior experience to build simple ML models.

  5. Tomi Engdahl says:


    # My self driving car AI that is gonna make me rich :P
    import ArtificialIntelligence as myAiCar

    if myAiCar.goingToCrash():
        myAiCar.dont()

  6. Tomi Engdahl says:

    Tiernan Ray / ZDNet:
    Facebook’s AI team details XLM-R, a natural language model which works across 100 languages, but struggles with the limits of existing computing power

    Facebook’s latest giant language AI hits computing wall at 500 Nvidia GPUs

    Facebook AI research’s latest breakthrough in natural language understanding, called XLM-R, performs cross-language tasks with 100 different languages including Swahili and Urdu, but it’s also running up against the limits of existing computing power.

  7. Tomi Engdahl says:

    “Everyone believes that their job will be the last job to be automated.” — R. David Edelman, director of MIT’s Project on Technology, Economy & National Security talks to Spectrum about AI’s coming impact on employment https://buff.ly/2OxPIq3

  8. Tomi Engdahl says:

    “THE CHINESE ‘DID’ IT” — The “English AI Anchor” debuted Thursday at the World Internet Conference in the country’s eastern Zhejiang Province. Modeled on the agency’s Zhang Zhao presenter, the new anchor learns from live videos and is able to work 24 hours a day, reporting via social media and on the Xinhua website. “‘He’ learns from live broadcasting videos by himself and can read texts as naturally as a professional news anchor,” the company said in an online statement.


  9. Tomi Engdahl says:

    “We need to break free from the idea that technology’s role is merely to polish and improve existing processes. Rather, I would encourage thinking from a clean slate about how we can create entirely new business with the help of AI,” AI expert Harri Puolitaival urges. The opportunities and challenges of AI are weighed on Sysart’s blog:


    #tekoäly #AI

  10. Tomi Engdahl says:

    Harri is certain of one thing: AI will enable the development of many new business models. “However, we need to break free from the idea that technology’s role is merely to polish and improve existing processes. Rather, I would encourage thinking from a clean slate about how we can create entirely new business with the help of AI. When assessing the economic viability of an AI innovation, it should also be kept in mind that the technical development itself is usually not the biggest and most expensive question. Adopting a new tool or system often also requires changing operating models and investing in data collection and ownership. It is therefore not merely an IT project, but a dialogue between the opportunities the technology creates and the business objectives.”


  11. Tomi Engdahl says:

    Exploring NLP concepts using Apache OpenNLP

    After going through the above, we can draw the following conclusions about the Apache OpenNLP tool:


    It has an easy-to-use, understandable API
    Shallow learning curve and detailed documentation with lots of examples
    Covers a lot of NLP functionality; there’s more to explore in the docs than we did above
    Easy shell scripts and Apache OpenNLP scripts are provided to play with the tool
    Lots of resources are available to learn more about NLP (see the Resources section below)
    Resources are provided to quickly get started and explore the Apache OpenNLP tool


  12. Tomi Engdahl says:

    AI and the Future of Work: The Economic Impacts of Artificial Intelligence

    “Dig into every industry, and you’ll find AI changing the nature of work,” said Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). She cited recent McKinsey research that found 45 percent of the work people are paid to do today can be automated with currently available technologies. Those activities, McKinsey found, represent some US $2 trillion in wages.

    “If you live in Detroit or Toledo, where I come from, technology has been displacing jobs for the last half-century,” Goff said. “I don’t think that most people in this country have the increased anxiety that the coasts do, because they’ve been living this.”

    As AI automates some jobs, it will also open opportunities for “reskilling” that may have nothing to do with AI or automation.

  13. Tomi Engdahl says:

    Exploring NLP concepts using Apache OpenNLP

  14. Tomi Engdahl says:

    Go champion retires after losing to AI, Richard Nixon deepfake gives a different kind of Moon-landing speech…
    Plus: Chilling details of the AI police state in Xinjiang

  15. Tomi Engdahl says:

    You can train an AI to fake UN speeches in just 13 hours

    Deep-learning techniques have made it easier and easier for anyone to forge convincing misinformation. But just how easy? Two researchers at Global Pulse, an initiative of the United Nations, decided to find out.

    In a new paper, they used only open-source tools and data to show how quickly they could get a fake UN speech generator up and running. They used a readily available language model that had been trained on text from Wikipedia and fine-tuned it on all the speeches given by political leaders at the UN General Assembly from 1970 to 2015. Thirteen hours and $7.80 later (spent on cloud computing resources), their model was spitting out realistic speeches on a wide variety of sensitive and high-stakes topics from nuclear disarmament to refugees.
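    The train-then-generate loop the researchers followed can be illustrated with a toy bigram model. This is vastly simpler than the pretrained transformer they fine-tuned, and the corpus below is a made-up placeholder for the UN speech archive, but the shape of the pipeline is the same:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Map each word to the list of words observed after it."""
    model = defaultdict(list)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Walk the bigram chain, sampling a successor at each step."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Placeholder text standing in for the UN General Assembly speeches.
corpus = ("the assembly calls for peace the assembly calls for "
          "disarmament and the assembly supports refugees")
model = train_bigrams(corpus)
speech = generate(model, "the", 8)
```

    A modern language model replaces the bigram table with a neural network trained on billions of words, which is why fine-tuning it for thirteen hours yields fluent paragraphs rather than word salad.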

  16. Tomi Engdahl says:

    The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe

    Nobody understands why deep neural networks are so good at solving complex problems. Now physicists say the secret is buried in the laws of physics.

    In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players.

    But there is a problem. There is no mathematical reason why networks arranged in layers should be so good at these challenges. Mathematicians are flummoxed. Despite the huge success of deep neural networks, nobody is quite sure how they achieve their success.

    The answer lies in the regime of physics rather than mathematics.

    In the language of mathematics, neural networks work by approximating complex mathematical functions with simpler ones. When it comes to classifying images of cats and dogs, the neural network must implement a function that takes as an input a million grayscale pixels and outputs the probability distribution of what it might represent.
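    That function signature, pixels in, probability distribution out, can be sketched in a few lines of numpy. The weights here are random and untrained; the point is only the shape of the mapping, not a working classifier:

```python
import numpy as np

def softmax(z):
    """Turn raw scores into a probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
pixels = rng.random(16)           # stand-in for a flattened grayscale image
W = rng.normal(size=(2, 16))      # untrained weights: 2 classes (cat, dog)
b = np.zeros(2)

probs = softmax(W @ pixels + b)   # e.g. [P(cat), P(dog)], summing to 1
```

    A deep network stacks many such layers with nonlinearities in between; training adjusts the weights so the output distribution matches the labels.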

    The universe is governed by a tiny subset of all possible functions. In other words, when the laws of physics are written down mathematically, they can all be described by functions that have a remarkable set of simple properties.

    So deep neural networks don’t have to approximate any possible mathematical function, only a tiny subset of them.

    The laws of physics have other important properties. For example, they are usually symmetrical when it comes to rotation and translation. Rotate a cat or dog through 360 degrees and it looks the same; translate it by 10 meters or 100 meters or a kilometer and it will look the same. That also simplifies the task of approximating the process of cat or dog recognition.
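    Convolutional layers bake exactly this translation symmetry into a network: shifting the input shifts the output by the same amount (equivariance), so a feature detector learned in one place works everywhere. A toy 1-D check of that property, kept away from the array edges so wrap-around doesn’t interfere:

```python
import numpy as np

signal = np.zeros(64)
signal[10:13] = 1.0                 # a "feature" at position 10
kernel = np.array([1.0, 1.0, 1.0])  # a simple 3-tap detector

def conv(x, k):
    """Same-size convolution, as in a convolutional layer."""
    return np.convolve(x, k, mode="same")

shift = 20
lhs = np.roll(conv(signal, kernel), shift)   # detect, then translate
rhs = conv(np.roll(signal, shift), kernel)   # translate, then detect
# Equivariance: both orders give the same response.
```

    Pooling layers then turn that equivariance into approximate invariance, which is why a cat is recognized wherever it sits in the frame.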

    There is another property of the universe that neural networks exploit. This is the hierarchy of its structure. “Elementary particles form atoms which in turn form molecules, cells, organisms, planets, solar systems, galaxies, etc.,” say Lin and Tegmark. And complex structures are often formed through a sequence of simpler steps.

    This is why the structure of neural networks is important too: the layers in these networks can approximate each step in the causal sequence.

    “We have shown that the success of deep and cheap learning depends not only on mathematics but also on physics, which favors certain classes of exceptionally simple probability distributions that deep learning is uniquely suited to model,” conclude Lin and Tegmark.

  17. Tomi Engdahl says:

    New artificial intelligence system automatically evolves to evade internet censorship

  18. Tomi Engdahl says:

    The ONNX format becomes the newest Linux Foundation project

    The Linux Foundation today announced that ONNX, the open format that makes machine learning models more portable, is now a graduate-level project inside of the organization’s AI Foundation. ONNX was originally developed and open-sourced by Microsoft and Facebook in 2017 and has since become somewhat of a standard, with companies ranging from AWS to AMD, ARM, Baidu, HPE, IBM, Nvidia and Qualcomm supporting it. In total, more than 30 companies now contribute to the ONNX code base.

  19. Tomi Engdahl says:

    AWS DeepComposer – a machine learning-enabled musical keyboard for developers

    AWS DeepComposer is priced at $99. This includes the keyboard, plus a 3-month free trial of AWS DeepComposer services to train your models and create original musical compositions.

  20. Tomi Engdahl says:

    Amazing AI Generates Entire Bodies of People Who Don’t Exist
    The algorithm whips up photorealistic models and outfits from scratch.

  21. Tomi Engdahl says:

    Forecasters predict that artificial intelligence will soon boost economic productivity enormously. They’re wrong.

    AI and Economic Productivity: Expect Evolution, Not Revolution

    In 2016, London-based DeepMind Technologies, a subsidiary of Alphabet (which is also the parent company of Google), startled industry watchers when it reported that the application of artificial intelligence had reduced the cooling bill at a Google data center by a whopping 40 percent. What’s more, we learned that year, DeepMind was starting to work with the National Grid in the United Kingdom to save energy throughout the country using deep learning to optimize the flow of electricity.

    Could AI really slash energy usage so profoundly? In the three years that have passed, I’ve searched for articles on the application of AI to other data centers but find no evidence of important gains. What’s more, DeepMind’s talks with the National Grid about energy have broken down. And the financial results for DeepMind certainly don’t suggest that customers are lining up for its services: For 2018, the company reported losses of US $571 million on revenues of $125 million, up from losses of $366 million in 2017. Last April, The Economist characterized DeepMind’s 2016 announcement as a publicity stunt, quoting one inside source as saying, “[DeepMind just wants] to have some PR so they can claim some value added within Alphabet.”

    That experience encouraged me to look more deeply into the economic promise of AI and the rosy projections made by champions of this technology within the financial sector.

    Analysts have lately been asserting that AI-enabled technologies will dramatically increase economic output. Accenture claims that by 2035 AI will double growth rates for 12 developed countries and increase labor productivity by as much as a third. PwC claims that AI will add $15.7 trillion to the global economy by 2030, while McKinsey projects a $13 trillion boost by that time.

    Wow. These are big numbers. If true, they create a powerful incentive for companies to pursue AI—with or without help from McKinsey consultants. But are these predictions really valid?

    Many of McKinsey’s estimates were made by extrapolating from claims made by various startups. For instance, its prediction of a 10 percent improvement in energy efficiency in the U.K. and elsewhere was based on the purported success of DeepMind and also of Nest Labs

    So I decided to investigate more systematically how well such AI startups were doing. I found that many were proving not nearly as valuable to society as all the hype would suggest.

    The U.K. government recently revised downward the amount it figures a smart meter will save each household annually, from £26 to just £11. And the cost of smart meters and their installation has risen, warns the U.K.’s National Audit Office. All of this is not good news for startups banking on the notion that smart thermostats, smart home appliances, and smart meters will lead to great energy savings.

    overall venture capital funding in the United States was $115 billion in 2018 [PDF], of which $9.3 billion went to AI startups. While that’s just 8 percent of the total, it’s still a lot of money, indicating that there are many U.S. startups working on AI (although some overstate the role of AI in their business plans to acquire funding).

    In my view, software that automates tasks normally carried out by white-collar workers is probably the most promising of the products and services that AI is being applied to. Similar to past improvements in tools for white-collar professionals, including Excel for accountants and computer-aided design for engineers and architects, these types of AI-based tools have the greatest potential impact on productivity.

    The relatively large number of startups I classified as working on basic hardware and software for computing (17) also suggests that productivity improvements are still many years away.

    The large number of these startups that are focused on cybersecurity (seven) highlights the increasing threat of security problems, which raise the cost of doing business over the Internet. AI’s ability to address cybersecurity issues will likely make the Internet more safe, secure, and useful. But in the end, this thrust reflects yet higher costs in the future for Internet businesses and will not, to my mind, lead to large productivity improvements within the economy as a whole.

    Will AI bring substantial economic gains? Health care, you would think, might benefit greatly from AI. Yet the number of startups on my list that are applying AI to health care (three) seems oddly small if that were really the case. Perhaps this has something to do with IBM’s experience with its Watson AI, which proved a disappointment when it was applied to medicine.

    Still, many people remain hopeful that AI-fueled health care startups will fill the gap left by Watson’s failures.

    For the reasons I’ve given, it’s very hard for me to feel confident that any of the AI startups I examined will provide the U.S. economy with a big boost over the next decade. Similar pessimism is also starting to emerge from such normally cheery publications

    The most promising areas for rapid gains in productivity are likely to be found in robotic process automation for white-collar workers, continuing a trend that has existed for decades. But these improvements will be gradual, just as those for computer-aided design and computer-aided engineering software, spreadsheets, and word processing have been.

  22. Tomi Engdahl says:

    New Amazon tool simplifies delivery of containerized machine learning models

    As part of the flurry of announcements coming this week out of AWS re:Invent, Amazon announced the release of Amazon SageMaker Operators for Kubernetes, a way for data scientists and developers to simplify training, tuning and deploying containerized machine learning models.

    Packaging machine learning models in containers can help put them to work inside organizations faster, but getting there often requires a lot of extra management to make it all work. Amazon SageMaker Operators for Kubernetes is supposed to make it easier to run and manage those containers, the underlying infrastructure needed to run the models and the workflows associated with all of it.

  24. Tomi Engdahl says:

    In the Age of AI

    FRONTLINE investigates the promise and perils of artificial intelligence, from fears about work and privacy to rivalry between the U.S. and China. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and allow the emergence of the surveillance society.

  25. Tomi Engdahl says:

    This Amazing AI-Powered Machine Can Sort Every LEGO Brick Ever Made

    Daniel West’s AI-powered automated machine is capable of recognizing and sorting any LEGO part that has ever been produced.

  26. Tomi Engdahl says:

    A.I. Is Making it Easier to Kill (You). Here’s How. | NYT

    A tank that drives itself. A drone that picks its own targets. A machine gun with facial recognition software. Sounds like science fiction? A.I.-fueled weapons are already here.

  27. Tomi Engdahl says:

    Democratizing AI – Finland offers free AI education to every EU citizen

    Finland will provide European citizens with free access to the Elements of AI, the groundbreaking online course made by Reaktor and the University of Helsinki. The course will be made available in all the official EU languages. This initiative by the Finnish Presidency aims to respond to the challenges posed by the transformation of work and to reinforce the digital leadership of the EU.

  28. Tomi Engdahl says:

    NVIDIA’s DIB-R Creates Convincing 3D Models from 2D Image Inputs, Adds Depth Perception to Cameras
    Once trained on a data set, DIB-R can create a 3D model from a single 2D image in under 100 milliseconds

