3 AI misconceptions IT leaders must dispel


 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.


  1. Tomi Engdahl says:

    Aliya Ram / Financial Times:
    Report: 40% of 2,830 AI startups in Europe do not use AI programs in their products; ~8% of EU startups were AI companies in 2018, up from ~3% in 2015

  2. Tomi Engdahl says:

    In-Memory Vs. Near-Memory Computing

    New approaches are competing for attention as scaling benefits diminish.

  3. Tomi Engdahl says:

    Taming the AI Beast to Tackle Cyber-Threats

    The use of artificial intelligence and machine learning to both promote cyberattacks and mitigate against them means that this is truly an interesting time in the world of cybersecurity.

  4. Tomi Engdahl says:

    Forty percent of ‘AI startups’ in Europe don’t actually use AI, claims report

    Companies want to take advantage of the AI hype

  5. Tomi Engdahl says:

    Is AI a Fad or a Sure-Fire Thing in Medtech?

    Yann Fleureau, co-founder and CEO of Cardiologs, weighs in on why 2019 will be known as the year of validation for artificial intelligence.

  6. Tomi Engdahl says:

    Ethical principles for the use of artificial intelligence

    “AI can feel like a difficult area to understand, one that comes with fears and worries, and AI applications can seem like black boxes,” says Markku Väänänen, who leads the AI development programme at Digia.

    “Once the ethical principles have been defined, we can concentrate on the benefits achievable with AI, and at the same time we can be sure that our experiments and the solutions we implement rest on an ethical foundation,” Väänänen says.

  7. Tomi Engdahl says:

    Sigal Samuel / Vox:
    Georgia Institute of Technology study: object-detection models, used in self-driving vehicles, are 5% less accurate when detecting people with dark skin

    A new study finds a potential risk with self-driving cars: failure to detect dark-skinned pedestrians
    The findings speak to a bigger problem in the development of automated systems: algorithmic bias.
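
    Disparities like the one reported are typically found by splitting a benchmark by subgroup and comparing detection rates. A minimal sketch of that bookkeeping (the records below are synthetic, purely to illustrate the calculation, not the study’s data):

```python
# Compare detection accuracy across subgroups of a labelled benchmark.
# Each record: (subgroup, detected) -- synthetic data for illustration only.
results = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", True), ("darker", False), ("darker", False),
]

def accuracy_by_group(records):
    """Per-subgroup detection rate: hits / totals for each group."""
    totals, hits = {}, {}
    for group, detected in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(detected)
    return {g: hits[g] / totals[g] for g in totals}

rates = accuracy_by_group(results)
gap = rates["lighter"] - rates["darker"]   # the disparity the study reports
print(rates, gap)
```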

  8. Tomi Engdahl says:

    How Neural Networks Think (MIT)
    General-purpose technique sheds light on inner workings of neural nets trained to process language.

  9. Tomi Engdahl says:

    Sydney Johnson / EdSurge:
    Turnitin, a developer of AI software that checks for plagiarism, will be acquired by media conglomerate Advance Publications, reportedly for nearly $1.75B

    Turnitin to Be Acquired by Advance Publications for $1.75B

  10. Tomi Engdahl says:

    The Winograd Transformation
    How to streamline convolutional neural networks.
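
    The Winograd trick trades multiplications for additions in small convolutions, which is why it speeds up CNN inference. For the smallest case, F(2,3) (two outputs of a 3-tap filter), four multiplies replace the six a direct convolution needs. A pure-Python sketch:

```python
def winograd_f23(d, g):
    """Two outputs of a 3-tap filter g over input tile d[0..3],
    using 4 multiplications instead of the direct method's 6."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_conv(d, g):
    """Reference: plain sliding-window convolution (6 multiplies)."""
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]

d, g = [1.0, 2.0, 3.0, 4.0], [0.5, 1.0, -1.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(winograd_f23(d, g), direct_conv(d, g)))
```

    The filter-dependent terms (g0 + g1 + g2)/2 etc. can be precomputed once per filter, so in a real CNN the per-tile cost is just the four multiplies.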

  11. Tomi Engdahl says:

    Now any business can access the same type of AI that powered AlphaGo

    A startup called CogitAI has developed a platform that lets companies use reinforcement learning, the technique that gave AlphaGo mastery of the board game Go.
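
    CogitAI’s platform isn’t shown here, but the core idea of reinforcement learning is easy to sketch: an agent improves its action-value estimates from reward alone. A toy tabular Q-learning example (entirely illustrative, not CogitAI’s or DeepMind’s code):

```python
import random

# Tabular Q-learning on a 5-state corridor: states 0..4, reward 1 on
# reaching state 4. Actions: 0 = step left, 1 = step right.
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma = 0.5, 0.9
random.seed(0)

def step(s, a):
    """Deterministic transition with walls at both ends."""
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0)

# Off-policy learning: behave randomly, update toward the greedy value.
for _ in range(2000):
    s = 0
    while s != GOAL:
        a = random.randrange(2)
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy points right in every non-goal state.
assert all(Q[s][1] > Q[s][0] for s in range(GOAL))
```

    AlphaGo used the same update principle, with a deep network in place of the table and self-play in place of the random walk.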

  12. Tomi Engdahl says:

    Say “Hello” to Google Coral
    Edge TPU, Google’s custom ASIC for machine learning has arrived

  13. Tomi Engdahl says:

    Can predictive analytics be made safe for humans?
    Plus, the moral relativity of Huawei vs. America

  14. Tomi Engdahl says:

    Securing Learning Machines:
    Learn the Fundamentals Behind the Buzz of Machine Learning, Artificial Neural Networks, and Artificial Intelligence Before You Attempt to Secure the Product

  15. Tomi Engdahl says:

    Oscar Schwartz / The Guardian:
    AI-based emotion detection has become a $20B industry, but some experts say the foundational science behind the tech is flawed and can adversely affect society

    Don’t look now: why you should be worried about machines reading your emotions

    Machines can now allegedly identify anger, fear, disgust and sadness. ‘Emotion detection’ has grown from a research project to a $20bn industry

  16. Tomi Engdahl says:

    Nick Statt / The Verge:
    An overview of how advances of modern AI research can help developers build more sophisticated and immersive games

    How artificial intelligence will revolutionize the way video games are developed and played
    The advances of modern AI research could bring unprecedented benefits to game development

  17. Tomi Engdahl says:

    RSAC 2019: The Dark Side of Machine Learning

    As smart devices permeate our lives, Google sends up a red flag and shows how the underlying systems can be attacked.

    SAN FRANCISCO – The same machine-learning algorithms that made self-driving cars and voice assistants possible can be hacked to turn a cat into guacamole or Bach symphonies into audio-based attacks against a smartphone.

    These are examples of “adversarial attacks” against machine learning systems whereby someone can subtly alter an image or sound to trick a computer into misclassifying it. The implications are huge in a world growing more saturated with so-called machine intelligence.

    Here at the RSA Conference, Google researcher Nicholas Carlini gave attendees an overview of the possible attack vectors that could not only flummox machine-learning systems, but also extract sensitive information from large data sets inadvertently.
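
    The mechanics of such an adversarial attack can be shown on a toy model. Against a simple linear scorer, nudging each input in the direction that most reduces the score (the sign of the gradient, which here is just the weight vector) flips the classification; real attacks do the same against a deep network’s gradient. Everything below is a made-up illustration, not Carlini’s code:

```python
# FGSM-style adversarial perturbation against a toy linear "classifier".
# score(x) = w . x + b; positive score => class "cat".
w = [0.5, -0.25, 0.75, -0.5]
b = 0.0
x = [1.0, 1.0, 1.0, 1.0]        # original input, classified "cat"

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v)) + b

eps = 0.8                        # max per-feature perturbation
sign = lambda t: 1.0 if t > 0 else -1.0
# Step each feature against the gradient of the score (for a linear
# model the gradient is w itself).
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(score(x), score(x_adv))    # sign flips: the class changes
```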

  18. Tomi Engdahl says:

    OpenAI shifts from nonprofit to ‘capped-profit’ to attract capital

    OpenAI may not be quite so open going forward. The former nonprofit announced today that it is restructuring as a “capped-profit” company that caps returns from investments past a certain point. But some worry that this move — or rather the way they made it — may leave the innovative company no different from the other AI startups out there.

  19. Tomi Engdahl says:

    Shelby Brown / CNET:
    YouTube Stories and ARCore now offer an Augmented Faces API which lets users add Snapchat-like AR effects to short video clips

    YouTube Stories adds Snapchat-like AR effects

    YouTube Stories has fun, realistic filters ready for download for short video clips.

    From dog ears to fairy crowns to monstrous fangs, filters and AR effects are old hat for social-media selfies. Now, YouTube is jumping on the bandwagon.

    In this video, the glasses are virtual.

    In its latest release, YouTube Stories and ARCore now offer the Augmented Faces API so you can add glasses, masks, hats and other items to short video clips. Google, which owns YouTube, said in a blog post Friday that its use of 3D mesh has resulted in more realistic filters.

    Real-Time AR Self-Expression with Machine Learning

    One of the key challenges in making these AR features possible is proper anchoring of the virtual content to the real world; a process that requires a unique set of perceptive technologies able to track the highly dynamic surface geometry across every smile, frown or smirk.

    To make all this possible, we employ machine learning (ML) to infer approximate 3D surface geometry to enable visual effects, requiring only a single camera input without the need for a dedicated depth sensor. This approach enables AR effects at real-time speeds, using TensorFlow Lite for mobile CPU inference or its new mobile GPU functionality where available.

  20. Tomi Engdahl says:

    Designing An AI SoC

    Balancing custom design, rapidly changing algorithms and the need for economies of scale.

  21. Tomi Engdahl says:

    AI: Where’s The Money?
    What the market for AI hardware might look like in 2025.

    A one-time technology outcast, Artificial Intelligence (AI) has come a long way.

    Start with a striking McKinsey estimate that growth in the semiconductor market from 2017 to 2025 will be dominated by AI semiconductors at 5X higher CAGR than all other semiconductor types combined. Whatever you may think of the role of AI in our future, not playing in this segment is a hard sell. There’s a Tractica survey which breaks this growth down further by implementation platforms: CPU versus GPU, FPGA and ASIC. In 2019, CPU-based platforms start at about $3B, growing to around $12B in 2025. GPU-based systems start near $6B in 2019 and grow to around $20B in 2025. The FPGA contribution is pretty small, maybe around $1B in 2025. But the ASIC segment grows from ~$2B in 2019 to around $30B in 2025. ASIC implementations of AI will overtake even GPU-based AI in dollar volume by around 2022.
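
    As a sanity check on those figures, the growth rates they imply can be computed directly from the compound-annual-growth formula, CAGR = (end/start)^(1/years) − 1:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

# Tractica figures quoted above, 2019 -> 2025 (6 years), in $B:
segments = {"CPU": (3, 12), "GPU": (6, 20), "ASIC": (2, 30)}
for name, (start, end) in segments.items():
    print(f"{name}: {cagr(start, end, 6):.0%} CAGR")
```

    The ASIC segment’s jump from ~$2B to ~$30B works out to roughly 57% a year, against roughly 26% for CPU and 22% for GPU platforms, which is why ASICs overtake GPUs in dollar volume mid-forecast.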

    The implementation breakdown shouldn’t be too surprising. CPU-based platforms will work well for low-cost, low-performance applications – a smart microwave – where system designers don’t want to deal with non-standard processing. GPUs made the AI revolution real and will continue to be important in relatively high-performance datacenter training, where power and cost are not a concern, as well as in prototypes for emerging applications like robotics and augmented-reality headsets. But for anyone looking for high performance and low cost at volume in battery-powered devices, or for the ultimate in differentiated performance and capability in mega-datacenters where cost is not a concern, ASIC is (and always has been) the best solution.

    Now let’s look at chip architecture. On the edge, we see each application tuned to just a few use-cases, often with tight latency requirements, and an SoC architecture tightly optimized to execute these use cases.

    The implementation needs in the datacenter are quite different, and also differ somewhat between training and inference. Datacenter service providers want high throughput through multiple lanes of neural-net engines and don’t want to tune applications to a specific job. They want ultra-high-performance general-purpose AI solutions, using a common set of hardware, so they are trending more and more toward spatially distributed mesh architectures using homogeneous processing elements, organized in regular topologies like grids, rings and tori.

  22. Tomi Engdahl says:

    AI Needs Memory to Get Cozier with Compute

    Big data applications have already driven the need for architectures that put memory closer to compute resources, but artificial intelligence (AI) and machine learning are further demonstrating how hardware and hardware architectures play a critical role in successful deployments. A key question, however, is where is the memory going to reside?

    Research commissioned by Micron Technology found that 89% of respondents say it is important or critical that compute and memory are architecturally close together.

  23. Tomi Engdahl says:

    Finding Defects In Chips With Machine Learning

    Better algorithms and more data could bolster adoption, particularly at advanced nodes.

  24. Tomi Engdahl says:

    Domain Expertise Becoming Essential For Analytics

    More data doesn’t mean much unless you know what to do with it.

  25. Tomi Engdahl says:

    Is Analog Signal Processing the Future of AI?

    NUREMBERG, Germany — Gene Frantz may have been the visionary for digital signal processing (DSP) back in the 1970s, but now he thinks we need to turn our attention back to analog to tackle the big challenges of artificial intelligence (AI).

    Speaking during the launch of Octavo’s OSD32MP1 — the company’s first SiP based on the newly announced STMicroelectronics STM32MP1 microprocessor — Frantz told EE Times that he believes SiP and analog processing will be the future. He said AI needs a better solution and suggested that we should consider going back to analog signal processing.

    “When most people listen to the words ‘analog signal processing’ they probably think analog computing, but that’s not really what I am saying,” Frantz said. “If I can take the whole idea of signal processing and do an analog arithmetic logic unit (ALU) or mixed signal ALU, I can increase the performance by orders of magnitude, and at the same time reduce the power dissipation by orders of magnitude. And the only problem with that is that I have an issue with dynamic range, with accuracy and with linearity. Those are major issues. But the question is, if I can give you three or four orders of magnitude of higher performance, and three or four orders of magnitude lower power dissipation at the same time, do you think those three problems can be solved?”

  26. Tomi Engdahl says:

    Using Analog For AI

    Can mixed-signal architectures boost artificial intelligence performance using less power?

    If the only tool you have is a hammer, everything looks like a nail. But development of artificial intelligence (AI) applications and the compute platforms for them may be overlooking an alternative technology—analog.

    The semiconductor industry has a firm understanding of digital electronics and has been very successful making it scale. It is predictable, has good yield, and while every development team would like to see improvements, the tooling and automation available for it makes large problems tractable. But scaling is coming to an end and we know that applications still hunger for more compute capabilities. At the same time, the power being consumed by machine learning cannot be allowed to grow in the way that it has been.

    The industry has largely abandoned analog circuitry, except for interfacing with the real world and for communications. Analog is seen as being difficult, prone to external interference, and time-consuming to design and verify. Moreover, it does not scale without digital assistance and does not see many of the same advantages as digital when it comes to newer technologies.

    And yet, analog may hold the key for the future progression of some aspects of AI.

  27. Tomi Engdahl says:

    Memory Tradeoffs Intensify in AI, Automotive Applications

    Why choosing memories and architecting them into systems is becoming much more difficult.

  28. Tomi Engdahl says:

    Machine learning can boost the value of wind energy

    Carbon-free technologies like renewable energy help combat climate change, but many of them have not reached their full potential. Consider wind power: over the past decade, wind farms have become an important source of carbon-free electricity as the cost of turbines has plummeted and adoption has surged. However, the variable nature of wind itself makes it an unpredictable energy source—less useful than one that can reliably deliver power at a set time.
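
    The work referenced here used neural networks to predict wind output ahead of time so it can be bid into the grid. The underlying idea, fitting a model that maps past observations to future output, can be sketched with even the simplest estimator (synthetic numbers, purely illustrative):

```python
# Least-squares fit predicting next-hour wind power from current-hour power:
# the simplest possible stand-in for a wind-forecasting model.
history = [2.0, 2.5, 3.1, 2.9, 3.5, 4.0, 3.8, 4.2, 4.6, 5.0]  # MW, synthetic
xs, ys = history[:-1], history[1:]          # (current, next) training pairs

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

forecast = slope * history[-1] + intercept  # predicted power, next hour
print(forecast)
```

    A real forecaster uses many more inputs (weather forecasts, turbine telemetry) and a far richer model, but the value proposition is the same: a predictable delivery schedule is worth more than raw, unscheduled output.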

  29. Tomi Engdahl says:

    AI as a Competitive Advantage in Telecom (Case study)

    Lowest hanging fruit — AI for operational excellence

    A few areas that could be well suited for NLP-based automation are: Customer Support, Customer Satisfaction and Sentiment Analysis, Chatbots, Invoicing Automation, HR Automation, etc.

    Case Example: Telenor

    Step further — AI for organisational transformation
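
    To make the sentiment-analysis use case above concrete: production systems use trained NLP models, but even a deliberately tiny keyword scorer shows the input/output shape of the task (word lists and examples below are made up):

```python
# A trivially simplified stand-in for ticket sentiment analysis.
POSITIVE = {"great", "fast", "helpful", "resolved", "thanks"}
NEGATIVE = {"slow", "broken", "outage", "unresolved", "angry"}

def sentiment(ticket_text):
    """Classify a support ticket by counting positive vs negative keywords."""
    words = ticket_text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Support was fast and issue resolved"))
print(sentiment("network outage still unresolved"))
```

    A telecom operator would route the "negative" bucket to human agents first; the classifier’s job is triage, not the final word.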

  30. Tomi Engdahl says:

    How Artificial Intelligence Is Changing Science

    The latest AI algorithms are probing the evolution of galaxies, calculating quantum wave functions, discovering new chemical compounds and more. Is there anything that scientists do that can’t be automated?

  31. Tomi Engdahl says:

    Hal Hodson / 1843:
    Profile of DeepMind, which reached a pre-acquisition arrangement that would prevent Google from unilaterally taking control of its IP, according to a source — Demis Hassabis founded a company to build the world’s most powerful AI. Then Google bought him out. Hal Hodson asks who is in charge

    DeepMind and Google: the battle to control artificial intelligence

    Hassabis proposed a middle ground: AGI should take inspiration from the broad methods by which the brain processes information – not the physical systems or the particular rules it applies in specific situations. In other words it should focus on understanding the brain’s software, not its hardware. New techniques like functional magnetic resonance imaging (fMRI), which made it possible to peer inside the brain while it engaged in activities, had started to make this kind of understanding feasible. The latest studies, he told the audience, showed that the brain learns by replaying experiences during sleep, in order to derive general principles. AI researchers should emulate this kind of system.

    A logo appeared in the lower-right corner of his opening slide, a circular swirl of blue. Two words, closed up, were printed underneath it: DeepMind.

    DeepMind ended up raising £2m; Thiel contributed £1.4m. When Google bought the company in January 2014 for $600m, Thiel and other early investors earned a 5,000% return on their investment.

    For many founders, this would be a happy ending. They could slow down, take a step back and spend more time with their money. For Hassabis, the acquisition by Google was just another step in his pursuit of AGI. He had spent much of 2013 negotiating the terms of the deal. DeepMind would operate as a separate entity from its new parent. It would gain the benefits of being owned by Google, such as access to cash flow and computing power, without losing control.

    Hassabis thought DeepMind would be a hybrid: it would have the drive of a startup, the brains of the greatest universities, and the deep pockets of one of the world’s most valuable companies. Every element was in place to hasten the arrival of AGI and solve the causes of human misery.

    DeepMind’s work culminated in 2016 when a team built an AI program that used reinforcement learning alongside other techniques to play Go. The program, called AlphaGo, caused astonishment when it beat the world champion in a five-game match in Seoul in 2016.

    Like Deep Blue in 1997, AlphaGo changed perceptions of human accomplishment. The human champions, some of the most brilliant minds on the planet, no longer stood at the pinnacle of intelligence. Nearly 20 years after he had confided his ambition to Fujiwara, Hassabis fulfilled it.

    Deep Blue won through the brute strength and speed of computation, but AlphaGo’s style appeared artistic, almost human.

    Hassabis has always said that DeepMind would change the world for the better. But there are no certainties about AGI. If it ever comes into being, we don’t know whether it will be altruistic or vicious, or if it will submit to human control. Even if it does, who should take the reins?

    DeepMind is particularly proud of the algorithms it developed that calculate the most efficient means to cool Google’s data centres, which contain an estimated 2.5m computer servers. DeepMind said in 2016 that they had reduced Google’s energy bill by 40%. But some insiders say such boasts are overblown. Google had been using algorithms to optimise its data centres long before DeepMind existed.

    DeepMind’s careful unveiling of AI advances forms part of its strategy of managing up, signalling its reputational worth to the powers that be. That’s especially valuable at a time when Google stands accused of invading users’ privacy and spreading fake news.

    Five years after the acquisition by Google, the question of who controls DeepMind is coming to a crunch point. The firm’s founders and early employees are approaching earn-out, when they can leave with the financial compensation that they received from the acquisition (Hassabis’s stock was probably worth around £100m). But a source close to the company suggests that Alphabet has pushed back the founders’ earn-outs by two years. Given his relentless focus, Hassabis is unlikely to jump ship.

  32. Tomi Engdahl says:

    This BeagleBone’s Got AI

    Now, there’s a new BeagleBone, and this time the color is AI. The BeagleBoard foundation has just unveiled the BeagleBone AI, and it is going to be the most powerful BeagleBone ever developed.

    BeagleBone AI
    The Fast Track for Embedded Machine Learning

  33. Tomi Engdahl says:

    Google Launches AI Platform That Looks Remarkably Like A Raspberry Pi

    Google has promised us new hardware products for machine learning at the edge, and now it’s finally out. The thing you’re going to take away from this is that Google built a Raspberry Pi with machine learning. This is Google’s Coral, with an Edge TPU platform, a custom-made ASIC that is designed to run machine learning algorithms ‘at the edge’. Here is the link to the board that looks like a Raspberry Pi.

  34. Tomi Engdahl says:

    Dan Falk / Quanta Magazine:
    A look at how AI is being used in science, from probing the evolution of galaxies and calculating quantum wave functions to discovering new chemical compounds

    How Artificial Intelligence Is Changing Science

  35. Tomi Engdahl says:

    Elizabeth Dwoskin / Washington Post:
    Stanford is launching the Institute for Human-Centered Artificial Intelligence to put humans and ethics at the center of AI and aims to raise $1B+

  36. Tomi Engdahl says:

    NIST’s benchmark test for facial recognition systems uses images of immigrants, US visa applicants, abused children, and dead people, without consent

    The Government Is Using the Most Vulnerable People to Test Facial Recognition Software

    Our research shows that any one of us might end up helping the facial recognition industry, perhaps during moments of extraordinary vulnerability.

  37. Tomi Engdahl says:

    Ryan Merkley / Creative Commons:
    Creative Commons says copyright can’t protect photos from being used for facial recognition, like IBM did, and that public policy should address privacy issues

    Use and Fair Use: Statement on shared images in facial recognition AI

    Yesterday, NBC News published a story about IBM’s work on improving diversity in facial recognition technology and the dataset that they gathered to further this work. The dataset includes links to one million photos from Flickr, many or all of which were apparently shared under a Creative Commons license. Some Flickr users were dismayed to learn that IBM had used their photos to train the AI, and had questions about the ethics, privacy implications, and fair use of such a dataset being used for algorithmic training. We are reaching out to IBM to understand their use of the images, and to share the concerns of our community.

    CC is dedicated to facilitating greater openness for the common good. In general, we believe that the use of publicly available data on the Internet has led to greater innovation, collaboration, and creativity. But there are also real concerns that data can be used for negative activities or negative outcomes.

  38. Tomi Engdahl says:

    Using Analog For AI

    Can mixed-signal architectures boost artificial intelligence performance using less power?

  39. Tomi Engdahl says:

    Artificial Intelligence (AI): Worldwide Opportunities & Projections – The Market is Set to Record a CAGR of 50% During 2018-2024

    Report Findings

    Market drivers:
    - Growth in adoption of cloud-based applications and services
    - Rising demand for analyzing and interpreting vast amounts of data
    - Growing demand for intelligent virtual assistants
    - Growth in investment in AI technologies

    Market restraints:
    - Lack of personnel with technical expertise

    Market opportunities:
    - Rising adoption of AI in developing regions
    - Development of smart robots

  40. Tomi Engdahl says:

    Dan Falk / Quanta Magazine:
    A look at how AI is being used in science, from probing the evolution of galaxies and calculating quantum wave functions to discovering new chemical compounds

    How Artificial Intelligence Is Changing Science

    The latest AI algorithms are probing the evolution of galaxies, calculating quantum wave functions, discovering new chemical compounds and more. Is there anything that scientists do that can’t be automated?


Leave a Comment

Your email address will not be published. Required fields are marked *