3 AI misconceptions IT leaders must dispel


 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”


  1. Tomi Engdahl says:

    New Facial Recognition Tech Only Needs Your Eyes and Eyebrows
    You won’t be able to hide behind a mask

    The term “facial recognition” typically refers to technology that can identify your entire face. How that recognition happens can vary, and can include infrared or lidar technology. Either way, you need the geometry of a person’s entire face to make it work.
    But in the coronavirus era, when everyone is advised to wear a mask, exposed faces are increasingly rare. That’s breaking facial recognition systems everywhere, from iPhones to public surveillance apparatuses.

    Now, facial recognition company Rank One says it has a solution. This week, the company released a new form of facial recognition called periocular recognition, which can supposedly identify individuals by just their eyes and eyebrows. Rank One says the new system uses an entirely different algorithm from its standard facial recognition system and is specifically meant for masked individuals. Rank One says it will ship the technology to all of its active customers for free.

  2. Tomi Engdahl says:

    Microsoft and Intel project converts malware into images before analyzing it

    Microsoft and Intel have recently collaborated on a new research project that explored a new approach to detecting and classifying malware. Called STAMINA (STAtic Malware-as-Image Network Analysis), the project relies on a new technique that converts malware samples into grayscale images and then scans the image for textural and structural patterns specific to malware samples.


    Microsoft and Intel Labs work on STAMINA, a new deep learning approach for detecting and classifying malware.
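STAMINA’s core preprocessing step, turning a raw binary into a grayscale image, can be sketched in a few lines. This is an illustrative sketch only: the square-root width heuristic and the function name are invented here, not taken from the project’s actual code.

```python
import math

def binary_to_grayscale(data: bytes) -> list[list[int]]:
    """Reshape raw malware bytes into a 2D grayscale 'image'.

    Each byte (0-255) becomes one pixel. The width heuristic here is
    illustrative; the real project chooses width based on file size.
    """
    width = max(1, int(math.sqrt(len(data))))
    rows = [list(data[i:i + width]) for i in range(0, len(data), width)]
    # Pad the last row with zeros so every row has equal width
    if rows and len(rows[-1]) < width:
        rows[-1].extend([0] * (width - len(rows[-1])))
    return rows

sample = bytes(range(16))          # stand-in for a malware sample
img = binary_to_grayscale(sample)  # a 4x4 grayscale "image"
```

The resulting 2D array can then be fed to an ordinary image classifier, which is the idea the STAMINA paper builds on.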

  3. Tomi Engdahl says:

    Thunderbolt Flaws Expose Millions of PCs to Hands-On Hacking

    Security paranoiacs have warned for years that any laptop left alone with a hacker for more than a few minutes should be considered compromised. Now one Dutch researcher has demonstrated how that sort of physical-access hacking can be pulled off in an ultra-common component: the Intel Thunderbolt port found in millions of PCs.

  4. Tomi Engdahl says:

    Tenstorrent Is Changing the Way We Think About AI Chips

    GPUs and CPUs are reaching their limits as far as AI is concerned. That’s why Tenstorrent is creating something different.

    GPUs and CPUs are not going to be enough to ensure a stable future for artificial intelligence. “GPUs are essentially at the end of their evolutionary curve,” Ljubisa Bajic, CEO of AI chip startup Tenstorrent, told Design News. “[GPUs] have done a great job; they’ve pushed the field to the point where it is now. But in order to make any kind of order of magnitude type jumps, GPUs are going to have to go.”

    Spiking neural networks (SNNs) more closely mimic the functions of biological neurons, which send information via spikes in electrical activity. “Here people try to simulate natural neurons almost directly by writing out the differential equations that describe their operation and then implementing them as close as we can in hardware,” Bajic explained. “So to an engineer this comes down to basically having many scalar processor cores connected to the scalar network.”

    This is very inefficient from a hardware standpoint. But Bajic said SNNs share an efficiency with biological neurons: only a certain percentage of neurons are activated at any time, depending on what the neural net is doing, which is highly desirable in terms of power consumption in particular.
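“Writing out the differential equations that describe their operation” usually means something like a leaky integrate-and-fire model. Here is a minimal Euler-integration sketch of one such neuron; the parameter values are illustrative, not any vendor’s implementation.

```python
def simulate_lif(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward rest while integrating input current; when it crosses the
    threshold, the neuron emits a spike and the potential resets."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v += dt * (-v / tau + current)  # Euler step of dv/dt = -v/tau + I
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

# A constant input current of 0.2 drives periodic spiking
spike_times = simulate_lif([0.2] * 50)
```

Note the sparsity Bajic describes: with zero input the neuron does nothing at all, so no energy is spent on inactive parts of the network.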

  5. Tomi Engdahl says:


    Sony Builds AI Into a CMOS Image Sensor

    Sony today announced that it has developed and is distributing smart image sensors. These devices use machine learning to process captured images on the sensor itself. They can then select only relevant images, or parts of images, to send on to cloud-based systems or local hubs.

    This technology, says Mark Hanson, vice president of technology and business innovation for Sony Corp. of America, means practically zero latency between the image capture and its processing; low power consumption enabling IoT devices to run for months on a single battery; enhanced privacy; and far lower costs than smart cameras that use traditional image sensors and separate processors.

    Sony builds these chips by thinning and then bonding two wafers—one containing chips with light-sensing pixels and one containing signal processing circuitry and memory. This type of design is only possible because Sony is using a back-illuminated image sensor.

    “We originally went to backside illumination so we could get more pixels on our device,” says Hanson. “That was the catalyst to enable us to add circuitry; then the question was what were the applications you could get by doing that.”

    Sony’s smart image processor can identify and track objects, only sending data on to the cloud when it spots an anomaly.

  6. Tomi Engdahl says:


    Nvidia originally planned to hold its GPU Technology Conference in March. Instead the event was held virtually, and Nvidia CEO Jensen Huang’s keynote was recorded in his own kitchen. In the talk he unveiled the most powerful AI processor in the world to date.

    Until now, Nvidia’s Volta chips have been the yardstick against which AI processors are measured. The company’s new A100 chip made Volta look slow overnight. Built from 54 billion transistors on a 7-nanometer process, the chip is up to 20 times more powerful than Volta, Huang enthused.

    The A100 is the first processor based on Nvidia’s new Ampere architecture. According to Nvidia, eighteen service providers have already committed to using A100-based systems, among them Alibaba Cloud, Amazon Web Services, Baidu Cloud, Cisco, Dell Technologies, Google Cloud, Hewlett Packard Enterprise, Microsoft Azure and Oracle.

  7. Tomi Engdahl says:

    Sony’s new processor brings AI directly into cameras

    The user can choose what data the IMX500 chips deliver to the device: raw pixel data (i.e., normal image information), metadata, an ISP-processed image, or a chosen region of interest. According to Sony, the chips intelligently recognize objects in video in 1.3 milliseconds, which in practice means real-time tracking of objects in the image.

    Users can write AI models of their choice into the chip’s embedded memory and modify or update the models to match the requirements of their application or the conditions where the system is deployed. The same models can also be used across applications, from object recognition to behavior recognition.

  8. Tomi Engdahl says:

    Artificial intelligence is struggling to cope with how the world has changed
    Narrow artificial intelligence is finding it hard to make good predictions in an environment filled with change. That means it’s time to look at better models for AI.


  9. Tomi Engdahl says:


    CSAIL Engineers Build a System for Tracking Home Appliances — Without Manual Intervention

    By monitoring people’s positions and the overall power usage of a property, a new deep learning system can infer appliance locations.

  10. Tomi Engdahl says:

    Which should the robot save: the innocent fisherman or the drunken boater? Answer and see what others think

    Welcome to the future. Robots rescue and care for you, and AI drives your car. What kinds of decisions do you think AIs should make? What kind of ethical code should be programmed into robots? Read four stories about robots in moral dilemmas and answer how the robot should act.

  11. Tomi Engdahl says:

    Nvidia’s bleeding-edge Ampere GPU architecture revealed: 5 things PC gamers need to know
    Nvidia’s next-gen GPU architecture is finally here.

  12. Tomi Engdahl says:

    Sony Launches “World’s First” Vision Sensor with On-Board Edge AI Processing Capabilities
    Designed for edge AI, Sony’s new IMX500 and IMX501 can run computer vision tasks entirely locally — and quickly, too.

  13. Tomi Engdahl says:

    Hands-On with the NVIDIA Jetson Xavier NX Developer Kit
    Finally available in a bundle with baseboard, does NVIDIA’s Volta-based edge AI acceleration machine deliver on its promises?

    Unveiled late last year, the Jetson Xavier NX is the latest entry in NVIDIA’s deep learning-accelerating Jetson family. Described by the company as “the world’s smallest supercomputer” and directly targeting edge AI implementations, the Developer Kit edition which bundles the core system-on-module (SOM) board with an expansion baseboard was originally due to launch in March this year — but a last-minute delay saw the device slip to May, launching today at $399. Does it deliver on its heady promise?

  14. Tomi Engdahl says:

    Campbell Kwan / ZDNet:
    Sony and Microsoft announce partnership to embed Microsoft Azure AI capabilities onto Sony’s new image sensor with built-in AI that was announced last week

    Microsoft and Sony to create smart camera solutions for AI-enabled image sensor

    Sony’s image sensor will have Microsoft Azure artificial intelligence capabilities.

    Sony and Microsoft have joined together to create artificial intelligence (AI)-powered smart camera solutions to make it easier for enterprise customers to perform video analytics, the companies announced.

    The companies will embed Microsoft Azure AI capabilities onto Sony’s AI-enabled image sensor IMX500. Announced last week, the IMX500 is the world’s first image sensor to contain a pixel chip and logic chip. The logic chip, called Sony’s digital signal processor, is dedicated to AI signal processing, along with memory for the AI model.

    “Video analytics and smart cameras can drive better business insights and outcomes across a wide range of scenarios for businesses,” said Takeshi Numoto, corporate vice president and commercial chief marketing officer at Microsoft.

    Sony and Microsoft also announced that they will create a smart camera managed app powered by Azure Internet of Things (IoT) and cognitive services that it hopes to use alongside the IMX500 sensor to provide new video analytics use cases for enterprise customers.

    According to Sony, the app will allow independent software vendors (ISVs) and smart camera original equipment manufacturers (OEMs) to develop AI models,

  15. Tomi Engdahl says:


    Nvidia made a slew of announcements alongside the company’s virtual GTC keynote. Chief among them was its new GPU architecture, Ampere. The new architecture aims to unify AI training and inference and boost performance by up to 20x over its predecessors. It adds automatic mixed precision and support for both Tensor Float 32 (TF32) and Floating Point 64 (FP64). The first Ampere GPU is the A100, a universal workload accelerator built for AI, data analytics, scientific computing and cloud graphics. It can be partitioned into as many as seven independent instances for inferencing tasks or combined with other A100s to act as a single GPU.

    Nvidia’s EGX Edge AI platform was expanded with the addition of the EGX A100 for larger commercial off-the-shelf servers (which incorporates newly acquired Mellanox technology and is based on the Ampere architecture) and the credit card-sized EGX Jetson Xavier NX for micro servers and edge AI. And with health on the forefront of many minds, the company’s Clara healthcare platform was updated for faster genomic sequencing, new AI models, and integration of sensors for smart hospitals.

  16. Tomi Engdahl says:

    Spiking Neural Networks: Research Projects or Commercial Products?
    Opinions differ widely, but in this space that isn’t unusual.

    Spiking neural networks (SNNs) often are touted as a way to get close to the power efficiency of the brain, but there is widespread confusion about what exactly that means. In fact, there is disagreement about how the brain actually works.

    Some SNN implementations are less brain-like than others. Depending on whom you talk to, SNNs are either a long way away or close to commercialization. The varying definitions of SNNs lead to differences in how the industry is seen.

    “A few startups are doing their own SNNs,” said Ron Lowman, strategic marketing manager of IP at Synopsys. “It’s being driven by guys that have expertise in how to train, optimize, and write software for them.”

    On the other hand, Flex Logix Inference Technical Marketing Manager Vinay Mehta said that, “SNNs are out further than reinforcement learning,” referring to a machine-learning concept that’s still largely in the research phase.

    The entire notion of a “neural network” is motivated by attempts to model how the brain works. But current neural networks — like the convolutional neural networks (CNNs) that are so prevalent today – don’t follow the design of the brain. Instead, they rely on matrix multiplication for incorporating synaptic weights and gradient-descent algorithms for supervised training.

    Those working on SNNs often refer to these as “classical” networks or “artificial” neural networks (ANNs). That said, Alexandre Valentian, head of advanced technologies and system-on-chip laboratory for CEA-Leti, noted that CNNs reflect more of an approach or type of application, while SNNs reflect an implementation. “CNNs can be implemented in spikes – it’s not CNN vs. SNN.”

  17. Tomi Engdahl says:

    11 Myths About Inference Acceleration
    The inference-acceleration market has heated up dramatically, and in turn, it’s led to many misconceptions circulating amongst ill-informed vendors and customers. This article debunks 11 of the most common myths.


  18. Tomi Engdahl says:

    Facial recognition firms are scrambling to see around face masks

    Because of face coverings prompted by the coronavirus pandemic, companies are trying to ID people based on just their eyes and cheekbones.

  19. Tomi Engdahl says:

    Face Recognition with Python, in Under 25 Lines of Code
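The matching step behind short face-recognition scripts like the one in that headline typically reduces to comparing fixed-length face encodings by Euclidean distance, as the popular `face_recognition` package does with 128-dimensional vectors and a tolerance of about 0.6. The toy encodings below are invented stand-ins for illustration.

```python
import math

def is_match(known_encoding, candidate_encoding, tolerance=0.6):
    """Compare two face encodings (fixed-length feature vectors) by
    Euclidean distance; two images of the same face produce nearby
    vectors, so a distance under the tolerance counts as a match."""
    return math.dist(known_encoding, candidate_encoding) <= tolerance

# Toy 4-D encodings standing in for real 128-D face embeddings
alice = [0.1, 0.9, 0.3, 0.5]
alice_again = [0.12, 0.88, 0.31, 0.52]
bob = [0.9, 0.1, 0.7, 0.2]
```

Producing the encodings themselves requires a trained model (e.g., a dlib-based one); only the comparison logic is shown here.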

  20. Tomi Engdahl says:

    AI/DC: I made a bot write an AC/DC song

    Using lyrics.rip to scrape the Genius Lyrics Database, I made a Markov chain write AC/DC lyrics. This is the end result: “Great Balls”.
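The Markov-chain trick behind tools like lyrics.rip is simple: map each word in a corpus to the words that follow it, then take a random walk through the map. A minimal sketch, with an invented one-line corpus standing in for scraped lyrics:

```python
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain: dict, seed: str, length: int = 8) -> str:
    """Random-walk the chain from a seed word."""
    out = [seed]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(random.choice(followers))
    return " ".join(out)

lyrics = "back in black I hit the sack I been too long I am glad to be back"
chain = build_chain(lyrics)
line = generate(chain, "back")
```

With a real scraped corpus the chain has many more branches, which is where the accidental-poetry effect comes from.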

  21. Tomi Engdahl says:

    Help wanted: Autonomous robot guide
    As Postmates ramps up its autonomous delivery, an at-home tech job emerges

  22. Tomi Engdahl says:

    The MaeGo Autonomous Robot Lets Kids Learn Coding While Playing

    MaeGo is an autonomous robot rover built for a target shooting game, but it also gives kids an opportunity to learn coding.

  23. Tomi Engdahl says:

    Speech Synthesis Enters The Uncanny Valley, Or ‘What Will Biggie Rap Next?’

    Speech synthesis, the use of computers to generate realistic human speech, is rapidly entering the ‘uncanny valley’ – creepily almost-realistic.

    Recent approaches have used neural networks, trained using only speech examples and text transcripts, to generate human-like text-to-speech synthesis.

    The voice was computer-generated, using a text-to-speech model trained on the speech patterns of The Notorious B.I.G. In a nutshell, the approach uses an AI to ‘learn’ how an audio file of an individual’s speech compares to a text transcript. Once trained, the model can synthesize speech from text that conforms to the ‘learned’ speech patterns.

    The Vocal Synthesis channel on YouTube features a wide range of examples that demonstrate what’s currently possible.


  24. Tomi Engdahl says:

    Luxonis Unveils Ultra-Compact DepthAI-Compatible 4k60 Computer Vision Module, MegaAI

    Supporting 4k30 encode and 4k60 streaming from an on-board camera module, plus 4 TOPS of compute, the megaAI is small but mighty.

  25. Tomi Engdahl says:

    Automated Pinball Machine Scores Big with Computer Vision

    This scratch-built pinball machine doesn’t just play ball; it plays itself.

    A pinball machine! The entire machine is made from scratch using CNC-routed plywood, solenoid-powered actuators, some hobbyist electronics, and a Linux computer.

    An Arduino Mega lies at the heart of this build. However, most off-the-shelf pinball components use solenoids. These run on a 48 V supply and require more current than the Mega is able to deliver. The team went with some IRF44V MOSFETs to safely drive the required power to various flippers and bumpers, along with some protection circuitry to boot.

    As for the automation, a webcam mounted above the playfield keeps an eye on the ball’s position with the help of a computer running an OpenCV script. This looks for a ball entering the “Flip Zone” and sends a command to the Mega to trigger the flippers when it’s time to strike.

    Interestingly, the OpenCV script they wrote does not detect the ball with circle detection; instead it uses a reference photo. A picture is taken of the playfield with no ball present and the flippers down, and all subsequent frames are compared against this baseline. Any difference between the two images is marked as a potential ball.
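The reference-photo approach described above is plain frame differencing: subtract the baseline image from the current frame and flag pixels that changed by more than a threshold. A pure-Python sketch of the idea follows; with OpenCV this is typically `cv2.absdiff` plus a threshold, and the tiny arrays and threshold value here are toy stand-ins.

```python
def find_changes(baseline, frame, threshold=30):
    """Return (row, col) pixels whose grayscale value differs from the
    baseline by more than the threshold: candidate ball positions."""
    return [
        (r, c)
        for r, row in enumerate(frame)
        for c, value in enumerate(row)
        if abs(value - baseline[r][c]) > threshold
    ]

# 4x4 grayscale playfield: the baseline is empty, the frame has a "ball"
baseline = [[10] * 4 for _ in range(4)]
frame = [row[:] for row in baseline]
frame[2][1] = 200  # the ball shows up as a bright pixel

ball_pixels = find_changes(baseline, frame)
```

Once the changed pixels fall inside the “Flip Zone,” the script commands the Mega to fire the flippers.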


  26. Tomi Engdahl says:

    Researchers Create Perovskite “Tree” Memories for More Natural, Energy-Efficient AI Hardware

    Designed to replace software-based AI with specific hardware, the new material is claimed to be considerably more efficient.

  27. Tomi Engdahl says:

    Influencers Say Instagram Is Biased Against Plus-Size Bodies, And They May Be Right

    Plus-size influencers have long complained about their posts being flagged on social media, and there are a few reasons why it might be happening.

    There have been numerous reports of people like Fatale having their pictures and videos flagged and removed from social media.

    Even very famous women aren’t spared.

    While there’s no hard data showing images of plus-size people are flagged more often, there have been so many anecdotes of it that influencers can’t help but see a pattern.

    According to experts who spoke with BuzzFeed News, it’s very possible they are right. Content moderation on social media apps is usually a mix of artificial intelligence and human moderators, and both methods have a potential bias against larger bodies.

    “Technology and discrimination goes way back,” he told BuzzFeed News. “Anytime you design a new project or a new prototype you have to think about how it is going to break.”

    Companies like Facebook build their own proprietary image and video moderation AI. They build it by feeding it millions of images so it can identify patterns and learn what is acceptable and what is not. It learns, for example, to identify pornography, or a nipple, or a bikini. As it scans images uploaded by users, it decides how likely that image is to contain banned content. If it’s very sure, it can automatically flag the content. If it’s only sort of sure, it can forward that content along to a human to double-check.
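The “very sure vs. sort of sure” routing described above amounts to a pair of confidence thresholds on the classifier’s output. A hedged sketch; the threshold values and labels are invented for illustration, not any platform’s real policy:

```python
def route_content(score: float, auto_flag_at: float = 0.95,
                  review_at: float = 0.60) -> str:
    """Route a classifier's banned-content probability:
    very confident -> auto-flag, moderately confident -> human review,
    otherwise approve."""
    if score >= auto_flag_at:
        return "auto_flag"
    if score >= review_at:
        return "human_review"
    return "approve"
```

The gray areas the article describes live in that middle band, where the model defers to humans and any bias in its training data shows up as skewed review queues.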

    The problem is there are so many gray areas, and the AI can only make its guesses based on what it’s been taught. That’s where the first potential problem arises. If the AI wasn’t fed many images of plus-size women, which is a possibility given the bias against larger bodies in media, that could be the start of a problem.

    “If you take two models, one plus-size one not plus-size, there’s a chance there are more pixels related to skin,” he said.

    Since the AI doesn’t know the context of what it’s seeing, this could lead to incorrect categorization. However, these AI systems are built by people, and people are biased.

    “This technology is not trained to remove content based on a person’s size, it is trained to look for violating elements — such as visible genitalia or text containing hate speech.”

    Because of all these gray areas, and because of the sheer scale of these moderation databases, actually fixing a potential problem like this would be expensive and time-consuming, and companies have very little motivation to do it.

    Lo said apps like Instagram or TikTok are under pressure to keep things PG, both to keep themselves available on app stores and because of laws like FOSTA-SESTA and the resources it takes to remove terrorism-related content. It’s just easier to err on the side of caution.

    “I shouldn’t be silenced and erased because you are hypersexualizing my body because it’s bigger,”

  28. Tomi Engdahl says:

    Sony will embed Microsoft Azure AI on its intelligent vision sensor, the IMX500, to extract more image data from a smart camera and to offer cloud-based AI inferencing across multiple cameras and devices. Sony is also making a smart camera app for the sensor that includes Azure IoT and Cognitive Services, so systems can connect to the cloud via Microsoft Azure when needed. Sony and Microsoft are targeting enterprises that, for instance, may want to gather inventory or shelf-stock data and use AI to turn it into actionable intelligence. Independent software vendors (ISVs) specializing in computer vision and video analytics solutions, as well as smart camera original equipment manufacturers (OEMs), are the targets, and both companies will work with partners and enterprise customers in these areas as part of Microsoft’s AI & IoT Insider Labs program, according to a press release.


  29. Tomi Engdahl says:

    AI for cybersecurity is a hot new thing—and a dangerous gamble

    Machine learning and artificial intelligence can help guard against cyberattacks, but hackers can foil security algorithms by targeting the data they train on and the warning flags they look for.

  30. Tomi Engdahl says:

    Microsoft sacks journalists to replace them with robots
    Users of the homepages of the MSN website and Edge browser will now see news stories generated by AI

  31. Tomi Engdahl says:

    Microsoft is cutting dozens of MSN news production workers and replacing them with artificial intelligence

    Microsoft won’t renew the contracts for dozens of news production contractors working at MSN and plans to use artificial intelligence to replace them, several people close to the situation confirmed on Friday.

    The roughly 50 employees — contracted through staffing agencies Aquent, IFG and MAQ Consulting — were notified Wednesday that their services would no longer be needed beyond June 30.

    “Like all companies, we evaluate our business on a regular basis,”

  32. Tomi Engdahl says:

    Dave Gershgorn / OneZero:
    Analysis of NIST-submitted vendors shows at least 45 companies now advertise real-time facial recognition services, from RealNetworks to Toshiba

    From RealPlayer to Toshiba, Tech Companies Cash in on the Facial Recognition Gold Rush
    At least 45 companies now advertise real-time facial recognition

  33. Tomi Engdahl says:

    Hardware Security For AI Accelerators
    Learn about the threats to AI/ML assets.

    Dedicated accelerator hardware for artificial intelligence and machine learning (AI/ML) algorithms are increasingly prevalent in data centers and endpoint devices. These accelerators handle valuable data and models, and face a growing threat landscape putting AI/ML assets at risk. Using fundamental cryptographic security techniques performed by a hardware root of trust can safeguard these assets from attack.

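One of the fundamental cryptographic techniques referenced here is verifying that model weights haven’t been tampered with before the accelerator loads them. A minimal stdlib sketch using HMAC-SHA256; the key handling is illustrative, since a real hardware root of trust keeps the key inside the silicon rather than in application code:

```python
import hashlib
import hmac

def sign_model(weights: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 tag over serialized model weights."""
    return hmac.new(key, weights, hashlib.sha256).hexdigest()

def verify_model(weights: bytes, key: bytes, tag: str) -> bool:
    """Constant-time check that the weights match the expected tag."""
    return hmac.compare_digest(sign_model(weights, key), tag)

key = b"device-unique-key"   # in practice held by the hardware root of trust
weights = b"\x00\x01\x02fake-model-weights"
tag = sign_model(weights, key)
```

Verification rejects any modified weights, which addresses the model-tampering branch of the threat landscape; confidentiality of the weights would additionally require encryption.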

  34. Tomi Engdahl says:

    Introducing Google Coral Edge TPU — a New Machine Learning ASIC from Google

  35. Tomi Engdahl says:

    Can the EU make AI “trustworthy”? No – but they can make it just

    Today, 4 June 2020, European Digital Rights (EDRi) submitted its answer to the European Commission’s consultation on the AI White Paper. On top of our response, in our additional paper we outline recommendations to the European Commission for a fundamental rights-based AI regulation. You can find our consultation response, recommendations paper, and answering guide for the public here

  36. Tomi Engdahl says:

    IBM will no longer offer, develop, or research facial recognition technology

    IBM’s CEO says we should reevaluate selling the technology to law enforcement

    IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge.

    “IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

  37. Tomi Engdahl says:

    Amazon Won’t Let Police Use Its Facial Recognition Technology For One Year

    After facing scrutiny for its ties to police in the wake of the George Floyd protests, Amazon said Wednesday it would ban police from using its controversial facial recognition technology for one year.

    Organizations working to end human trafficking, such as Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics, can continue using the technology, which is called Rekognition.

    In a statement, Amazon said it hopes Congress will pass legislation governing the use of facial recognition during the moratorium.

