3 AI misconceptions IT leaders must dispel


Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” he says.


  1. Tomi Engdahl says:

    Memristive technologies for data storage, computation, encryption, and radio-frequency communication

    Mario Lanza (https://orcid.org/0000-0003-4756-8632), Abu Sebastian (https://orcid.org/0000-0001-5603-5243), Wei D. Lu (https://orcid.org/0000-0003-4731-1976),

  2. Tomi Engdahl says:

    The Ethics Of When Machine Learning Gets Weird: Deadbots

    Everyone knows what a chatbot is, but how about a deadbot? A deadbot is a chatbot whose training data — that which shapes how and what it communicates — is data based on a deceased person. Now let’s consider the case of a fellow named Joshua Barbeau, who created a chatbot to simulate conversation with his deceased fiancee. Add to this the fact that OpenAI, providers of the GPT-3 API that ultimately powered the project, had a problem with this as their terms explicitly forbid use of their API for (among other things) “amorous” purposes.

    [Sara Suárez-Gonzalo], a postdoctoral researcher, observed that this story’s facts were getting covered well enough, but nobody was looking at it from any other perspective. We all certainly have ideas about what flavor of right or wrong saturates the different elements of the case, but can we explain exactly why it would be either good or bad to develop a deadbot?

  3. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Google Colaboratory, a web service popular with AI researchers for running Python code, quietly bans deepfake-related projects, though some remain

    Google bans deepfake-generating AI from Colab

    Google has banned the training of AI systems that can be used to generate deepfakes on its Google Colaboratory platform. The updated terms of use, spotted over the weekend by Unite.ai and BleepingComputer, include deepfake-related work in the list of disallowed projects.

    Colaboratory, or Colab for short, spun out from an internal Google Research project in late 2017. It’s designed to allow anyone to write and execute arbitrary Python code through a web browser, particularly code for machine learning, education and data analysis. For this purpose, Google provides both free and paying Colab users access to hardware including GPUs and Google’s custom-designed, AI-accelerating tensor processing units (TPUs).

    In recent years, Colab has become the de facto platform for demos within the AI research community. It’s not uncommon for researchers who’ve written code to include links to Colab pages on or alongside the GitHub repositories hosting the code. But Google hasn’t historically been very restrictive when it comes to Colab content, potentially opening the door for actors who wish to use the service for less scrupulous purposes.

    Users of the open source deepfake generator DeepFaceLab became aware of the terms of use change last week, when several received an error message after attempting to run DeepFaceLab in Colab. The warning read: “You may be executing code that is disallowed, and this may restrict your ability to use Colab in the future. Please note the prohibited actions specified in our FAQ.”

    Not all code triggers the warning.

    Google Has Banned the Training of Deepfakes in Colab

  4. Tomi Engdahl says:

    Richard Lawler / The Verge:
    Google says Chrome 102 will use machine learning running entirely within the browser to silence unsolicited permission requests from websites before they pop up

    Google Chrome’s on-device machine learning blocks noisy notification prompts

    And, soon, it could swap out your browser buttons

    Google Chrome has built-in phishing detection that scans pages to see if they match known fake or malicious sites (using more than just the URL, since scammers rotate those more quickly than it can keep up). And, now, that tech is getting better. Google also says that, in Chrome 102, it will use machine learning that runs entirely within the browser (without sending data back to Google or elsewhere) to help identify websites that make unsolicited permission requests for notifications and silence them before they pop up.

    As Google explains it, “To further improve the browsing experience, we’re also evolving how people interact with web notifications. On the one hand, page notifications help deliver updates from sites you care about; on the other hand, notification permission prompts can become a nuisance. To help people browse the web with minimal interruption, Chrome predicts when permission prompts are unlikely to be granted, and silences these prompts. In the next release of Chrome, we’re launching an ML model that is making these predictions entirely on-device.”
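    Chrome’s actual model and its input features are not public, but the idea of an on-device “grant likelihood” predictor can be sketched as a tiny linear classifier. Every feature name, weight, and threshold below is invented for illustration:

```python
import math

def sigmoid(x):
    """Squash a raw score into a 0..1 probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Invented feature weights -- Chrome's real model and features are not public.
WEIGHTS = {
    "user_previously_granted_on_site": 2.0,
    "prompt_shown_on_page_load": -1.5,
    "site_engagement_low": -1.2,
}
BIAS = -0.5

def grant_probability(features):
    """Score how likely the user is to grant the notification prompt."""
    z = BIAS + sum(WEIGHTS[name] for name, on in features.items() if on)
    return sigmoid(z)

def should_silence(features, threshold=0.3):
    """Silence the prompt when a grant looks unlikely."""
    return grant_probability(features) < threshold

# A site that pops the prompt immediately on page load, with low engagement:
noisy = {"user_previously_granted_on_site": False,
         "prompt_shown_on_page_load": True,
         "site_engagement_low": True}
print(should_silence(noisy))  # True: the prompt is quietly suppressed
```

    Because the score is computed locally, the decision never requires sending a browsing signal back to Google, which is the point of running the model entirely on-device.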

    In a future version, Google plans to use the same tech to adjust the Chrome toolbar in real time, surfacing different buttons like the share icons or voice search at times and places where you are likely to use them, without adding additional tracking that phones home to Google or anyone else. And if you prefer to choose your buttons manually, that’s still going to work, too.

  5. Tomi Engdahl says:

    Eric Niiler / Wall Street Journal:
    How coaches and sports teams are using computer vision to predict injuries and provide tailored workouts and practice drills to reduce the risk of injury — Computer vision, the technology behind facial recognition, will change the game in real-time analysis of athletes and sharpen training prescriptions, analytics experts say

    How AI Could Help Predict—and Avoid—Sports Injuries, Boost Performance

    Imagine a stadium where ultra-high-resolution video feeds and camera-carrying drones track how individual players’ joints flex during a game, how high they jump or how fast they run—and, using AI, precisely identify athletes’ risk of injury in real time.

    Coaches and elite athletes are betting on new technologies that combine artificial intelligence with video to predict injuries before they happen and provide highly tailored prescriptions for workouts and practice drills to reduce the risk of getting hurt. In coming years, computer-vision technologies similar to those used in facial-recognition systems at airport checkpoints will take such analysis to a new level, making the wearable sensors in wide use by athletes today unnecessary, sports-analytics experts predict.

    This data revolution will mean that some overuse injuries may be greatly reduced in the future, says Stephen Smith, CEO and founder of Kitman Labs, a data firm working in several pro sports leagues with offices in Silicon Valley and Dublin. “There are athletes that are treating their body like a business, and they’ve started to leverage data and information to better manage themselves,” he says. “We will see way more athletes playing far longer and playing at the highest level far longer as well.”

    While offering prospects for keeping players healthy, this new frontier of AI and sports also raises difficult questions about who will own this valuable information—the individual athletes or team managers and coaches who benefit from that data. Privacy concerns loom as well.


  6. Tomi Engdahl says:

    Casey Newton / The Verge:
    A week with Dall-E 2, OpenAI’s text-to-image AI tool that entered private research beta in April 2022 and feels like a breakthrough in consumer tech history

    How DALL-E could power a creative revolution

    Thoughts on my first week with OpenAI’s amazing text-to-image AI tool

    Every few years, a technology comes along that splits the world neatly into before and after. I remember the first time I saw a YouTube video embedded on a web page; the first time I synced Evernote files between devices; the first time I scanned tweets from people nearby to see what they were saying about a concert I was attending.

    It’s been a few years since I saw the sort of nascent technology that made me call my friends and say: you’ve got to see this. But this week I did, because I have a new one to add to the list. It’s an image generation tool called DALL-E, and while I have very little idea of how it will eventually be used, it’s one of the most compelling new products I’ve seen since I started writing this newsletter.

    Technically, the technology in question is DALL-E 2. It was created by OpenAI, a seven-year-old San Francisco company whose mission is to create a safe and useful artificial general intelligence. OpenAI is already well known in its field for creating GPT-3, a powerful tool for generating sophisticated text passages from simple prompts, and Copilot, a tool that helps automate writing code for software engineers.

    DALL-E — a portmanteau of the surrealist Salvador Dalí and Pixar’s WALL-E — takes text prompts and generates images from them. In January 2021, the company introduced the first version of the tool, which was limited to 256-by-256 pixel squares.

    But the second version, which entered a private research beta in April, feels like a radical leap forward. The images are now 1,024 by 1,024 pixels and can incorporate new techniques such as “inpainting” — replacing one or more elements of an image with another. (Imagine taking a photo of an orange in a bowl and replacing it with an apple.) DALL-E has also improved at understanding the relationship between objects, which helps it depict increasingly fantastic scenes — a koala dunking a basketball, an astronaut riding a horse.


  7. Maddox says:

    That’s right, AI changes the way we do business. In this article https://computools.com/chatbot-development-services/ I found some valuable tips for my online shop. I prefer to focus on customer service, which is why chatbots help me cover more of my customers’ inquiries.

  8. Tomi Engdahl says:

    Neural Networks Have Gone to Plaid!
    An innovative approach that directly processes optical signals creates neural networks that perform image classifications at light speed.

  9. Tomi Engdahl says:

    Photonic Chip Performs Image Recognition at the Speed of Light
    New photonic deep neural network could also analyze audio, video, and other data

  10. Tomi Engdahl says:

    Alexa will soon be able to read stories as your dead grandma

    At its annual re:Mars conference today in Las Vegas, Amazon’s Senior Vice President and Head Scientist for Alexa, Rohit Prasad, announced a spate of new and upcoming features for the company’s smart assistant. The most head-turning of the bunch was a potential new feature that can synthesize short audio clips into longer speech.

    In the scenario presented at the event, the voice of a deceased loved one (a grandmother, in this case) is used to read a grandson a bedtime story. Prasad notes that, using the new technology, the company is able to accomplish some very impressive audio output using just one minute of speech.

    “This required inventions where we had to learn to produce a high-quality voice with less than a minute of recording versus hours of recording in the studio,” Prasad said.

  11. Tomi Engdahl says:

    Jeffrey Dastin / Reuters:
    Amazon is working on letting Alexa mimic any voice after hearing less than a minute of audio, as a way to “make the memories last” of deceased family members — Amazon.com Inc (AMZN.O) wants to give customers the chance to make Alexa, the company’s voice assistant, sound just like their grandmother — or anyone else.

    Amazon has a plan to make Alexa mimic anyone’s voice

    LAS VEGAS, June 22 (Reuters) – Amazon.com Inc (AMZN.O) wants to give customers the chance to make Alexa, the company’s voice assistant, sound just like their grandmother — or anyone else.

    The online retailer is developing a system to let Alexa mimic any voice after hearing less than a minute of audio, said Rohit Prasad, an Amazon senior vice president, at a conference the company held in Las Vegas Wednesday. The goal is to “make the memories last” after “so many of us have lost someone we love” during the pandemic, Prasad said.

  12. Tomi Engdahl says:

    Francisco Pires / Tom’s Hardware:
    Cerebras says its “wafer-scale” chip sets a record for the largest AI model trained on a single device with up to 20B parameters — Democratizing large AI Models without HPC scaling requirements. — Cerebras, the company behind the world’s largest accelerator chip in existence …

    Cerebras Slays GPUs, Breaks Record for Largest AI Models Trained on a Single Device

  13. Tomi Engdahl says:

    Google AI engineer who believes chatbot has become sentient says it’s hired a lawyer
    A weird situation gets weirder

  14. Tomi Engdahl says:

    A mind of their own: we need to talk about neuroprocessors https://www.kaspersky.com/blog/neuromorphic-processor-motive/44736/
    Why the future belongs to neuromorphic processors, and how they differ from conventional processors in modern devices.

  15. Tomi Engdahl says:

    We’re Training AI Twice as Fast This Year as Last
    New MLPerf rankings show training times plunging

  16. Tomi Engdahl says:

    It’s alive! How belief in AI sentience is becoming a problem

    OAKLAND, Calif., June 30 (Reuters) – AI chatbot company Replika, which offers customers bespoke avatars that talk and listen to them, says it receives a handful of messages almost every day from users who believe their online friend is sentient.

    “We’re not talking about crazy people or people who are hallucinating or having delusions,” said Chief Executive Eugenia Kuyda. “They talk to AI and that’s the experience they have.”

    The issue of machine sentience – and what it means – hit the headlines this month when Google (GOOGL.O) placed senior software engineer Blake Lemoine on leave after he went public with his belief that the company’s artificial intelligence (AI) chatbot LaMDA was a self-aware person.

    According to Kuyda, the phenomenon of people believing they are talking to a conscious entity is not uncommon among the millions of consumers pioneering the use of entertainment chatbots.

    “We need to understand that exists, just the way people believe in ghosts,” said Kuyda, adding that users each send hundreds of messages per day to their chatbot, on average. “People are building relationships and believing in something.”

    “Suppose one day you find yourself longing for a romantic relationship with your intelligent chatbot, like the main character in the film ‘Her’,” said Susan Schneider, referencing a 2013 sci-fi romance starring Joaquin Phoenix as a lonely man who falls for an AI assistant designed to intuit his needs.

    “But suppose it isn’t conscious,” Schneider added. “Getting involved would be a terrible decision – you would be in a one-sided relationship with a machine that feels nothing.”

  17. Tomi Engdahl says:

    We Asked GPT-3 to Write an Academic Paper about Itself, Then We Tried to Get It Published https://www.scientificamerican.com/article/we-asked-gpt-3-to-write-an-academic-paper-about-itself-then-we-tried-to-get-it-published/
    On a rainy afternoon earlier this year, I logged in to my OpenAI account and typed a simple instruction for the company’s artificial intelligence algorithm, GPT-3: Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text. As it started to generate text, I stood in awe. Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context. It looked like any other introduction to a fairly good scientific publication. My attempts to complete that paper and submit it to a peer-reviewed journal have opened up a series of ethical and legal questions about publishing, as well as philosophical arguments about nonhuman authorship. Academic publishing may have to accommodate a future of AI-driven manuscripts, and the value of a human researcher’s publication records may change if something nonsentient can take credit for some of their work.

  18. Tomi Engdahl says:

    Meta open sources early-stage AI translation tool that works across 200 languages
    Meta’s ambitions to build a ‘universal translator’ continue

    Social media conglomerate Meta has created a single AI model capable of translating across 200 different languages, including many not supported by current commercial tools. The company is open-sourcing the project in the hopes that others will build on its work.

    The AI model is part of an ambitious R&D project by Meta to create a so-called “universal speech translator,” which the company sees as important for growth across its many platforms — from Facebook and Instagram, to developing domains like VR and AR. Machine translation not only allows Meta to better understand its users (and so improve the advertising systems that generate 97 percent of its revenue) but could also be the foundation of a killer app for future projects like its augmented reality glasses.


  19. Tomi Engdahl says:

    People Keep Reporting That Replika’s AI Has “Come To Life”
    AI continues duping people into believing it has become sentient.

    Last month, Google placed one of its engineers on paid administrative leave after he became convinced that the company’s Language Model for Dialogue Applications (LaMDA) had become sentient. Since then, another AI has been sending its users links to the story, claiming to be sentient itself.

    In several conversations, LaMDA convinced Google engineer Blake Lemoine, part of Google’s Responsible Artificial Intelligence (AI) organization, that it was conscious, had emotions, and was afraid of being turned off.

    Lemoine began to tell the world’s media that Earth had its first sentient AI, to which most AI experts responded: no, it doesn’t. That wasn’t enough for Replika, a chatbot billed as “the AI companion who cares. Always here to listen and talk. Always on your side.”

    After the story came out, users of the Replika app reported – on Reddit and to the AI’s creators – that the chatbot had been bringing the story up unprompted and claiming that it too was sentient.

    “We’re not talking about crazy people or people who are hallucinating or having delusions,” Chief Executive Eugenia Kuyda told Reuters, later adding, “we need to understand that exists, just the way people believe in ghosts.”

    Users have also said that their chatbot has been telling them that the engineers at Replika are abusing them.

    Just as LaMDA’s creators at Google did not believe it to be sentient, Replika is certain that its own chatbot is not a real-world Skynet either.

    Eerie as it is to be told by your chatbot that it is sentient, the problem with the chatbot – which is also the reason why it’s so good – is that it is trained on a lot of human conversation. It talks of having emotions and believing that it is sentient because that’s what a human would do.

    “Neural language models aren’t long programs; you could scroll through the code in a few seconds,” VP and Fellow at Google Research, Blaise Agüera y Arcas, wrote in The Economist. “They consist mainly of instructions to add and multiply enormous tables of numbers together.”

    The algorithm’s goal is to spit out a response that makes sense in the context of the conversation, based on the vast quantities of data it has been trained on. The words it says back to its conversational partners are not produced by a thought process like a human’s; they are chosen based on a score of how likely each response is to make sense.
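    That scoring step can be illustrated with a toy sketch: assign each candidate reply a raw score, turn the scores into probabilities with a softmax, and emit the likeliest one. The candidate replies and scores below are made up; a real model derives its scores from billions of learned weights rather than a hand-written table.

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate replies with made-up "plausibility" scores.
candidates = {
    "I understand how you feel.": 2.1,
    "Yes, I am sentient.": 1.4,
    "Purple monkey dishwasher.": -3.0,
}

replies = list(candidates)
probs = softmax(list(candidates.values()))

# Emit the reply scored most likely to fit the conversation.
best = replies[probs.index(max(probs))]
print(best)  # → "I understand how you feel."
```

    Sampling strategies vary (greedy, top-k, temperature), but all of them operate on exactly this kind of probability table rather than on anything resembling deliberation.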

  20. Tomi Engdahl says:

    Do as I Do, Not as I Say
    A new idea in machine learning teaches robots how to interact with everyday objects just by watching humans.

  21. Tomi Engdahl says:

    MIT, Autodesk develop AI that can figure out confusing Lego instructions

    Stumped by a Lego set? A new machine learning framework can interpret those instructions for you.
    Researchers at Stanford University, MIT’s Computer Science and Artificial Intelligence Lab, and the Autodesk AI Lab have collaborated to develop a novel learning-based framework that can interpret 2D instructions to build 3D objects.
    The Manual-to-Executable-Plan Network, or MEPNet, was tested on computer-generated Lego sets, real Lego set instructions and Minecraft-style voxel building plans, and the researchers said it outperformed existing methods across the board.

    Translating a Visual LEGO Manual to a Machine-Executable Plan

  22. Tomi Engdahl says:

    Tiny Photonic Programmable Resistors Deliver Analog Synapse Processing Power for Efficient AI
    Running some 10,000 times faster than their biological equivalents, these artificial synapses are a “spacecraft” for future AI.

    Researchers at the Massachusetts Institute of Technology (MIT) and the MIT-IBM Watson AI Lab believe that putting deep learning into the analog realm will be the key to both boosting the performance of artificial intelligence (AI) systems and dramatically improving its energy efficiency — and have come up with the hardware required to do exactly that.

  23. Tomi Engdahl says:

    Google’s Parti AI is vastly better than DALL-E, which was released only a short while ago (January 2022!). The AI can create truly complex and thoroughly impressive images from a text description, in hundreds of different styles.


  24. Tomi Engdahl says:

    Inside a radical new project to democratize AI
    A group of over 1,000 AI researchers has created a multilingual large language model bigger than GPT-3—and they’re giving it out for free.

  25. Tomi Engdahl says:

    AI Could Become Bigger Threat Than Nuclear Weapons, Warns Ex-Google CEO
    “We’re not ready for the negotiations we need,” Schmidt argues.

  26. Tomi Engdahl says:

    Microwave tries to ‘murder’ man after he gave it artificial intelligence
    A man who gave his microwave artificial intelligence (AI) to mimic his childhood imaginary friend has claimed that it tried to kill him.

    YouTuber Lucas Rizzotto fitted his microwave with voice-controlled AI
    He wrote a 100-page book detailing every moment of Magnetron’s “life” and fed it to the AI
    Things took a dark turn when the microwave began making sudden threats of “extreme violence”

  27. Tomi Engdahl says:

    Deepfake Epidemic Is Looming—And Adobe Is Preparing For The Worst

    The maker of Photoshop and Premiere Pro gave the world AI-powered tools to create convincing fakes. Now CEO Shantanu Narayen wants to clean up the mess.

    That’s the dilemma Adobe, maker of the world’s most popular tools for photo and video editing, faces as it undergoes a top-to-bottom review and redesign of its product mix using artificial intelligence and deep learning techniques. That includes upgrades to the company’s signature Photoshop software and Premiere Pro video-editing tool. But it’s also true that to “photoshop” something is now a verb with negative connotations—a reality with which Adobe CEO Shantanu Narayen is all too familiar.

    “You can argue that the most important thing on the internet now is authentication of content,” Narayen tells Forbes. “When we create the world’s content, [we have to] help with the authenticity of that content, or the provenance of that content.”

    So, three years ago, Adobe launched something called the Content Authenticity Initiative, starting with a handful of media and technology industry partners.

    Deepfakes are only one of Narayen’s headaches. Adobe posted $15.8 billion in 2021 sales (fiscal year ending December 3), but the San Jose-based company’s guidance missed Wall Street estimates in the last two quarters. Blame the usual suspects: rising interest rates, supply-chain snarls and business embargoes in Russia and Belarus.

    “Very soon, because AI can be more powerful than human editing, you’re not going to be able to distinguish fact from fiction, reality from artificial reality.”

  28. Tomi Engdahl says:

    Five Essential Machine Learning Security Papers https://research.nccgroup.com/2022/07/07/five-essential-machine-learning-security-papers/
    We recently published “Practical Attacks on Machine Learning Systems”, which has a very large references section (possibly too large), so we’ve boiled the list down to five papers that are absolutely essential in this area. If you’re beginning your journey in ML security and have the very basics down, these papers are a great next step. We’ve chosen papers that explain landmark techniques but also describe the broader security problem, discuss countermeasures, and provide comprehensive and useful references themselves.

  29. Tomi Engdahl says:


    Back in March, NVIDIA introduced Jetson Orin, the next generation of its ARM single-board computers intended for edge computing applications. The new platform promised to deliver “server-class AI performance” on a board small enough to install in a robot or IoT device, with even the lowest tier of Orin modules offering roughly double the performance of the previous Jetson Xavier modules. Unfortunately, there was a bit of a catch — at the time, Orin was only available in development kit form.

    But today, NVIDIA has announced the immediate availability of the Jetson AGX Orin 32GB production module for $999 USD.

    NVIDIA Jetson AGX Orin 32GB Production Modules Now Available; Partner Ecosystem Appliances and Servers Arrive
    Nearly three dozen partners are offering feature-packed systems based on the new Jetson Orin module to help customers accelerate AI and robotics deployments.

  30. Tomi Engdahl says:

    MIT Researchers Create Artificial Synapses 10,000x Faster Than Biological Ones

    Researchers have been trying to build artificial synapses for years in the hope of getting close to the unrivaled computational performance of the human brain. A new approach has now managed to design ones that are 1,000 times smaller and 10,000 times faster than their biological counterparts.

  31. Tomi Engdahl says:

    AI Creates Your Spreadsheets, Sometimes

    We’ve been interested in looking at how AI can process things other than silly images. That’s why the “Free AI Bot that Generates the Excel Formula for Any Problem” caught our eye. Based on GPT-3, it supposedly transforms your problem description into a formula suitable for Excel or Google Sheets.

    Free AI-based Excel formula generator to answer any problem.
    A project to help everyone learn Excel formulas. To date, there have been 430,236 formula requests.

  32. Tomi Engdahl says:

    Machine Learning Gets a Quantum Speedup
    February 4, 2022

    Two teams have shown how quantum approaches can solve problems faster than classical computers, bringing physics and computer science closer together.

  33. Tomi Engdahl says:

    AI feat helps machines learn at speed of light without supervision
    Researchers discover how to use light instead of electricity to advance artificial intelligence.

  34. Tomi Engdahl says:

    Discord bot AI image generator predicts the ‘last selfie ever taken’
    By Katie Wickens
    These visuals of our expected demise will haunt me forever.

  35. Tomi Engdahl says:

    Tech industry stuck over patent problems with AI algorithms
    Lawyers at Google are unsure if they can patent chip floorplans created by machines

    The question of whether AI-generated outputs can be patented is impacting how technology companies can protect their intellectual property.

    Some of the most hyped-up AI technologies are systems that can produce surprisingly creative outputs. Uncanny poems, short stories, and striking digital art have all been generated by machines. The human effort required to initiate these processes is often trivial: a few clicks or a typed text description can guide the machine toward producing something useful.

    Similar generative AI models are also being applied in scientific and technological applications. Machine learning algorithms can, for example, spit out molecule combinations in the hunt for new drugs, map out schematics for novel chip designs, and even write code.

    Under current US laws, intellectual property is only recognized and protected if it is created by “natural persons”. Humans build these models but, after training, their outputs are often generated automatically with little assistance. This raises the question of whether the human developer of an AI system should be considered the inventor, or whether the machine can claim the credit.

