3 AI misconceptions IT leaders must dispel


 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”


  1. Tomi Engdahl says:

    Shutterstock to integrate OpenAI’s DALL-E 2 and launch fund for contributor artists

  2. Tomi Engdahl says:

    This Novel “Mechanical Neural Network” Architected Material Learns to Respond to Its Environment

    Built using voice coils, strain gauges, and flexures, this mechanical marvel acts as a physical neural network — “learning” as it moves.

  3. Tomi Engdahl says:

    Benj Edwards / Ars Technica:
    DeviantArt announces DreamUp, a text-to-image generator based on Stable Diffusion, drawing intense criticism from its artist community — Confused artists discover their work will be used for AI training by default. — On Friday, the online art community DeviantArt announced DreamUp …

    DeviantArt upsets artists with its new AI art generator, DreamUp [Updated]
    Confused artists discover their work will be used for AI training by default.

    On Friday, the online art community DeviantArt announced DreamUp, an AI-powered text-to-image generator service powered by Stable Diffusion. Simultaneously, DeviantArt launched an initiative that ostensibly lets artists opt out of AI image training but also made everyone’s art opt in by default, which angered many members.

  4. Tomi Engdahl says:

    Matthias Bastian / The Decoder:
    Meta AI and Papers with Code unveil Galactica, an open-source LLM for generating literature reviews, wiki articles, lecture notes on scientific topics, and more — The Galactica large language model (LLM) is being trained with millions of pieces of academic content.

    Galactica is an open source language model for scientific progress

    The Galactica large language model (LLM) is being trained with millions of pieces of academic content. It is designed to help the research community better manage the “explosion of information.”

    Galactica was developed by Meta AI in collaboration with Papers with Code. The team identified information overload as a major obstacle to scientific progress. “Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and the inconsequential.”

    Galactica is designed to help sort through scientific information. It has been trained with 48 million papers, textbooks and lecture notes, millions of compounds and proteins, scientific websites, encyclopedias and more from the “NatureBook” dataset.

  5. Tomi Engdahl says:

    AI-generated X-ray images fooled medical specialists

    In the University of Jyväskylä’s AI hub Keski-Suomi project, researchers developed an AI method whose synthetic X-ray images could replace and supplement the X-ray data used in methodological studies of knee osteoarthritis diagnostics. The artificial X-ray images can fool even medical professionals.

  6. Tomi Engdahl says:

    Sharon Goldman / VentureBeat:
    Intel unveils FakeCatcher, a web-based real-time deepfake detector that analyzes the subtle “blood flow” in video pixels; the company claims a 96% accuracy rate — On Monday, Intel introduced FakeCatcher, which it says is the first real-time detector of deepfakes — that is …

    Intel unveils real-time deepfake detector, claims 96% accuracy rate

    On Monday, Intel introduced FakeCatcher, which it says is the first real-time detector of deepfakes — that is, synthetic media in which a person in an existing image or video is replaced with someone else’s likeness.

    Intel claims the product has a 96% accuracy rate and works by analyzing the subtle “blood flow” in video pixels to return results in milliseconds.

    Intel’s deepfake detector is based on PPG signals

    Unlike most deep learning-based deepfake detectors, which look at raw data to pinpoint inauthenticity, FakeCatcher is focused on clues within actual videos. It is based on photoplethysmography, or PPG, a method for measuring the amount of light that is absorbed or reflected by blood vessels in living tissue. When the heart pumps blood, blood flow through the vessels subtly changes their color, a signal that real faces exhibit and synthesized ones typically lack.
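    As a rough, self-contained illustration of the PPG idea (not Intel’s actual pipeline, which has not been published in detail), a remote-PPG proxy can be computed by averaging the green channel of a skin region over time, since hemoglobin absorbs green light strongly; the function name and array shapes here are assumptions for the sketch:

```python
import numpy as np

def ppg_signal(frames: np.ndarray) -> np.ndarray:
    """Crude remote-PPG proxy: mean green-channel intensity per frame.

    frames has shape (T, H, W, 3): a T-frame video clip of a skin region.
    Real detectors apply far more spatial and temporal processing; this
    only shows the core signal being measured.
    """
    # Channel 1 is green in RGB ordering; average over all pixels per frame.
    return frames[..., 1].mean(axis=(1, 2))
```

    A detector built on such a signal would then test whether it contains a plausible periodic pulse; synthesized faces tend not to produce one.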

  7. Tomi Engdahl says:

    Overseeing artificial intelligence: Moving your board from reticence to confidence

    A look at how A.I.-related risks are escalating, why this puts more pressure on corporate boards, and what steps directors can take right now to grow more comfortable in their A.I. oversight roles.

    Corporations have discovered the power of artificial intelligence (A.I.) to transform what’s possible in their operations. Through algorithms that learn from their own use and constantly improve, A.I. enables companies to:

    Bring greater speed and accuracy to time-consuming and error-prone tasks such as entity management
    Process large amounts of data quickly in mission-critical operations like cybersecurity
    Increase visibility and enhance decision-making in areas from ESG to risk management and beyond

    But with great promise comes great responsibility—and a growing imperative for monitoring and governance.

    “As algorithmic decision-making becomes part of many core business functions, it creates the kind of enterprise risks to which boards need to pay attention,” writes Washington, D.C.–based law firm Debevoise & Plimpton.

    Many boards have hesitated to take on a defined role in A.I. oversight, given the highly complex nature of A.I. technology and specialized expertise involved.

    Increasing A.I. adoption escalates business risks

    Across industries, A.I. also poses challenges to environmental, social, and governance (ESG) goals, a rising board priority. Despite A.I.’s ability to automate and accelerate data collection, reporting, and analysis, the technology can have negative environmental impacts. For example, training just one image-recognition algorithm to recognize a single type of image can require processing millions of images. All of this processing requires energy-intensive data centers.

    “It’s a use of energy that we don’t really think about,” Professor Virginia Dignum of Sweden’s Umeå University told the European Commission’s Horizons magazine. “We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.”

    A.I. can also have a negative impact on the “S” in ESG, with examples from the retail world demonstrating A.I.’s potential for undermining equity efforts, perpetuating bias and causing companies to overstep on customer privacy.

  8. Tomi Engdahl says:

    Meet Unstable Diffusion, the group trying to monetize AI porn generators
    Of course, it’s an ethical minefield

  9. Tomi Engdahl says:

    Why Meta’s latest large language model survived only three days online
    Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.

  10. Tomi Engdahl says:

    Art professionals are increasingly concerned that text-to-image platforms will render hundreds of thousands of well-paid creative jobs obsolete.

    AI Is Coming For Commercial Art Jobs. Can It Be Stopped?

    Earlier this summer, a piece generated by an AI text-to-image application won a prize in a state fair art competition, prying open a Pandora’s Box of issues about the encroachment of technology into the domain of human creativity and the nature of art itself. As fascinating as those questions are, the rise of AI-based image tools like Dall-E, Midjourney and Stable Diffusion, which rapidly generate detailed and beautiful images based on text descriptions supplied by the user, poses a much more practical and immediate concern:

    They could very well hold a shiny, photorealistically-rendered dagger to the throats of hundreds of thousands of commercial artists working in the entertainment, videogame, advertising and publishing industries, according to a number of professionals who have worked with the technology.

    How impactful would this be to the global creative economy that runs on spectacular imagery? Think about the 10 minutes of credits at the end of every modern Hollywood blockbuster. 95 percent of those names are people working in the creation of visual imagery like special effects, animation and production design. Same with videogames, where commercial artists hone their skills for years to score plum jobs like concept artist and character designer.

    These jobs, along with more traditional tasks like illustration, photography and design, are how most visual artists in today’s economy get paid.

    Very soon, all that work could be done by non-artists wielding powerful AI-based tools capable of generating hundreds of images in every style imaginable in a matter of minutes – tools ostensibly and even earnestly created to empower ordinary people to express their visual creativity. And these tools’ capabilities are evolving rapidly.

    This isn’t an issue for the far-off dystopian future. Dall-E (a project of the Microsoft- and Elon Musk-backed nonprofit OpenAI), Midjourney and others have been in limited deployment for months, with imagery posted all over the internet. Then in August, an open-source project, Stable Diffusion from stability.ai, publicly released its model set under a permissive creative commons license, giving anyone with a web browser or mid-grade PC the tools to create stunning, sometimes disconcerting images to their specifications, including for commercial use.

    “The progress is exponential,” said Jason Juan, a veteran art director and artist for gaming and entertainment clients including Disney and Warner Bros. “It will allow more people who have solid ideas and clear thoughts to visualize things which were difficult to achieve without years of art training or hiring highly skilled artists. The definition of art will also evolve, since rendering skills might no longer be the most essential.”

    Artists have taken notice.

    Last week, according to the AI image search database Librarie.ai, the name of digital artist Greg Rutkowski turned up hundreds of thousands of times in image prompt searches, which means that hundreds of thousands of images have been created sampling his distinctive style.

    “I’m very concerned about it,” said Rutkowski. “As a digital artist, or any artist, in this era, we’re focused on being recognized on the internet. Right now, when you type in my name, you see more work from the AI than work that I have done myself, which is terrifying for me. How long till the AI floods my results and is indistinguishable from my works?”

    Juan emphasized that human intervention is still important and necessary to achieve the desired outcomes from any new technology, including AI. “Any new invention will not replace the current industry right away. It is a new medium and it will also grow a new ecosystem which will impact the current industry in a way we might not have expected. But the impact will be very big.”

    David Holz, founder of Midjourney, underscored that point in an exclusive interview. “Right now, our professional users are using the platform for concepting. The hardest part of [a commercial art project] is often at the beginning, when the stakeholder doesn’t know what they want and has to see some ideas to react to. Midjourney can help people converge on the idea they want much more quickly, because iterating on those concepts is very laborious.”

    “The type of work I do, single images and illustrations, that’s already going away because of this,” said Robinson. “Right now, the AI has a little trouble keeping images consistent, so sequential storytelling like comics still needs a lot of human intervention, but that’s likely to change.”

    Grubaugh sees entire swaths of the creative workforce evaporating. “Concept artists, character designers, backgrounds, all that stuff is gone. As soon as the creative director realizes they don’t need to pay people to produce that kind of work, it will be like what happened to darkroom techs when Photoshop landed, but on a much larger scale.”

    “Why would anyone pay to have an artist design a book cover or album jacket when you can just type in a few words and get what you want?”

    Despite the potential for disruption, even people in the industry who stand to benefit from automating creative work say the issues require legal clarification. “On the business side, we need some clarity around copyright before using AI-generated work instead of work by a human artist,” said Juan. “The problem is, the current copyright law is outdated and is not keeping up with the technology.”

    Holz agrees this is a gray area, especially because the data sets used to train Midjourney and other image models deliberately anonymize the sources of the work, and the process for authenticating images and artists is complex and cumbersome. “It would be cool if the images had metadata embedded in them about the copyright holder, but that’s not a thing,” he said.

    “They’re training the AI on his work without his consent? I need to bring that up to the White House office,” she said. “If these models have been trained on the styles of living artists without licensing that work, there are copyright implications. There are rules for that. This requires a legislative solution.”

    Braga said that regulation may be the only answer, because it is not technically possible to “untrain” AI systems or create a program where artists can opt-out if their work is already part of the data set.

    “The only way to do it is to eradicate the whole model that was built around nonconsensual data usage,” she explained.

    The problem is, the source code to at least one of the platforms is already out in the wild and it will be very difficult to put the toothpaste back in the tube. And even if the narrow issue of compensating living artists is addressed, it won’t solve the larger threat of a simple tool deskilling and demonetizing the entire profession of commercial art and illustration.

    Holz doesn’t see it that way. His mission with Midjourney, he says, is to “try to expand the imaginative powers of the human species” and make it possible for more people to visualize ideas from their imagination through art. He also emphasized that he sees Midjourney as primarily a consumer platform.

    OpenAI, the company behind the Dall-E product, which declined to be interviewed for this story, similarly positions itself as working “to ensure that artificial general intelligence benefits all of humanity.”

    Stability.ai, the company developing Stable Diffusion, articulates its mission as “to make state of the art machine learning accessible for people from all over the world.”

    The usual arguments in favor of AI are that the systems automate repetitive tasks that humans dislike anyway, like answering the same customer questions over and over again, or checking millions of bags at security checkpoints. In this case, said Robinson, “AI is coming for the fun jobs” – the creatively-rewarding jobs people work and study their whole lives to obtain, and potentially incur six figures worth of student debt to qualify for. And it’s doing it before anyone has a chance to pay attention.

    “I see an opportunity to monetize for the creators, through licensing,” said Braga. “But there needs to be political support.

    “There’s no doubt that AI will have a great positive impact in the number crunching areas of our lives,” said McKean, “but the more it takes over from the jobs that we do and find meaning in… I think we should not give up that meaning lightly. There needs to be some fight-back.”

  11. Tomi Engdahl says:

    Breakthrough Machine Learning AI Runs Nuclear Fusion Reactor | New AI Supercomputer With 13.5+ Million Processor Cores | New Brain Model For Conscious AI

    Why This Breakthrough AI Now Runs A Nuclear Fusion Reactor | New AI Supercomputer

  12. Tomi Engdahl says:

    Humans and neural networks are working together to optimize one of the most fundamental and ubiquitous operations in all of mathematics. https://www.quantamagazine.org/ai-reveals-new-possibilities-in-matrix-multiplication-20221123/
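    The work referenced here (DeepMind’s AlphaTensor) searches for multiplication schemes that need fewer scalar multiplications than the schoolbook method. The classic hand-found example of such a scheme is Strassen’s algorithm, which multiplies 2×2 matrices with 7 scalar multiplications instead of 8; a minimal sketch:

```python
import numpy as np

def strassen_2x2(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Multiply two 2x2 matrices using 7 scalar multiplications (Strassen, 1969)."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of A @ B.
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])
```

    Applied recursively to matrix blocks, schemes like this cut the asymptotic cost below O(n³); AlphaTensor used reinforcement learning to discover new such schemes for larger block sizes.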

  13. Tomi Engdahl says:

    3D for everyone? Nvidia’s Magic3D can generate 3D models from text
    New AI aims to democratize 3D content creation, no modeling skills required.

  14. Tomi Engdahl says:

    Here’s A Plain C/C++ Implementation Of AI Speech Recognition, So Get Hackin’

    [Georgi Gerganov] recently shared a great resource for running high-quality AI-driven speech recognition in a plain C/C++ implementation on a variety of platforms. The automatic speech recognition (ASR) model is fully implemented using only two source files and requires no dependencies. As a result, the high-quality speech recognition doesn’t involve calling remote APIs, and can run locally on different devices in a fairly straightforward manner. A demo shows it running locally on an iPhone 13, but it can do more than that.

    [Georgi]’s work is a port of OpenAI’s Whisper model, a remarkably-robust piece of software that does a truly impressive job of turning human speech into text. Whisper is easy to set up and play with, but this port makes it easier to get the system working in other ways.


  15. Tomi Engdahl says:

    Will Shanklin / Engadget:
    Amazon announces Create with Alexa, a generative AI that lets children create animated stories via voice prompts on three topics, available on Echo Show devices

    Amazon’s Create With Alexa generates unique animated children’s stories on Echo Show
    The AI generation feature is exclusive to Echo Show smart displays.

    Tools like DALL-E 2, Stable Diffusion and Midjourney, which generate images based on a few lines of text, briefly set social media ablaze this year. But Amazon’s entry into the AI art world is a bit different. Create with Alexa lets children guide the creation of animated stories using a few kid-friendly prompts.

    Since Create with Alexa is visual storytelling, it’s only available on Echo Show devices, not the company’s audio-only speakers. Amazon says it works whether the device is in Amazon Kids mode or not.

    To create a new story, your child begins by saying, “Alexa, make a story,” and then follows several prompts. The AI then generates an illustrated five-to-ten-line narrative — including animations, sound effects and music — built around their answers.

  16. Tomi Engdahl says:

    Will Douglas Heaven / MIT Technology Review:
    OpenAI releases a demo of ChatGPT, a chatbot version of GPT-3 that answers follow-up questions, admits its mistakes, challenges incorrect premises, and more

    While everyone waits for GPT-4, OpenAI is still fixing its predecessor

    A chatbot version of GPT-3 that admits its mistakes is more transparent than the original. But it’s still not perfect.

  17. Tomi Engdahl says:

    OpenAI’s GPT-4 will be as big of a leap from GPT-3 as GPT-3 was to GPT-2. Get ready. Early 2023 is going to be wild.

    #ai #artificialintelligence #experience #entertainment #technology #future

    GPT-4 Is Coming Soon. Here’s What We Know About It
    Official info, current trends, and predictions.

  18. Tomi Engdahl says:

    “There are hundreds of millions of dollars being deployed towards glorified tech demos.”


    All Flash No Pan
    Generative AI is undeniably having a moment. OpenAI’s text-to-image creator DALL-E has been dazzling the public for months, while its standout rival, a newcomer dubbed Stability AI, just raked in a cool $101 million in funding for its Stable Diffusion system. Video and music generators are popping up as well, and some experts predict that synthetic media will soon make up the vast majority of digital content.

    But according to Will Manidis, the founder and CEO of AI-driven healthcare startup ScienceIO, generative AI is all flash, no substance — and while it might be attracting VC cash now, most ventures will quickly fade into startup oblivion.

    “There are hundreds of millions of dollars being deployed towards glorified tech demos built on top of identical datasets,” the founder wrote in a Tuesday Twitter thread, referring to these generative machine systems. “Most, if not all of these, will fail.”

    “Where generative AI will change the world is in narrow, mostly boring domains,” he continued. “VCs are ignoring this.”

    Manidis’ argument, which centers on the VC-beloved text-to-image generators, rests heavily on the belief that the “creator economy” doesn’t really have much room for growth. Sure, it’s fun and sometimes useful to produce the AI-made artworks, but turning everyone into creators won’t actually generate new, major revenue streams.

    That being said, Manidis definitely believes that AI will revolutionize industry — just in less glamorous sectors. AI-generated anthropomorphic bowling balls are cool and all, but in his eyes, data entry is where AI is really about to shine.

  19. Tomi Engdahl says:



    Artificial intelligence-powered generator tools that can create brand new music tracks at the click of a button are starting to come for musicians’ livelihoods — and that has lobbyist groups deeply concerned.

    Case in point, the Recording Industry Association of America (RIAA) is worried that AI-powered music generators could threaten both the wallets and rights of human artists.

    In response to the Office of the US Trade Representative’s request for comment, the RIAA issued a statement condemning the use of AI music generators.

    Online services that use AI to “extract, or rather, copy, the vocals, instrumentals, or some portion of the instrumentals (a music stem) from a sound recording” to “generate, master or remix a recording to be very similar to or almost as good as reference tracks by select, well known sound recording artists” are infringing on its members’ “rights by making unauthorized copies of our members’ works,” the RIAA wrote in a new statement to the Office of the US Trade Representative.


  20. Tomi Engdahl says:

    Andrew Liszewski / Gizmodo:
    Disney unveils FRAN, an AI tool that helps TV or film producers make an actor look older or younger without the need for complex and expensive visual effects

    Disney Made a Movie Quality AI Tool That Automatically Makes Actors Look Younger (or Older)
    With just a few clicks, actors can look younger or older without the need for expensive visual effects.

  21. Tomi Engdahl says:

    Despite genuine concerns, generative AI seems unlikely to replace workers, instead complementing and empowering them by taking over mundane, repetitive tasks

    Generative AI: autocomplete for everything
    A joint blog post by Noah and roon on the future of work in the age of AI

  22. Tomi Engdahl says:

    New AI Tool Takes Your Photo Hundreds Of Years In The Past
    Ever wondered what you’d look like as a Pharaoh of Ancient Egypt?

  23. Tomi Engdahl says:

    Computing With Chemicals Makes Faster, Leaner AI
    Battery-inspired artificial synapses are gaining ground

  24. Tomi Engdahl says:

    Google frees nifty ML image-compression model… but it’s for JPEG-XL
    Yep. The very same JPEG-XL that’s just been axed from Chromium

