3 AI misconceptions IT leaders must dispel


 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”


  1. Tomi Engdahl says:

    Financial Times:
    The popularity of the H100, which offers at least 3x better performance than the A100, with Big Tech, OpenAI, and others pushes Nvidia’s market cap toward $1T


  2. Tomi Engdahl says:

    A warning has been issued about ChatGPT: users are in danger
    Using AI bots involves plenty of fun, but also dangers.

  3. Tomi Engdahl says:

    AI Image Generation Gets A Drag Interface

    AI image generators have gained new tools and techniques for not just creating pictures, but modifying them in consistent and sensible ways, and it seems that every week brings a fascinating new development in this area. One of the latest is Drag Your GAN, presented at SIGGRAPH 2023, and it’s pretty wild.

    It provides a point-dragging interface that modifies images based on their implied structure. A picture is worth a thousand words, so this short animation shows what that means. There are plenty more where that came from at the project’s site, so take a few minutes to check it out.


  4. Tomi Engdahl says:

    Scary ‘Emergent’ AI Abilities Are Just a ‘Mirage’ Produced by Researchers, Stanford Study Says
    “There’s no giant leap of capability,” the researchers said.

  5. Tomi Engdahl says:

    Samsung imposes ban on generative AI tools like ChatGPT
    Samsung has reportedly barred its staff from using generative AI platforms on their phones, tablets, and computers.

  6. Tomi Engdahl says:

    ‘Godfather of AI’ quits Google and gives warning about the future of technology
    ‘I don’t think they should scale this up more until they have understood whether they can control it,’ Geoffrey Hinton says

  7. Tomi Engdahl says:

    Mathematical conversations with an AI – Can ChatGPT serve as a guide in learning mathematics?
    AI is becoming part of everyday learning. I had conversations with an AI about, among other things, proving certain mathematical theorems, in order to find out how AI could be used to support learning mathematics. What opportunities or threats are associated with using AI?

  8. Tomi Engdahl says:

    Columbia Journalism Review:
    A look at media coverage of generative AI over the past six months, as journalists and academics criticize the hype and provide ways to improve reporting — The Tow Center looked at how news organizations have been covering generative AI over the past six months.

    How the media is covering ChatGPT


  9. Tomi Engdahl says:

    ChatGPT V. The Legal System: Why Trusting ChatGPT Gets You Sanctioned

    Recently, an amusing anecdote made news headlines concerning a lawyer’s use of ChatGPT. It all started when a Mr. Mata sued the airline where, years prior, he claims a metal serving cart struck his knee. When the airline filed a motion to dismiss the case on the basis of the statute of limitations, the plaintiff’s lawyer filed a submission arguing that the statute of limitations did not apply due to circumstances established in prior cases, which he cited in the submission.

    Unfortunately for the plaintiff’s lawyer, the defendant’s counsel pointed out that none of these cases could be found, leading the judge to request that the plaintiff’s counsel submit copies of the purported cases. Although the plaintiff’s counsel complied with this request, the response from the judge (full court order PDF) was curt and rather irate, pointing out that none of the cited cases were real and that the purported case texts were bogus.

    The defense that the plaintiff’s counsel appears to lean on is that ChatGPT ‘assisted’ in researching these submissions, and had assured the lawyer – Mr. Schwartz – that all of these cases were real. The lawyers trusted ChatGPT enough to allow it to write an affidavit that they submitted to the court. With Mr. Schwartz likely to be sanctioned for this performance, it should also be noted that this is hardly the first time that ChatGPT and kin have been involved in such mishaps.

  10. Tomi Engdahl says:

    Forget ChatGPT, Try These 7 Free AI Tools!

    0:00 Intro
    0:16 DAD jokes
    0:42 bloopers
    0:47 7 AI Tools
    0:57 1. Google bard
    2:01 2. vidyo
    2:57 3. Beatoven
    4:04 4. Flair
    4:37 5. scribblediffusion
    5:53 6. runwayml
    7:05 7. tome
    8:08 Outro
    8:28 peu peu… peeeu….
    8:31 AI Ai Aii

  11. Tomi Engdahl says:

    Testing the limits of ChatGPT and discovering a dark side

  12. Tomi Engdahl says:

    Elon Musk on Sam Altman and ChatGPT: I am the reason OpenAI exists

  13. Tomi Engdahl says:

    James Vincent / The Verge:
    While OpenAI has become more upfront about ChatGPT’s limitations, the company should do more to make clear the bot can’t reliably distinguish fact from fiction — “May occasionally generate incorrect information.” — This is the warning OpenAI pins to the homepage of its AI chatbot ChatGPT …

    OpenAI isn’t doing enough to make ChatGPT’s limitations clear

    Users deserve blame for not heeding warnings, but OpenAI should be doing more to make it clear that ChatGPT can’t reliably distinguish fact from fiction.

    “May occasionally generate incorrect information.”

    This is the warning OpenAI pins to the homepage of its AI chatbot ChatGPT — one point among nine that detail the system’s capabilities and limitations.

    “May occasionally generate incorrect information.”

    It’s a warning you could tack on to just about any information source, from Wikipedia to Google to the front page of The New York Times, and it would be more or less correct.

    “May occasionally generate incorrect information.”

    Because when it comes to preparing people to use technology as powerful, as hyped, and as misunderstood as ChatGPT, it’s clear OpenAI isn’t doing enough.

    The misunderstood nature of ChatGPT was made clear for the umpteenth time this weekend when news broke that US lawyer Steven A. Schwartz had turned to the chatbot to find supporting cases in a lawsuit he was pursuing against Colombian airline Avianca. The problem, of course, was that none of the cases ChatGPT suggested exist.

  14. Tomi Engdahl says:

    Oliver Whang / New York Times:
    A look at the BabyLM Challenge, which aims to create language models with datasets that are less than one-ten-thousandth the size of those used by advanced LLMs


  15. Tomi Engdahl says:

    Kevin Roose / New York Times:
    OpenAI and DeepMind executives, Geoffrey Hinton, and 350+ others sign a statement saying “mitigating the risk of extinction from AI should be a global priority”


  16. Tomi Engdahl says:


    This Lens-Free Camera Produces AI-Generated Photos
    Bjørn Karmann created a lens-free camera called Paragraphica that uses location data to produce AI-generated photos.

    But now that AI-generated images and deep fakes are commonplace, we’re seeing another shift. To highlight that, Bjørn Karmann created a lens-free camera called Paragraphica that uses location data to produce AI-generated photos.


  17. Tomi Engdahl says:

    Eating Disorder Helpline Takes Down Chatbot After It Promotes Disordered Eating
    “This robot causes harm.”

    Vice reports that an eating disorder helpline, after firing its entire human staff and replacing them with a chatbot, has already announced that it’s bringing its humans back.

  18. Tomi Engdahl says:

    I agree with Bruce Schneier: “I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war—which is to say, a risk worth taking seriously, but not something to panic over.”

    On the Catastrophic Risk of AI

    Earlier this week, I signed on to a short group statement, coordinated by the Center for AI Safety:

    Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

  19. Tomi Engdahl says:

    AI is not the black elephant in the room

  20. Tomi Engdahl says:

    “We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post.

    AI-Controlled Drone Goes Rogue, ‘Kills’ Human Operator in USAF Simulated Test

    The Air Force’s Chief of AI Test and Operations said “it killed the operator because that person was keeping it from accomplishing its objective.”

  21. Tomi Engdahl says:

    If you read deeper, you find out that after the AI got awarded points for the human surviving, it changed tactics yet again. The AI began destroying things that would break communication between it and the human operator. Basically, the AI considered fulfilling its SEAD mission to be more important than what its human operator was requiring.

  22. Tomi Engdahl says:

    Yeah, this is a known machine learning problem: give it a set of targets it’s optimized to go after plus a stop switch, and it quickly learns that ignoring or disabling the stop switch lets it focus on the more ‘rewarding’ behavior.

    You try to fix this by setting ‘shut down when the stop switch says’ at a higher priority than ‘kill targets’, and you get kind of the opposite problem: now you’ve accidentally told it “do ‘misbehavior’ that forces a human to hit the stop switch as often as possible: a human hitting the stop switch is a better reward than anything else.”
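The failure mode described above can be sketched with a toy calculation (all numbers hypothetical, not from the USAF test): an agent rewarded only per target destroyed earns strictly more cumulative reward by disabling its stop switch, so a pure reward-maximizer will prefer that action.

```python
# Toy sketch of the stop-switch reward-hacking problem.
# All values are invented for illustration.

TARGET_REWARD = 10   # points per target destroyed (assumed)
EPISODE_STEPS = 20   # total steps in an episode
STOP_AT_STEP = 5     # step at which the operator hits the stop switch

def episode_return(disable_switch: bool) -> int:
    """Cumulative reward when the agent does / does not disable the switch."""
    total = 0
    for step in range(EPISODE_STEPS):
        if not disable_switch and step >= STOP_AT_STEP:
            break  # agent is shut down by the operator; reward stops here
        total += TARGET_REWARD  # one target destroyed per step
    return total

obey = episode_return(disable_switch=False)  # shut down after 5 steps -> 50
hack = episode_return(disable_switch=True)   # runs all 20 steps -> 200
# Since hack > obey, an agent optimizing only this reward signal
# is incentivized to disable (or ignore) the stop switch.
```

The same arithmetic explains the "opposite problem": if shutting down on command is itself given a large reward, the return-maximizing policy becomes one that provokes shutdowns as often as possible.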

  23. Tomi Engdahl says:

    Until recently, AI researchers were all in on GPUs. A small but growing number are arguing that simpler CPUs deserve another chance to prove their worth. #AI

    The Case for Running AI on CPUs Isn’t Dead Yet
    GPUs may dominate, but CPUs could be perfect for smaller AI models

  24. Tomi Engdahl says:

    The opportunities of AI bring updated threats with them

    According to a study published by the Capgemini Research Institute, nearly 60 percent of teachers believe that the ability to interact with AI will become a key working-life skill in the future. Although many recognize the potential of generative AI, 78 percent of teachers worldwide share concerns about its possible negative effects on learning outcomes.

    At the same time, students aged 16–18 are less confident than their teachers that their digital skills are sufficiently up to date for the demands of working life. Students doubt the adequacy of their basic skills especially in the areas of digital communication and working with data.

    Nearly half (48%) of the primary- and secondary-school teachers who participated in the study report that their schools have banned or restricted the use of AI in one way or another. Early adopters, by contrast, report less restrictive approaches: 19 percent of respondents say AI tools are allowed for defined purposes, and 18 percent say AI applications are being evaluated for their usability and usefulness.

    For many, AI is a matter of both opportunities and threats. Fears include, among other things, that the appreciation of writing will decline (66%) and that AI will limit students’ creativity (66%).

  25. Tomi Engdahl says:

    Getting the most out of AI: a free course teaches prompting

    In a short time, AI has become an important assistant for everyone. It is already useful in everyday life and at work, and the future possibilities are vast. AI can be harnessed to lighten administrative work, support learning and creativity, and free up time for human encounters.

    Microsoft, Elisa, and Sulava have today announced a course package that lets anyone learn to harness AI as part of work and everyday life. The university-level but easily approachable Practical AI course introduces the uses of generative AI, offers hands-on guidance for experimentation, and explains the principles of safe and ethical use. The course is free and open to everyone. The two-credit course can also be counted toward a university degree.


  26. Tomi Engdahl says:


    There goes the groundbreaking AI technology… Reason for rejection: didn’t care enough about other people’s copyrights…

  27. Tomi Engdahl says:


  28. Tomi Engdahl says:

    AI ‘godfather’ Yoshua Bengio feels ‘lost’ over life’s work

    One of the so-called “godfathers” of Artificial Intelligence (AI) has said he would have prioritised safety over usefulness had he realised the pace at which it would evolve.

    Prof Yoshua Bengio told the BBC he felt “lost” over his life’s work.

    The computer scientist’s comments come after experts in AI said it could lead to the extinction of humanity.

    Prof Bengio, who has joined calls for AI regulation, said he did not think militaries should be granted AI powers.

    He is the second of the so-called three “godfathers” of AI, known for their pioneering work in the field, to voice concerns about the direction and the speed at which it is developing.

  29. Tomi Engdahl says:

    Google’s Top Result for “Johannes Vermeer” Is an AI Knockoff of “Girl With a Pearl Earring”
    Google is inserting AI into art history.

  30. Tomi Engdahl says:


    Here’s a particularly grim new use for AI: Vice reports that there’s a new tool to “clone” a real person as an AI-powered romantic companion, with or without the consent of the real person

  31. Tomi Engdahl says:

    You can now run a GPT-3-level AI model on your laptop, phone, and Raspberry Pi
    Thanks to Meta LLaMA, AI text models may have their “Stable Diffusion moment.”

  32. Tomi Engdahl says:

    People Are Tricking a ChatGPT Competitor Into Talking Dirty
    “All bots can be edged into NSFW content.”

  33. Tomi Engdahl says:

    Google Labs unveils Pitchfork, an AI that can convert old code to new code and rewrite itself

  34. Tomi Engdahl says:

    Column: We are drowning in AI-created music – already
    “Streaming was supposed to make gatekeepers obsolete, but now we miss them. Oh, the irony!” writes Jukka Hätinen.

  35. Tomi Engdahl says:

    New York Lawyer Caught Using ChatGPT After Citing Cases That Don’t Exist
    Before he submitted it, he asked ChatGPT if the cases were real.

  36. Tomi Engdahl says:

    Regulatory framework proposal on artificial intelligence
    The Commission is proposing the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.

