3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,231 Comments

  1. Tomi Engdahl says:

    Microsoft announces new Copilot Copyright Commitment for customers
    https://blogs.microsoft.com/on-the-issues/2023/09/07/copilot-copyright-commitment-ai-legal-concerns/

    01.05.2024 Update: On November 15, 2023, Microsoft announced the expansion of the Copilot Copyright Commitment, now called the Customer Copyright Commitment, to include commercial customers using the Azure OpenAI Service.

    Microsoft’s AI-powered Copilots are changing the way we work, making customers more efficient while unlocking new levels of creativity. While these transformative tools open doors to new possibilities, they are also raising new questions. Some customers are concerned about the risk of IP infringement claims if they use the output produced by generative AI. This is understandable, given recent public inquiries by authors and artists regarding how their own work is being used in conjunction with AI models and services.

    To address this customer concern, Microsoft is announcing our new Copilot Copyright Commitment. As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.

    This new commitment extends our existing intellectual property indemnity support to commercial Copilot services and builds on our previous AI Customer Commitments. Specifically, if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters we have built into our products.

    You’ll find more details below. Let me start with why we are offering this program:

    We believe in standing behind our customers when they use our products. We are charging our commercial customers for our Copilots, and if their use creates legal issues, we should make this our problem rather than our customers’ problem. This philosophy is not new: For roughly two decades we’ve defended our customers against patent claims relating to our products, and we’ve steadily expanded this coverage over time. Expanding our defense obligations to cover copyright claims directed at our Copilots is another step along these lines.
    We are sensitive to the concerns of authors, and we believe that Microsoft rather than our customers should assume the responsibility to address them. Even where existing copyright law is clear, generative AI is raising new public policy issues and shining a light on multiple public goals. We believe the world needs AI to advance the spread of knowledge and help solve major societal challenges. Yet it is critical for authors to retain control of their rights under copyright law and earn a healthy return on their creations. And we should ensure that the content needed to train and ground AI models is not locked up in the hands of one or a few companies in ways that would stifle competition and innovation. We are committed to the hard and sustained efforts that will be needed to take creative and constructive steps to advance all these goals.
    We have built important guardrails into our Copilots to help respect authors’ copyrights. We have incorporated filters and other technologies that are designed to reduce the likelihood that Copilots return infringing content. These build on and complement our work to protect digital safety, security, and privacy, based on a broad range of guardrails such as classifiers, metaprompts, content filtering, and operational monitoring and abuse detection, including that which potentially infringes third-party content. Our new Copilot Copyright Commitment requires that customers use these technologies, creating incentives for everyone to better respect copyright concerns.

    More details on our Copilot Copyright Commitment

    The Copilot Copyright Commitment extends Microsoft’s existing IP indemnification coverage to copyright claims relating to the use of our AI-powered Copilots, including the output they generate, specifically for paid versions of Microsoft commercial Copilot services and Bing Chat Enterprise. This includes Microsoft 365 Copilot that brings generative AI to Word, Excel, PowerPoint, and more – enabling a user to reason across their data or turn a document into a presentation. It also includes GitHub Copilot, which enables developers to spend less time on rote coding, and more time on creating wholly new and transformative outputs.

    Reply
  2. Tomi Engdahl says:

    GitHub Copilot copyright case narrowed but not neutered
    Microsoft and OpenAI fail to shake off AI infringement allegations
    https://www.theregister.com/2024/01/12/github_copilot_copyright_case_narrowed/

    Reply
  3. Tomi Engdahl says:

    Blog: Will you get in legal trouble for using GitHub Copilot for work?
    https://www.vincit.com/blog/will-you-get-in-legal-trouble-for-using-github-copilot-for-work

    GitHub Copilot is a tool for generating source code that has garnered a lot of interest. The tool has been trained using selected English-language source material and publicly available source code, including code in public repositories on GitHub. It uses this source data as the basis for suggestions, generating code from a textual description, a function name, or similar context in the source code, as illustrated in the sketch below. There has been a lot of discussion about whether there could be legal implications in using the tool commercially. In this blog, we will look more closely at what it means, in the legal context of the European Union, to include the snippets Copilot generates in source code produced by programmers.
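
    To make that mechanism concrete, here is a hypothetical example of the workflow described above: the programmer supplies only a comment and a function signature, and the tool proposes a body. The example is mine, written for illustration; it is not actual Copilot output.

    # Programmer's input: a descriptive comment plus a function signature.
    # Everything inside the function is the kind of suggestion a tool like
    # Copilot might synthesize from patterns in its training data.

    def count_word_frequencies(text: str) -> dict[str, int]:
        """Return a mapping from each word in `text` to how often it occurs."""
        frequencies: dict[str, int] = {}
        for word in text.lower().split():
            word = word.strip(".,!?;:")
            if word:
                frequencies[word] = frequencies.get(word, 0) + 1
        return frequencies

    print(count_word_frequencies("To be, or not to be"))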

    Intellectual property rights related to source code

    Computer software in general can be protected legally through three distinct mechanisms: copyright, patents, and trade secrets. In our case, trade secrets do not apply, as we are talking about public code. Software patents can apply if something you are doing infringes on a patent – but since software patents focus more on “solutions” than on specific source code, that risk is not directly tied to the use of Copilot, and Copilot should not add an extra dimension to watch out for. Our focus here is on copyright.

    When the intellectual property rights to the source code GitHub Copilot uses were discussed, then-CEO of GitHub Nat Friedman responded on Twitter. His position: training the model on public code is fair use, and the output belongs to the operator of the tool.

    So the argument is twofold: training of the model is fair use, and output belongs to the operator of the tool. Let’s take a look at these arguments.

    Microsoft Announces Copilot Copyright Commitment to Address IP Infringement Concerns
    https://www.infoq.com/news/2023/09/copilot-copyright-commitment/

    Microsoft recently published the Copilot Copyright Commitment to address concerns about potential IP infringement claims from content produced by generative AI. Under this commitment, which covers various products, including GitHub Copilot, Microsoft will take responsibility for potential legal risks if a customer faces copyright challenges.

    The commitment covers third-party IP claims based on copyright, patents, trademarks, and trade secrets. It covers the customer’s use and distribution of the output content generated by Microsoft Copilot services and requires the customer to use the content filters and other safety systems built into the product.

    The Copilot Copyright Commitment extends the existing Microsoft IP indemnification coverage to the use of paid versions of Bing Chat Enterprise and commercial Copilot services, including Microsoft 365 Copilot and GitHub Copilot. According to the pledge, Microsoft will pay any legal damages if a third party sues a commercial customer for infringing their copyright by using those services.

    Reply
  4. Tomi Engdahl says:

    Melissa Heikkilä / MIT Technology Review:
    A look at AI video startup Synthesia, whose avatars are more human-like and expressive than predecessors, raising concerns over the consequences of realistic AI

    An AI startup made a hyperrealistic deepfake of me that’s so good it’s scary
    https://www.technologyreview.com/2024/04/25/1091772/new-generative-ai-avatar-deepfake-synthesia/

    Synthesia’s new technology is impressive but raises big questions about a world where we increasingly can’t tell what’s real.

    Reply
  5. Tomi Engdahl says:

    Opinion piece / Software developers will not lose their jobs to AI
    AI will not replace demanding knowledge work in the IT sector, the opinion writer argues.
    https://www.talouselama.fi/uutiset/ohjelmistokehittaja-ei-meneta-tyotaan-tekoalylle/a646ca9d-9e93-4f13-a0b0-776e8e630d1a

    For decades, the IT industry has used AI techniques to transform other industries’ business. Now AI is in turn transforming the IT industry, as code comes from the machine instantly on request. Software engineering, however, is much more than just typing out code.

    Reply
  6. Tomi Engdahl says:

    xAI, Elon Musk’s 10-month-old competitor to the AI phenom OpenAI, is raising $6 billion on a pre-money valuation of $18 billion, according to one trusted source close to the deal.

    The deal – which would give investors one quarter of the company – is expected to close in the next few weeks unless the terms of the deal change.

    Read more from Connie Loizos on xAI here: https://tcrn.ch/3QjS64s

    #TechCrunch #technews #xAI #ElonMusk #OpenAI

    Reply
  7. Tomi Engdahl says:

    That’s the thing people fail to realize: LLMs don’t actually understand what they are doing, they are returning strings that were arbitrarily scored against a dataset. You could just as easily make an LLM that only gives wrong answers, or, in a funny enough case study, one that only gives horny answers. The ChatGPT developers discovered this when they accidentally flipped a sign by either dropping or adding a minus. A toy sketch of the idea follows below.
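
    A toy sketch of that idea, with made-up replies and scores: candidates are ranked by a numeric preference score, and flipping the sign of that score makes the picker return the worst-rated string instead of the best one.

    # Toy model of "returning strings that were scored": candidate replies with
    # preference scores, picked by highest score. A flipped sign in the scoring
    # step inverts the behavior, which is the kind of bug described above.

    candidates = {
        "Paris is the capital of France.": 0.92,
        "The capital of France is Berlin.": 0.11,
        "I refuse to answer that.": 0.35,
    }

    def pick_reply(scored: dict[str, float], sign: float = 1.0) -> str:
        """Return the candidate with the highest (sign-adjusted) score."""
        return max(scored, key=lambda reply: sign * scored[reply])

    print(pick_reply(candidates))              # best-scored reply
    print(pick_reply(candidates, sign=-1.0))   # flipped sign: worst-scored reply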

    Reply
  8. Tomi Engdahl says:

    How to Use ChatGPT for 3D Printing
    By Samuel L. Garbett, published Oct 11, 2023
    ChatGPT can help you to create and fix G-code and STL files for 3D printing, and even generate simple 3D models. Let’s explore what it can do.
    https://www.makeuseof.com/chatgpt-how-to-use-for-3d-printing/

    https://3dfy.ai/
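
    If you want to script this kind of help instead of pasting into the chat UI, a minimal sketch might look like the following. It assumes the official openai Python package and an OPENAI_API_KEY environment variable; the model name and the sample G-code lines are placeholders of mine, not taken from the article.

    # Minimal sketch: ask an OpenAI chat model to sanity-check a short G-code
    # fragment before sending it to a 3D printer. Requires `pip install openai`
    # and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    gcode_snippet = """\
    G28                  ; home all axes
    M104 S210            ; set hotend temperature
    G1 Z0.2 F1200        ; move to first layer height
    G1 X50 Y50 E5 F1500  ; extrude a short line
    """

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": "You review 3D printer G-code for mistakes."},
            {"role": "user", "content": "Check this G-code for obvious problems:\n" + gcode_snippet},
        ],
    )
    print(response.choices[0].message.content)

    Treat any generated or reviewed G-code as a suggestion to double-check, not something to send straight to the printer.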

    Reply
  9. Tomi Engdahl says:

    AI was put to work writing exploit code – one model swept the whole pot
    Heidi Kähkönen, 24.4.2024 09:15 | updated 24.4.2024 09:15 | AI, VULNERABILITIES, INFORMATION SECURITY
    The key ingredient was the vulnerabilities’ CVE descriptions: drawing on them, one of the language models was able to write usable exploit code for nearly all of the vulnerabilities in the study.
    https://www.tivi.fi/uutiset/tekoaly-laitettiin-kirjoittamaan-hyokkayskoodia-yksi-korjasi-koko-potin/6ee32866-b644-443b-bae8-7e15026a7cdb
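
    As described, the decisive input was the plain-text CVE description. Purely to show what that input looks like (no exploit code here), below is a small sketch that pulls a description from NIST’s public NVD JSON API; the CVE ID is an arbitrary example and the response field names reflect my reading of the NVD 2.0 API, so treat them as assumptions.

    # Fetch the English description of a CVE from the public NVD API
    # (https://services.nvd.nist.gov/rest/json/cves/2.0). Field names follow my
    # understanding of the 2.0 response format and may need adjusting.
    import json
    import urllib.request

    cve_id = "CVE-2021-44228"  # example ID (Log4Shell), not from the study above
    url = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={cve_id}"

    with urllib.request.urlopen(url) as response:
        data = json.load(response)

    for item in data.get("vulnerabilities", []):
        for description in item["cve"]["descriptions"]:
            if description["lang"] == "en":
                print(description["value"])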

    Reply
  10. Tomi Engdahl says:

    Tech CEOs Altman, Nadella, Pichai and Others Join Government AI Safety Board Led by DHS’ Mayorkas

    CEOs of major tech companies are joining a new artificial intelligence safety board to advise the federal government on how to protect the nation’s critical services from “AI-related disruptions.”

    https://www.securityweek.com/tech-ceos-altman-nadella-pichai-and-others-join-government-ai-safety-board-led-by-dhs-mayorkas/

    CISA Rolls Out New Guidelines to Mitigate AI Risks to US Critical Infrastructure

    New CISA guidelines categorize AI risks into three significant types and push a four-part mitigation strategy.

    https://www.securityweek.com/cisa-rolls-out-new-guidelines-to-mitigate-ai-risks-to-us-critical-infrastructure/

    Reply
  11. Tomi Engdahl says:

    AI is taking jobs… from ice hockey analysts
    https://etn.fi/index.php/opinion/16154-tekoaely-vie-tyoet-jaeaekiekkoanalyytikoilta

    Tampere is now enjoying a well-earned long May Day break as Tappara celebrates the Finnish championship. The title should have been no surprise, since Digia’s AI predicted it in advance. The AI had already picked the top two back in December. What do we still need ice hockey analysts for?

    Throughout the season, Digia’s AI has published its predictions, and they have mostly run counter to the views of ice hockey pundit Petteri Sihvonen. Besides Tappara’s gold, the AI also correctly predicted Pelicans’ silver and Kärpät’s bronze. Sihvonen backed Ilves for the championship for nearly the whole season; when Ilves was knocked out, he made Pelicans his favorite.

    The AI based its predictions on data from the Liiga statistics and results service.

    Reply
  12. Tomi Engdahl says:

    Deepfake of Principal’s Voice Is the Latest Case of AI Being Used for Harm
    https://www.securityweek.com/deepfake-of-principals-voice-is-the-latest-case-of-ai-being-used-for-harm/

    The most recent criminal case involving artificial intelligence emerged last week from a Maryland high school, where police say a principal was framed as racist by a fake recording of his voice.

    The case is yet another reason why everyone — not just politicians and celebrities — should be concerned about this increasingly powerful deep-fake technology, experts say.

    “Everybody is vulnerable to attack, and anyone can do the attacking,” said Hany Farid, a professor at the University of California, Berkeley, who focuses on digital forensics and misinformation.

    Reply
  13. Tomi Engdahl says:

    Why Using Microsoft Copilot Could Amplify Existing Data Quality and Privacy Issues

    Microsoft provides an easy and logical first step into GenAI for many organizations, but beware of the pitfalls.

    https://www.securityweek.com/why-using-microsoft-copilot-could-amplify-existing-data-quality-and-privacy-issues/

    Reply
  14. Tomi Engdahl says:

    Kalley Huang / The Information:
    Thanks to Meta’s open-source approach, some developers are releasing versions of Llama 3, which ships with an 8K-token context window, that support much longer context windows

    https://www.theinformation.com/articles/how-developers-gave-llama-3-more-memory

    Reply
  15. Tomi Engdahl says:

    Alex Kantrowitz / Big Technology:
    Elon Musk says he wants Grok to create news summaries by relying solely on X posts, without looking at article text, and improved story citations are coming

    Elon Musk’s Plan For AI News
    Musk emails with details on AI-powered news inside X. An AI bot will summarize news and commentary, sometimes looking through tens of thousands of posts per story.
    https://www.bigtechnology.com/p/elon-musks-plan-for-ai-news

    Reply
  16. Tomi Engdahl says:

    Reuters:
    Sources: Fei-Fei Li raised a seed round for a “spatial intelligence” startup using human-like visual data processing to create AI capable of advanced reasoning

    https://www.reuters.com/technology/stanford-ai-leader-fei-fei-li-building-spatial-intelligence-startup-2024-05-03/

    Reply
  17. Tomi Engdahl says:

    Pricey AI “device” turns out to just be an Android app with extra steps
    https://futurism.com/the-byte/pricey-ai-device-android-app-extra-steps

    “It looks like this AI gadget could have just been an app after all.”

    Secretive wearables startup Humane disappointed with its AI Pin, which quickly became one of the worst-reviewed tech products of all time.

    Competitor Rabbit’s R1, a similar — albeit cheaper — device that promises to be an AI chatbot-powered friend that can answer pretty much any question you can come up with, didn’t fare much better, with TechRadar calling it a “beautiful mess” that “nobody needs.”

    “I can’t believe this bunny took my money,” Mashable’s Kimberly Gedeon wrote in her review today. Famed YouTuber Marques “MKBHD” Brownlee slammed it as being “barely reviewable.”

    Reply
  18. Tomi Engdahl says:

    CEO requires employees to use ChatGPT at least 20 times a day
    Joona Komonen, 29.4.2024 16:42 | AI, CORONAVIRUS
    Moderna’s CEO has been a ChatGPT fan since late 2022.
    https://www.tivi.fi/uutiset/toimitusjohtaja-vaatii-tyontekijoita-kayttamaan-chatgptta-vahintaan-20-kertaa-paivassa/7399bb3c-91a5-40df-8de5-cd7ea137c7b1

    Reply
  19. Tomi Engdahl says:

    Claude 3 Opus has stunned AI researchers with its intellect and ‘self-awareness’ — does this mean it can think for itself?
    By Roland Moore-Coyler, published April 24, 2024
    Anthropic’s AI tool has beaten GPT-4 in key metrics and has a few surprises up its sleeve — including pontificating about its existence and realizing when it was being tested.
    https://www.livescience.com/technology/artificial-intelligence/anthropic-claude-3-opus-stunned-ai-researchers-self-awareness-does-this-mean-it-can-think-for-itself

    Reply
  20. Tomi Engdahl says:

    Sam Altman says helpful agents are poised to become AI’s killer function
    OpenAI’s CEO says we won’t need new hardware or lots more training data to get there.
    https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function/

    A number of moments from my brief sit-down with Sam Altman brought the OpenAI CEO’s worldview into clearer focus. The first was when he pointed to my iPhone SE (the one with the home button that’s mostly hated) and said, “That’s the best iPhone.” More revealing, though, was the vision he sketched for how AI tools will become even more enmeshed in our daily lives than the smartphone.

    “What you really want,” he told MIT Technology Review, “is just this thing that is off helping you.” Altman, who was visiting Cambridge for a series of events hosted by Harvard and the venture capital firm Xfund, described the killer app for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to.

    Reply
  21. Tomi Engdahl says:

    AI-as-a-Service for Signal Processing
    March 20, 2024 by Renesas Electronics
    The Reality AI software suite provides AI tools optimized for solving problems related to sensors and signals, enabling notifications to applications and devices so they can take action. This article covers technical aspects of the approach to machine learning and architecture of the solution.
    https://www.allaboutcircuits.com/partner-content-hub/renesas-electronics/ai-as-a-service-for-signal-processing/
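
    The article covers Renesas’ own architecture; as a generic, hypothetical illustration of the problem class (machine learning on raw sensor signals), the sketch below turns a short vibration capture into frequency-band energy features and labels it with a trivial nearest-centroid classifier. It is not based on the Reality AI implementation.

    # Generic sensor-signal classification sketch (not the Reality AI approach):
    # turn a 1-second vibration capture into FFT band energies, then classify by
    # nearest centroid against previously collected reference features.
    import numpy as np

    SAMPLE_RATE = 1000  # Hz, assumed sensor sampling rate

    def band_energies(signal: np.ndarray, n_bands: int = 8) -> np.ndarray:
        """Split the magnitude spectrum into n_bands and return the energy per band."""
        spectrum = np.abs(np.fft.rfft(signal))
        bands = np.array_split(spectrum, n_bands)
        return np.array([np.sum(band ** 2) for band in bands])

    # Reference feature vectors, e.g. averaged from labeled recordings.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    centroids = {
        "normal": band_energies(np.sin(2 * np.pi * 50 * t)),
        "bearing_fault": band_energies(np.sin(2 * np.pi * 300 * t)),
    }

    def classify(signal: np.ndarray) -> str:
        """Label a new capture by the closest reference centroid."""
        features = band_energies(signal)
        return min(centroids, key=lambda label: np.linalg.norm(features - centroids[label]))

    test_signal = np.sin(2 * np.pi * 300 * t) + 0.1 * np.random.randn(SAMPLE_RATE)
    print(classify(test_signal))  # expected: "bearing_fault"

    Real products replace the toy centroid step with trained models and add feature selection, but the signal-to-features-to-decision pipeline is the common shape of these problems.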

    Reply
  22. Tomi Engdahl says:

    Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes Everything?
    Philosopher Nick Bostrom popularized the idea superintelligent AI could erase humanity. His new book imagines a world in which algorithms have solved every problem.
    https://www.wired.com/story/nick-bostrom-fear-ai-fix-everything/

    Reply
