3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,214 Comments

  1. Tomi Engdahl says:

    In Other News: AI Regulation, Layoffs, US Aerospace Attacks, Post-Quantum Encryption
    https://www.securityweek.com/in-other-news-ai-regulation-layoffs-us-aerospace-attacks-post-quantum-encryption/

    Cybersecurity news that you may have missed this week: AI regulation, layoffs, US aerospace malware attacks, and post-quantum encryption.

    AI regulation still a long way off

    The EU was thought to be close to AI regulation, but progress on the AI Act has stumbled. Blame is being laid on the EPP party for apparently wishing to change the rules. The problem appears to be the detail involved in remote biometric identification. Meanwhile, in the US, MeriTalk reports that “Congress appears to be just lining up at the starting gate with its own efforts to explore possible regulation of the technology.” One obvious complication is whether GPT-speak should be protected under the First Amendment.

    https://www.securityweek.com/in-global-rush-to-regulate-ai-europe-set-to-be-trailblazer/

  2. Tomi Engdahl says:

    OWASP Top 10 for Large Language Model applications

    OWASP has published a Top 10 list of security risks associated with large language model (LLM) applications. Vulnerabilities include prompt injections, data leakage, inadequate sandboxing, and unauthorized code execution.

    https://owasp.org/www-project-top-10-for-large-language-model-applications/
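
    Prompt injection, the first risk on the OWASP list, is easy to illustrate. Below is a minimal, hypothetical sketch of a keyword-based input filter; the patterns are invented for illustration, and real mitigations (sandboxing, privilege separation, output validation) go far beyond pattern matching.

```python
import re

# Hypothetical sketch of a naive prompt-injection filter (illustrating the
# first OWASP LLM risk). The patterns below are invented examples; real
# defenses rely on sandboxing and privilege separation, not keyword lists.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous |prior )?instructions",
    r"disregard (the |your )?(system |previous )?prompt",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input that matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

    A filter like this catches only the crudest attacks, which is precisely why OWASP treats prompt injection as an architectural problem rather than an input-sanitization one.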

  3. Tomi Engdahl says:

    Screen Swap Experiment
    https://www.youtube.com/watch?v=_jtfazvwysM

    This video displays the technology used to effortlessly blend a target body into a deepfake without swapping heads or faces.

  4. Tomi Engdahl says:

    An extremely embarrassing example of the dangers of AI: the lawyer behind the blunder explains himself
    12 Jun 2023, 11:33
    https://www.mikrobitti.fi/uutiset/aarimmaisen-nolo-esimerkki-tekoalyn-vaaroista-emamunauksen-tehnyt-juristi-perustelee-tekoaan/53190932-aae0-4a53-af5c-0ec3dd004ae4

    “Hopefully this was helpful,” ChatGPT remarked after handing the lawyer a stack of fabricated court cases.

    In late May, an embarrassing case from the United States made headlines when a lawyer had used OpenAI’s generative AI, ChatGPT, as an aid in his work. The convincing-seeming AI had given the lawyer a set of precedents, which he then used in his own court case. Unfortunately, those precedents were not real.

    The matter led to a hearing for the lawyer and his colleague, which The New York Times reported on. Lawyer Steven A. Schwartz was grilled for nearly two hours; at first grimacing nervously, by the end of the session he was visibly crushed, head bowed, as the judge walked through the peculiar case, the NYT describes.

    The ChatGPT Lawyer Explains Himself
    https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html

    In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he “did not comprehend” that the chat bot could lead him astray.

  5. Tomi Engdahl says:

    People Are Pirating GPT-4 By Scraping Exposed API Keys
    Why pay for $150,000 worth of OpenAI access when you could just steal it?
    https://www.vice.com/en/article/93kkky/people-pirating-gpt4-scraping-openai-api-keys

    People on the Discord for the r/ChatGPT subreddit are advertising stolen OpenAI API tokens that have been scraped from other peoples’ code, according to chat logs, screenshots and interviews.
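
    The scraping itself is mundane: leaked keys follow a recognizable prefix, so a regex over public code is enough to harvest candidates. A minimal sketch (the exact key format is an assumption here) shows why committing keys to public repos is so dangerous; defenders can run the same scan to find and revoke their own leaks.

```python
import re

# Sketch of the scraping technique described above: scan text for strings
# shaped like OpenAI API keys. The "sk-" + 20 alphanumerics pattern is an
# assumption for illustration; the point is that leaked keys are trivially
# harvestable from public code.
OPENAI_KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")

def find_exposed_keys(source_text: str) -> list[str]:
    """Return candidate API keys found in a blob of code or config."""
    return OPENAI_KEY_PATTERN.findall(source_text)
```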

  6. Tomi Engdahl says:

    Avram Piltch / Tom’s Hardware:
    Google’s Search Generative Experience seems like an “AI plagiarism engine” that cobbles together snippets of text from a variety of sites, often word-for-word — The Search Generative Experience seems more like a text-copying experience. — Search has always been the Internet’s most important utility.

    Plagiarism Engine: Google’s Content-Swiping AI Could Break the Internet
    By Avram Piltch
    https://www.tomshardware.com/news/google-sge-break-internet

    The Search Generative Experience seems more like a text-copying experience.

    Search has always been the Internet’s most important utility. Before Google became dominant, there were many contenders for the search throne, from Altavista to Lycos, Excite, Zap, Yahoo (mainly as a directory) and even Ask Jeeves. The idea behind the World Wide Web is that there’s power in having a nearly infinite number of voices. But with millions of publications and billions of web pages, it would be impossible to find all the information you want without search.

    Google succeeded because it offered the best quality results, loaded quickly and had less cruft on the page than any of its competitors. Now, having taken over 91 percent of the search market, the company is testing a major change to its interface that replaces the chorus of Internet voices with its own robotic lounge singer. Instead of highlighting links to content from expert humans, the “Search Generative Experience” (SGE) uses an AI plagiarism engine that grabs facts and snippets of text from a variety of sites, cobbles them together (often word-for-word) and passes off the work as its creation. If Google makes SGE the default mode for search, the company will seriously damage if not destroy the open web while providing a horrible user experience.

    A couple of weeks ago, Google made SGE available to the public in a limited beta (you can sign up here). If you are in the beta program like I am, you will see what the company seems to have planned for the near future: a search results page where answers and advice from Google take up the entire first screen, and you have to scroll way below the fold to see the first organic search result.

    For example, when I searched “best bicycle,” Google’s SGE answer, combined with its shopping links and other cruft took up the first 1,360 vertical pixels of the display before I could see the first actual search result.

    For its part, Google says that it’s just “experimenting,” and may make some changes before rolling SGE out to everyone as a default experience. The company says that it wants to continue driving traffic offsite.

    “We’re putting websites front and center in SGE, designing the experience to highlight and drive attention to content from across the web,”

  7. Tomi Engdahl says:

    Annie Palmer / CNBC:
    Amazon tests generative AI product review summaries, giving an overview of what customers liked and disliked alongside an “AI-generated” disclaimer — Amazon is testing the use of artificial intelligence to generate summaries of reviews left on some products.

    Amazon is using generative A.I. to summarize product reviews
    https://www.cnbc.com/2023/06/12/amazon-is-using-generative-ai-to-summarize-product-reviews.html

  8. Tomi Engdahl says:

    Tristan Cross / The Guardian:
    A writer who learned to code after losing his job reflects on AI chatbots, which make mistakes and lack lateral thinking, and why human skills are irreplaceable

    When I lost my job, I learned to code. Now AI doom mongers are trying to scare me all over again
    https://www.theguardian.com/commentisfree/2023/jun/12/lost-job-learn-code-ai-humans-skills

  9. Tomi Engdahl says:

    Mark Savage / BBC:
    Paul McCartney says AI helped “extricate” John Lennon’s voice from an old demo to complete what he calls “the final Beatles record”, set for release in 2023

    Sir Paul McCartney says artificial intelligence has enabled a ‘final’ Beatles song
    https://www.bbc.com/news/entertainment-arts-65881813

    Sir Paul McCartney says he has employed artificial intelligence to help create what he calls “the final Beatles record”.

    He told BBC Radio 4′s Today programme the technology had been used to “extricate” John Lennon’s voice from an old demo so he could complete the song.

    “We just finished it up and it’ll be released this year,” he explained.

    Sir Paul did not name the song, but it is likely to be a 1978 Lennon composition called Now And Then.

    It had already been considered as a possible “reunion song” for the Beatles in 1995, as they were compiling their career-spanning Anthology series.

    Sir Paul had received the demo a year earlier from Lennon’s widow, Yoko Ono. It was one of several songs on a cassette labelled “For Paul” that Lennon had made shortly before his death in 1980.

    Lo-fi and embryonic, the tracks were largely recorded onto a boombox as the musician sat at a piano in his New York apartment.

  10. Tomi Engdahl says:

    Contrary View: Chatbots Don’t Help Programmers
    https://hackaday.com/2023/06/08/contrary-view-chatbots-dont-help-programmers/

    [Bertrand Meyer] is a decided contrarian in his views on AI and programming. In a recent Communications of the ACM blog post, he reveals that — unlike many others — he thinks AI in its current state isn’t very useful for practical programming. He was responding, in part, to another article from the ACM entitled “The End of Programming,” which, like many other articles, is claiming that, soon, no one will write software the way we do and have done for the last few decades. You can see [Matt Welsh] describe his thoughts on this in the video below. But [Bertrand] disagrees.

    As we have also noted, [Bertrand] says:

    “AI in its modern form, however, does not generate correct programs: it generates programs inferred from many earlier programs it has seen. These programs look correct but have no guarantee of correctness.”

    That wasn’t our favorite quote, though. His characterization of an AI programming assistant as “a cocky graduate student, smart and widely read, also quick to apologize, but thoroughly, invariably, sloppy and unreliable” resonated with us, as well.
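
    Meyer’s “no guarantee of correctness” point suggests an obvious coping strategy: treat AI-suggested code as untrusted and check it against a trusted oracle before merging. A minimal sketch, where the sort example and function names are invented for illustration:

```python
import random

# Sketch: compare model-generated code against a trusted oracle on random
# inputs. "ai_suggested_sort" is a stand-in for whatever the assistant
# produced; invented here for illustration.
def ai_suggested_sort(xs):
    return sorted(xs)  # pretend this came from the assistant

def agrees_with_oracle(candidate, oracle, trials=500):
    """Randomized check: does candidate match oracle on many inputs?"""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        if candidate(list(xs)) != oracle(list(xs)):
            return False  # counterexample found: the suggestion is wrong
    return True
```

    Random testing can only find bugs, never prove their absence — which is exactly Meyer’s point about programs that merely look correct.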

    AI Does Not Help Programmers
    https://cacm.acm.org/blogs/blog-cacm/273577-ai-does-not-help-programmers/fulltext

    Everyone is blown away by the new AI-based assistants. (Myself included: see an earlier article on this blog which, by the way, I would write differently today.) They pass bar exams and write songs. They also produce programs. Starting with Matt Welsh’s article in Communications of the ACM, many people now pronounce programming dead, most recently The New York Times.

    I have tried to understand how I could use ChatGPT for programming and, unlike Welsh, found almost nothing. If the idea is to write some sort of program from scratch, well, then yes. I am willing to believe the experiment reported on Twitter in which a beginner using Copilot beat a professional programmer hands-down in the from-scratch development of a Minimum Viable Product, from “Figma screens and a set of specs.” I have also seen people who know next to nothing about programming get a useful program prototype by just typing in a general specification. I am talking about something else, the kind of use that Welsh touts: a professional programmer using an AI assistant to do a better job. It doesn’t work.

    Precautionary observations:

    Caveat 1: We are in the early days of the technology and it is easy to mistake teething problems for fundamental limitations. (PC Magazine’s initial review of the iPhone: “it’s just a plain lousy phone, and although it makes some exciting advances in handheld Web browsing it is not the Internet in your pocket.”) Still, we have to assess what we have, not what we could get.
    Caveat 2: I am using ChatGPT (version 4). Other tools may perform better.
    Caveat 3: It has become fair game to try to trick ChatGPT, Bard, and the like into giving wrong answers. We all have great fun when they tell us that Famous Computer Scientist X has received the Turing Award and next (equally wrongly) that X is dead. Such exercises have their use, but here I am doing something different: not trying to trick an AI assistant by pushing it to the limits of its knowledge, but genuinely trying to get help from it for my key purpose, programming. I would love to get correct answers and, when I started, thought I would. What I found through honest, open-minded enquiry is at complete odds with the hype.
    Caveat 4: The title of this article is rather assertive. Take it as a proposition to be debated (“This house believes that…”). I would be interested to be proven wrong. The main immediate goal is not to edict an inflexible opinion (there is enough of that on social networks), but to spur a fruitful discussion to advance our understanding beyond the “Wow!” effect.

  11. Tomi Engdahl says:

    What Do You Want In A Programming Assistant?
    https://hackaday.com/2023/06/10/what-do-you-want-in-a-programming-assistant/

    This came to mind because a recent post from ACM has the contrary view that chatbots aren’t able to help real programmers. We’ve also seen that — maybe — it can, in limited ways. We suspect it is like getting a new larger monitor. At first, it seems huge. But in a week, it is just the normal monitor, and your old one — which had been perfectly adequate — seems tiny.

  12. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    OpenAI releases new GPT-3.5-turbo and GPT-4 versions that include a new “function calling” feature, and reduces the pricing of the original GPT-3.5-turbo by 25%

    OpenAI intros new generative text features while reducing pricing
    https://techcrunch.com/2023/06/13/openai-intros-new-generative-text-features-while-reducing-pricing/

    As the competition in the generative AI space grows fiercer, OpenAI is upgrading its text-generating models while reducing pricing.

    Today, OpenAI announced the release of new versions of GPT-3.5-turbo and GPT-4, the latter being its latest text-generating AI, with a capability called function calling. As OpenAI explains in a blog post, function calling allows developers to describe programming functions to GPT-3.5-turbo and GPT-4 and have the models create code to execute those functions.

    For example, function calling can help to create chatbots that answer questions by calling external tools, convert natural language into database queries and extract structured data from text. “These models have been fine-tuned to both detect when a function needs to be called … and to respond with JSON that adheres to the function signature,” OpenAI writes. “Function calling allows developers to more reliably get structured data back from the model.”
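
    The flow OpenAI describes can be sketched without a network call: the developer declares a function with a JSON schema, the model replies with a `function_call` object, and the application parses and dispatches it. The weather function below is a hypothetical example; the dict shapes follow the format in OpenAI’s announcement.

```python
import json

# Sketch of the function-calling flow (the API call itself is mocked out).
# The weather function is a hypothetical example; the dict shapes follow
# OpenAI's 2023 chat-completions format for "functions" / "function_call".
weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def dispatch(function_call: dict) -> str:
    """Parse the model's JSON arguments and route to a local implementation."""
    args = json.loads(function_call["arguments"])  # model returns JSON text
    if function_call["name"] == "get_current_weather":
        return f"(stub) weather lookup for {args['city']}"
    raise ValueError(f"unknown function: {function_call['name']}")

# What the model might return when it decides the function should be called:
mock_response_call = {"name": "get_current_weather",
                      "arguments": '{"city": "Helsinki"}'}
```

    The application then feeds the function’s result back to the model as a message, which is what lets chatbots answer from external tools with structured, schema-conforming data.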

  13. Tomi Engdahl says:

    Steven Levy / Wired:
    Q&A with Satya Nadella on his generative AI “eureka moment”, OpenAI, helping anyone with a phone become a developer, pausing AI work, Bing’s relevancy, and more — The CEO can’t imagine life without artificial intelligence—even if it’s the last thing invented by humankind.

    Microsoft’s Satya Nadella Is Betting Everything on AI
    https://www.wired.com/story/microsofts-satya-nadella-is-betting-everything-on-ai/

    The CEO can’t imagine life without artificial intelligence—even if it’s the last thing invented by humankind.

  14. Tomi Engdahl says:

    Mike Wheatley / SiliconANGLE:
    Meta details I-JEPA, a computer vision model that uses common sense world knowledge to create more accurate images, avoiding errors like hands with extra digits — Artificial intelligence researchers from Meta Platforms Inc. say they’re making progress on the vision of its Chief AI Scientist Yann LeCun …

    Meta AI researchers unveil I-JEPA, a computer vision model that learns more like humans do
    https://siliconangle.com/2023/06/13/meta-ai-researchers-unveil-jepa-computer-vision-model-learns-like-humans/

  15. Tomi Engdahl says:

    James Vincent / The Verge:
    Stack Overflow survey of ~90K developers in 185 countries: 70% use or plan to use AI coding tools in 2023; only 3% “highly trust” and 39% “somewhat trust” them

    Stack Overflow survey finds developers are ready to use AI tools — even if they don’t fully trust them
    https://www.theverge.com/2023/6/13/23759101/stack-overflow-developers-survey-ai-coding-tools-moderators-strike

    The coding Q&A site is embracing generative AI, but its moderators are on strike over its policies. Its annual developer survey reflects this tension over AI coding.

    A survey of developers by coding Q&A site Stack Overflow has found that AI tools are becoming commonplace in the industry even as coders remain skeptical about their accuracy. The survey comes at an interesting time for the site, which is trying to work out how to benefit from AI while dealing with a strike by moderators over AI-generated content.

    The survey found that 77 percent of respondents felt favorably about using AI in their workflow and that 70 percent are already using or plan to use AI coding tools this year.

  16. Tomi Engdahl says:

    Clothilde Goujard / Politico:
    The Irish DPC says Google must delay Bard’s launch in the EU because it has insufficient information about how it will respect the EU’s data privacy rules
    More: TechCrunch, Gizmodo, Cointelegra

    Google forced to postpone Bard chatbot’s EU launch over privacy concerns
    https://www.politico.eu/article/google-postpone-bard-chatbot-eu-launch-privacy-concern/

    The Irish privacy watchdog said the tech giant has given insufficient information about how it will respect the EU’s data privacy rules.

  17. Tomi Engdahl says:

    Tiyashi Datta / Reuters:
    Accenture plans to invest $3B over three years into its data and AI practice, aiming to have 80,000 staff working on AI, after laying off ~19,000 in March 2023

    Accenture looks to power AI efforts with $3 billion investment
    https://www.reuters.com/technology/accenture-looks-power-ai-efforts-with-3-billion-investment-2023-06-13/

  18. Tomi Engdahl says:

    Biz Carson / Bloomberg:
    In a first, Larry Ellison passes Bill Gates to become the world’s fourth-richest person, with a $129.8B net worth; ORCL is up 40%+ in 2023 on the AI boom

    https://www.bloomberg.com/news/articles/2023-06-12/larry-ellison-rides-ai-boom-to-highest-wealth-ranking-ever#xj4y7vzkg

  19. Tomi Engdahl says:

    STEVE WOZNIAK: IF YOU WANT TO LEARN ABOUT AI KILLING PEOPLE, “GET A TESLA”
    https://futurism.com/the-byte/steve-wozniak-ai-killing-people-tesla?fbclid=IwAR1OchLMNSN24_iMgYEf2_EnFZDOxsFVrwRsOHVAIidIUB9vhVifhf4NRKM

    THE WOZ IS AT IT AGAIN.
    Tesla Killer
    Apple co-founder Steve Wozniak has some harsh words for Tesla. In a new interview, Wozniak argued that the Elon Musk-led company’s self-driving efforts leave a lot to be desired — and are actively making Teslas incredibly unsafe to drive.

  20. Tomi Engdahl says:

    The potential economic boost from the shifts brought by AI could be up to $4.4 trillion, McKinsey says.

    Biggest Losers of AI Boom Are Knowledge Workers, McKinsey Says
    Could add the equivalent of $2.6 to $4.4 trillion annually
    May add 0.6% in annual labor productivity growth for 20 years
    https://www.bloomberg.com/news/articles/2023-06-14/biggest-losers-of-ai-boom-are-knowledge-workers-mckinsey-says?utm_campaign=socialflow-organic&utm_content=business&utm_medium=social&cmpid=socialflow-facebook-business&utm_source=facebook&fbclid=IwAR0j3-tFZMTFntu1F-hb-nPqH2LKu57n9KMcWx_FxJ3FPmIzJX3Frfao6to#xj4y7vzkg

  21. Tomi Engdahl says:

    Analysis: With its AI Act, the EU is trying to replicate the Brussels effect, but the rest of the world may not follow this time
    https://yle.fi/a/74-20036826

    The European Union aims to create global standards by regulating its own internal market. With the General Data Protection Regulation this worked, but with AI the situation is different, writes Yle technology reporter Teemu Hallamaa.

  22. Tomi Engdahl says:

    Accenture lays off 19,000 employees while investing $3 billion in AI
    Jori Virtanen, 14 Jun 2023, 18:35
    Big promises. Accenture is not holding back in its drive to ride the crest of the AI boom.
    https://www.tivi.fi/uutiset/accenture-antaa-19-000-tyontekijalle-potkut-panostaa-3-miljardia-tekoalyyn/ab68e36c-d5e5-4bb8-9e28-56efad138598

    In March, the IT company Accenture announced that it had lowered its forecasts for the current fiscal year. At the same time, Accenture said it would cut about 2.5 percent of its workforce, meaning roughly 19,000 layoffs.

  23. Tomi Engdahl says:

    Bloomberg:
    How Microsoft’s bet on OpenAI due to transfer learning, an approach that wasn’t yet commercialized, may help Microsoft leapfrog Google and corner the AI market — There are two ways of looking at ChatGPT, the artificial intelligence chatbot that hundreds of millions of people have tried out since its release late last year.

    Microsoft’s Sudden AI Dominance Is Scrambling Silicon Valley’s Power Structure
    https://www.bloomberg.com/news/features/2023-06-15/microsoft-prepares-to-cash-in-on-openai-partnership-with-copilot#xj4y7vzkg

    The company has quietly cornered the emerging software market, and it’s preparing to cash in.

  24. Tomi Engdahl says:

    Sylvia Varnham O’Regan / The Information:
    Sources: Meta is working on ways to make the next version of LLaMA available for commercial use; the model is currently only licensed for research use — Meta Platforms CEO Mark Zuckerberg and his deputies want other companies to freely use and profit from new artificial intelligence software Meta …

    Meta Wants Companies to Make Money Off Its Open-Source AI, in Challenge to Google
    https://www.theinformation.com/articles/meta-wants-companies-to-make-money-off-its-open-source-ai-in-challenge-to-google

    Meta Platforms CEO Mark Zuckerberg and his deputies want other companies to freely use and profit from new artificial intelligence software Meta is developing, a decision that could have big implications for other AI developers and businesses that are increasingly adopting it.

    Meta is working on ways to make the next version of its open-source large-language model—technology that can power chatbots like ChatGPT—available for commercial use, said a person with direct knowledge of the situation and a person who was briefed about it. The move could prompt a feeding frenzy among AI developers eager for alternatives to proprietary software sold by rivals Google and OpenAI. It would also indirectly benefit Meta’s own AI development.

  25. Tomi Engdahl says:

    Reuters:
    Sources: Alphabet advises employees not to enter confidential info into chatbots, including Bard, or directly use AI-generated code, following Amazon and others — Alphabet Inc (GOOGL.O) is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets …

    Google, one of AI’s biggest backers, warns own staff about chatbots
    https://www.reuters.com/technology/google-one-ais-biggest-backers-warns-own-staff-about-chatbots-2023-06-15/

    SAN FRANCISCO, June 15 (Reuters) – Alphabet Inc (GOOGL.O) is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, four people familiar with the matter told Reuters.

    The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information.

    The chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer myriad prompts. Human reviewers may read the chats, and researchers found that similar AI could reproduce the data it absorbed during training, creating a leak risk.

    Alphabet also alerted its engineers to avoid direct use of computer code that chatbots can generate, some of the people said.

    Asked for comment, the company said Bard can make undesired code suggestions, but it helps programmers nonetheless. Google also said it aimed to be transparent about the limitations of its technology.

  26. Tomi Engdahl says:

    Today, having the right data is more critical than ever for success. Now that building the actual models has become somewhat commoditized, the real value is in the data.

    AI Startups Have Tons of Cash, but Not Enough Data. That’s a Problem.
    Big companies aren’t so willing to share proprietary data with startups looking to power their large-language models
    https://www.wsj.com/articles/ai-startups-have-tons-of-cash-but-not-enough-data-thats-a-problem-d69de120?fbclid=IwAR39qBcpaHmhzvvW3xsCUBoMAjIjqwSkWV59QAH384-ZqADvy9ry-MbIRbU

    Generative-AI startups are rolling in billions of dollars of funding, but they could already be headed for failure if they can’t get the right data—and that won’t be an easy feat.

    “We’ve seen lots of pitches from companies who may well be pursuing a brilliant application of AI, but they don’t have access to data that will give them the ability to build a powerful application, let alone proprietary data that will help them have competitive moats in their business,” said Brad Svrluga, co-founder and general partner of venture-capital firm Primary Venture Partners.

  27. Tomi Engdahl says:

    May I use text or images I made with AI? A terrible mess lies ahead: here is how the experts answer https://www.is.fi/digitoday/art-2000009637849.html

    AI is coming everywhere, whether you want it or not. Soon you too will likely be creating text and images with it. But do you own them the same way you own your own creations? Experts answer the hard questions.

  28. Tomi Engdahl says:

    Stanford CRFM:
    An assessment finds it is currently feasible for major foundation model providers to comply with the draft EU AI Act and doing so would improve transparency

    Do Foundation Model Providers Comply with the EU AI Act?
    https://crfm.stanford.edu/2023/06/15/eu-ai-act.html

    Foundation models like ChatGPT are transforming society with their remarkable capabilities, serious risks, rapid deployment, unprecedented adoption, and unending controversy. Simultaneously, the European Union (EU) is finalizing its AI Act as the world’s first comprehensive regulation to govern AI, and just yesterday the European Parliament adopted a draft of the Act by a vote of 499 in favor, 28 against, and 93 abstentions. The Act includes explicit obligations for foundation model providers like OpenAI and Google.

  29. Tomi Engdahl says:

    Wall Street Journal:
    Meta scrambles to refocus its resources on usable AI products after a decade-long focus on research in its AI division disincentivized work on generative AI

    Mark Zuckerberg Was Early in AI. Now Meta Is Trying to Catch Up.
    The CEO considers artificial intelligence critical to long-term growth and is taking more control over efforts. Many Meta AI researchers have departed in the last year.
    https://www.wsj.com/articles/mark-zuckerberg-was-early-in-ai-now-meta-is-trying-to-catch-up-94a86284?mod=djemalertNEWS

  30. Tomi Engdahl says:

    AI vocals will change EVERYTHING for producers..
    https://www.youtube.com/watch?v=q2daW18umPs

  31. Tomi Engdahl says:

    AI video generator: 4 AI Text to Video Tools
    https://www.youtube.com/watch?v=CGEaM9jZQCc

  32. Tomi Engdahl says:

    https://www.securityweek.com/in-other-news-linux-kernel-exploits-update-on-bec-losses-cybersecurity-awareness-act/

    European Parliament votes in favor of AI Act

    Despite last week’s concerns over the future of the EU AI Act, the European Parliament has voted in favor — by 499 to 28, with 93 abstentions. The details still have to be agreed by the European Council (representing the national governments) and the European Commission — and there is likely to be some pushback from both; for example, in policing areas. As it stands, the law is heavily focused on people (privacy and personal rights), potentially outlawing areas such as emotion detection, and predictive policing. It also provides greater transparency over AI data content; for example, restrictions on the use of copyright material. The Act contrasts with Google’s SAIF proposals: the former concentrates on the content, while the latter concentrates on the technology.

    AI Act enters final phase of EU legislative process
    https://www.euractiv.com/section/artificial-intelligence/news/ai-act-enters-final-phase-of-eu-legislative-process/

    The European Parliament adopted its position on the AI rulebook with an overwhelming majority on Wednesday (14 June), paving the way for the interinstitutional negotiations set to finalise the world’s first comprehensive law on Artificial Intelligence.

    The AI Act is a flagship initiative to regulate this disruptive technology based on its capacity to cause harm. It follows a risk-based approach, banning AI applications that pose an unacceptable risk and imposing a strict regime for high-risk use cases.

    “Is this the right time to regulate AI? My answer is resolutely yes. It is the right time because of the profound impact that AI has,” Dragoș Tudorache, one of the European Parliament’s co-rapporteurs on the AI Act, told his peers ahead of the vote.

    Banned practices

    Where to draw a line on the types of AI applications that should be forbidden was at the centre of the last-minute attempts to change the text adopted at the parliamentary committee level.

    The main point of contention related to Remote Biometric Identification. Liberal and progressive lawmakers sought to ban the real-time use of this technology but allow it for ex-post investigations on serious crimes.

    By contrast, the centre-right European People’s Party tried to introduce derogations to the real-time ban for exceptional circumstances such as terrorist attacks or missing people. This last-minute attempt enraged the other political groups but was eventually unsuccessful.

    However, a parliamentary official told EURACTIV that the point of plenary amendments is not always to modify the text but to send a political message.

    Foundation models & generative AI

    The EU lawmakers introduced a tiered approach for AI models that do not have a specific purpose, so-called General Purpose AI, with a stricter regime for foundation models, large language models on which other AI systems can be built.

    The top layer relates to generative AI like ChatGPT, for which the European Parliament wants to introduce mandatory labelling for AI-generated content and force the disclosure of training data covered by copyright.

    With ChatGPT, generative AI caught mass attention, and the European Commission has launched outreach initiatives attempting to anticipate the AI rules and foster international alignment at the G7 level.

    On these initiatives, leading MEP Brando Benifei warned that they “could become a context where the businesses will act to influence the legislative work,” which is now the focus of the lobbying efforts to water down the regulation. “But if we cooperate correctly between the institutions, this will be prevented.”

    The list of prohibited practices was extended to subliminal techniques, biometric categorisation, predictive policing, internet-scraped facial recognition databases, and emotion recognition software in law enforcement, border management, the workplace and education.

    An extra layer was added for AI applications to fall into the high-risk category, whilst the list of high-risk areas and use cases was made more precise and extended in the law enforcement and migration control areas. Recommender systems of prominent social media platforms were added as high-risk.

    AI Act: MEPs close in on rules for general purpose AI, foundation models
    https://www.euractiv.com/section/artificial-intelligence/news/ai-act-meps-close-in-on-rules-for-general-purpose-ai-foundation-models/

    Google Introduces SAIF, a Framework for Secure AI Development and Use
    https://www.securityweek.com/google-introduces-saif-a-framework-for-secure-ai-development-and-use/

    The Google SAIF (Secure AI Framework) is designed to provide a security framework or ecosystem for the development, use and protection of AI systems.

    Reply
  33. Tomi Engdahl says:

    Washington Post:
    Hands-on with an Adobe Photoshop beta’s Generative Fill AI feature, which lets users add objects to, remove content from, and expand images using text prompts — A new ‘generative fill’ AI capability can create joyful Photoshop edits — and frightening deepfakes

    Anyone can Photoshop now, thanks to AI’s latest leap
    A new ‘generative fill’ AI capability can create joyful Photoshop edits — and frightening deepfakes
    https://www.washingtonpost.com/technology/2023/06/16/ai-photoshop-generative-fill-review/

    Reply
  34. Tomi Engdahl says:

    The Guardian:
    As part of its new policy on AI, The Guardian says it will use AI only when it contributes to original journalism and to help with corrections and suggestions

    https://www.theguardian.com/help/insideguardian/2023/jun/16/the-guardians-approach-to-generative-ai

    Reply
  35. Tomi Engdahl says:

    Financial Times:
    Sources: media companies like News Corp and The Guardian have discussed LLM copyright issues and a subscription fee with OpenAI, Google, Microsoft, or Adobe — Google and OpenAI are discussing agreements to pay publishers over using content to train generative AI models

    AI and media companies negotiate landmark deals over news content
    https://www.ft.com/content/79eb89ce-cea2-4f27-9d87-e8e312c8601d

    Reply
  36. Tomi Engdahl says:

    Lisa Bannon / Wall Street Journal:
    A look at US hospitals using sometimes flawed AI-based diagnosis tools, as some clinicians say they feel pressure from administrations to defer to the algorithm — Artificial intelligence raises difficult questions about who makes the call in a health crisis: the human or the machine?

    When AI Overrules the Nurses Caring for You
    https://www.wsj.com/articles/ai-medical-diagnosis-nurses-f881b0fe?mod=djemalertNEWS

    The alert correlates elevated white blood cell count with septic infection. It wouldn’t take into account that this particular patient had leukemia, which can cause similar blood counts. The algorithm, which was based on artificial intelligence, triggers the alert when it detects patterns that match previous patients with sepsis. The algorithm didn’t explain its decision.

    Hospital rules require nurses to follow protocols when a patient is flagged for sepsis. While Beebe can override the AI model if she gets doctor approval, she said she faces disciplinary action if she’s wrong. So she followed orders and drew blood from the patient, even though that could expose him to infection and run up his bill. “When an algorithm says, ‘Your patient looks septic,’ I can’t know why. I just have to do it,” said Beebe, who is a representative of the California Nurses Association union at the hospital.

    As she suspected, the algorithm was wrong. “I’m not demonizing technology,” she said. “But I feel moral distress when I know the right thing to do and I can’t do it.”
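    The failure mode Beebe describes, a pattern-matching rule that has no awareness of confounding conditions, can be sketched in a few lines. This is purely illustrative: the function names, the 12 k/uL threshold, and the condition check are assumptions for the sake of the example, not the hospital system's actual logic.

```python
def sepsis_alert(wbc_count_k_per_ul: float, threshold: float = 12.0) -> bool:
    """Naive rule: fire an alert whenever the white blood cell count
    exceeds a fixed threshold, regardless of patient context."""
    return wbc_count_k_per_ul > threshold


def sepsis_alert_with_context(wbc_count_k_per_ul: float,
                              known_conditions: set[str],
                              threshold: float = 12.0) -> bool:
    """Same rule, but suppress the alert when a documented condition
    (here, leukemia) already explains the elevated count."""
    if "leukemia" in known_conditions:
        return False  # elevated WBC expected; defer to clinician judgment
    return wbc_count_k_per_ul > threshold


# A leukemia patient with a WBC count of 30 k/uL trips the naive rule
# but not the context-aware one, which is exactly the gap the nurse
# in the article describes.
print(sepsis_alert(30.0))                             # True
print(sepsis_alert_with_context(30.0, {"leukemia"}))  # False
```

    The point of the sketch is not that a single extra `if` fixes the problem, but that the context a bedside nurse applies routinely is absent from a rule that only matches patterns in past sepsis cases.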

    Artificial intelligence and other high-tech tools, though nascent in most hospitals, are raising difficult questions about who makes decisions in a crisis: the human or the machine?

    The technologies, which can analyze massive amounts of data with a speed beyond human capacity, are making extraordinary advances in medicine, from improving the diagnosis of heart conditions to predicting protein structures that could speed drug discovery. When it is used alongside humans to help assess, diagnose and treat patients, AI has shown powerful results, academics and tech experts say.

    At the same time, the tools can be flawed and are sometimes implemented without adequate training or flexibility, say nurses and healthcare workers who work with them regularly, putting patient care at risk. Some clinicians say they feel pressure from hospital administration to defer to the algorithm.

    “AI should be used as clinical decision support and not to replace the expert,” said Kenrick Cato, a professor of nursing at the University of Pennsylvania and nurse scientist at the Children’s Hospital of Philadelphia. “Hospital administrators need to understand there are lots of things an algorithm can’t see in a clinical setting.”

    In a survey of 1,042 registered nurses published this month by National Nurses United, a union, 24% of respondents said they had been prompted by a clinical algorithm to make choices they believed “were not in the best interest of patients based on their clinical judgment and scope of practice” about issues such as patient care and staffing. Of those, 17% said they were permitted to override the decision, while 31% weren’t allowed and 34% said they needed a doctor’s or supervisor’s permission.


    Reply
  37. Tomi Engdahl says:

    Stanford CRFM:
    Researchers find that major foundation model providers can feasibly comply with the EU’s draft AI Act, which would improve transparency in the entire ecosystem

    Do Foundation Model Providers Comply with the EU AI Act?
    https://crfm.stanford.edu/2023/06/15/eu-ai-act.html

    Reply
  38. Tomi Engdahl says:

    Wall Street Journal:
    Meta scrambles to refocus its resources on usable AI products and features after a decade of research in its AI division disincentivized work on generative AI

    Mark Zuckerberg Was Early in AI. Now Meta Is Trying to Catch Up.
    https://www.wsj.com/articles/mark-zuckerberg-was-early-in-ai-now-meta-is-trying-to-catch-up-94a86284?mod=djemalertNEWS

    The CEO considers artificial intelligence critical to long-term growth and is taking more control over efforts. Many Meta AI researchers have departed in the last year.

    Reply
  39. Tomi Engdahl says:

    Brad Stone / Bloomberg:
    How a seminal 2017 paper by Google researchers laid the groundwork for the 2020s AI boom, causing a Silicon Valley frenzy not seen since the 1990s dot-com fever

    The AI Boom Has Silicon Valley on Another Manic Quest to Change the World
    https://www.bloomberg.com/news/features/2023-06-15/silicon-valley-hopes-ai-hype-can-lead-to-another-tech-boom?leadSource=uverify%20wall

    A guide to the new AI technologies, evangelists, skeptics and everyone else caught up in the flood of cash and enthusiasm reshaping the industry.

    Reply
  40. Tomi Engdahl says:

    AI Does Not Help Programmers
    https://cacm.acm.org/blogs/blog-cacm/273577-ai-does-not-help-programmers/fulltext

    A blog at Communications of the ACM on AI and programming.

    Reply
  41. Tomi Engdahl says:

    Sabrina Ortiz / ZDNet:
    Vimeo announces AI tools to automate parts of the video creation process, including an AI script generator, a teleprompter, and a text-based video editor

    Vimeo adds a suite of AI tools to make video creation significantly easier
    These AI tools can help you reduce how long it takes to produce and edit a video. Here’s how.
    https://www.zdnet.com/article/vimeo-adds-a-suite-of-ai-tools-to-make-video-creation-significantly-easier/

    Have you ever had a great video idea that you just never got to because of how much work was involved? Vimeo, the video-sharing and editing platform, is attempting to solve that problem with the use of AI.

    On Tuesday, Vimeo announced it is adding a suite of AI tools to its platform that will automate a lot of the video-editing process. The features include an AI script generator, a teleprompter, and even a text-based video editor.

    “AI in video opens up a new frontier of accessibility. Any individual or business now has the ability to produce engaging, professional content with no prior production experience, and within mere minutes,” said Ashraf Alkarmi, chief product officer at Vimeo.

    Reply
  42. Tomi Engdahl says:

    TikTok’s new Script Generator tool uses AI to write ad scripts for you in seconds
    Have trouble writing an ad script? Here’s how you can let TikTok write it for you.
    https://www.zdnet.com/article/tiktoks-new-script-generator-tool-uses-ai-to-write-ad-scripts-for-you-in-seconds/

    Reply
  43. Tomi Engdahl says:

    Group-IB Discovers 100K+ Compromised ChatGPT Accounts on Dark Web Marketplaces; Asia-Pacific region tops the list https://www.group-ib.com/media-center/press-releases/stealers-chatgpt-credentials/

    Group-IB has identified 101,134 stealer-infected devices with saved ChatGPT credentials. Group-IB’s experts highlight that more and more employees are taking advantage of the Chatbot to optimize their work, be it software development or business communications. By default, ChatGPT stores the history of user queries and AI responses. Consequently, unauthorized access to ChatGPT accounts may expose confidential or sensitive information, which can be exploited for targeted attacks against companies and their employees.

    https://www.bitdefender.com/blog/hotforsecurity/100-000-hacked-chatgpt-accounts-up-for-sale-on-the-dark-web/
    https://www.theregister.com/2023/06/20/stolen_chatgpt_accounts/

    Reply
  44. Tomi Engdahl says:

    Biden Discusses Risks and Promises of Artificial Intelligence With Tech Leaders in San Francisco
    https://www.securityweek.com/biden-discusses-risks-and-promises-of-artificial-intelligence-with-tech-leaders-in-san-francisco/

    The Biden administration wants to figure out how to regulate AI, looking for ways to nurture its potential for economic growth and national security and protect against its potential dangers.

    Reply
  45. Tomi Engdahl says:

    Meta is worried about the “potential risks of misuse” for its new AI tool

    Meta says its new speech-generating AI tool is too dangerous to release
    By Christian Guyton published 1 day ago
    The Facebook owner admitted its new AI could cause ‘unintended harm’
    https://www.techradar.com/news/meta-says-its-new-speech-generating-ai-tool-is-too-dangerous-to-release?utm_campaign=socialflow&utm_source=facebook.com&utm_medium=social&utm_content=techradar&fbclid=IwAR3DbER_-FvVpUg5K14DlSLPknpY1yjMeEeFen34DBa-MQJ582d_R06wJZI

    Reply
  46. Tomi Engdahl says:

    People are already using ChatGPT to create workout plans
    Fitness advice from OpenAI’s large language model is impressively presented—but don’t take it too seriously.
    https://www.technologyreview.com/2023/01/26/1067299/chatgpt-workout-plans/

    Reply
  47. Tomi Engdahl says:

    “In the future, a human musician must create at least 20 percent of a work for it to be eligible for a Grammy nomination.” Is a 20% human share the new rule of thumb for AI-assisted work more broadly?

    The Grammy Awards have updated their rules: a song produced entirely by AI can no longer win
    https://www.hs.fi/kulttuuri/art-2000009669104.html?share=8810de94051b24eca667d1728293ffe7&fbclid=IwAR0qkkrQ9cfMNYypCGgScHK3GdtBsED2bS5aGDgaI2By1tyctWC5CbdfESg

    Under the rules, artists must prove that a human participated in making the music.

    THE Grammy Awards, the music industry’s awards gala, have changed their rules on the use of AI. In the future, a human musician must create at least 20 percent of a work for it to be eligible for a Grammy nomination.

    AI may still be used. However, under the new policy of the Recording Academy, which runs the awards, artists must prove that a human took part in making the music.

    And the award goes to AI ft. humans: the Grammys outline new rules for AI use
    https://www.npr.org/2023/06/18/1183013852/grammys-ai-music-awards

    Reply
  48. Tomi Engdahl says:

    https://www.hs.fi/kulttuuri/art-2000009669104.html?share=8810de94051b24eca667d1728293ffe7&fbclid=IwAR0qkkrQ9cfMNYypCGgScHK3GdtBsED2bS5aGDgaI2By1tyctWC5CbdfESg
    AI is shaking up the entertainment industry.

    Hollywood screenwriters are currently on strike, as the Writers Guild of America (WGA) wants, among other things, clearer rules on AI so that studios cannot use AI to generate new scripts based on writers’ earlier work.

    Screenwriters went on strike: here is how it affects TV productions and films
    https://www.hs.fi/kulttuuri/art-2000009556719.html

    Screenwriters are worried about the effects of streaming and AI on their work.

    Reply
  49. Tomi Engdahl says:

    Scott Wong / NBC News:
    US Senate Majority Leader Chuck Schumer unveils an AI regulatory framework and says that “Congress must join the AI revolution” before it’s too late to regulate — The Senate majority leader released his long-awaited framework for regulating artificial intelligence …

    ‘A moment of revolution’: Schumer unveils strategy to regulate AI amid dire warnings
    The Senate majority leader released his long-awaited framework for regulating artificial intelligence and said he would launch AI forums this fall featuring a variety of experts.
    https://www.nbcnews.com/politics/congress/schumer-call-hands-deck-approach-regulating-ai-rcna90193

    Senate Majority Leader Chuck Schumer unveiled his long-awaited legislative framework for regulating artificial intelligence in a speech Wednesday, warning that “Congress must join the AI revolution” now or risk losing its only chance to regulate the rapidly moving technology.

    Schumer, D-N.Y., also said that starting in the fall he would launch a series of “AI Insight Forums” featuring top AI developers, executives, scientists, community leaders, workers, national security experts and others. The discussions, he said, will form the foundation for more detailed policy proposals for Congress.

    Jeremy Diamond / CNN:
    A look at the White House’s urgent push to regulate AI, as President Biden meets with AI experts for a non-industry perspective on the risks and opportunities

    From ChatGPT to executive orders: Inside the White House’s urgent push to regulate AI
    https://edition.cnn.com/2023/06/20/politics/joe-biden-artificial-intelligence/

    President Joe Biden huddled in the Oval Office with several of his top advisers in early April as an aide typed prompts into ChatGPT: Summarize the Supreme Court’s New Jersey v. Delaware ruling and turn it into a Bruce Springsteen song.

    Weeks earlier, Biden had joked with Springsteen at the National Medal of Arts ceremony that the case, which centered on rights to the Delaware River, also gave his home state a claim to The Boss. Now, before the president’s eyes, the AI chatbot instantaneously began composing the lyrics in Springsteen’s style.

    Like many Americans who have toyed with ChatGPT, the president was wowed.

    By the end of the meeting, which also focused on AI’s impact on cybersecurity and jobs, he reminded the aides in the room – including his chief of staff Jeff Zients, deputy chief of staff Bruce Reed and top science adviser Dr. Arati Prabhakar – of what had already been clear inside the West Wing for weeks: AI should be a top priority.

    Weeks earlier, the explosion of ChatGPT had propelled artificial intelligence into the public consciousness, triggering a flurry of hearings on Capitol Hill as AI industry leaders touted its revolutionary potential but also warned of “the risk of extinction from AI.”

    At the White House, the surge of interest in ChatGPT moved AI from the margins to a central priority.

    That urgency is being welcomed in AI policy circles. Multiple people who have advised the White House on AI policy said that while the White House laid an important foundation last year with its Blueprint for an AI Bill of Rights, they were concerned that the administration was not devoting sufficient attention to AI policy. Those same people say it’s now clear the White House has shifted into a higher gear to meet the moment.

    “If we had this conversation six months ago, my responses would be very different than today,” said a member of the National AI Advisory Committee, who pointed to a “wake-up call” inside the federal government since the explosion of ChatGPT.

    Reply
  50. Tomi Engdahl says:

    Maria Streshinsky / Wired:
    Q&A with Christopher Nolan, the director of Oppenheimer, on AI improving images in filmmaking, the parallels between nuclear weapon development and AI, and more

    How Christopher Nolan Learned to Stop Worrying and Love AI
    The Oppenheimer director says AI is not the bomb. His new movie might still scare you shitless.
    https://www.wired.com/story/christopher-nolan-oppenheimer-ai-apocalypse/

    Reply
