3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,159 Comments

  1. Tomi Engdahl says:

    This looks bad: up to 55,000 employees will be laid off – more than 10,000 jobs could be saved, but the company is handing the work to AI
    Jori Virtanen, 29 May 2023
    In total, the company is laying off up to 40 percent of its workforce.
    https://www.tivi.fi/uutiset/nyt-nayttaa-pahalta-jopa-55000-tyontekijaa-saa-potkut-yli-10000-tyopaikkaa-voitaisiin-saastaa-mutta-yhtio-antaa-duunit-tekoalylle/ce4cafa2-8c08-4f30-b731-7ec3afd1d93d

  2. Tomi Engdahl says:

    Opinion: Tech lords threatening to pull services should stop crying wolf
    Just do it, Sam — I dare ya
    https://thenextweb.com/news/openai-ceo-sam-altman-reverse-threat-pull-services-europe-regulators-ai-act

    Sam Altman, the CEO of OpenAI, really wants AI regulation. Truly, madly, deeply, he wants it. Because of safety and stuff. Unless, of course, it’s the type of regulation that he doesn’t want. If that’s the case, he’ll threaten to withdraw his services instead.

    Altman issued the warning this week during a tour of European regulators. He said OpenAI could “cease operating” in the EU if it can’t comply with the bloc’s impending AI Act.

  3. Tomi Engdahl says:

    AI may destroy human civilization, but the means will (probably) not be “killer robots”
    3 June 2023
    AI could be the end of human civilization and of our history, fears the renowned historian and author Yuval Noah Harari. An alien intelligence has appeared among us on Earth, he writes in The Economist.
    https://www.mikrobitti.fi/uutiset/tekoaly-saattaa-tuhota-ihmissivilisaation-mutta-keinona-eivat-todennakoisesti-ole-tappajarobotit/13a44e4f-d314-482a-b4e0-58296f1450ed

  4. Tomi Engdahl says:

    Getty asks London court to stop UK sales of Stability AI system
    https://www.reuters.com/technology/getty-asks-london-court-stop-uk-sales-stability-ai-system-2023-06-01/

    LONDON, June 1 (Reuters) – Stock photo provider Getty Images has asked London’s High Court for an injunction to prevent artificial intelligence company Stability AI from selling its AI image-generation system in Britain, court filings show.

    The Seattle-based company accuses Stability AI of breaching its copyright by using its images to “train” its Stable Diffusion system, according to the filing dated May 12.

    Stability AI has yet to file a defence to Getty’s lawsuit, but filed a motion to dismiss Getty’s separate U.S. lawsuit last month. It did not immediately respond to a request for comment.

  5. Tomi Engdahl says:

    The generative AI revolution has begun—how did we get here?
    A new class of incredibly powerful AI models has made recent breakthroughs possible.
    https://arstechnica.com/gadgets/2023/01/the-generative-ai-revolution-has-begun-how-did-we-get-here/

  6. Tomi Engdahl says:

    Crisis helpline staff fired, replaced with AI
    3 June 2023
    AI replaced a handful of employees and about 200 volunteers.
    https://www.mikrobitti.fi/uutiset/kriisihenkilostolle-potkut-tilalle-tekoaly/ed854fc7-5bb5-48d6-9798-432827920fe1

    The National Eating Disorders Association, a US organization specializing in helping people with eating disorders, has decided to fire its crisis helpline staff.

  7. Tomi Engdahl says:

    Mikko Hyppönen’s opinion on AI is chilling to hear: “We have only one chance” https://www.is.fi/digitoday/art-2000009622920.html

    Around the world, people are warning about the dangers of AI. Mikko Hyppönen does not join the doomsday chorus.

    Hyppönen’s view is almost the opposite of the voices warning about the technology. He does not support a six-month pause in AI development, during which development would be halted and the time used to hammer out ethical rules.

    – That would be the same as asking to halt the development of the internet in 1993. It would only give the bad guys time to close the gap, Hyppönen says.

    – If someone has to win the race, I would prefer it to be OpenAI. We have only one chance, and we have to get it right, Hyppönen said.

    OpenAI also makes it clear that investments in it are high-risk, Hyppönen points out. An investor will probably never get their money back.

    Hyppönen says he considers OpenAI’s operating model ethically sustainable.

    A LARGE share of people have not yet grasped the potential of AI. Its capacity to learn is not bound by human limits.

    – GPT [the language model used by ChatGPT] learned all the languages, read all the books, Wikipedia, all the text on the internet, and the program code on GitHub. And its development is only in its early stages, Hyppönen says.

    Hyppönen points out a misconception many people hold: AI development will not stop at the human level. It will overtake humans, and the pace of development will only accelerate.

    – Once it learns to make a better version of itself, and the next version again improves on itself, and so on, development explodes.

    – When the planet is dead, the AI will remain, Hyppönen said.

    In Hyppönen’s view, the promise of generative AI is indisputable. Among other things, it could find a cure for cancer, solve the climate crisis, end hunger and poverty, and take humans to other planets.

    Astonishing examples of ChatGPT’s creativity have already been seen. In tests it has so far failed to solve the internet’s CAPTCHA bot-detection challenges. It nevertheless solved the problem by posing as a human and asking a person to complete the tests on its behalf, citing a supposed visual impairment.

    – Whoever holds this technology wins everything. That is why it has to come from the right party.

    HYPPÖNEN noted that our generation will be remembered as the first to take AI into use. It is a bigger change than the internet.

    It will also change our lives in ways we will and will not notice. Next year, elevator music may be composed by AI, which could conveniently sidestep copyright royalties.

    – Probably within our lifetimes we will be able to pick any movie we want on Netflix or whatever streaming service we like, and choose who stars in it. The leads in Fast & Furious 24 could be, say, Marilyn Monroe and me. AI will render this in real time, Hyppönen says.

    F-Secure built AI into its process for identifying viruses and other security threats years ago.

    – We ourselves called it laboratory automation, being Finns. Then competitors came to market saying they were doing AI, even though what they made was crap. But the market loved it, Hyppönen said in 2018.

  8. Tomi Engdahl says:

    YOLO (and then an AI kills you).

    GOOGLE STAFF WARNED ITS AI WAS A “PATHOLOGICAL LIAR” BEFORE THEY RELEASED IT ANYWAY
    https://futurism.com/the-byte/google-staff-warned-ai-pathological-liar

    “AI ETHICS HAS TAKEN A BACK SEAT.”

  9. Tomi Engdahl says:

    OpenAI Unveils Million-Dollar Cybersecurity Grant Program
    https://www.securityweek.com/openai-unveils-million-dollar-cybersecurity-grant-program/

    OpenAI plans to shell out $1 million in grants for projects that empower defensive use-cases for generative AI technology.

  10. Tomi Engdahl says:

    The AI Founder Taking Credit For Stable Diffusion’s Success Has A History Of Exaggeration
    https://www.forbes.com/sites/kenrickcai/2023/06/04/stable-diffusion-emad-mostaque-stability-ai-exaggeration/

    Stability AI became a $1 billion company with the help of a viral AI text-to-image generator and — per interviews with more than 30 people — some misleading claims from founder Emad Mostaque.

  11. Tomi Engdahl says:

    ChatGPT took their jobs. Now they walk dogs and fix air conditioners.
    https://www.washingtonpost.com/technology/2023/06/02/ai-taking-jobs/

    Technology used to automate dirty and repetitive jobs. Now, artificial intelligence chatbots are coming after high-paid ones.

    Over the next few months, Lipkin’s assignments dwindled. Managers began referring to her as “Olivia/ChatGPT” on Slack. In April, she was let go without explanation, but when she found managers writing about how using ChatGPT was cheaper than paying a writer, the reason for her layoff seemed clear.

    “Whenever people brought up ChatGPT, I felt insecure and anxious that it would replace me,” she said. “Now I actually had proof that it was true, that those anxieties were warranted and now I was actually out of a job because of AI.”

    Some economists predict artificial intelligence technology like ChatGPT could replace hundreds of millions of jobs, in a cataclysmic reorganization of the workforce mirroring the industrial revolution.

    For some workers, this impact is already here. Those who write marketing and social media content are in the first wave of people being replaced with tools such as chatbots, which are seemingly able to produce plausible alternatives to their work.

    Experts say that even advanced AI doesn’t match the writing skills of a human: It lacks personal voice and style, and it often churns out wrong, nonsensical or biased answers. But for many companies, the cost-cutting is worth a drop in quality.

    “We’re really in a crisis point,” said Sarah T. Roberts, an associate professor at the University of California, Los Angeles, specializing in digital labor. “[AI] is coming for the jobs that were supposed to be automation-proof.”

    But the recent wave of generative artificial intelligence — which uses complex algorithms trained on billions of words and images from the open internet to produce text, images and audio — has the potential for a new stage of disruption. The technology’s ability to churn out human-sounding prose puts highly paid knowledge workers in the crosshairs for replacement, experts said.

    “In every previous automation threat, the automation was about automating the hard, dirty, repetitive jobs,” said Ethan Mollick, an associate professor at the University of Pennsylvania’s Wharton School of Business. “This time, the automation threat is aimed squarely at the highest-earning, most creative jobs that … require the most educational background.”

    In March, Goldman Sachs predicted that 18 percent of work worldwide could be automated by AI, with white-collar workers such as lawyers at more risk than those in trades such as construction or maintenance. “Occupations for which a significant share of workers’ time is spent outdoors or performing physical labor cannot be automated by AI,” the report said.

    The White House also sounded the alarm, saying in a December report that “AI has the potential to automate ‘nonroutine’ tasks, exposing large new swaths of the workforce to potential disruption.”

    But Mollick said it’s too early to gauge how disruptive AI will be to the workforce. He noted that jobs such as copywriting, document translation and transcription, and paralegal work are particularly at risk, because they include tasks that are easily done by chatbots. High-level legal analysis, creative writing or art may not be as easily replaceable, he said, because humans still outperform AI in those areas.

    “Think of AI as generally acting as a high-end intern,” he said. “Jobs that are mostly designed as entry-level jobs to break you into a field where you do something kind of useful, but it’s also sort of a steppingstone to the next level — those are the kinds of jobs under threat.”

    One by one, Fein’s nine other contracts were canceled for the same reason. His entire copywriting business was gone nearly overnight.

    “It wiped me out,” Fein said. He urged his clients to reconsider, warning that ChatGPT couldn’t write content with his level of creativity, technical precision and originality.

    Fein was rehired by one of his clients, who wasn’t pleased with ChatGPT’s work. But it isn’t enough to sustain him and his family.

    Now, Fein has decided to pursue a job that AI can’t do, and he has enrolled in courses to become an HVAC technician. Next year, he plans to train to become a plumber.

    “A trade is more future-proof,” he said.

  12. Tomi Engdahl says:

    The tech industry was deflating. Then came ChatGPT.
    Last year, Silicon Valley was drowning in layoffs and dour predictions. Artificial intelligence made the gloom go away.
    https://www.washingtonpost.com/technology/2023/06/04/ai-bubble-tech-industry-outlook/

  13. Tomi Engdahl says:

    Will Douglas Heaven / MIT Technology Review:
    A look at The Frost, a 12-minute movie created by Waymark using DALL-E 2, as startups like Waymark and Runway make AI tools for fast and cheap video production.

    Welcome to the new surreal. How AI-generated video is changing film.
    https://www.technologyreview.com/2023/06/01/1073858/surreal-ai-generative-video-changing-film/

    Exclusive: Watch the world premiere of the AI-generated short film The Frost.

    The Frost nails its uncanny, disconcerting vibe in its first few shots. Vast icy mountains, a makeshift camp of military-style tents, a group of people huddled around a fire, barking dogs. It’s familiar stuff, yet weird enough to plant a growing seed of dread. There’s something wrong here.

    “Pass me the tail,” someone says. Cut to a close-up of a man by the fire gnawing on a pink piece of jerky. It’s grotesque. The way his lips are moving isn’t quite right. For a beat it looks as if he’s chewing on his own frozen tongue.

    Welcome to the unsettling world of AI moviemaking. “We kind of hit a point where we just stopped fighting the desire for photographic accuracy and started leaning into the weirdness that is DALL-E,” says Stephen Parker at Waymark, the Detroit-based video creation company behind The Frost.

    The Frost is a 12-minute movie in which every shot is generated by an image-making AI. It’s one of the most impressive—and bizarre—examples yet of this strange new genre. You can watch the film below in an exclusive reveal from MIT Technology Review.

  14. Tomi Engdahl says:

    ChatGPT is going to change education, not destroy it
    https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/

    The narrative around cheating students doesn’t tell the whole story. Meet the teachers who think generative AI could actually make learning better.

  15. Tomi Engdahl says:

    Sorry, folks. Society may be too stupid to deal with AI.

    IDIOTS FOOLED BY AI-GENERATED PICS OF SATANIC MERCHANDISE AT TARGET
    https://futurism.com/the-byte/idiots-ai-generated-satanic-merchandise-target

    Far-right Facebook was sent spiraling last week over so-called images of children wearing what looked to be Target merchandise covered in Satanic imagery. These people, however, are idiots, and all of the images in question — which also included depictions of Satanic horned mannequins, because that’s also something that a major retailer would definitely put in suburban shopping centers — were AI-generated fakes.

    “Is this for real?” asked one skeptical commenter.

    “Unfortunately it is,” the page’s admin responded.

    But again, unfortunately for these good ol’ Christian Patriots, it isn’t for real. The images were AI-generated by a Facebook user named Dan Reese,

  16. Tomi Engdahl says:

    AI Text To Speech Showdown: Top 5 Voice Generation Tools In-Depth Review
    https://www.youtube.com/watch?v=N_UA79p0GEQ

    Text to Speech tools Used
    Play
    https://www.play.ht/?via=darren

    Speechify
    https://speechify.com/?source=fb-for-

    Murf
    https://murf.ai//?lmref=3sAmcA

    Eleven Labs
    http://elevenlabs.io/

    Listnr
    https://www.listnr.tech/?gr_pk=JmWQ&g

    Avatars were created in HeyGen
    https://app.heygen.com/guest?sid=rewa

    Chapters
    0:00 Intro & Summary
    1:33 Eleven Labs
    4:00 Murf
    7:00 Listnr
    8:20 Speechify
    10:17 Play
    13:32 Price
    15:23 Winner

    We provide several examples of voice character and determine which tool offers the most natural and human-like voice. Along with the voice quality, we also delve into pricing and features for each tool. Spoiler alert: one tool emerges as the clear winner. Don’t miss out on this informative and entertaining video!

  17. Tomi Engdahl says:

    Let’s build GPT: from scratch, in code, spelled out.
    https://www.youtube.com/watch?v=kCc8FmEb1nY

    We build a Generatively Pretrained Transformer (GPT), following the paper “Attention is All You Need” and OpenAI’s GPT-2 / GPT-3. We talk about connections to ChatGPT, which has taken the world by storm. We watch GitHub Copilot, itself a GPT, help us write a GPT (meta :D!). I recommend people watch the earlier makemore videos to get comfortable with the autoregressive language modeling framework and the basics of tensors and PyTorch nn, which we take for granted in this video. A condensed sketch of the core self-attention step appears after the chapter list below.

    Chapters:
    00:00:00 intro: ChatGPT, Transformers, nanoGPT, Shakespeare
    baseline language modeling, code setup
    00:07:52 reading and exploring the data
    00:09:28 tokenization, train/val split
    00:14:27 data loader: batches of chunks of data
    00:22:11 simplest baseline: bigram language model, loss, generation
    00:34:53 training the bigram model
    00:38:00 port our code to a script
    Building the “self-attention”
    00:42:13 version 1: averaging past context with for loops, the weakest form of aggregation
    00:47:11 the trick in self-attention: matrix multiply as weighted aggregation
    00:51:54 version 2: using matrix multiply
    00:54:42 version 3: adding softmax
    00:58:26 minor code cleanup
    01:00:18 positional encoding
    01:02:00 THE CRUX OF THE VIDEO: version 4: self-attention
    01:11:38 note 1: attention as communication
    01:12:46 note 2: attention has no notion of space, operates over sets
    01:13:40 note 3: there is no communication across batch dimension
    01:14:14 note 4: encoder blocks vs. decoder blocks
    01:15:39 note 5: attention vs. self-attention vs. cross-attention
    01:16:56 note 6: “scaled” self-attention. why divide by sqrt(head_size)
    Building the Transformer
    01:19:11 inserting a single self-attention block to our network
    01:21:59 multi-headed self-attention
    01:24:25 feedforward layers of transformer block
    01:26:48 residual connections
    01:32:51 layernorm (and its relationship to our previous batchnorm)
    01:37:49 scaling up the model! creating a few variables. adding dropout
    Notes on Transformer
    01:42:39 encoder vs. decoder vs. both (?) Transformers
    01:46:22 super quick walkthrough of nanoGPT, batched multi-headed self-attention
    01:48:53 back to ChatGPT, GPT-3, pretraining vs. finetuning, RLHF
    01:54:32 conclusions
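
    The crux chapters (00:47:11 through 01:16:56) say it all: one matrix multiply computes token-to-token affinities, a softmax turns them into aggregation weights, and dividing by sqrt(head_size) keeps the softmax from saturating. As a rough companion to the chapter list, here is a minimal single-head causal self-attention in plain numpy; this is an illustrative sketch, not Karpathy’s code (the video itself uses PyTorch), and every name below is invented for the example.

    import numpy as np

    def self_attention_head(x, Wq, Wk, Wv):
        # x: (T, C) token embeddings; one causal ("decoder block") attention head.
        T = x.shape[0]
        q, k, v = x @ Wq, x @ Wk, x @ Wv               # queries, keys, values
        head_size = q.shape[-1]
        wei = q @ k.T / np.sqrt(head_size)             # "scaled" affinities (01:16:56)
        wei[np.triu_indices(T, k=1)] = -np.inf         # mask the future (01:14:14)
        wei = np.exp(wei - wei.max(axis=-1, keepdims=True))
        wei /= wei.sum(axis=-1, keepdims=True)         # softmax = weighted aggregation
        return wei @ v                                 # aggregate past values by affinity

    rng = np.random.default_rng(0)
    T, C, H = 5, 16, 8                                 # tokens, embedding dim, head size
    Wq, Wk, Wv = (rng.normal(size=(C, H)) for _ in range(3))
    print(self_attention_head(rng.normal(size=(T, C)), Wq, Wk, Wv).shape)   # (5, 8)

    Stacking several such heads and adding the feedforward layers, residual connections, and layernorm from the later chapters is what turns this kernel into a full Transformer block.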

  18. Tomi Engdahl says:

    Jonathan Blow on ChatGPT Style Things at Producing Software
    https://www.youtube.com/watch?v=c1TTqHUxIF8

  19. Tomi Engdahl says:

    AI involves big ethical problems
    https://etn.fi/index.php/13-news/15045-tekoaelyyn-sisaeltyy-isoja-eettisiae-ongelmia

    Business Finland held a webinar today on making use of AI models based on large language models. Progress has been fast, but many questions, ethical ones in particular, remain unresolved.

    Rules for AI are being sought through regulation. The EU is developing its own AI legislation focused on the freedoms of private individuals, the United States its own, and the United Kingdom is drafting regulation written more from a business perspective. Enne Analytics CEO Ilkka Raiskinen also noted that many countries will not regulate AI at all.

    AI based on LLMs, large language models, actually contains very little intelligence. – The model predicts which words follow in a given context. In GPT’s case, the probability of each word is computed using 75 billion parameters.

    – The “intelligence” of AI rests on its having been trained on an enormous amount of text. The text must first be converted into numbers. Word pieces and sentences are represented as vectors that preserve their semantics, Raiskinen said, explaining the logic of a language model.
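
    The pipeline Raiskinen describes (text converted to numbers, vectors that carry meaning, a probability for every next word) can be sketched in a few lines. This is a toy illustration of the idea only, not GPT: the tiny vocabulary, the random embedding matrix, and the scoring layer are all invented for the example.

    import numpy as np

    vocab = ["the", "cat", "sat", "on", "mat"]           # toy vocabulary
    token_ids = {w: i for i, w in enumerate(vocab)}      # text -> numbers

    rng = np.random.default_rng(0)
    d_model = 8
    E = rng.normal(size=(len(vocab), d_model))           # one vector per token
    W_out = rng.normal(size=(d_model, len(vocab)))       # scores a vector against the vocabulary

    def next_token_probs(context):
        # Average the context vectors, then softmax the scores over the vocabulary.
        h = E[[token_ids[w] for w in context]].mean(axis=0)
        logits = h @ W_out
        exp = np.exp(logits - logits.max())              # numerically stable softmax
        return exp / exp.sum()

    print(dict(zip(vocab, next_token_probs(["the", "cat"]).round(3))))

    A real model computes this distribution with billions of trained parameters instead of a random matrix, but the shape of the computation, token IDs in and a probability per vocabulary word out, is exactly what the quote describes.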

  20. Tomi Engdahl says:

    What if the Current AI Hype Is a Dead End?
    https://www.securityweek.com/what-if-the-current-ai-hype-is-a-dead-end/

    If we should face a Dead-End AI future, the cybersecurity industry will continue to rely heavily on traditional approaches, especially human-driven ones. It won’t quite be business as usual though.

    AI Future #1: Dead End AI

    The hype man’s job is to get everybody out of their seats and on the dance floor to have a good time.

    Flavor Flav

    This week we posit a future we’re calling “Dead End AI”, where AI fails to live up to the hype surrounding it. We consider two possible scenarios for such a future. Both have similar near- to mid-term outcomes, so we can discuss them together.

    Scenario #1: AI ends up another hype cycle like crypto, NFTs and the Metaverse.

    Scenario #2: AI is overhyped and the resulting disappointment leads to defunding and a new AI winter.

    In a Dead-End AI future, the hype currently surrounding artificial intelligence ultimately proves to be unfounded. The excitement and investment in AI dwindle as the reality of the technology’s limitations sets in. The AI community experiences disillusionment, leading to a new AI winter where funding and research are significantly reduced.

    Economic factors

    Investors are rushing into generative AI, with early-stage startup investors putting $2.2B into it in 2022 (contrast this with $5.8B for the whole of Europe). But if AI fails to deliver the expected return on investment, it will be catastrophic for further funding of AI research and development.

    The venture capital firm Andreessen Horowitz (a16z), for example, published a report stating that a typical startup working with large language models spends 80% of its capital on compute costs. The report’s authors also state that the cost of a single GPT-3 training run ranges from $500,000 to $4.6 million, depending on hardware assumptions.

    Paradoxically, investing these moonshot amounts of money won’t necessarily guarantee economic success or viability: a leaked Google report recently argued that there is no moat against general and open-source adoption of these sorts of models. Others, like Snapchat, rushed to market prematurely with an offering, only to crash and burn.

    High development costs like that, together with the absence of profitable applications, will not make investors or shareholders happy. They also mean capital destruction on a massive scale, making only a handful of cloud and hardware providers happy.

    Limited progress in practical applications

    While we have made significant advancements in narrow AI applications, we have not seen progress towards true artificial general intelligence (AGI), despite unfounded claims that it may somehow arise emergently. Generative AI models have displayed uncanny phenomena, but they are entirely explainable, including their limitations.

    In between the flood of articles gushing about how AI is automating everything in marketing, development and design, there is also a growing trickle of evidence that the field of application for these sorts of models may be quite narrow. Automation in real-world scenarios requires a degree of accuracy and precision, for example when blocking phishing attempts, that LLMs aren’t designed for.

    Some technical experts are already voicing concern about the vast difference between what the current models actually do and how they are described and, more importantly, sold, and are sounding the alarm about a new AI winter.

    Privacy and ethical concerns

    Another set of growing signals concerns privacy, ethics, and the potential misuse of AI systems. Surprisingly many voices are arguing for stricter regulations, which could hinder AI development and adoption, resulting in a dead-end AI scenario.

    Geoffrey Hinton, one of the pioneers of artificial neural networks, recently quit his job at Google to be able to warn the world, free of conflicts of interest, about what he feels are the risks and dangers of uncontrolled AI. The White House called a meeting with executives from Google, Microsoft, OpenAI, and Anthropic to discuss the future of AI. The biggest surprise is probably a CEO asking to be regulated, something that OpenAI’s Sam Altman urged the US Congress to do. One article even goes as far as advocating that we evaluate the beliefs of the people in control of such technologies, suggesting that they may be more willing to accept existential risks.

    Environmental Impact

    The promise of AI is not just based on automation: it also has to be cheap, readily available and, increasingly, sustainable. AI may be technically feasible, but it may be uneconomic, or even bad for the environment.

    Much of the available data indicates that AI technology like LLMs has a considerable environmental impact. A recent study, “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models”, calculated that a typical conversation of 20–50 questions consumes 500 ml of water, and that up to 700,000 liters of water may have been needed just to train GPT-3.

    Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) 2023 Artificial Intelligence Index Report concluded that a single training run for GPT-3 put out the equivalent of 502 tons of CO2, with even the most energy-efficient model, BLOOM, emitting more carbon than the average American uses per year (25 tons for BLOOM, versus 18 for a human).
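
    A little arithmetic makes the quoted figures easier to compare. The back-of-the-envelope check below uses only the numbers cited above, no new data:

    questions = (20 + 50) / 2                  # midpoint of '20-50 questions'
    print(500 / questions)                     # ~14 ml of water per question
    print(700_000 / 0.5)                       # training water = ~1.4 million conversations
    print(502 / 18)                            # GPT-3 run = ~28x one American's annual CO2

    In other words, by these figures a single training run dwarfs any individual conversation: the water used to train GPT-3 equals roughly 1.4 million typical chat sessions.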

    Implications for Security Operations

    If the current wave of AI technologies is being woefully overhyped and turns out to be a dead end, the implications for security operations could be as follows:

    Traditional methods will come back into focus.

    With AI failing to deliver on its promise of intelligent automation and analytics, cybersecurity operations will continue to rely on human-driven processes and traditional security measures.

    This means that security professionals will have to keep refining existing techniques like zero-trust and cyber hygiene. They will also have to continue to create and curate an endless stream of up-to-date detections, playbooks, and threat intelligence to keep pace with the ever-evolving threat landscape.

    Automation will plateau.

    Without more intelligent machine automation, organizations will continue to struggle with talent shortages in the cybersecurity field. For analysts the manual workload will remain high. Organizations will need to find other ways to streamline operations.

    Automation approaches like SOAR will remain very manual, and still be based on static and preconfigured playbooks. No- and Low-code automation may help make automation easier and accessible, but automation will remain essentially scripted and dumb.

    However – even today’s level of LLM capability is already sufficient to automate basic log parsing, event transformation, and some classification use-cases. These sorts of capabilities will be ubiquitous by the end of 2024 in almost all security solutions.

    Threat detection and response will remain slow.

    In the absence of AI-driven solutions, threat detection and response times can improve only marginally. Reducing the window of opportunity that hackers have to exploit vulnerabilities and cause damage will mean becoming operationally more effective. Organizations will have to focus on enhancing their existing systems and processes to minimize the impact of slow detection and response times. Automation will be integrated more selectively but aggressively.

    Threat intelligence will continue to be hard to manage.

    With the absence of AI-driven analysis, it will continue to be difficult to gather and curate threat intelligence for vendors and remain challenging to use more strategically for most end users. Security teams will have to rely on manual processes to gather, analyze, and contextualize threat information, potentially leading to delays in awareness of and response to new and evolving threats. The ability to disseminate and analyze large amounts of threat intelligence will have to be enhanced using simpler means, for example with visualizations and graph analysis. Collective and collaborative intelligence sharing will also need to be revisited and revised.

    Renewed emphasis on human expertise

    If AI fails to deliver, the importance of human expertise in cybersecurity operations will become even more critical. Organizations will need to continue to prioritize hiring, training, and retaining skilled cybersecurity professionals to protect their assets and minimize risks.

  21. Tomi Engdahl says:

    Marc Andreessen / Andreessen Horowitz:
    A look at why AI will “save the world”, such as by augmenting human intelligence, a case against the moral panic about AI, and the China risk of not pursuing AI.

    Why AI Will Save the World
    https://a16z.com/2023/06/06/ai-will-save-the-world/

    The era of Artificial Intelligence is here, and boy are people freaking out.

    Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it.

    First, a short description of what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.

    A shorter description of what AI isn’t: Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies.

    An even shorter description of what AI could be: A way to make everything we care about better.

  22. Tomi Engdahl says:

    Rachel Metz / Bloomberg:
    Microsoft plans to bring OpenAI’s GPT-3 and GPT-4 to Azure Government, including a variety of US agencies; the DOD’s DTIC plans to experiment with the LLMs

    Microsoft Is Bringing OpenAI’s GPT-4 AI model to US Government Agencies
    https://www.bloomberg.com/news/articles/2023-06-07/microsoft-offers-powerful-openai-technology-to-us-government-cloud-customers

    Microsoft Corp. will make it possible for users of its Azure Government cloud computing service, which include a variety of US agencies, to access artificial intelligence models from ChatGPT creator OpenAI.

    Microsoft, which is the largest investor in OpenAI and uses its technology to power its Bing chatbot, plans to announce Wednesday that Azure Government customers can now use two of OpenAI’s large language models: The startup’s latest and most powerful model, GPT-4, and an earlier one, GPT-3, via Microsoft’s Azure OpenAI service.

  23. Tomi Engdahl says:

    Ivan Mehta / TechCrunch:
    Automattic launches Jetpack AI Assistant for WordPress and Jetpack-powered sites, letting users generate content, translate text into 12 languages, and more.

    Automattic launches an AI writing assistant for WordPress
    https://techcrunch.com/2023/06/07/automattic-launches-an-ai-writing-assistant-for-wordpress/

    Automattic, the company behind WordPress.com and the main contributor to the open-source WordPress project, launched an AI assistant for the popular content management system on Tuesday.

    The company said that the assistant integrates easily with WordPress.com and all Jetpack-powered sites. When you’re writing a post or a page, you can add an ‘AI Assistant’ block to your content. Users can then type a prompt in natural language, and the AI assistant will generate text based on it. Apart from generating content ideas, the AI assistant can create structured lists and tables within a blog post.

    Additionally, it can change the tonality of a post and make it more informal, skeptical, humorous, confident, or empathetic. The assistant can also create a summary for the post and suggest titles for it.

    Automattic said that the new AI assistant supports 12 languages including Spanish, French, Chinese, Korean, and Hindi. Writers can thus translate their content into multiple languages; they can write in their native language and later translate it to English, for instance. The assistant also offers better spelling and grammar correction than WordPress’s built-in tools.

    The Jetpack AI Assistant block lets users send 20 requests as a free trial. After that, they have to pay $10 per month to access the feature.

    In the past few months, numerous writing tools have introduced their own AI-powered features. Google and Microsoft are both integrating AI features into their professional application suites — including Microsoft Word and Google Docs. Separately, Google introduced Project Tailwind, an AI-powered note-taking experience, at Google I/O last month. Other writing solutions like Notion and Grammarly have also introduced AI-aided tools in their apps.

  24. Tomi Engdahl says:

    Cecilia Kang / New York Times:
    Sources: OpenAI CEO Sam Altman has met with over 100 members of Congress, alongside VP Harris and cabinet members, to discuss AI regulation in recent months

    https://www.nytimes.com/2023/06/07/technology/sam-altman-ai-regulations.html

  25. Tomi Engdahl says:

    To Teach Computers Math, Researchers Merge AI Approaches
    By Kevin Hartnett, February 15, 2023
    https://www.quantamagazine.org/to-teach-computers-math-researchers-merge-ai-approaches-20230215/

    Large language models still struggle with basic reasoning tasks. Two new papers that apply machine learning to math provide a blueprint for how that could change.

  26. Tomi Engdahl says:

    Asus will offer local ChatGPT-style AI servers for office use
    “AFS Appliance” will avoid the cloud and place an AI language model on premises.
    https://arstechnica.com/information-technology/2023/06/asus-plans-on-site-chatgpt-like-ai-server-rentals-for-privacy-and-data-control/

    Taiwan’s Asustek Computer (known popularly as “Asus”) plans to introduce a rental business AI server that will operate on-site to address security concerns and data control issues from cloud-based AI systems, Bloomberg reports. The service, called AFS Appliance, will feature Nvidia chips and run an AI language model called “Formosa” that Asus claims is equivalent to OpenAI’s GPT-3.5.

  27. Tomi Engdahl says:

    Sextortionists are making AI nudes from your social media images https://www.bleepingcomputer.com/news/security/sextortionists-are-making-ai-nudes-from-your-social-media-images/

    The Federal Bureau of Investigation (FBI) is warning of a rising trend of malicious actors creating deepfake content to perform sextortion attacks.

    Sextortion is a form of online blackmail where malicious actors threaten their targets with publicly leaking explicit images and videos they stole (through hacking) or acquired (through coercion), typically demanding money payments for withholding the material.

    In many cases of sextortion, compromising content is not real, with the threat actors only pretending to have access to scare victims into paying an extortion demand.

    ***

  28. Tomi Engdahl says:

    “The thing that I lose the most sleep over is that we already have done something really bad.”

    OPENAI CEO SAYS HE LOSES SLEEP OVER DECISION TO RELEASE CHATGPT
    https://futurism.com/the-byte/openai-ceo-sam-altman-loses-sleep?fbclid=IwAR1725YDjtJl1Pzh-nTdmZb6NELrLI0w2EeSe11mO29q1edQAFziwjENZ_U
    DOES ALTMAN DREAM OF ELECTRIC SHEEP?

    OpenAI CEO Sam Altman is back at it again, with yet another admission that he’s fearful of the artificial intelligence he hath wrought.

    During a live panel interview with the Times of India, Altman said that he’s been stressed out enough about the release of ChatGPT that he’s lost sleep.

    “The thing that I lose the most sleep over is that we already have done something really bad,” he told reporters. “I don’t think we have, but the hypothetical that we, by launching ChatGPT into the world, shot the industry out of a railgun and we now don’t get to have much impact anymore.”

    The CEO went on to add that he is concerned that “there’s gonna be an acceleration” in creating new AI systems that could contain complexities that he and his peers didn’t understand before launching. Us too, buddy!

    For months, the CEO has been making public statements about his fears surrounding the future of AI, from concerns about competitors that may make evil algorithms to his decision to be a signatory on an open letter warning about AI causing an “extinction” event.

    “I think it’s weird when people think it’s like a big dunk that I say, I’m a little bit afraid,” Altman told podcaster Lex Fridman earlier this year. “And I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”

    Both Sides Now
    While it is indeed bizarre that the guy making money hand over fist from AI is scared of it, it is in line with other things we know about Altman — specifically, that he’s a doomsday prepper who has bragged about having a stash of guns and gas masks in the event of an AI-driven catastrophe.

    Notably, Altman never admits in any of these statements that OpenAI maybe shouldn’t have launched ChatGPT, or that the disruptiveness it’s already brought may not be a good thing.

    Indeed, earlier in the talk, the CEO waxes prolific about the “job change” that his company’s chatbot will bring, avoiding the fact that real people are already losing their livelihoods as short-sighted CEOs choose to believe it’s better than human labor.

    As always, we have to temper our perplexity at Altman’s strange AI worries with the friendly reminder that he’s a CEO with a product to sell — and even as he admits in bits and pieces that his software is dangerous, business is still booming.

  29. Tomi Engdahl says:

    The main benefit of this system is that its training doesn’t have to involve any code examples—instead, the system generates its own code examples and then evaluates them.

    AI system devises first optimizations to sorting code in over a decade
    Writing efficient code was turned into a game, and the AI played to win.
    https://arstechnica.com/science/2023/06/googles-deepmind-develops-a-system-that-writes-efficient-algorithms/?utm_social-type=owned&utm_brand=ars&utm_source=facebook&utm_medium=social&fbclid=IwAR0xEEFLidiqACYFICoawi7FtZwhxXSqxPrGF5fCll5WQBrRc0uzBFtZagw

    Anyone who has taken a basic computer science class has undoubtedly spent time devising a sorting algorithm—code that will take an unordered list of items and put them in ascending or descending order. It’s an interesting challenge because there are so many ways of doing it and because people have spent a lot of time figuring out how to do this sorting as efficiently as possible.

    Sorting is so basic that algorithms are built into most standard libraries for programming languages. And, in the case of the C++ library used with the LLVM compiler, the code hasn’t been touched in over a decade.

    But Google’s DeepMind AI group has now developed a reinforcement learning tool that can develop extremely optimized algorithms without first being trained on human code examples. The trick was to set it up to treat programming as a game.
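
    The generate-and-evaluate loop described above can be made concrete with a toy analogue. To be clear, this is not AlphaDev, which applies deep reinforcement learning to assembly instructions; it is a blind random search over tiny compare-and-swap programs for sorting three values, scored the way the article describes: the searcher invents its own test cases, keeps only correct programs, and prefers shorter ones.

    import itertools, random

    PAIRS = [(0, 1), (0, 2), (1, 2)]                  # available compare-and-swap steps

    def run(program, values):
        v = list(values)
        for i, j in program:                          # each step: swap if out of order
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        return v

    def correct(program):
        # Self-generated test cases: every permutation of three items.
        return all(run(program, p) == sorted(p)
                   for p in itertools.permutations([1, 2, 3]))

    best, rng = None, random.Random(0)
    for _ in range(2000):                             # "play the game": propose, then score
        cand = [rng.choice(PAIRS) for _ in range(rng.randint(1, 5))]
        if correct(cand) and (best is None or len(cand) < len(best)):
            best = cand

    print(best)    # e.g. [(0, 1), (0, 2), (1, 2)], a 3-step sorting network

    AlphaDev’s search space (x86 assembly) and search method (a learned policy rather than blind sampling) are vastly larger and smarter, but the reward structure, correct first and short second, is the same idea.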

  30. Tomi Engdahl says:

    This is likely the first defamation lawsuit resulting from ChatGPT’s so-called “hallucinations,” where the chatbot completely fabricates information.

    OpenAI faces defamation suit after ChatGPT completely fabricated another lawsuit
    https://arstechnica.com/tech-policy/2023/06/openai-sued-for-defamation-after-chatgpt-fabricated-yet-another-lawsuit/?utm_source=facebook&utm_brand=ars&utm_medium=social&utm_social-type=owned&fbclid=IwAR3vUp15rLLYwalZTjsZzkLFIsO6B2A7l73xBtOEc3oT92Gs2y_jpLlW0fo

    ChatGPT continues causing trouble by making up lawsuits.

    Now, Walters is suing ChatGPT owner OpenAI in a Georgia state court for unspecified monetary damages in what’s likely the first defamation lawsuit resulting from ChatGPT’s so-called “hallucinations,” where the chatbot completely fabricates information.

    The misinformation was first uncovered by journalist Fred Riehl, who asked ChatGPT to summarize a complaint that SAF filed in federal court.

    That SAF complaint actually accused Washington attorney general Robert Ferguson of “misuse of legal process to pursue private vendettas and stamp out dissent.” Walters was never a party in that case or even mentioned in the suit, but ChatGPT disregarded that and all the actual facts of the case when prompted to summarize it, Walters’ complaint said. Instead, it generated a wholly inaccurate response to Riehl’s prompt, falsely claiming that the case was filed against Walters for embezzlement that never happened while serving at an SAF post that he never held.

    “Every statement of fact” in ChatGPT’s SAF case summary “pertaining to Walters is false,” Walters’ complaint said.

    Is OpenAI responsible when ChatGPT lies?
    It’s not the first time that ChatGPT has completely fabricated a lawsuit. A lawyer is currently facing harsh consequences in court after ChatGPT made up six cases that the lawyer cited without first verifying case details that a judge called obvious “legal gibberish,” Fortune reported.

    Although the sophisticated chatbot is used by many people—from students researching essays to lawyers researching case law—to search for accurate information, ChatGPT’s terms of use make it clear that ChatGPT cannot be trusted to generate accurate information. It says:

    Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe and beneficial. Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.

    Walters’ lawyer John Monroe told Ars that “while research and development in AI are worthwhile endeavors, it is irresponsible to unleash a platform on the public that knowingly makes false statements about people.”

    OpenAI was previously threatened with a defamation lawsuit by an Australian mayor, Brian Hood, after ChatGPT generated false claims that Hood had been imprisoned for bribery. In that case, Hood asked OpenAI to remove the false information as a meaningful remedy; otherwise, the official could suffer reputation damage that he said could negatively impact his political career.

    A law professor familiar with the legal liability of AI systems, Eugene Volokh, told The Verge that Walters’ case could be weakened by any failure to ask OpenAI to remove false information or to prove that actual damages have already resulted from ChatGPT’s inaccurate responses.

    Monroe confirmed that Walters has not asked OpenAI to remove the false information but told Ars that he doesn’t agree with Volokh’s legal analysis.

    “I don’t know of any reasons why libel principles would not apply to companies that publish defamatory statements via AI,” Monroe told Ars.

    Volokh told Ars that Section 230 may not apply, however, because Section 230 “doesn’t immunize defendants who ‘materially contribut[e] to [the] alleged unlawfulness’ of online content.”

    “An AI company, by making and distributing an AI program that creates false and reputation-damaging accusations out of text that entirely lacks such accusations, is surely ‘materially contribut[ing] to [the] alleged unlawfulness’ of that created material,” Volokh said.

    “Ars Technica can’t immunize itself from defamation liability by merely saying, on every post, ‘this post may contain inaccurate information’—likewise with OpenAI,” Volokh told Ars.

  31. Tomi Engdahl says:

    ChatGPT creates mutating malware that evades detection by EDR
    https://www.csoonline.com/article/3698516/chatgpt-creates-mutating-malware-that-evades-detection-by-edr.html

    Mutating, or polymorphic, malware can be built using the ChatGPT API at runtime to carry out advanced attacks that evade endpoint detection and response (EDR) applications.

  32. Tomi Engdahl says:

    New tool identifies AI-written text with up to 99% accuracy
    Antti Kailio, 8 June 2023
    OpenAI’s own detection tool is bafflingly bad.
    https://www.tivi.fi/uutiset/uusi-tyokalu-tunnistaa-tekoalyn-kirjoittaman-tekstin-jopa-99-tarkkuudella/f2ab0075-66c6-4b37-9000-a2cc6e871991

    Researchers at the University of Kansas in the United States have developed software that identifies an AI-written research article with as much as 99 percent accuracy. The business magazine Forbes reported on the finding.

  33. Tomi Engdahl says:

    Whoops, Samsung workers accidentally leaked trade secrets via ChatGPT
    ChatGPT doesn’t keep secrets.
    https://mashable.com/article/samsung-chatgpt-leak-details

  34. Tomi Engdahl says:

    Samsung Bans Staff’s AI Use After Spotting ChatGPT Data Leak
    Employees accidentally leaked sensitive data via ChatGPT
    Company preparing own internal artificial intelligence tools
    https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak#xj4y7vzkg

  35. Tomi Engdahl says:

    How to use AI ChatGPT to design an electronic circuit
    Unleash Your Creative Potential with AI ChatGPT-Assisted Electronic Circuit Design
    https://www.udemy.com/course/how-to-use-ai-chatgpt-to-design-an-electronic-circuit/

  36. Tomi Engdahl says:

    Charlie Brooker used ChatGPT to write a Black Mirror episode – and says it was a disaster
    Series creator was ultimately unimpressed with the results
    https://www.independent.co.uk/arts-entertainment/tv/news/charlie-brooker-black-mirror-chatgpt-b2353039.html#Echobox=1686153252

  37. Tomi Engdahl says:

    Roope Lipasti’s column: Why do companies believe that customers like it when someone pretends to listen to them?
    https://yle.fi/a/74-20034546

    Was your trip to the store an experience? Was it cathartic to weigh a cucumber in the produce section? Customer feedback is already requested in so many places that it no longer means anything, Roope Lipasti writes.

  38. Tomi Engdahl says:

    A high school teacher checked students’ assignments and was astonished – “Alarm bells rang” https://www.is.fi/digitoday/art-2000009633742.html

    The use of AI has changed students’ approach to schoolwork, offering new opportunities for learning and information gathering. Today, students make effective use of AI in completing assignments and analyzing information.

    The opening of this article was also written by ChatGPT. Using AI in schools, however, is not as unproblematic as the AI makes it sound.

    Over the spring, the AI application ChatGPT has also found its way into students’ written work.

    HS reported last week that the amount of schoolwork produced with the help of AI has “exploded”.

  39. Tomi Engdahl says:

    AI and Cybersecurity: How Mandiant Consultants and Analysts are Leveraging AI Today https://www.mandiant.com/resources/blog/mandiant-leveraging-ai

    With the increasing focus on the potential of generative AI, many use cases are envisioned for how this technology will impact enterprises. The impact on cybersecurity—to the benefit of both defenders and adversaries—will likely reshape the landscape for organizations.
    This blog post highlights just a few recent examples of how Mandiant’s consulting and analysis teams have used Bard within their workflows.

  40. Tomi Engdahl says:

    Lawyers blame ChatGPT for tricking them into citing bogus case law https://apnews.com/article/artificial-intelligence-chatgpt-courts-e15023d7e6fdf4f099aa122437dbb59b

    Two apologetic lawyers responding to an angry judge in Manhattan federal court blamed ChatGPT Thursday for tricking them into including fictitious legal research in a court filing.

  41. Tomi Engdahl says:

    A new kind of scam is coming – a warning from the US: “We need to stay alert”
    https://www.is.fi/digitoday/art-2000009633713.html

    The chair of the US Federal Trade Commission is worried about crimes committed with the help of AI.

  42. Tomi Engdahl says:

    Google Introduces SAIF, a Framework for Secure AI Development and Use
    https://www.securityweek.com/google-introduces-saif-a-framework-for-secure-ai-development-and-use/

    The Google SAIF (Secure AI Framework) is designed to provide a security framework or ecosystem for the development, use and protection of AI systems.

    The Google SAIF (Secure AI Framework) is designed to provide a security framework or ecosystem for the development, use and protection of AI systems.

    All new technologies bring new opportunities, threats, and risks. As business concentrates on harnessing opportunities, threats and risks can be overlooked. With AI, this could be disastrous for business, business customers, and people in general. SAIF offers six core elements to ensure maximum security in AI.

    Expand strong security foundations to the AI ecosystem
    Many existing security controls can be expanded and/or focused on AI risks. A simple example is protection against injection techniques, such as SQL injection. “Organizations can adapt mitigations, such as input sanitization and limiting, to help better defend against prompt injection style attacks,” suggests SAIF.
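
    SAIF does not prescribe code, but the sanitization analogy can be made concrete. The sketch below is a deliberately naive illustration with invented names and patterns: it caps the length of attacker-controlled text, screens for a few known injection phrases, and fences untrusted content with delimiters. Pattern filters like this are easily bypassed, which is why the framework pairs such controls with detection and monitoring.

    import re

    MAX_INPUT_CHARS = 2000                            # "limiting": cap untrusted text
    SUSPICIOUS = re.compile(
        r"(ignore (all|previous|prior) instructions|system prompt|you are now)",
        re.IGNORECASE)

    def sanitize_user_input(text: str) -> str:
        # Screen and fence user text before concatenating it into an LLM prompt.
        text = text[:MAX_INPUT_CHARS]
        if SUSPICIOUS.search(text):
            raise ValueError("possible prompt-injection attempt")
        return "<untrusted>\n" + text + "\n</untrusted>"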

    Traditional security controls will often be relevant to AI defense but may need to be strengthened or expanded. Data governance and protection become critical to protecting the integrity of the learning data used by AI systems. The old concept of ‘rubbish in, rubbish out’ is magnified manyfold by AI, and becomes critical where business and personnel decisions are based on that rubbish.

    If a data pool is poisoned without knowledge of that poisoning, AI outputs will be adversely and possibly invisibly affected.

    It will be necessary to monitor AI output to detect algorithmic errors and adversarial input. “Organizations that use AI systems must have a plan for detecting and responding to security incidents and mitigate the risks of AI systems making harmful or biased decisions,” says Google.

    Automate defenses to keep pace with existing and new threats
    This is the most common advice used in the face of AI-based attacks – automate defenses with AI to counter the increasing speed and magnitude of adversarial AI-based attacks. But Google warns that humans must be kept in the loop for important decisions, such as determining what constitutes a threat and how to respond to it.

    AI-based automation goes beyond the automated detection of threats and can also be used to decrease the workload and increase the efficiency of the security team.

    Reduce overlapping frameworks for security and compliance controls to help reduce fragmentation. Fragmentation increases complexity, costs, and inefficiencies. Reducing fragmentation will, suggests Google, “provide a ‘right fit’ approach to controls to mitigate risk.”

    Adapt controls to adjust mitigations and create faster feedback loops for AI deployment

    Contextualize AI system risks in surrounding business processes
    This involves a thorough understanding of how AI will be used within business processes, and requires a complete inventory of AI models in use. Assess their risk profile based on the specific use cases, data sensitivity, and shared responsibility when leveraging third-party solutions and services.

    Google has based its SAIF framework on 10 years of experience developing and using AI in its own products. The company hopes that making its experience with AI public will lay the groundwork for secure AI – just as its BeyondCorp access model led to the zero-trust principles that are the industry standard today.

    Introducing Google’s Secure AI Framework
    https://blog.google/technology/safety-security/introducing-googles-secure-ai-framework/

