3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”


  1. Tomi Engdahl says:

    News
    Hello, ChatGPT—Please Explain Yourself!
    https://spectrum.ieee.org/chatbot-chatgpt-interview

    An interview with the celebrated but controversial AI language model

  2. Tomi Engdahl says:

    A VM In An AI
    https://hackaday.com/2022/12/10/a-vm-in-an-ai/

    AI knoweth everything, and as each new model breaks upon the world, it attracts a new crowd of experimenters. The new hotness is ChatGPT, and [Jonas Degrave] has turned his attention to it. By asking it to act as a Linux terminal, he discovered that he could gain access to a complete Linux virtual machine within the model’s synthetic imagination.

    Building A Virtual Machine inside ChatGPT
    https://www.engraved.blog/building-a-virtual-machine-inside/

    Unless you have been living under a rock, you have heard of this new ChatGPT assistant made by OpenAI. You might be aware of its capabilities for solving IQ tests, tackling leetcode problems, or helping people write LaTeX. It is an amazing resource for people to retrieve all kinds of information and solve tedious tasks, like copywriting!

    Today, Frederic Besse told me that he managed to do something different. Did you know, that you can run a whole virtual machine inside of ChatGPT?
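The trick described in the linked posts boils down to a role-play prompt plus the model's conversation memory. A minimal sketch of that interaction loop, with the chat model stubbed out so the example is self-contained (fake_model and its canned replies are stand-ins for the real ChatGPT; SYSTEM_PROMPT paraphrases the widely shared "act as a Linux terminal" prompt):

```python
SYSTEM_PROMPT = (
    "I want you to act as a Linux terminal. I will type commands and "
    "you will reply with what the terminal should show, and nothing else."
)

def fake_model(history):
    # Stand-in for the chat model: answer only the latest command.
    canned = {"pwd": "/home/user", "whoami": "user"}
    return canned.get(history[-1], "")

def run_terminal_session(commands, model=fake_model):
    # Each reply is appended back into the history, so the model's own
    # output becomes the persistent "state" of the imagined machine.
    history = [SYSTEM_PROMPT]
    outputs = []
    for cmd in commands:
        history.append(cmd)
        reply = model(history)
        history.append(reply)
        outputs.append(reply)
    return outputs
```

The same loop with a real model behind it is what lets the "VM" remember files created earlier in the session: the state lives entirely in the transcript.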

  3. Tomi Engdahl says:

    Benj Edwards / Ars Technica:
    With AI image generation tools like Stable Diffusion and DreamBooth, it is easy to make life-wrecking deepfakes with a few photos of a person from social media

    AI image generation tech can now create life-wrecking deepfakes with ease
    https://arstechnica.com/information-technology/2022/12/thanks-to-ai-its-probably-time-to-take-your-photos-off-the-internet/

    AI tech makes it trivial to generate harmful fake photos from a few social media pictures.

    If you’re one of the billions of people who have posted pictures of themselves on social media over the past decade, it may be time to rethink that behavior. New AI image-generation technology allows anyone to save a handful of photos (or video frames) of you, then train AI to create realistic fake photos that show you doing embarrassing or illegal things. Not everyone may be at risk, but everyone should know about it.

    Photographs have always been subject to falsification: first in darkrooms with scissors and paste, then in Adobe Photoshop with pixels. But convincing fakes took a great deal of skill to pull off. Today, creating convincing photorealistic fakes has become almost trivial.

    Using nothing but those seven images, someone could train AI to generate images that make it seem like John has a secret life. For example, he might like to take nude selfies in his classroom. At night, John might go to bars dressed like a clown. On weekends, he could be part of an extremist paramilitary group. And maybe he served prison time for an illegal drug charge but has hidden that from his employer.

    We used an AI image generator called Stable Diffusion (version 1.5) and a technique called Dreambooth to teach AI how to create images of John in any style. While our John is not real, someone could reproduce similar results with five or more images of any person. They could be pulled from a social media account or even taken as still frames from a video.

    The training process—teaching the AI how to create images of John—took about an hour and was free thanks to a Google cloud computing service. Once training was complete, generating the images themselves took several hours—not because generating them is slow but because we needed to sort through many imperfect pictures (and use trial-and-error in prompting) to find the best ones. Still, it’s dramatically easier than attempting to create a realistic fake of “John” in Photoshop from scratch.
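The workflow the article describes, train once, then generate many candidates and keep only the best, can be sketched roughly as below. Everything here is a hypothetical stand-in: generate_image and its random "score" substitute for a real fine-tuned Stable Diffusion pipeline and for a human judging which fakes look convincing.

```python
import random

def generate_image(prompt, seed):
    # A real pipeline would return an image; here we return a record
    # with a made-up quality score.
    rng = random.Random(seed)
    return {"prompt": prompt, "seed": seed, "score": rng.random()}

def best_images(prompt, n_candidates=50, keep=5):
    # Trial-and-error in bulk: sample many seeds, sort by quality,
    # keep only the top few -- the "sorting through many imperfect
    # pictures" step the article mentions.
    candidates = [generate_image(prompt, s) for s in range(n_candidates)]
    candidates.sort(key=lambda img: img["score"], reverse=True)
    return candidates[:keep]

picks = best_images("photo of a person as an astronaut")
```

This is why generation "took several hours": the slow part is not sampling but curating, which the loop above only caricatures.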

    Thanks to AI, we can make John appear to commit illegal or immoral acts, such as breaking into a house, using illegal drugs, or taking a nude shower with a student. With add-on AI models optimized for pornography, John can be a porn star, and that capability can even veer into CSAM territory.

    We can also generate images of John doing seemingly innocuous things that might still personally be devastating to him—drinking at a bar when he’s pledged sobriety or spending time somewhere he is not supposed to be.

    He can also be put into fun and fantastic situations, like being a medieval knight or an astronaut. He can appear young or old, obese or skinny, with or without glasses, or wearing different outfits.

  4. Tomi Engdahl says:

    Brendan Murray / Bloomberg:
    Pactum, which offers AI-powered software that helps companies like Walmart automate routine supplier negotiations, raised a $20M Series A extension led by 3VC

    Pactum Raises $20 Million in Maersk-Backed Funding to Grow AI Deal-Making
    https://www.bloomberg.com/news/newsletters/2022-12-08/supply-chain-latest-pactum-automates-supplier-negotiations

    Supply disruptions may be cooling down as the global economy downshifts, but the tech revolution in logistics is just heating up.

    A group of investors including the venture arm of shipping giant Maersk is providing $20 million in funding to Mountain View, California-based Pactum, which offers AI-powered software that helps big companies like Walmart automate routine supplier negotiations.

    Volatility in transportation markets means freight carriers and their customers often want to rewrite contracts — traditionally a cumbersome process, especially for companies dealing with hundreds or thousands of suppliers and vendors.

    About 80% of commercial deals with suppliers are non-strategic, mundane negotiations that are heavily based on data and better handled by algorithms than humans, according to Pactum co-founder and CEO Martin Rand.

    “The technology steps in if something un-forecasted happens — like this un-forecasted demand, or a ship moves to a new port, or we now have to reagree long-term deals to spot rates, or spot rates to long-term agreements,” Rand said. “These need to be done extremely fast.”

    Software companies providing supply-chain solutions like Pactum’s are receiving a lot of private investment lately.

    During peak disruptions over the past year, Maersk and Pactum worked to resolve severe capacity imbalances in the spot trucking market. Pactum’s machine learning stepped in to analyze routes, propose prices to the best truckers and secure capacity.

    “What will fundamentally change is that all commercial deals nowadays have either a lot of data associated with them, or a lot of complexity or a high velocity of data,” Rand said. “People are needed to manage strategic deals which machines cannot, but such complexity is very tough because people cannot think in a multidimensional space but machines are made for that.”

    Copenhagen-based Maersk spends a lot of money on things like procurement, and “unfortunately, the bigger the spend, the more complex it actually is to get our head around it.”

    Walmart Trial

    It’s important for Maersk and its partners that negotiations are made quickly, transparently and fairly, as both sides aim to reduce waste in the system.

    “If you can get your truck utilized three times in a day instead of two, you might get paid less for the individual load, but actually at the end of the day, you’re much better off,” Jorgensen said. “What we like as investors is to put technology that understands the interface between operations and then putting a software layer on top of it.”

    Walmart International did a pilot program recently to automate some supplier negotiations using Pactum’s software, according to a Harvard Business Review article last month.

    “Pactum has all the elements needed to be a clear game changer in the business negotiations process,” 3VC partner Eva Arh said in a statement.

  5. Tomi Engdahl says:

    China bans deepfakes created without permission or for evil
    https://www.theregister.com/2022/12/12/china_deep_synthesis_deepfake_regulation/
    China’s Cyberspace Administration has issued guidelines on how to do deepfakes the right way. Deepfakes use artificial intelligence to create realistic depictions, usually videos, of humans saying and/or doing things they didn’t say and/or do. They’re controversial outside China for their potential to mislead audiences and create trouble for the people depicted.

  6. Tomi Engdahl says:

    11 million euros down the drain: the final results of the national AI programme AuroraAI
    https://www.attejuvonen.fi/aurora-ai/

    If this is the first time you are hearing about Finland's national AI programme, you are not alone. AuroraAI has received surprisingly little media attention given the size of the undertaking. The project is now more or less wrapped up and Helsingin Sanomat opened the discussion, so I decided to do my part by testing AuroraAI's end products from the perspective of an ordinary citizen and reporting the results in this article.

    But what is AuroraAI? The Ministry of Finance's website sheds some light on the matter:

    Aurora is a network of AIs and autonomous applications that creates the preconditions for a human-centric and anticipatory society.[1]

    If you are wondering what that means in practice, the project's front man clarifies:

    Dynamically forming value networks in a person's different life situations, that is basically what Aurora is.[2]

    Some readers may still have questions. No worries, here is an exhaustive explanation from the Ministry of Finance's CIO:

    Automation positively bursts into bloom when good-quality data is available. Data is to automation like the water used to irrigate the flowers of automation. It also makes you wonder whether, in this metaphor, human-centricity might be the potting soil.[3]

    In the actual project that followed the preliminary study, running from 2020 to 2022, the potting soil was cultivated on a budget of 11.2 million euros.[5] Originally the budget was supposed to be as much as 100 million euros,[6] but luckily for taxpayers the project was never carried out at full scale.

    The results
    What can taxpayers expect to get for 11 million euros? The Helsingin Sanomat article puts on silk gloves and carefully lowers the bar until it touches the floor:

    [Technology researcher Santeri] Räisänen says that a belly flop is the typical outcome of AI projects. It is often feared that they will produce a surveillance apparatus.

    "It is more likely that nothing comes of them at all."

    Something has at least been produced: the Aurora AI network's recommendation engine and profile management, whose source code was published last week, plus some user interface code and a handbook for producers of advisory chatbots.

    Is "at least some code has been published" really a satisfactory end result?

    Now it is time to sink our hands into the potting soil and test whether the end products are useful from an ordinary citizen's perspective. To be clear, by an end product I mean a concrete application or similar deliverable that a citizen can use, not the participants' personal "we learned so much" experiences.

    As I see it, the project's end products fall into two categories:

    Service recommenders, which recommend services for all kinds of life situations
    Conversational chatbots, which offer advice on a limited topic

    "AuroraAI is unable to recommend mental health services to a user contemplating suicide. Instead, AuroraAI recommends filing a burial wish and writing a will before the suicide."

    What did we learn from all this?
    I do not want to blame individual people for this fiasco. The end result would probably have been just as poor even if every agency front man and consultancy vendor had been swapped for someone else. The project's failure was foreseeable from afar, and it should never have been undertaken. AI hype does not belong in the public sector; in the private sector you may hype as much as you like. I am not claiming that such a project would have fared any better led by the private sector (after all, private-sector consultants were delivering this project too); it is simply a different thing to waste your own money than to waste tax money.

    Speaking of the private sector, let's finish by trying out the kind of chatbot service currently available to Finnish citizens completely free of charge from the private company OpenAI's ChatGPT.

  7. Tomi Engdahl says:

    Melissa Heikkilä / MIT Technology Review:
    When a female reporter tried Lensa, many of her 100 avatars were “pornified”, with 16 topless; her male peers got avatars of astronauts, warriors, and the like — When I tried the new viral AI avatar app Lensa, I was hoping to get results similar to some of my colleagues at MIT Technology Review.

    The viral AI avatar app Lensa undressed me—without my consent
    https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/

    My avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors.

    When I tried the new viral AI avatar app Lensa, I was hoping to get results similar to some of my colleagues at MIT Technology Review. The digital retouching app was first launched in 2018 but has recently become wildly popular thanks to the addition of Magic Avatars, an AI-powered feature which generates digital portraits of people based on their selfies.

    But while Lensa generated realistic yet flattering avatars for them—think astronauts, fierce warriors, and cool cover photos for electronic music albums— I got tons of nudes. Out of 100 avatars I generated, 16 were topless, and in another 14 it had put me in extremely skimpy clothes and overtly sexualized poses.

    I have Asian heritage, and that seems to be the only thing the AI model picked up on from my selfies. I got images of generic Asian women clearly modeled on anime or video-game characters. Or most likely porn, considering the sizable chunk of my avatars that were nude or showed a lot of skin. A couple of my avatars appeared to be crying. My white female colleague got significantly fewer sexualized images, with only a couple of nudes and hints of cleavage. Another colleague with Chinese heritage got results similar to mine: reams and reams of pornified avatars.

    Lensa’s fetish for Asian women is so strong that I got female nudes and sexualized poses even when I directed the app to generate avatars of me as a male.

    The fact that my results are so hypersexualized isn’t surprising, says Aylin Caliskan, an assistant professor at the University of Washington who studies biases and representation in AI systems.

    Lensa generates its avatars using Stable Diffusion, an open-source AI model that generates images based on text prompts. Stable Diffusion is built using LAION-5B, a massive open-source data set that has been compiled by scraping images off the internet.

    And because the internet is overflowing with images of naked or barely dressed women, and pictures reflecting sexist, racist stereotypes, the data set is also skewed toward these kinds of images.

    This leads to AI models that sexualize women regardless of whether they want to be depicted that way, Caliskan says—especially women with identities that have been historically disadvantaged.

    AI training data is filled with racist stereotypes, pornography, and explicit images of rape, researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe found after analyzing a data set similar to the one used to build Stable Diffusion. It’s notable that their findings were only possible because the LAION data set is open source. Most other popular image-making AIs, such as Google’s Imagen and OpenAI’s DALL-E, are not open but are built in a similar way, using similar sorts of training data, which suggests that this is a sector-wide problem.

    Stability.AI, the company that developed Stable Diffusion, launched a new version of the AI model in late November. A spokesperson says that the original model was released with a safety filter, which Lensa does not appear to have used, as it would remove these outputs. One way Stable Diffusion 2.0 filters content is by removing images that are repeated often. The more often something is repeated, such as Asian women in sexually graphic scenes, the stronger the association becomes in the AI model.

    “Women are associated with sexual content, whereas men are associated with professional, career-related content in any important domain such as medicine, science, business, and so on,” Caliskan says.

    “Someone has to choose the training data, decide to build the model, decide to take certain steps to mitigate those biases or not.”

  8. Tomi Engdahl says:

    This artist is dominating AI-generated art. And he’s not happy about it.
    Greg Rutkowski is a more popular prompt than Picasso.
    https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/

    Those cool AI-generated images you’ve seen across the internet? There’s a good chance they are based on the works of Greg Rutkowski.

    Rutkowski is a Polish digital artist who uses classical painting styles to create dreamy fantasy landscapes. He has made illustrations for games such as Sony’s Horizon Forbidden West, Ubisoft’s Anno, Dungeons & Dragons, and Magic: The Gathering. And he’s become a sudden hit in the new world of text-to-image AI generation.

    His distinctive style is now one of the most commonly used prompts in the new open-source AI art generator Stable Diffusion

    For example, type in “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski,” and the system will produce something that looks not a million miles away from works in Rutkowski’s style.

    But these open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists. As a result, they are raising tricky questions about ethics and copyright. And artists like Rutkowski have had enough.

    According to the website Lexica, which tracks over 10 million images and prompts generated by Stable Diffusion, Rutkowski’s name has been used as a prompt around 93,000 times. Some of the world’s most famous artists, such as Michelangelo, Pablo Picasso, and Leonardo da Vinci, have each been used as a prompt around 2,000 times or fewer. Rutkowski’s name also features as a prompt thousands of times in the Discord of another text-to-image generator, Midjourney.

    Rutkowski was initially surprised but thought it might be a good way to reach new audiences. Then he tried searching for his name to see if a piece he had worked on had been published. The online search brought back work that had his name attached to it but wasn’t his.

    “It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski says. “That’s concerning.”

  9. Tomi Engdahl says:

    Emma Roth / The Verge:
    DoNotPay announces an AI-powered chatbot that can negotiate bills and cancel subscriptions by chatting to customer service, rolling out in the next two weeks

    DoNotPay is launching an AI chatbot that can negotiate your bills
    https://www.theverge.com/2022/12/13/23505873/donotpay-negotiate-bills-ai-chatbot

    The latest tool from DoNotPay can have a back-and-forth conversation with a company’s customer service representative through live chat or email.

    DoNotPay, the company that bills itself as “the world’s first robot lawyer,” is launching a new AI-powered chatbot that can help you negotiate bills and cancel subscriptions without having to deal with customer service.

    In a demo of the tool posted by DoNotPay CEO Joshua Browder, the chatbot manages to get a discount on a Comcast internet bill through Xfinity’s live chat. Once it connects with a customer service representative, the bot asks for a better rate using account details provided by the customer. The chatbot cites problems with Xfinity’s services and threatens to take legal action, to which the representative responds by offering to take $10 off the customer’s monthly internet bill.

    https://twitter.com/jbrowder1/status/1602353465753309195?s=20&t=60ibQSBxL_9eSViYbVGFGA

  10. Tomi Engdahl says:

    AI cannot create anything original
    AI is a parasite that lives, grows, and claims living space by aping and recombining everything that already exists.
    https://www.hs.fi/mielipide/art-2000009200596.html

  11. Tomi Engdahl says:

    Eventually we will not be able to detect AI-driven cyberattacks
    https://etn.fi/index.php/13-news/14383-lopulta-emme-voi-havaita-tekoaelyn-tekemiae-kyberhyoekkaeyksiae

    The cybersecurity field has long talked about attack and defense eventually becoming a battle between two AIs. That is not yet the case today, but in the long term, around the end of this decade, AI will be capable of cyberattacks that defenders will find very hard even to detect.

    This emerges from a report prepared jointly by WithSecure, the Finnish Transport and Communications Agency Traficom, and the National Emergency Supply Agency. According to the report, AI-enabled cyberattacks are currently rare and limited to social engineering applications (such as impersonating an individual), or they are carried out in ways that researchers and analysts cannot observe directly (for example, data analysis in back-end systems).

    The report nevertheless stresses that AI has advanced so significantly that more sophisticated cyberattacks are increasingly likely in the near future. Target identification, social engineering, and impersonation are currently the most immediate AI-enabled threats, and they are expected to evolve and grow in number over the next two years.

    At present, cybercriminals have little need to adopt AI in their attacks. As long as conventional cyberattacks reach their goals and generate money for the attackers, there is limited motivation to switch to AI. Nor have online criminals yet studied the techniques that using AI requires; these are complex new technologies.

    What about the longer term? What is AI capable of as a cyberattacker?

    Looking 10 years ahead, AI could probably build autonomous malware through reinforcement learning. A big obstacle is the lack of the required machine-learning libraries: these would be needed for the malware to run on the target system, and they have not yet been deployed widely enough on computers, smartphones, and tablets.

    The machine-learning libraries would in practice have to be bundled into the malware itself, which would increase the payload size considerably. The machine-learning models that would make the malware autonomous are also very large and need a lot of computing power and memory to run. The size and resource demands of these models prevent their use on existing systems, and the resulting performance problems might even make an attack easier to detect. Because of these challenges, it is unlikely that we will see self-directing or intelligently self-propagating malware in the near future.

    Tekoälyn mahdollistamat kyberhyökkäykset (the full report, in Finnish)
    https://www.traficom.fi/sites/default/files/media/publication/TRAFICOM_Teko%C3%A4lyn_mahdollistamat_kyberhy%C3%B6kk%C3%A4ykset%202022-12-12_web.pdf

  12. Tomi Engdahl says:

    A new AI is making some people very upset.

    No, The Lensa AI App Technically Isn’t Stealing Artists’ Work – But It Will Majorly Shake Up The Art World
    https://www.iflscience.com/no-the-lensa-ai-app-technically-isn-t-stealing-artists-work-but-it-will-majorly-shake-up-the-art-world-66669

    Lensa has become both popular and controversial in a short period of time, but how does it actually work?

    The Lensa photo and video editing app has shot into social media prominence in recent weeks, after adding a feature that lets you generate stunning digital portraits of yourself in contemporary art styles. It does that for just a small fee and the effort of uploading 10 to 20 different photographs of yourself.

    2022 has been the year text-to-media AI technology left the labs and started colonising our visual culture, and Lensa may be the slickest commercial application of that technology to date.

    It has lit a fire among social media influencers looking to stand out – and a different kind of fire among the art community. Australian artist Kim Leutwyler told the Guardian she recognised the styles of particular artists – including her own style – in Lensa’s portraits.

    Since Midjourney, OpenAI’s Dall-E and the CompVis group’s Stable Diffusion burst onto the scene earlier this year, the ease with which individual artists’ styles can be emulated has sounded warning bells. Artists feel their intellectual property – and perhaps a bit of their soul – has been compromised. But has it?

    Well, not as far as existing copyright law sees it.

    If it’s not direct theft, what is it?
    Text-to-media AI is inherently very complicated, but it is possible for us non-computer-scientists to understand conceptually.

    Lensa is essentially a streamlined and customised front-end for the freely available Stable Diffusion deep learning model. It’s so named because it uses a system called latent diffusion to power its creative output.

    The word “latent” is key here. In data science a latent variable is a quality that can’t be measured directly, but can be inferred from things that can be measured.
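As a toy illustration of that definition (all numbers here are made up): a hidden value is never observed directly, but averaging many noisy measurements of it recovers a good estimate.

```python
import random

def observe(latent, noise_sd, rng):
    # A measurable quantity: the hidden value plus measurement noise.
    return latent + rng.gauss(0.0, noise_sd)

def infer_latent(observations):
    # Estimate the unmeasurable value from what we can measure.
    return sum(observations) / len(observations)

rng = random.Random(0)
hidden = 3.7  # the latent variable itself, never seen directly
measurements = [observe(hidden, 0.5, rng) for _ in range(1000)]
estimate = infer_latent(measurements)  # close to 3.7
```

Latent diffusion generalizes this idea enormously: the "hidden" quantities are compressed representations of images, inferred and manipulated rather than the pixels themselves.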

    When Stable Diffusion was being built, machine-learning algorithms were fed a large number of image-text pairs, and they taught themselves billions of different ways these images and captions could be connected.

    This formed a complex knowledge base, none of which is directly intelligible to humans. We might see “modernism” or “thick ink” in its outputs, but Stable Diffusion sees a universe of numbers and connections.

    Because the system ingested both descriptions and image data, it lets us plot a course through the enormous sea of possible outputs by typing in meaningful prompts.

    What makes Lensa stand out?
    So if Stable Diffusion is a text-to-image system where we navigate through different possibilities, then Lensa seems quite different since it takes in images, not words. That’s because one of Lensa’s biggest innovations is streamlining the process of textual inversion.

    Lensa takes user-supplied photos and injects them into Stable Diffusion’s existing knowledge base, teaching the system how to “capture” the user’s features so it can then stylise them. While this can be done in the regular Stable Diffusion, it’s far from a streamlined process.
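Conceptually, techniques in this family freeze the model and train only the new token's embedding, so the frozen network learns to map that token close to the user's photos. A toy sketch under that assumption, with a fixed linear map standing in for the frozen network (the real Stable Diffusion machinery is vastly larger):

```python
FROZEN_WEIGHTS = [0.8, -0.3, 0.5]  # stands in for the frozen model

def model_output(embedding):
    # Frozen "model": a fixed linear map of the token embedding.
    return sum(w * e for w, e in zip(FROZEN_WEIGHTS, embedding))

def train_token(target, steps=500, lr=0.05):
    # Gradient descent on the embedding only; the weights never change.
    emb = [0.0, 0.0, 0.0]
    for _ in range(steps):
        err = model_output(emb) - target      # squared-error gradient
        emb = [e - lr * 2 * err * w for e, w in zip(emb, FROZEN_WEIGHTS)]
    return emb

new_token_embedding = train_token(target=1.0)
```

The design point this caricature preserves is that only a tiny vector is learned per user, which is why the personalization step can be fast and cheap compared with retraining the model.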

    Although you can’t push the images on Lensa in any particular desired direction, the trade-off is a wide variety of options that are almost always impressive. These images borrow ideas from other artists’ work, but do not contain any actual snippets of their work.

    The Australian Arts Law Centre makes it clear that while individual artworks are subject to copyright, the stylistic elements and ideas behind them are not.

    What about the artists?
    Nonetheless, the fact that art styles and techniques are now transferable in this way is immensely disruptive and extremely upsetting for artists. As technologies like Lensa become more mainstream and artists feel increasingly ripped off, there may be pressure for legislation to adapt.

    For artists who work on small-scale jobs, such as creating digital illustrations for influencers or other web enterprises, the future looks challenging.

    However, while it is easy to make an artwork that looks good using AI, it’s still difficult to create a very specific work, with a specific subject and context.

    It may be that artists themselves will need to borrow a page from the influencer’s handbook and invest more effort in publicising themselves.

    It’s early days, and it’s going to be a tumultuous decade for producers and consumers of art. But one thing is for sure: the genie is out of the bottle.

  13. Tomi Engdahl says:

    Companies — and VCs — continue to invest in AI despite market slowdown
    https://techcrunch.com/2022/12/15/despite-the-market-slowdown-companies-and-vcs-continue-to-invest-in-ai/?tpcc=ecfb2020

    While hiring freezes at Big Tech firms might be hurting certain AI investments, it’s clear that there remains a strong appetite throughout the enterprise for AI technologies — whether developed in-house or outsourced to third parties.

    According to a McKinsey survey from early December, AI adoption at companies has more than doubled since 2017, with 63% of businesses expecting spending on AI to increase over the next three years. In February, IDC forecast that companies would increase their spend on AI solutions by 19.6% in 2022, reaching $432.8 billion by the end of the year and over $500 billion in 2023.

  14. Tomi Engdahl says:

    Generative AI is driving much of the recent corporate interest, with text-to-image tools such as OpenAI’s DALL-E 2 and Stable Diffusion seeing swift uptake despite the risks. Adobe just this month announced that it would open its stock image service, Adobe Stock, to creations made with the help of generative AI programs, following in the footsteps of Shutterstock (but not rival Getty Images). Meanwhile, Microsoft partnered with OpenAI to provide enterprise-tailored access to DALL-E 2 to customers like Mattel, which is using DALL-E 2 to come up with ideas for new Hot Wheels model cars.
    https://techcrunch.com/2022/12/15/despite-the-market-slowdown-companies-and-vcs-continue-to-invest-in-ai/?tpcc=ecfb2020

  15. Tomi Engdahl says:

    Jyrki Lehtola's column: Christmas in danger as artists begin a work stoppage https://www.is.fi/kotimaa/art-2000009270543.html

  16. Tomi Engdahl says:

    Google's search engine has finally found a challenger https://www.is.fi/digitoday/art-2000009267204.html

    An AI program called ChatGPT may be the first real challenger Google has faced in a long time, Bloomberg predicts. At the same time, the entire internet may face an existential problem.

  17. Tomi Engdahl says:

    Stack Overflow bans ChatGPT as ‘substantially harmful’ for coding issues
    High error rates mean thousands of AI answers need checking by humans
    https://www.theregister.com/2022/12/05/stack_overflow_bans_chatgpt/

  18. Tomi Engdahl says:

    No Linux? No problem. Just get AI to hallucinate it for you
    ChatGPT-generated command line can create virtual files, execute code, play games.
    https://arstechnica.com/information-technology/2022/12/openais-new-chatbot-can-hallucinate-a-linux-shell-or-calling-a-bbs/
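    For the curious, the trick described in the Engraved blog post (linked at the top of this thread) boils down to a single role-play prompt, roughly along these lines (paraphrased from the post):

```
I want you to act as a Linux terminal. I will type commands and you will
reply with what the terminal should show. Only reply with the terminal
output inside one unique code block, and nothing else. Do not write
explanations. Do not type commands unless I instruct you to.
```

    From there, commands like ls, cd, and even python are answered with plausible, entirely hallucinated output — the "virtual machine" exists only in the model's text predictions.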

    Reply
  19. Tomi Engdahl says:

    Interview with OpenAI’s Greg Brockman: The Future of LLMs, Foundation & Generative Models (DALL·E 2 & GPT-3)
    https://m.youtube.com/watch?v=Rp3A5q9L_bg&feature=youtu.be

    Reply
  20. Tomi Engdahl says:

    A Computer Can Now Write Your College Essay — Maybe Better Than You Can
    https://www.forbes.com/sites/emmawhitford/2022/12/09/a-computer-can-now-write-your-college-essay—maybe-better-than-you-can/

    Not only does ChatGPT write clear essays, but it can also conjure up personal details and embellishments that could boost a student's chance of acceptance and would be difficult to verify.

    Reply
  21. Tomi Engdahl says:

    “Artists are pushing back on imagery generated by artificial intelligence (AI) by using the technology to create content containing copyrighted Disney characters.

    Since the introduction of AI systems, including DALL·E 2, Lensa AI, and Midjourney, artists have argued that such tools steal their work, given that they’ve been fed an endless supply of their pieces as inputs. Many such tools, for example, can be told to create imagery in the style of a particular artist.

    The current legal consensus, much to the chagrin of many artists, concludes that AI-generated art is in the public domain and, therefore not copyrighted. In the terms of service for systems such as DALL·E 2, created by the research laboratory OpenAI, users are told that no images are copyrighted despite being owned by OpenAI.

    In response to concerns over the future of their craft, artists have begun using AI systems to generate images of characters, including Disney’s Mickey Mouse. Given Disney’s history of fierce protection over its content, the artists are hoping the company takes action and thus proves that AI art isn’t as original as it claims.

    Artists fed up with AI-image generators use Mickey Mouse to goad copyright lawsuits
    ‘People’s craftsmanship, time, effort and ideas are being taken without their consent…’
    https://www.dailydot.com/debug/ai-art-protest-disney-characters-mickey-mouse/

    Over the weekend, Eric Bourdages, the Lead Character Artist on the popular video game Dead by Daylight, urged his followers to create and sell merchandise using the Disney-inspired images he created using Midjourney.

    “Someone steal these amazing designs to sell them on Mugs and T-Shirts, I really don’t care, this is AI art that’s been generated,” Bourdages wrote. “Legally there should be no recourse from Disney as according to the AI models TOS these images transcends copyright and the images are public domain.”

    In numerous follow-up tweets, Bourdages generated images of other popular characters from movies, video games, and comic books, including Darth Vader, Spider-Man, Batman, Mario, and Pikachu.

    “More shirts courtesy of AI,” he added. “I’m sure, Nintendo, Marvel, and DC won’t mind, the AI didn’t steal anything to create these images, they are completely 100% original.”

    Many users appeared to agree with Bourdages’ somewhat sarcastic interpretation, sharing T-shirts they created online that feature the AI images.

    Bourdages later clarified that he had no intention of profiting off of the images, but noted that Midjourney had done so by charging him to use their service.

    “Midjourney is a paid subscription btw, so technically the only one that profited off of this image is them,”

    Just two days after sharing the images, however, Bourdages stated on Twitter that he had suddenly lost his access to Midjourney.

    “Update – I was refunded and lost access to Midjourney,” he said. “They are no longer profiting off of these images and I assume didn’t want copyrighted characters generated. I hope this thread created discussion around AI and where data is sourced.”

    “The obvious issue I am opposed to in my thread is the theft of human art,” he said. “People’s craftsmanship, time, effort, and ideas are being taken without their consent and used to create a product that can blend it all together and mimic it to varying degrees.”

    The Daily Dot reached out to both Bourdages and Midjourney to inquire about the images but did not receive a reply by press time. Disney did not respond to questions either regarding whether it would attempt to claim copyright over AI-generated imagery.

    The issue surrounding AI art has already led to widespread protest and pushback from the art community.

    Reply
  22. Tomi Engdahl says:

    AI Could Put Google In Serious Trouble Within A Year Or Two, Gmail Creator Says
    The man behind the motto “don’t be evil” has a few concerns about ChatGPT.
    https://www.iflscience.com/ai-could-put-google-in-serious-trouble-within-a-year-or-two-gmail-creator-says-66527

    Reply
  23. Tomi Engdahl says:

    New AI Tool Can Smoothly De-Age Actors’ Faces For Movies In Seconds
    The digital aging of actors is currently a costly process that’s in hot demand.
    https://www.iflscience.com/new-ai-tool-can-smoothly-age-actor-s-faces-for-movies-in-seconds-66530

    Reply
  24. Tomi Engdahl says:

    AI-generated answers temporarily banned on coding Q&A site Stack Overflow / People have been using OpenAI’s chatbot ChatGPT to flood the site with AI responses, but Stack Overflow’s mods say these ‘have a high rate of being incorrect.’
    https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers

    Reply
  25. Tomi Engdahl says:

    Professors Alarmed By New AI That Writes Essays About As Well As Dumb Undergrads
    by Victor Tangermann
    https://futurism.com/the-byte/professors-alarmed-ai-undergrads

    Reply
  26. Tomi Engdahl says:

    Amy Harmon / New York Times:
    A profile of iNaturalist, a not-for-profit social network with an ML algorithm to help users identify 70K plants and animals, driving cooperation and consensus — A not-for-profit initiative of the California Academy of Sciences and the National Geographic Society, iNaturalist says it aims …

    https://www.nytimes.com/2022/12/09/us/inaturalist-nature-app.html

    Reply
  27. Tomi Engdahl says:

    Image-Generating AI Can Texture An Entire 3D Scene In Blender
    https://hackaday.com/2022/12/18/image-generating-ai-can-texture-an-entire-3d-scene-in-blender/

    [Carson Katri] has a fantastic solution to easily add textures to 3D scenes in Blender: have an image-generating AI create the texture on demand, and do it for you.

    As shown here, two featureless blocks on a featureless plain become run-down buildings by wrapping the 3D objects in a suitable image. It’s all done with the help of the Dream Textures add-on for Blender.

    The solution uses Stable Diffusion to generate a texture for a scene based on a text prompt (e.g. “sci-fi abandoned buildings”), and leverages an understanding of a scene’s depth for best results. The AI-generated results aren’t always entirely perfect, but the process is pretty amazing. Not to mention fantastically fast compared to creating from scratch.

    https://github.com/carson-katri/dream-textures/releases/tag/0.0.9

    Reply
  28. Tomi Engdahl says:

    There’s a Problem With That AI Portrait App: It Can Undress People Without Their Consent
    Pandora’s box is open.
    https://futurism.com/ai-portrait-app-nudes-without-consent

    Reply
  29. Tomi Engdahl says:

    AI learns to write computer code in ‘stunning’ advance
    DeepMind’s AlphaCode outperforms many human programmers in tricky software challenges
    https://www.science.org/content/article/ai-learns-write-computer-code-stunning-advance

    Reply
  30. Tomi Engdahl says:

    Wall Street Journal:
    A profile of OpenAI, which raised $1B from Microsoft after Sam Altman demoed an AI model to Satya Nadella in 2019, as some doubt its path to meaningful revenue

    The Backstory of ChatGPT Creator OpenAI
    Behind ChatGPT and other AI breakthroughs was Sam Altman’s fundraising—but skeptics remain
    https://www.wsj.com/articles/chatgpt-creator-openai-pushes-new-strategy-to-gain-artificial-intelligence-edge-11671378475?mod=djemalertNEWS

    ChatGPT, the artificial-intelligence program captivating Silicon Valley with its sophisticated prose, had its origin three years ago, when technology investor Sam Altman became chief executive of the chatbot’s developer, OpenAI.

    Mr. Altman decided at that time to move the OpenAI research lab away from its nonprofit roots and turn to a new strategy, as it raced to build software that could fully mirror the intelligence and capabilities of humans—what AI researchers call “artificial general intelligence.” Mr. Altman, who had built a name as president of famed startup accelerator Y Combinator, would oversee the creation of a new for-profit arm, believing OpenAI needed to become an aggressive fundraiser to meet its founding mission.

    Since then, OpenAI has landed deep-pocketed partners like Microsoft Corp., created products that have captured the attention of millions of internet users, and is looking to raise more money. Mr. Altman said the company’s tools could transform technology similar to the invention of the smartphone and tackle broader scientific challenges.

    “They are incredibly embryonic right now, but as they develop, the creativity boost and new superpowers we get—none of us will want to go back,” Mr. Altman said in an interview.

    Reply
  31. Tomi Engdahl says:

    The $2 Billion Emoji: Hugging Face Wants To Be Launchpad For A Machine Learning Revolution
    https://www.forbes.com/sites/kenrickcai/2022/05/09/the-2-billion-emoji-hugging-face-wants-to-be-launchpad-for-a-machine-learning-revolution/?utm_medium=social&utm_source=ForbesMainFacebook&utm_campaign=socialflowForbesMainFB&sh=52efbe8df732

    Newly valued at $2 billion, the AI 50 debutant originated as a chatbot for teenagers. Now, it has aspirations—and $100 million in fresh dry powder—to be the GitHub of machine learning.

    When Hugging Face first announced itself to the world five years ago, it came in the form of an iPhone chatbot app for bored teenagers. It shared selfies of its computer-generated face, cracked jokes and gossiped about its crush on Siri. It hardly made any money.

    The viral moment came in 2018—not among teens, but developers. The founders of Hugging Face had begun to share bits of the app’s underlying code online for free. Almost immediately, researchers from some of the biggest tech names in the business, including Google and Microsoft, began using it for AI applications. Today, the chatbot has long since disappeared from the App Store, but Hugging Face has become the central depot for ready-to-use machine-learning models, the starting point from which more than 10,000 organizations have created AI-powered tools for their businesses.

    What GitHub is for software, Hugging Face has become for machine learning. That’s a confident comparison, considering the widespread popularity of GitHub, which is used by more than 70 million developers to share and collaborate on code and was last recorded making $300 million in revenue at the time of its $7.5 billion sale to Microsoft in 2018. Hugging Face, by contrast, generated less than $10 million last year.

    “I don’t really see a world where machine learning becomes the default way to build technology and where Hugging Face is the No. 1 platform for this, and we don’t manage to generate several billion dollars in revenue.”

    Hugging Face CEO Clément Delangue

    Reply
  32. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    OpenAI open sources Point-E, a machine learning system that generates and displays a 3D object from a text prompt in one to two minutes on an Nvidia V100 GPU — The next breakthrough to take the AI world by storm might be 3D model generators. This week, OpenAI open-sourced Point-E …

    OpenAI releases Point-E, an AI that generates 3D models
    https://techcrunch.com/2022/12/20/openai-releases-point-e-an-ai-that-generates-3d-models/

    The next breakthrough to take the AI world by storm might be 3D model generators. This week, OpenAI open sourced Point-E, a machine learning system that creates a 3D object given a text prompt. According to a paper published alongside the code base, Point-E can produce 3D models in one to two minutes on a single Nvidia V100 GPU.

    Point-E doesn’t create 3D objects in the traditional sense. Rather, it generates point clouds, or discrete sets of data points in space that represent a 3D shape — hence the cheeky abbreviation. (The “E” in Point-E is short for “efficiency,” because it’s ostensibly faster than previous 3D object generation approaches.) Point clouds are easier to synthesize from a computational standpoint, but they don’t capture an object’s fine-grained shape or texture — a key limitation of Point-E currently.

    To get around this limitation, the Point-E team trained an additional AI system to convert Point-E’s point clouds to meshes. (Meshes — the collections of vertices, edges and faces that define an object — are commonly used in 3D modeling and design.) But they note in the paper that the model can sometimes miss certain parts of objects, resulting in blocky or distorted shapes.
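    To make the point-cloud/mesh distinction concrete, here is a small illustration (this is not Point-E's pipeline; it just uses NumPy and SciPy's convex hull as a stand-in for the cloud-to-mesh step):

```python
# Illustration (not Point-E itself): a point cloud is just an (N, 3) array
# of samples on a shape; a mesh adds connectivity (triangular faces).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# "Point cloud": 500 points sampled on the unit sphere.
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# One naive cloud-to-mesh step: the convex hull yields vertices plus
# triangular faces. (Point-E trains a separate model for this conversion;
# a hull only works for convex shapes and loses concave detail — akin to
# the "blocky or distorted" failure mode the paper mentions.)
hull = ConvexHull(pts)
print(pts.shape)                # (500, 3): the raw point cloud
print(hull.simplices.shape[1])  # 3: vertices per triangular face
```

    The gap between those two representations — unordered points versus connected faces — is exactly why the extra conversion model is needed.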

    Reply
  33. Tomi Engdahl says:

    Art professionals are increasingly concerned that text-to-image platforms will render hundreds of thousands of well-paid creative jobs obsolete.

    AI Is Coming For Commercial Art Jobs. Can It Be Stopped?
    https://www.forbes.com/sites/robsalkowitz/2022/09/16/ai-is-coming-for-commercial-art-jobs-can-it-be-stopped/?utm_source=ForbesMainFacebook&utm_campaign=socialflowForbesMainFB&utm_medium=social&sh=4e1186ab54b0

    Reply
  34. Tomi Engdahl says:

    And they said they will not take your job…

    Reply
  35. Tomi Engdahl says:

    New York Times:
    Sources: ChatGPT’s release led Google to declare a “code red”, as teams have been reassigned to respond to the threat that ChatGPT poses to its search business — A new wave of chat bots like ChatGPT use artificial intelligence that could reinvent or even replace the traditional internet search engine.

    https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html

    Reply
  36. Tomi Engdahl says:

    The researchers are concerned about how easy it could be for AI to be racist.

    AI Can Identify Race From Just X-Rays And Scientists Have No Idea How
    https://www.iflscience.com/ai-can-identify-race-from-just-xrays-and-scientists-have-no-idea-how-63717

    In another display of AI perceiving things that humans inherently can’t, researchers have discovered that AI may be able to identify race from X-ray images, despite there being no difference apparent to human experts. Based on X-ray and CT images alone, the AI was able to identify race with around 90 percent accuracy, and the scientists are unable to understand how it does this.

    Reply
  37. Tomi Engdahl says:

    AI has become obsessed with generating massive anime boobs
    https://www.gamingbible.co.uk/news/ai-software-cant-stop-generating-massive-anime-boobs-448054-20221221?source=facebook

    AI art has been a hot topic in recent days for a number of reasons. It was revealed that High On Life used AI to generate some of its environments, a fact that unsurprisingly proved controversial. If your Twitter feed is anything like mine, I’d imagine you’ve also noticed a bunch of people paying to generate various AI images of themselves.

    Today, though, I bring you an entirely new AI-art-related problem: seemingly, AI art generators cannot stop creating anime boobs.

    Reply
  38. Tomi Engdahl says:

    ChatGPT is closed-source. Some open-source breadcrumbs can be found here. https://www.eleuther.ai/
    and here https://www.youtube.com/watch?v=9dIR7g_1hgU

    Reply
  39. Tomi Engdahl says:

    Thomas Claburn / The Register:
    Stanford study: programmers who used AI tools like GitHub Copilot to solve a set of coding challenges produced less secure code than those who did not — At the same time, tools like Github Copilot and Facebook InCoder make developers believe their code is sound

    Study finds AI assistants help developers produce code that’s more likely to be buggy
    https://www.theregister.com/2022/12/21/ai_assistants_bad_code/

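    The kind of defect the study flags can be as simple as an assistant completing a database lookup with string formatting instead of a parameterized query. A minimal sketch using Python's built-in sqlite3 (a generic illustration, not an example from the study itself):

```python
# The study's security concern in miniature: an assistant may happily
# complete a lookup with an f-string, which is SQL-injectable.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def find_user_unsafe(name):
    # Typical insecure completion: attacker-controlled input spliced
    # directly into the SQL text.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles escaping.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('alice',)] — injection matches every row
print(find_user_safe(payload))    # [] — no user literally named the payload
```

    Both versions look plausible at a glance, which is the study's point: the tools make developers believe their code is sound.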

    Reply
