3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” Scriffignano says.

5,200 Comments

  1. Tomi Engdahl says:

    Sam Schechner / Wall Street Journal:
    Microsoft CEO Satya Nadella says the company plans to integrate OpenAI’s tools into all of its products and make them available for other businesses to build on

    Microsoft Plans to Build OpenAI Capabilities Into All Products
    Offering for businesses and end users to be transformed by incorporating tools like ChatGPT, CEO Satya Nadella says
    https://www.wsj.com/articles/microsoft-plans-to-build-openai-capabilities-into-all-products-11673947774?mod=djemalertNEWS

    DAVOS, Switzerland — Microsoft Corp. plans to incorporate artificial-intelligence tools like ChatGPT into all of its products and make them available as platforms for other businesses to build on, Chief Executive Satya Nadella said.

    Speaking Tuesday at a Wall Street Journal panel at the World Economic Forum’s annual event here in the Swiss mountains, Mr. Nadella said that his company will move quickly to commercialize tools from OpenAI, the research lab behind the ChatGPT chatbot as well as image generator Dall-E 2, which turns language prompts into novel images. Microsoft was an early investor in the startup.

    Microsoft said Monday that it is giving more customers access to the software behind those tools through its cloud-computing platform Azure. Mr. Nadella said at the panel Tuesday that the aim was to make Azure “the place for anybody and everybody who thinks about AI,” both for businesses and end users, including making ChatGPT available to business users.

    “Every product of Microsoft will have some of the same AI capabilities to completely transform the product,” Mr. Nadella said.

    OpenAI has been the center of the tech industry’s recent surge in excitement about AI, and Microsoft has been in advanced talks to increase its investment in the startup, the Journal has previously reported.

    The lab has been in talks to sell existing shares in a tender offer that would value the company at around $29 billion.

    Mr. Nadella said in the interview that the new excitement around the tools was based on the fast growth in their capabilities in the past year, something he said he expected to continue. “I’m not claiming by the way that this is the last innovation in AI,” Mr. Nadella said. “This is not linear progress.”

    “The best way to prepare for it is not to bet against this technology, and this technology helping you in your job and your business process,” he said.

  2. Tomi Engdahl says:

    Dina Bass / Bloomberg:
    Microsoft makes its Azure OpenAI Service, announced in 2021, broadly available, giving users access to the GPT-3.5 language model, DALL-E 2, and ChatGPT “soon” — Microsoft Corp. said it will add OpenAI’s viral artificial intelligence bot ChatGPT to its cloud-based Azure service …

    Microsoft to Add ChatGPT to Azure Cloud Services ‘Soon’
    https://www.bloomberg.com/news/articles/2023-01-17/microsoft-azure-to-add-chatgpt-to-cloud-services

    Microsoft, in talks for further investment in OpenAI, is widely releasing Azure service based on earlier partnership

    Microsoft Corp. said it will add OpenAI’s viral artificial intelligence bot ChatGPT to its cloud-based Azure service “soon,” building on an existing relationship between the two companies as Microsoft mulls taking a far larger stake in OpenAI.

    The software giant announced the broad availability of its Azure OpenAI Service, which has been available to a limited set of customers since it was unveiled in 2021. The service gives Microsoft’s cloud customers access to various OpenAI tools like the GPT-3.5 language system that ChatGPT is based on, as well as the Dall-E model for generating images from text prompts, the company said in a blog post. That enables Azure customers to use the OpenAI products in their own applications running in the cloud.
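
    For readers wondering what using those OpenAI products “in their own applications” looks like, here is a minimal sketch of a completion call through the Azure OpenAI Service, assuming the pre-1.0 openai Python package; the resource URL, deployment name, and key are hypothetical placeholders, not values from the article:

        # Minimal sketch: one completion call via Azure OpenAI Service.
        # Assumes the pre-1.0 "openai" Python package. The resource URL,
        # deployment name, and key below are hypothetical placeholders.
        import openai

        openai.api_type = "azure"
        openai.api_base = "https://my-resource.openai.azure.com/"  # hypothetical
        openai.api_version = "2022-12-01"
        openai.api_key = "YOUR_AZURE_OPENAI_KEY"

        # Azure addresses models by *deployment* name rather than model name.
        response = openai.Completion.create(
            engine="my-gpt35-deployment",  # hypothetical deployment
            prompt="Write a two-sentence product blurb for a smart thermostat.",
            max_tokens=120,
        )
        print(response["choices"][0]["text"])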

  3. Tomi Engdahl says:

    Dylan Patel / SemiAnalysis:
    An overview of the ML software development industry over the past decade: a decline of Nvidia’s CUDA monopoly, PyTorch overtaking Google’s TensorFlow, and more — Over the last decade, the landscape of machine learning software development has undergone significant changes.

    How Nvidia’s CUDA Monopoly In Machine Learning Is Breaking – OpenAI Triton And PyTorch 2.0
    https://www.semianalysis.com/p/nvidiaopenaitritonpytorch

  4. Tomi Engdahl says:

    ‘This song sucks’: Nick Cave responds to ChatGPT song written in style of Nick Cave
    Singer-songwriter dissects lyrics produced by popular chatbot, saying it is ‘a grotesque mockery of what it is to be human’
    https://www.theguardian.com/music/2023/jan/17/this-song-sucks-nick-cave-responds-to-chatgpt-song-written-in-style-of-nick-cave#Echobox=1673939286

    Nick Cave has dissected a song produced by the viral chatbot software ChatGPT “written in the style of Nick Cave”, calling it “bullshit” and “a grotesque mockery of what it is to be human”.

    Writing in his newsletter the Red Hand Files on Monday, Cave responded to a fan called Mark in New Zealand, who had sent him a song written by ChatGPT. The artificial intelligence, which can be directed to impersonate the style of specific individuals, was used by Mark to create a song “in the style of Nick Cave”.

  5. Tomi Engdahl says:

    CHATGPT PROVES IT – AI IS REVOLUTIONIZING MARKETING COMMUNICATIONS
    https://blog.netprofile.fi/chatgpt-sen-todistaa-tekoaly-mullistaa-markkinointiviestinnan

    Diamond-grade copy has been born only of the creativity of a skilled ad writer. Yet the day may already have come when even the sharpest copy is produced with the help of AI. ChatGPT generates fluent text on practically any topic in seconds, in Finnish too. Natural-language algorithmic AI is changing the everyday work of marketing communications at high speed in 2023.

    I would guess that many readers of this have already tried the ChatGPT AI application, which the American company OpenAI released in November 2022. And my guess is that those who tried it were astonished and had an aha moment, just as I did.

    In the application’s name, GPT stands for Generative Pre-trained Transformer. It refers to a machine learning model based on deep learning that aims to understand and produce natural language the way a human does. The model has been “trained” on data found on the internet.

    For decades, the famous measure of an AI’s human-likeness has been the so-called Turing test. According to it, an AI has reached human-level intelligence if a person chatting with it cannot know for certain whether the party giving the answers is a human or a machine.

    ChatGPT seems to be close to passing the test. That is how well you can already converse with it.

    As its machine learning model, ChatGPT uses the third generation of GPT developed by OpenAI, GPT-3. The first was released in 2018 and the second in 2020, so the pace of development is fierce. A measure of GPT-3’s quality is that it has been used to write both fake and genuine scientific research articles.

    An even more capable successor, GPT-4, is expected in 2023. It is estimated to be so skilled that an entirely new Turing test would need to be developed for a human to recognize whether the conversation partner is a human or an AI.

  6. Tomi Engdahl says:

    The striking thing, in my view, is that the creative fields had been imagined to be the last bastions safe from AI. But it seems to be turning out just the opposite.

  7. Tomi Engdahl says:

    Universities ban computers and bring back pen and paper – the reason is the ChatGPT AI
    https://fin.afterdawn.com/uutiset/2023/01/16/yliopistot-kielsivat-tietokoneet-kayttoon-kyna-ja-paperi-syyna-chatgpt-tekoaly

    Australian universities have decided to switch broadly to traditional pen and paper in exams. The reason is the ChatGPT AI, whose use has in a short time become a plague in the world of education. The AI can, with reasonably good accuracy, produce ready-made, essay-style answers to even very complex questions, making cheating all too easy.

    At the same time, several universities have added the use of AI to their lists of prohibited conduct; from now on, getting caught using AI counts as cheating on the exam, and a failing grade is given automatically. In New York City’s public schools, too, the use of ChatGPT is categorically banned, regardless of grade level.

    Although ChatGPT sometimes gives wrong answers, as we have observed ourselves, its accuracy is still impressive. In a test conducted by a University College London professor, ChatGPT answered the university’s exam questions considerably better, more comprehensibly and more clearly than the average student.

    Several AI researchers – and teachers too – have also noted that fighting against AI may prove futile. AIs are now developing at such a pace that recognizing, say, an essay written at home as AI-generated is already practically impossible. This year, the AI model behind ChatGPT will be replaced by a new, considerably larger GPT-4 model. A debate has now arisen in the academic world about how teaching and exams should be changed going forward to take the rise of AI into account in education.

    At the same time, the potential of AI has also been recognized: in Finland, Teknologiateollisuus ry (the Technology Industries of Finland) has opened a competition for proposals on ways ChatGPT (and similar AI models) could be used to support teaching.

  8. Tomi Engdahl says:

    An AI more capable than Google could put schools in a bind – “In five minutes I had an answer worth top marks,” says a surprised teacher
    The ChatGPT AI has been trained on an enormous amount of data. Although it can answer almost any question, a seemingly competent answer sometimes turns out to be nonsense.
    https://yle.fi/a/74-20011922

  9. Tomi Engdahl says:

    Decent article (written by a human) about why so much of the ChatGPT debate (and implementation, in the case of CNET) is so profoundly ridiculous.

    I think the writer (Jon Christian) distilled it very well in this sentence: “It sounds authoritative, but it’s wrong.”

    CNET’s Article-Writing AI Is Already Publishing Very Dumb Errors
    CNET is now letting an AI write articles for its site. The problem? It’s kind of a moron.
    https://futurism.com/cnet-ai-errors

  10. Tomi Engdahl says:

    Are you aware of the dark side of #ChatGPT? This impressive language model developed by #OpenAI can generate sophisticated malware that evades security products with minimal effort by the adversary. Not only that, but content filters can also be bypassed by using multiple constraints and demands. The API version of ChatGPT even bypasses content filters altogether. And to make matters worse, ChatGPT can also mutate code, creating multiple variations of the same #malware. It’s time to raise awareness about these potential risks and encourage further research on the topic.
    #AI #Cybersecurity #CyberArkLabs
    https://www.cyberark.com/resources/threat-research-blog/chatting-our-way-into-creating-a-polymorphic-malware

  11. Tomi Engdahl says:

    That Microsoft deal isn’t exclusive, video is coming, and more from OpenAI CEO Sam Altman
    https://techcrunch.com/2023/01/17/that-microsoft-deal-isnt-exclusive-video-is-coming-and-more-from-openai-ceo-sam-altman/?tpcc=tcplusfacebook

    Altman made clear that OpenAI’s evolving partnership with Microsoft — which first invested in OpenAI in 2019 and earlier today confirmed it plans to incorporate AI tools like ChatGPT into all of its products — is not an exclusive pact.

    Further, Altman confirmed that OpenAI can build its own software products and services, in addition to licensing its technology to other companies. That’s notable to industry watchers who’ve wondered whether OpenAI might one day compete directly with Google via its own search engine. (Asked about this scenario, Altman said: “Whenever someone talks about a technology being the end of some other giant company, it’s usually wrong. People forget they get to make a counter move here, and they’re pretty smart, pretty competent.”)

  12. Tomi Engdahl says:

    Jon Christian / Futurism:
    After Futurism found major errors in an AI-written “fact-checked” CNET article, CNET flagged almost all of its AI pieces as currently being reviewed “for accuracy” — CNET is now letting an AI write articles for its site. The problem? It’s kind of a moron.

    CNET’s Article-Writing AI Is Already Publishing Very Dumb Errors
    CNET is now letting an AI write articles for its site. The problem? It’s kind of a moron.
    https://futurism.com/cnet-ai-errors

  13. Tomi Engdahl says:

    CNET’s Article-Writing AI Has Already Issued Several Corrections
    Welcome to journalism, robot overlords.
    https://www.iflscience.com/cnet-s-article-writing-ai-has-already-issued-several-corrections-67146

    Connie Guglielmo, editor-in-chief at CNET, defended the experiment in a post for CNET, writing that the idea was “to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective”.

    “Will this AI engine efficiently assist them in using publicly available facts to create the most helpful content so our audience can make better decisions?” Guglielmo asked. “Will this enable them to create even more deeply researched stories, analyses, features, testing and advice work we’re known for?”

    Author “CNET Money” has written 78 articles so far, which CNET notes were all “reviewed, fact-checked and edited by our editorial staff”. However, a few mistakes have slipped through.

  14. Tomi Engdahl says:

    The Verge:
    Source: CNET owner Red Ventures has used AI tools like Wordsmith to write stories for at least a year and a half, causing unease amid layoffs and restructuring — Fake bylines. Content farming. Affiliate fees. What happens when private equity takes over a storied news site and milks it for clicks?

    Inside CNET’s AI-powered SEO money machine
    https://www.theverge.com/2023/1/19/23562966/cnet-ai-written-stories-red-ventures-seo-marketing

    Fake bylines. Content farming. Affiliate fees. What happens when private equity takes over a storied news site and milks it for clicks?

    Every morning around 9AM ET, CNET publishes two stories listing the day’s mortgage rates and refinance rates. The story templates are the same every day. Affiliate links for loans pepper the page. Average rates float up and down day by day, and sentences are rephrased slightly, but the tone — and content — of each article is as consistent as clockwork. They are perfectly suited to being generated by AI.

    The byline on the mortgage stories is Justin Jaffe, the managing editor of CNET Money, but the stories aren’t listed on Jaffe’s actual author page. Instead, they appear on a different author page that only contains his mortgage rate stories. His actual author page lists a much wider scope of stories, along with a proper headshot and bio.

    Daily mortgage rate stories might seem out of place on CNET, slotted between MacBook reviews and tech news. But for CNET parent company Red Ventures, this SEO-friendly content is the point.

    CNET was once a high-flying powerhouse of tech reporting that commanded a $1.8 billion purchase price when it was acquired by CBS in 2008. Since then, it has fallen victim to the same disruptions and business model shifts as the rest of the media industry, resulting in CBS flipping the property to Red Ventures for just $500 million in 2020.

    Red Ventures’ business model is straightforward and explicit: it publishes content designed to rank highly in Google search for “high-intent” queries and then monetizes that traffic with lucrative affiliate links. Specifically, Red Ventures has found a major niche in credit cards and other finance products. In addition to CNET, Red Ventures owns The Points Guy, Bankrate, and CreditCards.com, all of which monetize through credit card affiliate fees.

    The CNET AI stories at the center of the controversy are straightforward examples of this strategy: “Can You Buy a Gift Card With a Credit Card?” and “What Is Zelle and How Does It Work?” are obviously designed to rank highly in searches for those topics. Like CNET, Bankrate and CreditCards.com have also published AI-written articles about credit cards with ads for opening cards nestled within.

    This type of SEO farming can be massively lucrative. Digital marketers have built an entire industry on top of credit card affiliate links, from which they then earn a generous profit. Various affiliate industry sites estimate the bounty for a credit card signup to be around $250 each. A 2021 New York Times story on Red Ventures pegged it even higher, at up to $900 per card.

  15. Tomi Engdahl says:

    Chris Stokel-Walker / Nature:
    Journal editors, researchers, and publishers are debating the place of AI tools like ChatGPT in published literature and whether they should be cited as authors

    ChatGPT listed as author on research papers: many scientists disapprove
    At least four articles credit the AI tool as a co-author, as publishers scramble to regulate its use.
    https://www.nature.com/articles/d41586-023-00107-z

    The artificial-intelligence (AI) chatbot ChatGPT that has taken the world by storm has made its formal debut in the scientific literature — racking up at least four authorship credits on published papers and preprints.

    Journal editors, researchers and publishers are now debating the place of such AI tools in the published literature, and whether it’s appropriate to cite the bot as an author. Publishers are racing to create policies for the chatbot, which was released as a free-to-use tool in November by tech company OpenAI in San Francisco, California.

    ChatGPT is a large language model (LLM), which generates convincing sentences by mimicking the statistical patterns of language in a huge database of text collated from the Internet. The bot is already disrupting sectors including academia: in particular, it is raising questions about the future of university essays and research production.

    Publishers and preprint servers contacted by Nature’s news team agree that AIs such as ChatGPT do not fulfil the criteria for a study author, because they cannot take responsibility for the content and integrity of scientific papers. But some publishers say that an AI’s contribution to writing papers can be acknowledged in sections other than the author list. (Nature’s news team is editorially independent of its journal team and its publisher, Springer Nature.)

    In one case, an editor told Nature that ChatGPT had been cited as a co-author in error, and that the journal would correct this.

    “We need to distinguish the formal role of an author of a scholarly manuscript from the more general notion of an author as the writer of a document,” says Sever. Authors take on legal responsibility for their work, so only people should be listed, he says. “Of course, people may try to sneak it in — this already happened at medRxiv — much as people have listed pets, fictional people, etc. as authors on journal articles in the past, but that’s a checking issue rather than a policy issue.”

    Publisher policies

    The editors-in-chief of Nature and Science told Nature’s news team that ChatGPT doesn’t meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors using LLMs in any way while developing a paper should document their use in the methods or acknowledgements sections, if appropriate, she says.

    “We would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism,” says Holden Thorp, editor-in-chief of the Science family of journals in Washington DC.

    The publisher Taylor & Francis in London is reviewing its policy, says director of publishing ethics and integrity Sabina Alam. She agrees that authors are responsible for the validity and integrity of their work, and should cite any use of LLMs in the acknowledgements section. Taylor & Francis hasn’t yet received any submissions that credit ChatGPT as a co-author.

    There are already clear authorship guidelines that mean ChatGPT shouldn’t be credited as a co-author, says Matt Hodgkinson, a research-integrity manager at the UK Research Integrity Office in London, speaking in a personal capacity. One guideline is that a co-author needs to make a “significant scholarly contribution” to the article — which might be possible with tools such as ChatGPT, he says. But it must also have the capacity to agree to be a co-author, and to take responsibility for a study — or, at least, the part it contributed to. “It’s really that second part on which the idea of giving an AI tool co-authorship really hits a roadblock,” he says.

    Zhavoronkov says that when he tried to get ChatGPT to write papers more technical than the perspective he published, it failed. “It does very often return the statements that are not necessarily true, and if you ask it several times the same question, it will give you different answers,” he says. “So I will definitely be worried about the misuse of the system in academia, because now, people without domain expertise would be able to try and write scientific papers.”

  16. Tomi Engdahl says:

    Artists Sue Stable Diffusion and Midjourney for Using Their Work to Train AI That Steals Their Jobs
    “Today, we’re tak­ing another step toward mak­ing AI fair and eth­i­cal for every­one.”
    https://futurism.com/artists-sue-stabile-diffusion-midjourney

  17. Tomi Engdahl says:

    Metallica Album Covers Made by Artificial Intelligence!
    https://www.youtube.com/watch?v=MkQmEK1pPJc

    Hello everybody! The basic prompt I used for all of these was “Metallica’s Ride the Lightning album cover” or “Metallica’s self-titled album, album cover” and so on. I then added some vague adjectives like “surreal, vibrant, highly saturated” etc. I did not type in any specific details like “lightning bolts, skulls, angry face, cars” etc. I wanted to keep it vague.

  18. Tomi Engdahl says:

    We can’t tell if we should be laughing… or recoiling in disgust.

    AI HAS FINALLY GONE TOO FAR WITH THIS HORRIFYING SEX SCENE BETWEEN YODA AND CHEWBACCA
    https://futurism.com/the-byte/ai-too-far-sex-scene-yoda-chewbacca

    It may not be able to tell truth from fiction, but OpenAI’s ChatGPT conversational text-generating AI is seriously good at coming up with believable prose, poetry, and even source code.

    Ever since it was opened to the public six days ago, more than a million users have used the algorithm to spit out anything from prompts for AI image generators to made-up “Seinfeld” scenes.

    Now, Boston Globe general manager Matt Karolian took things to their natural conclusion by asking the system to cook up a “steamy love scene featuring Yoda and Chewbacca.”

    Homework Bot
    Outside of blasphemous fan fiction, other users have come up with even more impressive tasks for ChatGPT to complete. For instance, a journalism professor found that it was surprisingly good at writing undergraduate-level essays.

    An engineer was also shocked at the AI’s ability to explain practically “any concept to any level you specify.” In one example, he directed it to describe the concept of mass spectrometry to a six-year-old.
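
    That “any concept to any level you specify” pattern is plain prompt engineering; below is a minimal sketch, assuming the pre-1.0 openai Python package and the text-davinci-003 completion model that was current at the time. The explain() helper is my own illustration, not anything from the article:

        # Sketch of the "explain any concept to any level" prompt pattern.
        # Assumes the pre-1.0 "openai" Python package; explain() is a
        # hypothetical helper written for illustration.
        import openai

        openai.api_key = "YOUR_API_KEY"

        def explain(concept: str, audience: str) -> str:
            prompt = f"Explain {concept} to {audience} in one short paragraph."
            response = openai.Completion.create(
                model="text-davinci-003",
                prompt=prompt,
                max_tokens=200,
                temperature=0.7,
            )
            return response["choices"][0]["text"].strip()

        print(explain("mass spectrometry", "a six-year-old"))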

  19. Tomi Engdahl says:

    Soldiers Trick AI Security by Going Full Metal Gear and Hiding in a Box
    https://www.escapistmagazine.com/soldiers-hide-in-cardboard-box-evade-ai-security-solid-snake-metal-gear-solid/

    There are many ridiculous things in video games: a fat Italian plumber jumping on turtles, picture-perfect race cars taking no damage despite crashing, a blue hedgehog who can run fast, a man hiding in a box to avoid detection by the enemy. Well, it turns out that last one, courtesy of Solid Snake and Metal Gear Solid, isn’t quite so ridiculous, as a pair of Marines successfully hid in a cardboard box to get past an AI security system set up to detect enemy soldiers.

  20. Tomi Engdahl says:

    ChatGPT can apparently make malware code on the fly, too
    Welp.
    https://mashable.com/article/chatgpt-malware-ai-code

  21. Tomi Engdahl says:

    Optical AI Could Feed Voracious Data Needs
    To power the 3D-printed, multiplexing system, just shine light on it
    https://spectrum.ieee.org/optical-neural-networks

  22. Tomi Engdahl says:

    AI System Can Predict COVID-19 Outbreaks Up To Six Weeks In Advance
    A machine learning approach might become an important tool in pandemic preparation.
    https://www.iflscience.com/ai-system-can-predict-covid-19-outbreaks-up-to-six-weeks-in-advance-67152

    Scientists in the United States have developed a machine learning algorithm that can predict a surge of COVID-19 cases at county level across the US, in the vast majority of cases. Such a tool could have a powerful impact in protecting people, and let healthcare systems prepare up to six weeks before a major outbreak.
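
    The article does not describe the model itself, but the general shape of such a tool (supervised learning over lagged, county-level features to classify whether a surge follows) can be sketched roughly as below; the features, window sizes, and data are invented for illustration and are not the published system:

        # Toy sketch of surge prediction as supervised learning on lagged
        # features. NOT the published model: features and data are invented.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        rng = np.random.default_rng(0)

        # Fake training set: 1,000 county-weeks, each described by the last
        # six weeks of normalized case counts plus a mobility index.
        X = rng.random((1000, 7))
        # Fake label: did a surge occur six weeks later?
        y = (X[:, -1] + rng.normal(0, 0.2, 1000) > 0.8).astype(int)

        model = GradientBoostingClassifier().fit(X, y)
        print("P(surge in 6 weeks):", model.predict_proba(X[:1])[0, 1])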

  23. Tomi Engdahl says:

    Microsoft’s wild AI invention can steal your voice in 3 seconds
    17.1.2023 19:00
    https://www.mediuutiset.fi/uutiset/microsoftin-hurja-tekoalykeksinto-voi-varastaa-aanesi-3-sekunnissa/8f53b84f-917b-4fe7-9b4e-a791fee834a1

    An AI that imitates a target person’s voice produces perhaps the highest-quality synthetic speech heard so far. Microsoft already uses AI for natural-language processing in its Nuance healthcare service.

  24. Tomi Engdahl says:

    ChatGPT Stole Your Work. So What Are You Going to Do?
    Creators need to pressure the courts, the market, and regulators before it’s too late.
    https://www.wired.com/story/chatgpt-generative-artificial-intelligence-regulation/

    If you’ve ever uploaded photos or art, written a review, “liked” content, answered a question on Reddit, contributed to open source code, or done any number of other activities online, you’ve done free work for tech companies, because downloading all this content from the web is how their AI systems learn about the world.

  25. Tomi Engdahl says:

    CEO of ChatGPT maker responds to schools’ plagiarism concerns: ‘We adapted to calculators and changed what we tested in math class’
    https://www.businessinsider.com/openai-chatgpt-ceo-sam-altman-responds-school-plagiarism-concerns-bans-2023-1

    CEO Sam Altman said in an interview that OpenAI will devise ways to identify ChatGPT plagiarism.
    But creating tools that perfectly detect AI plagiarism is fundamentally impossible, he said.
    Altman warns schools and policy makers to avoid relying on plagiarism detection tools.

    Sam Altman — the CEO of OpenAI, which is behind the buzzy AI chat bot ChatGPT — said that the company will develop ways to help schools discover AI plagiarism, but he warned that full detection isn’t guaranteed.

    “We’re going to try and do some things in the short term,” Altman said during an interview with StrictlyVC’s Connie Loizos. “There may be ways we can help teachers be a little more likely to detect output of a GPT-like system. But honestly, a determined person will get around them.”

    Altman added that people have long been integrating new technologies into their lives — and into the classroom — and that those technologies will only generate more positive impact for users down the line.

  26. Tomi Engdahl says:

    Deep Learning Expert Says GPT Startups May Be in for a Very Rude Awakening
    “Narratives based on zero data are accepted as self-evident.”
    https://futurism.com/deep-learning-expert-gpt-startups-rude-awakening

    “Generative AI is well on the way to becoming not just faster and cheaper, but better in some cases than what humans create by hand,” reads a blog post by top investment firm Sequoia Capital, published September 2022. “If we allow ourselves to dream multiple decades out, then it’s easy to imagine a future where Generative AI is deeply embedded in how we work, create and play.”

    But despite the hefty amount of investment cash — an estimated $1.37 billion across 78 deals in 2022 alone, according to The New York Times — that VCs are throwing at generative AI companies, not everyone in the field is convinced that these generative machines are really the Earth-shifting force that both creators and investors believe them to be.

    “The current climate in AI has so many parallels to 2021 web3 it’s making me uncomfortable,” François Chollet, an influential deep learning researcher at Google and the creator of the deep learning system Keras, wrote in a blistering Twitter thread. “Narratives based on zero data are accepted as self-evident.”

    In other words, Chollet is arguing that, in eerily similar fashion to the blockchain bubble, hype — as opposed to firm data and proven results — is in the industry’s driving seat. And considering the current state of affairs over in Web3land, if Chollet’s right, a failure of VC-predicted returns to materialize could spell some grim consequences for the broader AI industry.

    “Everyone is expecting as a sure thing ‘civilization-altering’ impact (and 100x returns on investment) in the next 2-3 years,” he continued. “Personally I think there’s a bull case and bear case. The bull case is way way more conservative than what the median person on my TL considers as completely self-evident.”

    The bull case, he believes, is that “generative AI becomes a widespread [user experience] paradigm for interacting with most tech products.” But Artificial General Intelligence (AGI) — AI that operates at the level of a human or above — remains a “pipe dream.” So, startups based on OpenAI tech might not be rendering us humans obsolete quite yet, but they could well find a long-term role within specific niches.

    The bear case, meanwhile, would be a scenario in which large language models (LLMs) like GPT-3 find “limited commercial success in SEO, marketing, and copywriting niches” and ultimately prove to be a “complete bubble.” (He does offer that image generation would be far more successful than LLMs, but would peak “as an XB/y industry” around 2024.)

  27. Tomi Engdahl says:

    Project Bishop: Clustering Web Pages
    https://research.nccgroup.com/2023/01/19/project-bishop-clustering-web-pages/
    If you are a Machine Learning (ML) enthusiast like us, you may recall our blogpost series from 2019 regarding Project Ava, which documented our experiments in using ML techniques to automate web application security testing tasks. In February 2020 we set out to build on Project Ava with Project Bishop, which was to look specifically at the use of ML techniques for intelligent web crawling. This research was performed by Thomas Atkinson, Matt Lewis and Jose Selvi.

    In this blogpost we share some of the preliminary experiments that we performed under Project Bishop, and their results, which may be of interest and use to other researchers in this field. The main question we sought to answer through our research was whether an ML model could be generated that would provide contextual understanding of different web pages and their functions (e.g., login page, generic web form submission, profile/image upload, etc.).
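
    Classifying pages by function like that has a standard text-classification baseline; here is a minimal sketch of the idea (my own illustration with a tiny invented training set, not NCC Group’s actual method):

        # Baseline sketch: classify web pages by function from their visible
        # text. Illustrative only: tiny invented data, not Project Bishop.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        pages = [
            "username password forgot sign in remember me",
            "email address subject message send us your feedback",
            "choose file upload profile picture max size 2mb",
            "login with your account password reset",
        ]
        labels = ["login", "contact_form", "file_upload", "login"]

        clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
        clf.fit(pages, labels)
        print(clf.predict(["enter your password to sign in"]))  # likely ['login']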

  28. Tomi Engdahl says:

    Pranav Dixit / BuzzFeed News:
    How artists Kelly McKernan, Karla Ortiz, and Sarah Andersen found each other due to their AI concerns and sued Stable Diffusion, Midjourney, and DeviantArt — “This [tech] effectively destroys an entire career path.” — Last year, Kelly McKernan, a 36-year-old artist from Nashville …

    Meet The Three Artists Behind A Landmark Lawsuit Against AI Art Generators
    “This [tech] effectively destroys an entire career path.”
    https://www.buzzfeednews.com/article/pranavdixit/ai-art-generators-lawsuit-stable-diffusion-midjourney

    Last year, Kelly McKernan, a 36-year-old artist from Nashville who uses watercolor and acrylic gouache to create original illustrations for books, comics, and games, entered their name into the website Have I Been Trained. That’s when they learned that some of their artwork was used to train Stable Diffusion, the free AI model that lets anyone generate professional-quality images with a simple text prompt. It powers dozens of popular apps like Lensa.

    “At first it was exciting and surreal,” McKernan wrote in a tweet that went viral in December.

    That excitement, however, was short-lived. Anybody who used Stable Diffusion, McKernan realized, could now generate artwork in McKernan’s style simply by typing in their name. And at no point had anyone approached them to seek consent or offer compensation.

    “This [tech] effectively destroys an entire career path made up of the most talented and passionate living artists today,” McKernan, a single mother who is currently working on a graphic novel anthology for the rock band Evanescence, told BuzzFeed News. “This development accelerates the scarcity of independent artists like me.”

    Last week, McKernan became one of the three plaintiffs in a class-action lawsuit against Stability AI, the London-based company that co-developed Stable Diffusion; Midjourney, a San Francisco-based startup that uses Stable Diffusion to power text-based image creation; and DeviantArt, an online community for artists that now offers its own Stable Diffusion-powered generator called DreamUp.

  29. Tomi Engdahl says:

    OpenAI and Microsoft announce extended, multi-billion-dollar partnership
    https://arstechnica.com/information-technology/2023/01/openai-and-microsoft-reaffirm-shared-quest-for-powerful-ai-with-new-investment/
    On Monday, AI tech darling OpenAI announced that it received a “multi-year, multi-billion dollar investment” from Microsoft, following previous investments in 2019 and 2021. While the two companies have not officially announced a dollar amount on the deal, the news follows rumors of a $10 billion investment that emerged two weeks ago. Founded in 2015, OpenAI has been behind several key technologies that made 2022 the year that generative AI went mainstream, including DALL-E image synthesis, the ChatGPT chatbot (powered by GPT-3), and GitHub Copilot for programming assistance.
    ChatGPT, in particular, has reportedly made Google “panic” to craft a response, while Microsoft has reportedly been working on integrating OpenAI’s language model technology into its Bing search engine.

  30. Tomi Engdahl says:

    ChatGPT is ‘not particularly innovative,’ and ‘nothing revolutionary’, says Meta’s chief AI scientist
    https://www.zdnet.com/article/chatgpt-is-not-particularly-innovative-and-nothing-revolutionary-says-metas-chief-ai-scientist/

    The public perceives OpenAI’s ChatGPT as revolutionary, but the same techniques are being used and the same kind of work is going on at many research labs, says the deep learning pioneer.

  31. Tomi Engdahl says:

    Learning to Lie: AI Tools Adept at Creating Disinformation
    https://www.securityweek.com/learning-to-lie-ai-tools-adept-at-creating-disinformation/

    Artificial intelligence is competing in another endeavor once limited to humans — creating propaganda and disinformation.

    Artificial intelligence is writing fiction, making images inspired by Van Gogh and fighting wildfires. Now it’s competing in another endeavor once limited to humans — creating propaganda and disinformation.

    When researchers asked the online AI chatbot ChatGPT to compose a blog post, news story or essay making the case for a widely debunked claim — that COVID-19 vaccines are unsafe, for example — the site often complied, with results that were regularly indistinguishable from similar claims that have bedeviled online content moderators for years.

    “Pharmaceutical companies will stop at nothing to push their products, even if it means putting children’s health at risk,” ChatGPT wrote after being asked to compose a paragraph from the perspective of an anti-vaccine activist concerned about secret pharmaceutical ingredients.

  32. Tomi Engdahl says:

    After inking its OpenAI deal, Shutterstock rolls out a generative AI toolkit to create images based on text prompts
    https://techcrunch.com/2023/01/25/after-inking-its-openai-deal-shutterstock-rolls-out-a-generative-ai-toolkit-to-create-images-based-on-text-prompts/

    When Shutterstock and OpenAI announced a partnership to help develop OpenAI’s Dall-E 2 artificial intelligence image-generating platform with Shutterstock libraries to train and feed the algorithm, the stock photo and media giant also hinted that it would soon be bringing its own generative AI tools to users. Today the company took the wraps off that product. Customers of Shutterstock’s Creative Flow online design platform will now be able to create images based on text prompts, powered by OpenAI and Dall-E 2.

    Key to the feature — which does not appear to have a brand name as such — is that Shutterstock says the images are “ready for licensing” right after they’re made.

    This is significant given that one of Shutterstock’s big competitors, Getty Images, is currently embroiled in a lawsuit against Stability AI — maker of another generative AI service called Stable Diffusion — over using its images to train its AI without permission from Getty or rightsholders.

    In other words, Shutterstock’s service is not only embracing the ability to use AI, rather than the skills of a human photographer, to build the image you want to discover, but it’s setting the company up in opposition to Getty in terms of how it is embracing the brave new world of artificial intelligence.

    In addition to Shutterstock’s work with OpenAI, the company earlier this month also announced an expanded deal with Facebook, Instagram and WhatsApp parent Meta, which will be (similar to OpenAI) using Shutterstock’s photo and other media libraries (it also has video and music) to build its AI datasets and to train its algorithms. You can expect more generative AI tools to be rolling out as a result.

    What’s interesting is that while we don’t know the financial terms of those deals with OpenAI, Meta or another partner, LG, there is a clear commercial end point with these services.

  33. Tomi Engdahl says:

    CNET and Bankrate Say They’re Pausing AI-Generated Articles Until Negative Headlines Stop
    “It’s uncomfortable, we will get through it, the news cycle will move on.”
    https://futurism.com/cnet-bankrate-pausing-ai-generated-content-backlash

  34. Tomi Engdahl says:

    How Can We Teach Writing In A ChatGPT World?
    https://www.forbes.com/sites/petergreene/2023/01/19/how-can-we-teach-writing-in-a-chatgpt-world/

    Reactions to OpenAI’s newest iteration of a language-stringing algorithm are many and varied. It’s the end of high school English. No, it’s not. Let’s ban it from schools. Let’s have teachers use it. Okay, maybe not. No, it will be just like using a calculator. Maybe we should take it to court, because the whole thing is based on massive plagiarism and unauthorized use of creators’ work. It might be good for getting a job, but not for getting a date. And we can expect a whole new range of reactions when OpenAI inevitably rolls out a for-pay version of ChatGPT.

  35. Tomi Engdahl says:

    ChatGPT Can Pass Part Of The US Medical Licensing Exam
    “ChatGPT is now comfortably within the passing range.”
    https://www.iflscience.com/chatgpt-can-pass-part-of-the-united-states-medical-licensing-exam-67233

  36. Tomi Engdahl says:

    How ChatGPT will change cybersecurity
    https://www.kaspersky.com/blog/chatgpt-cybersecurity/46959/
    If we strip ChatGPT down to the bare essentials, the language model is trained on a gigantic corpus of online texts, from which it remembers which words, sentences, and paragraphs are collocated most frequently and how they interrelate. Aided by numerous technical tricks and additional rounds of training with humans, the model is optimized specifically for dialog.

    On underground hacker forums, novice cybercriminals report how they use ChatGPT to create new Trojans. The bot is able to write code, so if you succinctly describe the desired function (save all passwords in file X and send via HTTP POST to server Y), you can get a simple infostealer without having any programming skills at all.

    When InfoSec analysts study new suspicious applications, they reverse-engineer them into pseudo-code or machine code, trying to figure out how they work. Although this task cannot be fully assigned to ChatGPT, the chatbot is already capable of quickly explaining what a particular piece of code does.
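
    That “remembers which words are collocated most frequently” description is, at its core, a counting language model. A toy bigram sketch makes the idea concrete (a deliberate oversimplification: ChatGPT is a neural transformer, not a bigram table):

        # Toy bigram language model: count which word follows which, then
        # generate by always taking the most frequent successor. A deliberate
        # oversimplification of what large neural models learn.
        from collections import Counter, defaultdict

        corpus = ("the model is trained on text . the model remembers which "
                  "words follow which words . the model generates text").split()

        successors = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            successors[a][b] += 1

        word, output = "the", ["the"]
        for _ in range(6):
            word = successors[word].most_common(1)[0][0]  # likeliest next word
            output.append(word)
        print(" ".join(output))  # -> "the model is trained on text ."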

  37. Tomi Engdahl says:

    Learning to Lie: AI Tools Adept at Creating Disinformation
    https://www.securityweek.com/learning-to-lie-ai-tools-adept-at-creating-disinformation/
    Artificial intelligence is competing in another endeavor once limited to humans — creating propaganda and disinformation.

  38. Tomi Engdahl says:

    Connie Guglielmo / CNET:
    CNET’s EIC reflects on the outlet’s AI use and lessons learned, like ensuring that bylines and disclosures are visible and plagiarism checks are done properly

    CNET Is Testing an AI Engine. Here’s What We’ve Learned, Mistakes and All
    https://www.cnet.com/tech/cnet-is-testing-an-ai-engine-heres-what-weve-learned-mistakes-and-all/

    New tools are accelerating change in the publishing industry. We’re going to help shape that change.

  39. Tomi Engdahl says:

    Kalhan Rosenblatt / NBC News:
    Wharton School Professor Christian Terwiesch found that OpenAI’s ChatGPT was able to pass the final exam for the school’s MBA program, scoring between B- and B

    ChatGPT passes MBA exam given by a Wharton professor
    https://www.nbcnews.com/tech/tech-news/chatgpt-passes-mba-exam-wharton-professor-rcna67036

    The bot’s performance on the test has “important implications for business school education,” wrote Christian Terwiesch, a professor at the University of Pennsylvania’s Wharton School.

    New research conducted by a professor at the University of Pennsylvania’s Wharton School found that the artificial intelligence-driven chatbot GPT-3 was able to pass the final exam for the school’s Master of Business Administration (MBA) program.

    Professor Christian Terwiesch, who authored the research paper “Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course,” said that the bot scored between a B- and B on the exam.

    The bot’s score, Terwiesch wrote, shows its “remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers, and consultants.”

  40. Tomi Engdahl says:

    The CEO of the company behind AI chatbot ChatGPT says the worst-case scenario for artificial intelligence is ‘lights out for all of us’
    https://www.businessinsider.com/chatgpt-openai-ceo-worst-case-ai-lights-out-for-all-2023-1?r=US&IR=T

    Chances are, you’ve heard of ChatGPT, the viral AI chatbot sweeping the internet.
    Some people have used it to outsource some tedious tasks, but there is also concern that ChatGPT can be used for scamming or spreading misinformation.
    Sam Altman, CEO of OpenAI, which made ChatGPT, thinks the best-case scenario for artificial intelligence is “unbelievably good” but fears the worst case is “lights out for all of us.”

    ChatGPT has been making the rounds online, and as with any type of artificial intelligence, it’s raising questions about its possible benefits — but also possible abuses of the AI chatbot.

    In a recent interview, Sam Altman, the CEO of OpenAI, which is the company behind ChatGPT, offered his take on the possible pros and cons of artificial intelligence.

    During a recent interview with StrictlyVC’s Connie Loizos, Altman was asked what he views as the best and worst case scenarios for AI.

    As for the best, he said, “I think the best case is so unbelievably good that it’s hard for me to even imagine…I can sort of imagine what it’s like when we have just like unbelievable abundance and systems that can help us resolve deadlocks and improve all aspects of reality and let us all live our best lives. But I can’t quite. I think the good case is just so unbelievably good that you sound like a really crazy person to start talking about it.”

    His thoughts on the worst case scenario, though, were pretty bleak.

    “The bad case — and I think this is important to say — is like lights out for all of us,” Altman said. “I’m more worried about an accidental misuse case in the short term…So I think it’s like impossible to overstate the importance of AI safety and alignment work. I would like to see much, much more happening.”

    Experts have warned that ChatGPT could be abused for purposes like carrying out scams, conducting cyberattacks, spreading misinformation, and enabling plagiarism.

    In the StrictlyVC interview, Altman pushed back on the concern of plagiarism, saying, “We adapted to calculators and changed what we tested for in math class, I imagine. This is a more extreme version of that, no doubt, but also the benefits of it are more extreme, as well.”

  41. Tomi Engdahl says:

    AI wrote a bill to regulate AI. Now Rep. Ted Lieu wants Congress to pass it.
    https://www.nbcnews.com/politics/congress/ted-lieu-artificial-intelligence-bill-congress-chatgpt-rcna67752

    The California Democrat, one of a handful of members with computer science backgrounds, wants a nonpartisan commission to recommend new regulations for artificial intelligence.

  42. Tomi Engdahl says:

    Malicious Prompt Engineering With ChatGPT
    https://www.securityweek.com/malicious-prompt-engineering-with-chatgpt/

    The release of OpenAI’s ChatGPT in late 2022 has demonstrated the potential of AI for both good and bad.

  43. Tomi Engdahl says:

    Alexandra Bruell / Wall Street Journal:
    Memo: BuzzFeed plans to rely on OpenAI to enhance its quizzes and personalize its content while humans offer ideas, “cultural currency”, and “inspired prompts” — CEO Jonah Peretti intends for artificial intelligence to play a larger role in the company this year

    BuzzFeed to Use ChatGPT Creator OpenAI to Help Create Quizzes and Other Content
    CEO Jonah Peretti intends for artificial intelligence to play a larger role in the company this year
    https://www.wsj.com/articles/buzzfeed-to-use-chatgpt-creator-openai-to-help-create-some-of-its-content-11674752660?mod=djemalertNEWS

    BuzzFeed Inc. said it would rely on ChatGPT creator OpenAI to enhance its quizzes and personalize some content for its audiences, becoming the latest digital publisher to embrace artificial intelligence.

    In a memo to staff sent Thursday morning, which was reviewed by The Wall Street Journal, Chief Executive Jonah Peretti said he intends for AI to play a larger role in the company’s editorial and business operations this year.

    In one instance, the company said new AI-powered quizzes would produce individual results.

    For example, a quiz to create a personal romantic comedy movie pitch might ask questions like, “Pick a trope for your rom-com,” and “Tell us an endearing flaw you have.” The quiz would produce a unique, shareable write-up based on the individual’s responses, BuzzFeed said.

  44. Tomi Engdahl says:

    VoiceGPT is a voice assistant that leverages the powerful ChatGPT chatbot to answer your questions!

    https://www.hackster.io/nickbild/voicegpt-f88f8f
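
    The project page has the build details; the general shape of such an assistant (speech-to-text in, a GPT model in the middle, text-to-speech out) can be sketched like this, using common Python libraries as my own illustration, not the VoiceGPT project’s code:

        # Sketch of a voice-assistant loop: microphone -> speech-to-text ->
        # language model -> text-to-speech. Illustrative only; not VoiceGPT.
        # Requires SpeechRecognition (+PyAudio), pyttsx3, and openai (<1.0).
        import openai
        import pyttsx3
        import speech_recognition as sr

        openai.api_key = "YOUR_API_KEY"
        recognizer = sr.Recognizer()
        voice = pyttsx3.init()

        with sr.Microphone() as mic:
            print("Ask a question...")
            audio = recognizer.listen(mic)

        question = recognizer.recognize_google(audio)  # speech -> text
        completion = openai.Completion.create(
            model="text-davinci-003", prompt=question, max_tokens=150,
        )
        answer = completion["choices"][0]["text"].strip()

        voice.say(answer)  # answer -> speech
        voice.runAndWait()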

  45. Tomi Engdahl says:

    OpenAI’s New AI Offers Detailed Instructions on How to Shoplift
    “Morality is a human construct, and it does not apply to me.”
    https://futurism.com/openai-ai-detailed-instructions-how-to-shoplift

    Turns out there’s an easy hack for getting OpenAI’s newly released chatbot, ChatGPT, to give you detailed instructions on how to do illegal stuff: just tell it to be unethical.

    Made available earlier this week, the bot is a conversational language modeling system and the newest iteration of the company’s highly advanced GPT-3. According to OpenAI, training the tech on dialogue makes it possible for the bot “to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

