3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,”

5,117 Comments

  1. Tomi Engdahl says:

    Shutterstock to integrate OpenAI’s DALL-E 2 and launch fund for contributor artists
    https://techcrunch.com/2022/10/25/shutterstock-openai-dall-e-2/

    Reply
  2. Tomi Engdahl says:

    This Novel “Mechanical Neural Network” Architected Material Learns to Respond to Its Environment
    https://www.hackster.io/news/this-novel-mechanical-neural-network-architected-material-learns-to-respond-to-its-environment-ad68bdcf2d7b

    Built using voice coils, strain gauges, and flexures, this mechanical marvel acts as a physical neural network — “learning” as it moves.

    Reply
  3. Tomi Engdahl says:

    Benj Edwards / Ars Technica:
    DeviantArt announces DreamUp, a text-to-image generator based on Stable Diffusion, drawing intense criticism from its artist community — Confused artists discover their work will be used for AI training by default. — On Friday, the online art community DeviantArt announced DreamUp …

    DeviantArt upsets artists with its new AI art generator, DreamUp [Updated]
    Confused artists discover their work will be used for AI training by default.
    https://arstechnica.com/information-technology/2022/11/deviantart-upsets-artists-with-its-new-ai-art-generator-dreamup/

    On Friday, the online art community DeviantArt announced DreamUp, an AI-powered text-to-image generator service powered by Stable Diffusion. Simultaneously, DeviantArt launched an initiative that ostensibly lets artists opt out of AI image training but also made everyone’s art opt in by default, which angered many members.

    Reply
  4. Tomi Engdahl says:

    Matthias Bastian / The Decoder:
    Meta AI and Papers with Code unveil Galactica, an open-source LLM for generating literature reviews, wiki articles, lecture notes on scientific topics, and more — The Galactica large language model (LLM) is being trained with millions of pieces of academic content.

    Galactica is an open source language model for scientific progress
    https://the-decoder.com/galactica-is-an-open-source-language-model-for-scientific-progress/

    The Galactica large language model (LLM) is being trained with millions of pieces of academic content. It is designed to help the research community better manage the “explosion of information.”

    Galactica was developed by Meta AI in collaboration with Papers with Code. The team identified information overload as a major obstacle to scientific progress. “Researchers are buried under a mass of papers, increasingly unable to distinguish between the meaningful and the inconsequential.”

    Galactica is designed to help sort through scientific information. It has been trained with 48 million papers, textbooks and lecture notes, millions of compounds and proteins, scientific websites, encyclopedias and more from the “NatureBook” dataset.

    Reply
  5. Tomi Engdahl says:

    Tekoälyn tuottamat röntgenkuvat huijasivat erikoislääkäreitä
    https://etn.fi/index.php/13-news/14265-tekoaelyn-tuottamat-roentgenkuvat-huijasivat-erikoislaeaekaereitae

    Jyväskylän yliopiston AI hub Keski-Suomi hankkeessa kehitettiin tekoälymenetelmä, jonka tuottamilla synteettisillä röntgenkuvilla voitaisiin korvata ja täydentää polven nivelrikkodiagnostiikan menetelmätutkimuksessa käytettävää röntgendataa. Keinotekoiset röntgenkuvat pystyvät huijaamaan jopa lääketieteen ammattilaisia.

    Reply
  6. Tomi Engdahl says:

    Sharon Goldman / VentureBeat:
    Intel unveils FakeCatcher, a web-based real-time deepfake detector that analyzes the subtle “blood flow” in video pixels; the company claims a 96% accuracy rate — On Monday, Intel introduced FakeCatcher, which it says is the first real-time detector of deepfakes — that is …

    Intel unveils real-time deepfake detector, claims 96% accuracy rate
    https://venturebeat.com/ai/intel-unveils-real-time-deepfake-detector-claims-96-accuracy-rate/

    On Monday, Intel introduced FakeCatcher, which it says is the first real-time detector of deepfakes — that is, synthetic media in which a person in an existing image or video is replaced with someone else’s likeness.

    Intel claims the product has a 96% accuracy rate and works by analyzing the subtle “blood flow” in video pixels to return results in milliseconds.

    Intel’s deepfake detector is based on PPG signals

    Unlike most deep learning-based deepfake detectors, which look at raw data to pinpoint inauthenticity, FakeCatcher is focused on clues within actual videos. It is based on photoplethysmography, or PPG, a method for measuring the amount of light that is absorbed or reflected by blood vessels in living tissue. When the heart pumps blood, it goes to the veins, which change color.

    Reply
  7. Tomi Engdahl says:

    Overseeing artificial intelligence: Moving your board from reticence to confidence
    https://brand-studio.fortune.com/diligent/moving-your-board-from-reticence-to-confidence/?prx_t=AsQHAAAAAAjQ8RA&fbclid=IwAR2mULy1AUCx4PaFvOGS1kGbC9NKg2CGzqKLE_TMjYGZIFTow2jZURuTHMM

    A look at how A.I.-related risks are escalating, why this puts more pressure on corporate boards, and what steps directors can take right now to grow more comfortable in their A.I. oversight roles.

    Corporations have discovered the power of artificial intelligence (A.I.) to transform what’s possible in their operations. Through algorithms that learn from their own use and constantly improve, A.I. enables companies to:

    Bring greater speed and accuracy to time-consuming and error-prone tasks such as entity management
    Process large amounts of data quickly in mission-critical operations like cybersecurity
    Increase visibility and enhance decision-making in areas from ESG to risk management and beyond

    But with great promise comes great responsibility—and a growing imperative for monitoring and governance.

    “As algorithmic decision-making becomes part of many core business functions, it creates the kind of enterprise risks to which boards need to pay attention,” writes Washington, D.C.–based law firm Debevoise & Plimpton.

    Many boards have hesitated to take on a defined role in A.I. oversight, given the highly complex nature of A.I. technology and specialized expertise involved.

    Increasing A.I. adoption escalates business risks

    Across industries, A.I. also poses challenges to environmental, social, and governance (ESG), a rising board priority. Despite A.I.’s ability to automate and accelerate data collection, reporting, and analysis, the technology can cause negative impacts to the environmental. For example, when just one image-recognition algorithm trains itself to recognize one type of image it needs to process millions of images. All of this processing requires energy-intensive data centers.

    “It’s a use of energy that we don’t really think about,” Professor Virginia Dignum of Sweden’s Umeå University told the European Commission’s Horizons magazine. “We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.”

    A.I. can also have a negative impact on the “S” in ESG, with examples from the retail world demonstrating A.I.’s potential for undermining equity efforts, perpetuating bias and causing companies to overstep on customer privacy.

    Reply
  8. Tomi Engdahl says:

    Meet Unstable Diffusion, the group trying to monetize AI porn generators
    Of course, it’s an ethical minefield
    https://techcrunch.com/2022/11/17/meet-unstable-diffusion-the-group-trying-to-monetize-ai-porn-generators/

    Reply
  9. Tomi Engdahl says:

    Why Meta’s latest large language model survived only three days online
    Galactica was supposed to help scientists. Instead, it mindlessly spat out biased and incorrect nonsense.
    https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/

    Reply
  10. Tomi Engdahl says:

    Art professionals are increasingly concerned that text-to-image platforms will render hundreds of thousands of well-paid creative jobs obsolete.

    AI Is Coming For Commercial Art Jobs. Can It Be Stopped?
    https://www.forbes.com/sites/robsalkowitz/2022/09/16/ai-is-coming-for-commercial-art-jobs-can-it-be-stopped/?utm_source=ForbesMainFacebook&utm_campaign=socialflowForbesMainFB&utm_medium=social&sh=6ae08bd54b05

    Earlier this summer, a piece generated by an AI text-to-image application won a prize in a state fair art competition, prying open a Pandora’s Box of issues about the encroachment of technology into the domain of human creativity and the nature of art itself. As fascinating as those questions are, the rise of AI-based image tools like Dall-E, Midjourney and Stable Diffusion, which rapidly generate detailed and beautiful images based on text descriptions supplied by the user, pose a much more practical and immediate concern:

    They could very well hold a shiny, photorealistically-rendered dagger to the throats of hundreds of thousands of commercial artists working in the entertainment, videogame, advertising and publishing industries, according to a number of professionals who have worked with the technology.

    How impactful would this be to the global creative economy that runs on spectacular imagery? Think about the 10 minutes of credits at the end of every modern Hollywood blockbuster. 95 percent of those names are people working in the creation of visual imagery like special effects, animation and production design. Same with videogames, where commercial artists hone their skills for years to score plum jobs like concept artist and character designer.

    These jobs, along with more traditional tasks like illustration, photography and design, are how most visual artists in today’s economy get paid.

    Very soon, all that work will be able to be done by non-artists working with powerful AI-based tools capable of generating hundreds of images in every style imaginable in a matter of minutes – tools ostensibly and even earnestly created to empower ordinary people to express their visual creativity. And these tools are evolving rapidly in capabilities.

    This isn’t an issue for the far-off dystopian future. Dall-E (a project of the MicrosoftMSFT -0.2%- and Elon Musk-backed nonprofit OpenAI), Midjourney and others have been in limited deployment for months, with imagery posted all over the internet. Then in August, an open-source project, Stable Diffusion from stability.ai, publicly released its model set under a permissive creative commons license, giving anyone with a web browser or mid-grade PC the tools to create stunning, sometimes disconcerting images to their specifications, including for commercial use.

    “The progress is exponential,” said Jason Juan, a veteran art director and artist for gaming and entertainment clients including Disney and Warner Bros. “It will allow more people who have solid ideas and clear thoughts to visualize things which were difficult to achieve without years of art training or hiring highly skilled artists. The definition of art will also evolve, since rendering skills might no longer be the most essential.”

    Artists have taken notice.

    Last week, according to the AI image search database Librarie.ai, Rutkowski’s name turned up hundreds of thousands of times in image prompt searches, which means that hundreds of thousands of images have been created sampling his distinctive style.

    “I’m very concerned about it,” said Rutkowski. “As a digital artist, or any artist, in this era, we’re focused on being recognized on the internet. Right now, when you type in my name, you see more work from the AI than work that I have done myself, which is terrifying for me. How long till the AI floods my results and is indistinguishable from my works?”

    Juan emphasized that human intervention is still important and necessary to achieve the desired outcomes from any new technology, including AI. “Any new invention will not replace the current industry right away. It is a new medium and it will also grow a new ecosystem which will impact the current industry in a way we might not have expected. But the impact will be very big.”

    David Holz, founder of Midjourney, underscored that point in an exclusive interview. “Right now, our professional users are using the platform for concepting. The hardest part of [a commercial art project] is often at the beginning, when the stakeholder doesn’t know what they want and has to see some ideas to react to. Midjourney can help people converge on the idea they want much more quickly, because iterating on those concepts is very laborious.”

    “The type of work I do, single images and illustrations, that’s already going away because of this,” said Robinson. “Right now, the AI has a little trouble keeping images consistent, so sequential storytelling like comics still needs a lot of human intervention, but that’s likely to change.”

    Grubaugh sees entire swaths of the creative workforce evaporating. “Concept artists, character designers, backgrounds, all that stuff is gone. As soon as the creative director realizes they don’t need to pay people to produce that kind of work, it will be like what happened to darkroom techs when Photoshop landed, but on a much larger scale.”

    “Why would anyone pay to have an artist design a book cover or album jacket when you can just type in a few words and get what you want?”

    Despite the potential for disruption, even people in the industry who stand to benefit from automating creative work say the issues require legal clarification. “On the business side, we need some clarity around copyright before using AI-generated work instead of work by a human artist,” said Juan. “The problem is, the current copyright law is outdated and is not keeping up with the technology.”

    Holz agrees this is a gray area, especially because the data sets used to train Midjourney and other image models deliberately anonymize the sources of the work, and the process for authenticating images and artists is complex and cumbersome. “It would be cool if the images had metadata embedded in them about the copyright holder, but that’s not a thing,” he said.

    “They’re training the AI on his work without his consent? I need to bring that up to the White House office,” she said. “If these models have been trained on the styles of living artists without licensing that work, there are copyright implications. There are rules for that. This requires a legislative solution.”

    Braga said that regulation may be the only answer, because it is not technically possible to “untrain” AI systems or create a program where artists can opt-out if their work is already part of the data set.

    “The only way to do it is to eradicate the whole model that was built around nonconsensual data usage,” she explained.

    The problem is, the source code to at least one of the platforms is already out in the wild and it will be very difficult to put the toothpaste back in the tube. And even if the narrow issue of compensating living artists is addressed, it won’t solve the larger threat of a simple tool deskilling and demonetizing the entire profession of commercial art and illustration.

    Holz doesn’t see it that way. His mission with Midjourney, he says, is to “try to expand the imaginative powers of the human species” and make it possible for more people to visualize ideas from their imagination through art. He also emphasized that he sees Midjourney as primarily a consumer platform.

    OpenAI, the company behind the Dall-E product, who declined to be interviewed for this story, similarly positions itself as working “to ensure that artificial general intelligence benefits all of humanity.”

    Stability.ai, the company developing Stable Diffusion, articulates their mission as “to make state of the art machine learning accessible for people from all over the world.”

    The usual arguments in favor of AI are that the systems automate repetitive tasks that humans dislike anyway, like answering the same customer questions over and over again, or checking millions of bags at security checkpoints. In this case, said Robinson, “AI is coming for the fun jobs” – the creatively-rewarding jobs people work and study their whole lives to obtain, and potentially incur six figures worth of student debt to qualify for. And it’s doing it before anyone has a chance to pay attention.

    “I see an opportunity to monetize for the creators, through licensing,” said Braga. “But there needs to be political support.

    “There’s no doubt that AI will have a great positive impact in the number crunching areas of our lives,” said McKean, “but the more it takes over from the jobs that we do and find meaning in… I think we should not give up that meaning lightly. There needs to be some fight-back.”

    Reply
  11. Tomi Engdahl says:

    Breakthrough Machine Learning AI Runs Nuclear Fusion Reactor | New AI Supercomputer With 13.5+ Million Processor Cores | New Brain Model For Conscious AI

    Why This Breakthrough AI Now Runs A Nuclear Fusion Reactor | New AI Supercomputer
    https://m.youtube.com/watch?v=HeXA9C1A9HU&feature=youtu.be

    Reply
  12. Tomi Engdahl says:

    Humans and neural networks are working together to optimize one of the most fundamental and ubiquitous operations in all of mathematics. https://www.quantamagazine.org/ai-reveals-new-possibilities-in-matrix-multiplication-20221123/

    Reply
  13. Tomi Engdahl says:

    3D for everyone? Nvidia’s Magic3D can generate 3D models from text
    New AI aims to democratize 3D content creation, no modeling skills required.
    https://arstechnica.com/information-technology/2022/11/nvidias-magic3d-creates-3d-models-from-written-descriptions-thanks-to-ai/

    Reply
  14. Tomi Engdahl says:

    Here’s A Plain C/C++ Implementation Of AI Speech Recognition, So Get Hackin’
    https://hackaday.com/2022/11/27/heres-a-plain-c-c-implementation-of-ai-speech-recognition-so-get-hackin/

    [Georgi Gerganov] recently shared a great resource for running high-quality AI-driven speech recognition in a plain C/C++ implementation on a variety of platforms. The automatic speech recognition (ASR) model is fully implemented using only two source files and requires no dependencies. As a result, the high-quality speech recognition doesn’t involve calling remote APIs, and can run locally on different devices in a fairly straightforward manner. The image above shows it running locally on an iPhone 13, but it can do more than that.

    [Georgi]’s work is a port of OpenAI’s Whisper model, a remarkably-robust piece of software that does a truly impressive job of turning human speech into text. Whisper is easy to set up and play with, but this port makes it easier to get the system working in other ways.

    https://github.com/ggerganov/whisper.cpp#whispercpp

    Reply
  15. Tomi Engdahl says:

    Will Shanklin / Engadget:NEW
    Amazon announces Create with Alexa, a generative AI that lets children create animated stories via voice prompts on three topics, available on Echo Show devices

    Amazon’s Create With Alexa generates unique animated children’s stories on Echo Show
    The AI generation feature is exclusive to Echo Show smart displays.
    https://www.engadget.com/amazon-create-with-alexa-ai-stories-echo-show-140001353.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cudGVjaG1lbWUuY29tLw&guce_referrer_sig=AQAAAFb6LyhTHM73gpwqK4ARNSGPUQ4GUCmDVlcsV2XMvfFiWuEV3HieMcaLjqDbumoU2U5TEYCgJc9kHkrIkxfPE6mXqmu8MFNwT3QEWJp-zAdhDRP_28iKHWqhhnPVz6yy7Qn8oubzy94unihvSytd3iJXbf5qMmCkJp-FFsC1uCX2

    Tools like DALL-E 2, Stable Diffusion and Midjourney, which generate images based on a few lines of text, briefly set social media ablaze this year. But Amazon’s entry into the AI art world is a bit different. Create with Alexa lets children guide the creation of animated stories using a few kid-friendly prompts.

    Since Create with Alexa is visual storytelling, it’s only available on Echo Show devices, not the company’s audio-only speakers. Amazon says it works whether the device is in Amazon Kids mode or not.

    To create a new story, your child would begin by speaking, “Alexa, make a story,” and then following several prompts. The AI then generates an illustrated five-to-ten-line narrative — including animations, sound effects and music — built around their answers.

    Reply
  16. Tomi Engdahl says:

    Will Douglas Heaven / MIT Technology Review:
    OpenAI releases a demo of ChatGPT, a chatbot version of GPT-3 that answers follow-up questions, admits its mistakes, challenges incorrect premises, and more

    While everyone waits for GPT-4, OpenAI is still fixing its predecessor
    https://www.technologyreview.com/2022/11/30/1063878/openai-still-fixing-gpt3-ai-large-language-model/

    A chatbot version of GPT-3 that admits its mistakes is more transparent than the original. But it’s still not perfect.

    Reply
  17. Tomi Engdahl says:

    OpenAI’s GPT-4 will be as big of a leap from GPT-3 as GPT-3 was to GPT-2. Get ready. Early 2023 is going to be wild.

    #ai #artificialintelligence #experience #entertainment #technology #future

    GPT-4 Is Coming Soon. Here’s What We Know About It
    Official info, current trends, and predictions.
    https://towardsdatascience.com/gpt-4-is-coming-soon-heres-what-we-know-about-it-64db058cfd45

    Reply
  18. Tomi Engdahl says:

    “There are hundreds of millions of dollars being deployed towards glorified tech demos.”

    CEO OF AI STARTUP SAYS MANY AI STARTUPS WILL FAIL BECAUSE THEY’RE MAKING A SERIOUS MISTAKE
    https://futurism.com/the-byte/ceo-ai-startups-will-fail-serious-mistake

    All Flash No Pan
    Generative AI is undeniably having a moment. OpenAI’s text-to-image creator DALL-E has been dazzling the public for months, while its standout rival, a newcomer dubbed Stability AI, just raked in a cool $101 million in funding for its Stable Diffusion system. Video and music generators are popping up as well, and some experts predict that synthetic media will soon make up the vast majority of digital content.

    But according to Will Manidis, the founder and CEO of AI-driven healthcare startup ScienceIO, generative AI is all flash, no substance — and while it might be attracting VC cash now, most ventures will quickly fade into startup oblivion.

    “There are hundreds of millions of dollars being deployed towards glorified tech demos built on top of identical datasets,” the founder wrote in a Tuesday Twitter thread, referring to these generative machine systems. “Most, if not all of these, will fail.”

    “Where generative AI will change the world is in narrow, mostly boring domains,” he continued. “VCs are ignoring this.”

    Manidis’ argument, which centers on the VC-beloved text-to-image generators, rests heavily on the belief that the “creator economy” doesn’t really have much room for growth. Sure, it’s fun and sometimes useful to produce the AI-made artworks, but turning everyone into creators won’t actually generate new, major revenue streams.

    That being said, Manidis definitely believes that AI will revolutionize industry — just in less glamorous sectors. AI-generated anthropomorphic bowling balls are cool and all, but in his eyes? Data entry is where AI is really about to shine.

    Reply
  19. Tomi Engdahl says:

    RECORD LABELS TERRIFIED BY RISE OF AI MUSIC GENERATORS
    https://futurism.com/the-byte/riaa-ai-music-generators

    IS AI MUSIC INFRINGING ON THE RIGHTS OF ARTISTS?

    Artificial intelligence-powered generator tools that can create brand new music tracks at the click of a button are starting to come for musicians’ livelihoods — and that has lobbyist groups deeply concerned.

    Case in point, the Recording Industry Association of America (RIAA) is worried that AI-powered music generators could threaten both the wallets and rights of human artists.

    In response to the Office of the US Trade Representative’s request for comment, the RIAA issued a statement, condoning the use of AI music generators.

    Online services that use AI to “extract, or rather, copy, the vocals, instrumentals, or some portion of the instrumentals (a music stem) from a sound recording” to “generate, master or remix a recording to be very similar to or almost as good as reference tracks by select, well known sound recording artists” are infringing on its members’ “rights by making unauthorized copies of our members works,” the RIAA wrote in a new statement to the Office of the US Trade Representative.

    AI THAT GENERATES MUSIC FROM PROMPTS SHOULD PROBABLY SCARE MUSICIANS
    https://futurism.com/the-byte/ai-music-text-prompts

    Reply
  20. Tomi Engdahl says:

    Andrew Liszewski / Gizmodo:
    Disney unveils FRAN, an AI tool that helps TV or film producers make an actor look older or younger without the need for complex and expensive visual effects

    Disney Made a Movie Quality AI Tool That Automatically Makes Actors Look Younger (or Older)
    With just a few clicks, actors can look younger or older without the need for expensive visual effects.
    https://gizmodo.com/disney-ai-art-vfx-visual-effects-de-age-younger-older-1849835548

    Reply
  21. Tomi Engdahl says:

    Noahpinion:
    Despite genuine concerns, generative AI seems unlikely to replace workers, instead complementing and empowering them by taking over mundane, repetitive tasks

    Generative AI: autocomplete for everything
    A joint blog post by Noah and roon on the future of work in the age of AI
    https://noahpinion.substack.com/p/generative-ai-autocomplete-for-everything

    Reply
  22. Tomi Engdahl says:

    New AI Tool Takes Your Photo Hundreds Of Years In The Past
    Ever wondered what you’d look like as a Pharaoh of Ancient Egypt?
    https://www.iflscience.com/new-ai-tool-takes-your-photo-hundreds-of-years-in-the-past-66428

    Reply
  23. Tomi Engdahl says:

    Computing With Chemicals Makes Faster, Leaner AI Battery-inspired artificial synapses are gaining ground
    https://spectrum.ieee.org/analog-ai-ecram-artificial-synapse

    Reply
  24. Tomi Engdahl says:

    Google frees nifty ML image-compression model… but it’s for JPEG-XL
    Yep. The very same JPEG-XL that’s just been axed from Chromium
    https://www.theregister.com/2022/12/02/ml_attention_center_model_freed/

    Reply
  25. Tomi Engdahl says:

    ‘Google is done’: World’s most powerful AI chatbot offers human-like alternative to search engines
    OpenAI’s latest artificial intelligence bot ChatGPT can also write TV scripts and explain complex theories
    https://www.independent.co.uk/tech/ai-chatbot-chatgpt-google-openai-b2237834.html

    Reply
  26. Tomi Engdahl says:

    What is AI chatbot phenomenon ChatGPT and could it replace humans?
    The tool has impressed experts with its writing ability, proficiency at complex tasks and ease of use
    https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans

    Reply
  27. Tomi Engdahl says:

    Exploring Prompt Injection Attacks
    https://research.nccgroup.com/2022/12/05/exploring-prompt-injection-attacks/
    Have you ever heard about Prompt Injection Attacks[1]? Prompt Injection is a new vulnerability that is affecting some AI/ML models and, in particular, certain types of language models using prompt-based learning. This vulnerability was initially reported to OpenAI by Jon Cefalu (May 2022)[2] but it was kept in a responsible disclosure status until it was publicly released by Riley Goodside (September 2022)[3]. In his tweet, Riley showed how it was possible to create a malicious input that made a language model change its expected behaviour.

    Reply
  28. Tomi Engdahl says:

    ChatGPT- The language model can automatically generate school essays for any grade level, answer open-ended analytical questions, draft marketing pitches, write jokes, poems and even computer code.

    ChatGPT: Optimizing
    Language Models
    for Dialogue
    https://openai.com/blog/chatgpt/

    We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

    Reply
  29. Tomi Engdahl says:

    Designed for ease of accessibility, Simple ML delivers TensorFlow Yggdrasil Decision Tree machine learning right in Google Sheets.

    TensorFlow Brings Machine Learning Decision Forests to Google Sheets
    https://www.hackster.io/news/tensorflow-brings-machine-learning-decision-forests-to-google-sheets-d01e35b8b57c

    Designed for ease of accessibility, Simple ML delivers TensorFlow Yggdrasil Decision Tree machine learning right in Google Sheets.

    ‘Simple ML is an add-on, in beta, for Google Sheets from the TensorFlow team that helps make machine learning accessible to all,” its creators, members of the TensorFlow Decision Forests team in Google Zurich, claim. “Anyone, even people without programming or ML expertise, can experiment and apply some of the power of machine learning to their data in Google Sheets with just a few clicks. From small business owners, scientists, and students to business analysts at large corporations, anyone familiar with Google Sheets can make valuable predictions automatically.”

    Tailored specifically for handling data commonly found in spreadsheet format, Simple ML comes with a range of pre-defined tasks — including prediction of values missing in a series or detection of abnormal values. There’s no programming or even manual training involved: the user just selects the data, runs Simple ML, and chooses what they want the platform to do.

    For those who want to dig deeper, however, it’s possible to take over: Models can be manually trained, evaluated, and exported for external use. “Even if you already know how to train and use machine learning models, Simple ML in Sheets can help make your life even easier,”

    More details on Simple ML for Sheets is available on the project website, along with tutorials on its use; in its initial release, the software exports to TensorFlow, Colab, and TF Serving, along with support for calling the model from C++, Go, and JavaScript.

    https://simplemlforsheets.com/

    Reply
  30. Tomi Engdahl says:

    Kommentti: Kokeilin tekniikkaa, joka tekee kohta oppilaiden koti­läksyt – pieni yksityis­kohta paljasti, että olemme katastrofin äärellä https://www.is.fi/digitoday/art-2000009249930.html

    Hämmästystä ja ihailua herättänyt ChatGPT-ohjelma saattaa olla ensiaskel internetin disinformaatiopandemiaan, kirjoittaa Ilta-Sanomien toimittaja Elias Ruokanen.

    TEKOÄLY-YHTIÖ OpenAI avasi tekoälypohjaisen ChatGPT-keskustelubotin julkiseen testikäyttöön viime viikolla. ChatGPT on “laaja kielimalli”, jonka tarkoitus on luoda niin sujuvaa tekstiä, että sitä voisi luulla ihmisen kirjoittamaksi. ChatGPT pohjautuu kaksi vuotta sitten julkaistuun GPT-3:een.

    Ohjelman kanssa keskustellaan OpenAI:n sivustolla antamalla syöteteksti (vaikka kysymys, käsky tai havainto), johon se vastaa.

    ChatGPT:tä käyttäessä ällistyin täydellisesti teknologiasta ensimmäistä kertaa sitten lapsuuden. Sitä ällistymistä kuvaa brittiläisen scifi-kirjailija Arthur C. Clarken “kolmas sääntö” teknologiasta: tarpeeksi kehittynyttä teknologiaa on mahdotonta erottaa taikuudesta.

    ChatGPT tuottaa korkeatasoista tekstiä aiheesta kuin aiheesta, vaihtaa soljuvasti vakavasta tyylistä leikittelevään tai runolliseen muotoon. Joskus se on jopa humoristinen. Se on eräänlainen yleisapuri, joka voi vastata kysymyksiin, lukea ja tuottaa koodia, lauluja ja esseitä.

    Pyysin sitä kirjoittamaan lyhyen esseen Suomen historiasta. Se totteli. Kysyin tarkentavia kysymyksiä. Se vastasi. Pyysin sitä kuvailemaan suomalaista huumoria. ChatGPT:n mukaan se on ”kuivaa, sarkastista ja joskus pimeää” ja ”vaikeaa ulkopuoliselle ymmärtää ja arvostaa”.

    Ennen kaikkea ChatGPT:n käyttäminen on hauskaa.

    ChatGPT:tä käyttäessä tuli ilmiselväksi, että vuoden päästä jokainen yläastelainen, lukiolainen ja yliopisto-opiskelija tekee kotiläksynsä tällaisella ohjelmalla. Mainostajat syöttävät ohjelmalle tiedot asiakkaista ja luovat kohdennetut mainokset hetkessä. Tulevaisuus on täällä!

    Kun taikatemppu epäonnistuu
    ChatGPT:tä ei tarvitse käyttää kovin kauan, jotta illuusio kaiken ymmärtävästä tekoälystä rikkoutuu. Itse asiassa itselläni aika nopeasti heräsi kysymys, ymmärtääkö se mitään.

    Se, että GPT-3:n tuottama teksti on niin ymmärrettävää, johtuu siitä, että se perustuu ihmisten tuottamaan tekstiin, ja ihmisten tuottama teksti on usein ymmärrettävää.

    Mikä GPT-3:lta puuttuu on erillinen malli todellisuudesta, mihin se voisi verrata tuottamaansa tekstiä. Se ymmärtää sanan vain korrelaatioina muihin sanoihin, joiden merkitystä se ei myöskään tiedä. Sen takia se yhtä hyvin tuottaa nerokkaita vastauksia kuin aivan järjetöntä, sisäisestikin ristiriitaista proosaa, joka on kuitenkin kieliopillisesti oikein. Se vaan menee.

    Fysiikan, matematiikan, logiikan ja vaikeasti määriteltävän ihmisominaisuuden eli perusjärjen lait rikotaan, koska niitä lakeja ei ole kirjoitettu ohjelmaan. Niiden oletetaan ilmestyvän itsestään syväoppimisen seurauksena.

    Ohjelmointiaiheinen sivusto StackOverflow, jossa ihmiset voivat kysyä apua ohjelmointipulmiinsa, kielsi maanantaina ChatGPT:n tuottamien vastausten lisäämisen sivustolle.

    – Pääongelma on se, että vaikka ChatGPT:n tuottamilla vastauksilla on korkea todennäköisyys olla vääriä, ne yleensä näyttävät siltä, että ne voisivat olla hyviä vastauksia, ja niitä on tosi helppo luoda. Monet tuottavat ChatGPT:llä vastauksia, vaikkei heillä ole asiantuntijuutta tai tahtoa arvioida vastausten paikkansapitävyyttä.

    StackOverflow on kieltänyt ChatGPT:n tuottamat vastaukset, mutta miten se kykenee erottamaan koneen ja ihmisen tuottamat tekstit? Täytyykö kaikki uudet vastaukset kieltää?

    Internetiä todennäköisesti uhkaa hyökyaalto koneiden tuottamaa disinformaatiota. Valtioilla, rikollisilla, äärijärjestöillä, ja ihan vain antisosiaalisilla ääliöllä on motiiveja käyttää tekoälyteknologiaa tähän tarkoitukseen.

    Koko tilanne muistuttaa aavemaisesti tammikuuta 2020, kun Kiina sulki Wuhanin ja muita kaupunkeja kulkutaudin takia. Jotta sivustot ja sovellukset säästyvät episteemiseltä virukselta, täytyy niiden toteuttaa omanlaisia sulku- ja valvontatoimia, jotta niiden informaation eheys säilyy.

    Ironisesti myös tekoälyohjelmat ovat uhattuna, koska ne tarvitsevat jatkuvasti lisää dataa, jolla kehittää mallejaan. Jos internet täyttyy tekoälyn tuottamasta roskainformaatiosta, uhkaa malleja suorituskyvyn heikkeneminen, kun ne syövät omaa roskaansa. Roskaa sisään, roskaa ulos

    Reply
  31. Tomi Engdahl says:

    With Kite’s demise, can generative AI for code succeed?
    https://techcrunch.com/2022/12/10/with-kites-demise-can-generative-ai-for-code-succeed/?tpcc=tcplusfacebook

    Kite, a startup developing an AI-powered coding assistant, abruptly shut down last month. Despite securing tens of millions of dollars in VC backing, Kite struggled to pay the bills, founder Adam Smith revealed in a postmortem blog post, running into engineering headwinds that made finding a product-market fit essentially impossible.

    “We failed to deliver our vision of AI-assisted programming because we were 10+ years too early to market, i.e., the tech is not ready yet,” Smith said. “Our product did not monetize, and it took too long to figure that out.”

    Kite’s failure doesn’t bode well for the many other companies pursuing — and attempting to commercialize — generative AI for coding. Copilot is perhaps the highest-profile example, a code-generating tool developed by GitHub and OpenAI priced at $10 per month. But Smith notes that while Copilot shows a lot of promise, it still has “a long way to go” — estimating that it could cost over $100 million to build a “production-quality” tool capable of synthesizing code reliably.

    including Tabnine and DeepCode, which Snyk acquired in 2020. Tabnine’s service predicts and suggests next lines of code based on context and syntax, like Copilot. DeepCode works a bit differently, using AI to notify developers of bugs as they code.

    Reply
  32. Tomi Engdahl says:

    For better and worse, it seems quite likely that ChatGPT heralds a very different world in the making.

    Is ChatGPT the Start of the AI Revolution?
    A sophisticated new chatbot is indistinguishable from magic. Well, almost.
    https://www.bloomberg.com/opinion/articles/2022-12-09/is-chatgpt-the-start-of-the-ai-revolution?utm_content=view&cmpid=socialflow-facebook-view&utm_campaign=socialflow-organic&utm_medium=social&utm_source=facebook

    Reply
  33. Tomi Engdahl says:

    Cade Metz / New York Times:
    Experts warn chatbots like ChatGPT and LaMDA pose a “hallucination” problem, reshaping what they have learned without regard for whether the end result is true — Siri, Google Search, online marketing and your child’s homework will never be the same. Then there’s the misinformation problem.
    https://www.nytimes.com/2022/12/10/technology/ai-chat-bot-chatgpt.html

    Reply
  34. Tomi Engdahl says:

    Washington Post:
    Some early-adopters are using ChatGPT, GPT-3, and other text generators to write business emails, understand class material, find creative inspiration, and more — The latest AI sensation, ChatGPT, is easy to talk to, bad at math and often deceptively, confidently wrong.

    Stumbling with their words, some people let AI do the talking
    The latest AI sensation, ChatGPT, is easy to talk to, bad at math and often deceptively, confidently wrong. Some people are finding real-world value in it, anyway.
    https://www.washingtonpost.com/technology/2022/12/10/chatgpt-ai-helps-written-communication/

    The client, a tech consultant named Danny Richman, had been playing around with an artificial intelligence tool called GPT-3 that can instantly write convincing passages of text on any topic by command.

    He hooked up the AI to Whittle’s email account. Now, when Whittle dashes off a message, the AI instantly reworks the grammar, deploys all the right niceties and transforms it into a response that is unfailingly professional and polite.

    Whittle now uses the AI for every work message he sends, and he credits it with helping his company, Ashridge Pools, land its first major contract, worth roughly $260,000. He has excitedly shown off his futuristic new colleague to his wife, his mother and his friends — but not to his clients, because he is not sure how they will react.

    “Me and computers don’t get on very well,” said Whittle, 31. “But this has given me exactly what I need.”

    A machine that talks like a person has long been a science fiction fantasy, and in the decades since the first chatbot was created, in 1966, developers have worked to build an AI that normal people could use to communicate with and understand the world.

    Now, with the explosion of text-generating systems like GPT-3 and a newer version released last week, ChatGPT, the idea is closer than ever to reality. For people like Whittle, uncertain of the written word, the AI is already fueling new possibilities about a technology that could one day reshape lives.

    “It feels very much like magic,” said Rohit Krishnan, a tech investor in London. “It’s like holding an iPhone in your hand for the first time.”

    Top research labs like OpenAI, the San Francisco firm behind GPT-3 and ChatGPT, have made great strides in recent years with AI-generated text tools, which have been trained on billions of written words — everything from classic books to online blogs — to spin out humanlike prose.

    But ChatGPT’s release last week, via a free website that resembles an online chat, has made such technology accessible to the masses. Even more than its predecessors, ChatGPT is built not just to string together words but to have a conversation — remembering what was said earlier, explaining and elaborating on its answers, apologizing when it gets things wrong.

    It “can tell you if it doesn’t understand a question and needs to follow up, or it can admit when it’s making a mistake, or it can challenge your premises if it finds it’s incorrect,”

    Reply
  35. Tomi Engdahl says:

    Alex Kantrowitz / Big Technology:
    Worried about its reputation, Google is hesitant to release its capable bot LaMDA, but waiting too long could mean ceding the market to competitors like ChatGPT

    Why Google Missed ChatGPT
    The tech giant believes the future of search is conversational. How did it let OpenAI’s ChatGPT take the lead?
    https://www.bigtechnology.com/p/why-google-missed-chatgpt

    Google’s had an awkward week. After years of preaching that conversational search was its future, it’s stood by as the world discovered ChatGPT.

    The powerful chatbot from OpenAI takes queries — some meant for the search bar — and answers with astonishing conversational replies. It’s shared recipes, reviewed code, and argued politics so adeptly that screenshots of its answers now fill social media. This was the future Google promised. But not with someone else fulfilling it.

    How Google missed this moment is not a simple matter of a blind spot. It’s a case of an incumbent being so careful about its business, reputation, and customer relationships that it refused to release similar, more powerful tech. And it’s far from the end of the story.

    “Google thinks a lot about how something can damage its reputation,” said Gaurav Nemade, an ex-Google product manager who was first to helm its LaMDA chatbot. “They lean on the side of conservatism.”

    Google’s LaMDA — made famous when engineer Blake Lemoine called it sentient — is a more capable bot than ChatGPT, yet the company’s been hesitant to make it public. For Google, the problem with chatbots is they’re wrong a lot, yet present their answers with undeserved confidence. Leading people astray — with assuredness — is less than ideal for a company built on helping you find the right answers. So LaMDA remains in research mode.

    Reply
  36. Tomi Engdahl says:

    OpenAI’s New ChatGPT Might Be The First Good Chatbot
    After years of overpromising and underdelivering, chatbots are turning a corner.
    https://www.bigtechnology.com/p/openais-new-chatgpt-might-be-the

    A chatbot that meets the hype is finally here. On Thursday, OpenAI released ChatGPT, a bot that converses with humans via cutting-edge artificial intelligence. The bot can help you write code, compose essays, dream up stories, and decorate your living room. And that’s just what people discovered on day one.

    ChatGPT does have limits, some quite annoying, but it’s the first chatbot that’s enjoyable enough to speak with and useful enough to ask for information. It can engage in philosophical discussions and help in practical matters. And it’s strikingly good at each. After years of false hype, the real thing is here.

    “This is insane,” said Shopify CEO Tobi Lutke upon seeing the bot’s early interactions.

    Reply
  37. Tomi Engdahl says:

    Karen Hao / Wall Street Journal:
    How AI-based natural language processing algorithms can be applied to biological data to create protein-language models and cut drug discovery time to months — Natural language processing algorithms like the ones used in Google searches and OpenAI’s ChatGPT promise to slash the time required to bring medications to market

    How AI That Powers Chatbots and Search Queries Could Discover New Drugs
    https://www.wsj.com/articles/how-ai-that-powers-chatbots-and-search-queries-could-discover-new-drugs-11670428795?mod=djemalertNEWS

    Natural language processing algorithms like the ones used in Google searches and OpenAI’s ChatGPT promise to slash the time required to bring medications to market

    In their search for new disease-fighting medicines, drug makers have long employed a laborious trial-and-error process to identify the right compounds. But what if artificial intelligence could predict the makeup of a new drug molecule the way Google figures out what you’re searching for, or email programs anticipate your replies—like “Got it, thanks”?

    That’s the aim of a new approach that uses an AI technique known as natural language processing—​the same technology​ that enables OpenAI’s ChatGPT​ to ​generate human-like responses​—to analyze and synthesize proteins, which are the building blocks of life and of many drugs. The approach exploits the fact that biological codes have something in common with search queries and email texts: Both are represented by a series of letters.

    Reply
  38. Tomi Engdahl says:

    Kommentti: Kokeilin tekniikkaa, joka tekee kohta oppilaiden koti­läksyt pieni yksityis­kohta paljasti, että olemme katastrofin äärellä https://www.is.fi/digitoday/art-2000009249930.html
    TEKOÄLY-YHTIÖ OpenAI avasi tekoälypohjaisen ChatGPT-keskustelubotin julkiseen testikäyttöön viime viikolla. ChatGPT on “laaja kielimalli, jonka tarkoitus on luoda niin sujuvaa tekstiä, että sitä voisi luulla ihmisen kirjoittamaksi. ChatGPT pohjautuu kaksi vuotta sitten julkaistuun GPT-3:een.

    Reply
  39. Tomi Engdahl says:

    https://hackaday.com/2022/12/11/hackaday-links-december-11-2022/

    Eliza rides again? Maybe a little bit, at least judging by the current fascination with ChatGPT. The AI chatbot went live on November 30 with a “research release” that’s free to use, at least for now. People are using it for everything from getting help with coding questions to writing poetry, with mixed results. One Hackaday writer, who shall remain nameless, even used ChatGPT to write an article about a specific project on Reddit “in the style of Hackaday.” Relax, it wasn’t published — we just looked it over internally on Discord. While it sounded convincing enough superficially, the article was hot garbage as far as facts and specifics about the project. We could be a little biased about that, though. We also spotted an “interview” with ChatGPT over on IEEE Spectrum, which supposedly captures answers to questions put to the chat bot. Honestly, it reads a little like the interview with HAL 9000 in 2001: A Space Odyssey.

    https://spectrum.ieee.org/chatbot-chatgpt-interview

    Why We’re All Obsessed With ChatGPT, A Mind-Blowing AI Chatbot
    https://www.cnet.com/tech/computing/why-were-all-obsessed-with-chatgpt-a-mind-blowing-ai-chatbot/

    This artificial intelligence bot can converse, write poetry and program computers. Be careful how much you trust it, though.

    Reply

Leave a Comment

Your email address will not be published. Required fields are marked *

*

*