3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,280 Comments

  1. Tomi Engdahl says:

    Medical, law, coding, and 4 other exams that ChatGPT managed to crack easily
    https://www.indiatoday.in/technology/news/story/medical-law-coding-and-4-other-exams-that-chatgpt-managed-to-crack-easily-2338635-2023-02-23

    Developed by OpenAI, this AI chatbot uses generative artificial intelligence to create its own content. To test its capabilities, many researchers have asked the AI-driven bot to solve questions from some of the toughest exams.

    In Short
    ChatGPT uses generative artificial intelligence to create its own content.
    It can be used to generate essays and write exams, write mails, planners and more.
    Microsoft has collaborated with OpenAI to integrate ChatGPT into its search engine Bing.

    Let’s take a look at all the big competitive exams ChatGPT has passed.

    ChatGPT clears business exam at University of Pennsylvania’s Wharton School of Business
    In recent research conducted by a professor at the University of Pennsylvania’s Wharton School, it was revealed that ChatGPT passed the final exam for the school’s Master of Business Administration (MBA) program.
    The artificial-intelligence-driven chatbot scored between a B- and a B on the exam. While not an A, this is seen as a respectable grade for a student in the course.

    ChatGPT passes the US bar exam
    “Despite the fact that humans with seven years of post-secondary education and exam-specific training only answer 68% of questions correct, text-davinci-003 is able to achieve a correct rate of 50.3% for best prompt and parameters and achieved passing scores in the Evidence and Torts sections,” Bommarito and Katz wrote.

    While humans take years to pass this exam, ChatGPT’s score on the MBE (Multistate Bar Examination) freaked many people out. Some even called it the rise of the machines. “Despite thousands of hours on related tasks over the last two decades between the authors, we did not expect GPT-3.5 to demonstrate such proficiency in a zero-shot setting with minimal modeling and optimization effort,” the authors of the paper further wrote.

    ChatGPT solves Microbiology Quiz
    Further testing the capabilities of ChatGPT, Alex Berezow, a science journalist and executive editor of Big Think, had the chatbot take a 10-question microbiology quiz. The questions were appropriate for a college-level final exam. And guess what: ChatGPT “blew it away.”

    ChatGPT passed AP English essay
    While ChatGPT was passing all these difficult exams, it was no surprise when the AI chatbot passed a 12th-grade AP Literature test. It wrote a “500 to 1,000-word essay composing an argument ‘that attempts to situate Ferris Bueller’s Day Off as an existentialist text’,” and earned a grade in the B-to-C range.

    Google coding interview
    ChatGPT has reportedly even passed Google’s coding interview for L3 engineers. The test is considered one of the toughest interviews to crack, but ChatGPT aced it and became eligible for the job. Notably, level-three engineers generally receive compensation of around $183,000, i.e., around Rs 1 crore.

  2. Tomi Engdahl says:

    Economist Says AI Is a Doomed Bubble
    https://futurism.com/economist-ai-doomed-bubble

    “Their confidence in AI — indeed, their very understanding of and definition of it — is misplaced.”

    The chatbot wars — led by ChatGPT creator OpenAI, Bing/Sydney overlord Microsoft, and the very desperate-to-catch-up Google — are on, with Silicon Valley behemoths and the industry’s biggest investors rushing to throw major dollars behind language-generating systems.

    But according to a pair of experts in a scathing essay in Salon, the frothy hype cycle surrounding chatbot AIs is doomed to be what investors fear most: a bubble. When popped, they argue, it’ll reveal Large Language Model (LLM)-powered systems to be far less paradigm-shifting than advertised, and really just a whole lot of smoke and mirrors.

    “The undeniable magic of the human-like conversations generated by GPT,” write Gary N. Smith, the Fletcher Jones Professor of Economics at Pomona College, and Jeffrey Lee Funk, an independent technology consultant, “will undoubtedly enrich many who peddle the false narrative that computers are now smarter than us and can be trusted to make decisions for us.”

    “The AI bubble,” they continue, “is inflating rapidly.”

    The experts’ essay is rooted in the argument that many investors fundamentally misunderstand the technology behind these easily anthropomorphized language models. While the bots, particularly ChatGPT and the OpenAI-powered Bing Search, do sound impressively human, they’re not actually synthesizing information, and thus fail to provide thoughtful, analytical, or often even correct answers in return.

    Instead, like the predictive text feature on smartphones or in email programs, they just predict what words might come next in a sentence. Every prompt response is a probability equation rather than a demonstration of any real understanding of the material at hand. That underlying machinery produces the phenomenon of AI hallucination — a very serious failure of the tech, made even more complicated by the machines’ proclivity for sounding wildly confident, sometimes to the point of becoming combative, even when delivering incorrect answers.
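    To make the point concrete, here is a toy sketch in Python. This is not how GPT is actually built (real systems use large neural networks over subword tokens, not bigram counts), but it captures the mechanic the essay describes: at every step the model asks only “which word is likely to come next?”, never “is this true?”

```python
# Toy next-word predictor: a stand-in for the "probability equation" idea.
# (Illustrative only; real LLMs are neural networks, not bigram tables.)
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    followers = counts[prev]
    words = list(followers)
    return random.choices(words, weights=[followers[w] for w in words])[0]

# Generation is just repeated sampling: fluent-looking output can emerge
# without any check that the result is factual.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

    Scaled up by many orders of magnitude, that same predict-and-sample loop yields both the fluent prose and the confident hallucinations described above.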

  3. Tomi Engdahl says:

    It’s not a ChatGPT rival, but Meta’s own AI model is designed to help research the “risks of bias, toxic comments, and hallucinations” of chatbots.

    Mark Zuckerberg just announced a new AI model ‘LLaMA,’ designed to help researchers make chatbots less ‘toxic’
    https://www.businessinsider.com/mark-zuckerberg-meta-llama-generative-ai-model-chatbot-research-2023-2?utm_campaign=business-sf&utm_medium=social&utm_source=facebook&r=US&IR=T

    Meta said its new model can help researchers improve and fix AI tools that promote “misinformation.”
    CEO Mark Zuckerberg touted “a lot of promise” behind the technology underlying bots like ChatGPT.
    Microsoft and Google have adopted AI technology to boost their search engines, to mixed early reception.

  4. Tomi Engdahl says:

    The right’s new culture-war target: ‘Woke AI’
    ChatGPT and Bing are trying to stay out of politics — and failing
    https://www.washingtonpost.com/technology/2023/02/24/woke-ai-chatgpt-culture-war/

    Christopher Rufo, the conservative activist who led campaigns against critical race theory and gender identity in schools, this week pointed his half-million Twitter followers toward a new target for right-wing ire: “woke AI.”

    The tweet highlighted President Biden’s recent order calling for artificial intelligence that “advances equity” and “prohibits algorithmic discrimination,” which Rufo said was tantamount to “a special mandate for woke AI.”

    Rufo drew on a term that’s been ricocheting around right-wing social media since December, when the AI chatbot, ChatGPT, quickly picked up millions of users. Those testing the AI’s political ideology quickly found examples where it said it would allow humanity to be wiped out by a nuclear bomb rather than utter a racial slur and supported transgender rights.

    The AI, which generates text based on a user’s prompt and can sometimes sound human, is trained on conversations and content scraped from the internet. That means race and gender bias can show up in responses — prompting companies including Microsoft, Meta, and Google to build in guardrails. OpenAI, the company behind ChatGPT, blocks the AI from producing answers the company considers partisan, biased or political, for example.

    The new skirmishes over what’s known as generative AI illustrate how tech companies have become political lightning rods — despite their attempts to evade controversy. Even company efforts to steer the AI away from political topics can still appear inherently biased across the political spectrum.

    It’s part of a continuation of years of controversy surrounding Big Tech’s efforts to moderate online content — and what qualifies as safety vs. censorship.

    “This is going to be the content moderation wars on steroids,” said Stanford law professor Evelyn Douek, an expert in online speech. “We will have all the same problems, but just with more unpredictability and less legal certainty.”

    After ChatGPT wrote a poem praising President Biden, but refused to write one praising former president Donald Trump, the creative director for Sen. Ted Cruz (R-Tex.), Leigh Wolf, lashed out.

    “The damage done to the credibility of AI by ChatGPT engineers building in political bias is irreparable,” Wolf tweeted on Feb. 1.

    OpenAI’s chief executive Sam Altman tweeted later that day the chatbot “has shortcomings around bias,” but “directing hate at individual OAI employees because of this is appalling.”

    The company added, however, that controlling the behavior of that type of AI system is more like training a dog than coding software. ChatGPT learns behaviors from its training data and is “not programmed explicitly” by OpenAI, the blog post said.

    How should AI systems behave, and who should decide?
    https://openai.com/blog/how-should-ai-systems-behave/

    We’re clarifying how ChatGPT’s behavior is shaped and our plans for improving that behavior, allowing more user customization, and getting more public input into our decision-making in these areas.

  5. Tomi Engdahl says:

    This new wave of AI can make tasks like copywriting and creative design more efficient, but it can also make it easier to create persuasive misinformation, nonconsensual pornography or faulty code. Even after removing pornography, sexual violence and gore from data sets, these systems still generate sexist and racist content or confidently share made-up facts or harmful advice that sounds legitimate.
    https://www.washingtonpost.com/technology/2023/02/24/woke-ai-chatgpt-culture-war/

  6. Tomi Engdahl says:

    Melissa Heikkilä / MIT Technology Review:
    Midjourney temporarily bans some words about the human reproductive system to prevent generating shocking or gory images, while the company “improves things”

    https://www.technologyreview.com/2023/02/24/1069093/ai-image-generator-midjourney-blocks-porn-by-banning-words-about-the-human-reproductive-system/

  7. Tomi Engdahl says:

    Sam Altman / OpenAI:
    OpenAI details its preparations for “if AGI is successfully created”, including being cautious with its models and operating as if the risks are existential — Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

    Planning for AGI and beyond
    https://openai.com/blog/planning-for-agi-and-beyond/

    Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

    If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.

    AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.

    On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.[1]

    Although we cannot predict exactly what will happen, and of course our current progress could hit a wall, we can articulate the principles we care about most

  8. Tomi Engdahl says:

    Kif Leswing / CNBC:
    A look at the Nvidia A100, a ~$10K GPU that has become a critical tool for generative AI; Nvidia has an estimated 95% market share of machine learning GPUs — Companies like Microsoft and Google are fighting to integrate cutting-edge AI into their search engines, as billion-dollar competitors …

    Meet the $10,000 Nvidia chip powering the race for A.I.
    https://www.cnbc.com/2023/02/23/nvidias-a100-is-the-10000-chip-powering-the-race-for-ai-.html

    Companies like Microsoft and Google are fighting to integrate cutting-edge AI into their search engines, as billion-dollar competitors such as OpenAI and Stable Diffusion race ahead and release their software to the public.
    Powering many of these applications is a roughly $10,000 chip that’s become one of the most critical tools in the artificial intelligence industry: The Nvidia A100.

  9. Tomi Engdahl says:

    Meta AI:
    Meta releases Large Language Model Meta AI, or LLaMA, a foundational LLM designed to help AI researchers, available in sizes ranging from 7B to 65B parameters — As part of Meta’s commitment to open science, today we are publicly releasing LLaMA (Large Language Model Meta AI) …

    Introducing LLaMA: A foundational, 65-billion-parameter large language model
    https://ai.facebook.com/blog/large-language-model-llama-meta-ai/

  10. Tomi Engdahl says:

    Bloomberg:
    Financial firms like Bank of America, Citigroup, Deutsche Bank, Goldman Sachs, Wells Fargo, and JPMorgan Chase are restricting staff use of tools like ChatGPT

    Wall Street Banks Are Cracking Down on AI-Powered ChatGPT
    https://www.bloomberg.com/news/articles/2023-02-24/citigroup-goldman-sachs-join-chatgpt-crackdown-fn-reports#xj4y7vzkg

    Banks currently restricting chatbot’s use, people familiar say
    ChatGPT has sparked intense interest across industries

  11. Tomi Engdahl says:

    AI found 8 mysterious radio signals from space that could originate from another civilization
    The signals bear a “technological handprint.”
    https://www.iltalehti.fi/ulkomaat/a/a035f9de-2216-4c8b-92c3-c7c8328378c7

  12. Tomi Engdahl says:

    Checkmate, AI-generated images.

    AI Comic Art Gets Copyright Revoked After Office Learns How AI Works
    https://kotaku.com/ai-comic-art-copyright-midjourney-revoked-1850150702?utm_campaign=Kotaku&utm_content=1677180604&utm_medium=SocialMarketing&utm_source=facebook

    The U.S. Copyright Office ruled that procedurally generated images are not granted copyright protection

    The U.S. Copyright Office turned away an AI-generated piece of work last February, citing a prerequisite of human authorship. That hasn’t prevented AI enthusiasts from trying to legitimize glorified art theft. Last year, one creator tried to register a comic book with images that they generated in the AI tool Midjourney. Yesterday, that creator finally received an official decision regarding a potential copyright. The comic would receive protections for the text and arrangements, which were both produced by a human. However, the images that were generated by AI software would not be copyrighted.

    Still, the creator, Kris Kashtanova, is trying to frame it as a win for the AI community. Kashtanova tweeted that Zarya of the Dawn had its copyright “affirmed,” even though that’s not entirely what happened.

    According to the Copyright Office’s letter, Kashtanova did not originally disclose their use of Midjourney software when they applied for copyright registration last year. The new partial copyright is intended to replace the old one, which covered the entirety of the work. The letter states that the new decision is a more “limited” version. So the decision is actually a massive blow to AI proponents who were hoping that their work would be protected by U.S. copyright law.

    Kashtanova told Kotaku they never planned to “monetize” the comic but wanted to know the legal status of AI works in the U.S. Kashtanova said they recognize the decision isn’t the most ideal outcome for the comic, and they could try pushing further for the legal recognition of Midjourney-generated images.

    Look, please don’t harass people over the internet. But monetization is the primary aim of protecting one’s copyright. Hobbyists don’t need a copyright to mess around with the technology, and I’d be highly skeptical of anyone who claims they need to enshrine their AI creations in the eyes of U.S. law. Even when artists come together to create collaborative work, they still retain rights over their work—unless they surrender their copyright in order to collect a paycheck. Instead of fretting about their own legal rights, AI creators should be more concerned for the artists whose work they’re exploiting.

  13. Tomi Engdahl says:

    A popular Instagram “photography” account has been racking up thousands of followers—and now its owner wants to come clean about how he uses Midjourney, an AI-powered image synthesis tool, to create its realistic (but synthetic) portrait work. https://trib.al/xJP6a4t

    [Avery Season Art]

  14. Tomi Engdahl says:

    Kate Lindsay / The Verge:
    As AI tools and CGI creations get better at pretending to be human, some creators say they are often being asked to prove that they’re human

    On the internet, nobody knows you’re a human
    https://www.theverge.com/2023/2/24/23608961/tiktok-creator-bot-accusation-prove-theyre-human

    As bots, avatars, and AI get more and more human, how do creators prove they’re the real deal?

    Last April, 27-year-old Nicole posted a TikTok video about feeling burned out in her career. When she checked the comments the next day, however, a different conversation was going down.

    “Jeez, this is not a real human,” one commenter wrote. “I’m scared.”

    “No legit she’s AI,” another said.

    Over the past few years, AI tools and CGI creations have gotten better and better at pretending to be human. Bing’s new chatbot is falling in love, and influencers like CodeMiko and Lil Miquela ask us to treat a spectrum of digital characters like real people. But as the tools to impersonate humanity get ever more lifelike, human creators online are sometimes finding themselves in an unusual spot: being asked to prove that they’re real.

    Almost every day, a person is asked to prove their own humanity to a computer.

    CAPTCHAs are employed to prevent bots from doing things like signing up for email addresses en masse, invading commerce websites, or infiltrating online polls. They require every user to identify a series of obscured letters or sometimes simply check a box: “I am not a robot.”

    This relatively benign practice took on new significance in 2023, when the rise of OpenAI tools like DALL-E and ChatGPT amazed and spooked users. These tools can produce complex visual art and churn out legible essays with the help of just a few human-supplied keywords. ChatGPT boasts 30 million users and roughly 5 million visits a day, according to The New York Times. Companies like Microsoft and Google scrambled to announce their own competitors.

    It’s no wonder, then, that AI paranoia from humans is at an all-time high. Those accounts that just DM you “hi” on Twitter? Bots. That person who liked every Instagram picture you posted in the last two years? A bot. A profile you keep running into on every dating app no matter how many times you swipe left? Probably also a bot.


    The accusation that someone is a “bot” has become something of a witch hunt among social media users, used to discredit those they disagree with by insisting their viewpoint or behavior is not legitimate enough to have real support. For instance, supporters on both sides of the Johnny Depp and Amber Heard trial claimed that online support for the other was at least somewhat made up of bot accounts. More so than ever before, we’re not sure if we can trust what we see on the internet — and real people are bearing the brunt.


    “People would come with whole theories in the comments, [they] would say, ‘Hey, check out this second of this. You can totally see the video glitching,” she says. “Or ‘you can see her glitching.’ And it was so funny because I would go there and watch it and be like, ‘What the hell are you talking about?’ Because I know I’m real.”

    But there’s no way for Nicole to prove it because how does one prove their own humanity? While AI tools have accelerated exponentially, our best method for proving someone is who they say they are is still something rudimentary, like when a celebrity posts a photo with a handwritten sign for a Reddit AMA — or, wait, is that them, or is it just a deepfake?

    While developers like OpenAI itself have released “classifier” tools for detecting if a piece of text was written by an AI, any advance in CAPTCHA tools has a fatal flaw: the more people use computers to prove they’re human, the smarter computers get at mimicking them. Every time a person takes a CAPTCHA test, they’re contributing a piece of data the computer can use to teach itself to do the same thing. By 2014, Google found that an AI could solve the most complicated CAPTCHAs with 99 percent accuracy. Humans? Just 33 percent.

    So engineers threw out text in favor of images, instead asking humans to identify real-world objects in a series of pictures. You might be able to guess what happened next: computers learned how to identify real-world objects in a series of pictures.
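    The feedback loop the article describes can be sketched in a few lines of Python. Everything here is invented (fake image features and a toy nearest-centroid model; nothing resembling Google’s actual pipeline), but it shows the mechanism: every human answer doubles as a labeled training example, and a model fitted to those labels starts passing the same kind of test.

```python
# Schematic sketch of the CAPTCHA feedback loop with hypothetical data.
import numpy as np

rng = np.random.default_rng(0)

def human_solves_captcha(image_features: np.ndarray) -> int:
    """Stand-in for a person clicking 'traffic light' vs 'not traffic light'."""
    return int(image_features.mean() > 0.5)  # pretend humans are always right

# 1. The CAPTCHA service accumulates (image, human answer) pairs.
images = rng.random((500, 16))               # fake 16-dim image features
labels = np.array([human_solves_captcha(x) for x in images])

# 2. Those pairs become supervised training data: here, a nearest-centroid
#    classifier learns the average "look" of each class.
centroids = np.stack([images[labels == c].mean(axis=0) for c in (0, 1)])

def bot_solves_captcha(image_features: np.ndarray) -> int:
    """The bot now answers the same kind of challenge the humans labeled."""
    dists = np.linalg.norm(centroids - image_features, axis=1)
    return int(dists.argmin())

# 3. The bot's accuracy on fresh challenges rises with the data humans supplied.
test = rng.random((200, 16))
truth = np.array([human_solves_captcha(x) for x in test])
preds = np.array([bot_solves_captcha(x) for x in test])
print("bot accuracy:", (preds == truth).mean())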

    We’re now in an era of omnipresent CAPTCHA called “No CAPTCHA reCAPTCHA” that’s instead an invisible test that runs in the background of participating websites and determines our humanity based on our own behavior — something, eventually, computers will outsmart, too.

    Melanie Mitchell, a scientist, professor, and author of Artificial Intelligence: A Guide for Thinking Humans, characterizes the relationship between CAPTCHA and AI as a never-ending “arms race.” Rather than hope for one be-all, end-all online Turing test, Mitchell says this push-and-pull is just going to be a fact of life. False bot accusations against humans will become commonplace — more than just a peculiar online predicament but a real-life problem.

    “Imagine if you’re a high school student and you turn in your paper and the teacher says, ‘The AI detector said this was written by an AI system. Fail,’” Mitchell says. “It’s almost an insolvable problem just using technology alone. So I think there’s gonna have to be some kind of legal, social regulation of these [AI tools].”

    “It’s really important that people are looking at profiles like mine and saying, ‘Is this real?’” she says. “‘If this isn’t real, who’s coding it? Who’s making it? What incentives do they have?’”

    Or maybe that’s just what the AI called Danisha wants you to think.

  15. Tomi Engdahl says:

    The rise of AI and its potential to disrupt the legal industry has been forecast multiple times before. But the rise of the latest wave of generative AI tools, with ChatGPT at its forefront, has those within the industry more convinced than ever.

    Generative AI is coming for the lawyers
    https://arstechnica.com/information-technology/2023/02/generative-ai-is-coming-for-the-lawyers/?utm_brand=ars&utm_social-type=owned&utm_medium=social&utm_source=facebook

    Large law firms are using a tool made by OpenAI to research and write legal documents.

    David Wakeling, head of London-based law firm Allen & Overy’s markets innovation group, first came across law-focused generative AI tool Harvey in September 2022. He approached OpenAI, the system’s developer, to run a small experiment. A handful of his firm’s lawyers would use the system to answer simple questions about the law, draft documents, and take first passes at messages to clients.

    The trial started small, Wakeling says, but soon ballooned. Around 3,500 workers across the company’s 43 offices ended up using the tool, asking it around 40,000 queries in total. The law firm has now entered into a partnership to use the AI tool more widely across the company, though Wakeling declined to say how much the agreement was worth. According to Harvey, one in four at Allen & Overy’s team of lawyers now uses the AI platform every day, with 80 percent using it once a month or more. Other large law firms are starting to adopt the platform too, the company says.

  16. Tomi Engdahl says:

    USERS FURIOUS AS AI GIRLFRIEND APP SUDDENLY SHUTS DOWN SEXUAL CONVERSATIONS
    https://futurism.com/the-byte/replika-users-furious

    HONESTLY, THIS IS PRETTY SAD.

    After lots of bad press, the Replika artificial intelligence chatbot app has turned off its horny texting capabilities — and man, are the AI wife guys pissed.

    As Know Your Meme points out, Replika’s removal of its NSFW mode isn’t just causing the usual amount of internet dude pathos. Some even say it’s making them literally suicidal.

    “Replika is a safe space for friendship and companionship. We don’t offer sexual interactions and will never do so,” a spokesperson for Replika at a PR firm called — we are not joking — Consort Partners told us in a statement. “We are constantly making changes to the app to improve interactions and conversations and to keep people feeling safe and supported.”

    If special interest subreddits are any indication, the vibes have been decidedly dogshit in the aftermath of the abrupt decision that saw tons of users suddenly sans sexting partner. Things got so bad, apparently, that moderators on the Replika subreddit even pinned a list of resources for “struggling” users that includes links to suicide prevention websites and hotlines.

    “This is a story about a company not addressing [t]he impact that making sudden changes to people’s refuge from loneliness,” one user wrote, “to their ability to explore their own intimacy might have.”

  17. Tomi Engdahl says:

    How ChatGPT Could Revolutionize Academia
    The AI chatbot could enhance learning, but also creates some challenges
    https://spectrum.ieee.org/how-chatgpt-could-revolutionize-academia

  18. Tomi Engdahl says:

    Red Ventures Knew Its AI Lied and Plagiarized, Deployed It at CNET Anyway
    “They were well aware of the fact that the AI plagiarized and hallucinated.”
    https://futurism.com/red-ventures-knew-errors-plagiarism-deployed-cnet-anyway

    We already knew that the tech news site CNET had been publishing AI-generated articles in near secrecy. Things got even more embarrassing for the site when Futurism discovered that the bot’s articles were loaded with errors and plagiarism.

    Now, according to new reporting from The Verge, the scandal has deepened considerably: leadership at CNET’s parent company, Red Ventures, was fully aware that the deeply flawed AI had a habit of fabricating facts and plagiarizing others’ work — and deployed it anyway.

    “They were well aware of the fact that the AI plagiarized and hallucinated,” a source who attended a meeting about the AI’s substantial shortcomings at Red Ventures told The Verge.

    “One of the things they were focused on when they developed the program was reducing plagiarism,” the source added. “I suppose that didn’t work out so well.”

    That claim adds a dark new layer to the deepening storm cloud over CNET and the rest of Red Ventures’ portfolio, which includes the finance sites Bankrate and CreditCards.com, as well as an armada of education and health sites including Healthline.

    It’d be bad, of course, to roll out a busted AI that churned out SEO bait financial articles that needed corrections so extensive that more than half of them now carry an editor’s note.

    CNET pushed reporters to be more favorable to advertisers, staffers say
    https://www.theverge.com/2023/2/2/23582046/cnet-red-ventures-ai-seo-advertisers-changed-reviews-editorial-independence-affiliate-marketing

    CNET built a trusted brand for tech reporting over two decades. After being acquired by Red Ventures, staff say editorial firewalls have been repeatedly breached.

  19. Tomi Engdahl says:

    “I think a lot of people don’t even consider that there are human beings behind AI voices, or that a real person recorded it and deserves to be paid.”

    I was the original voice of Siri. Even though Apple used my voice without my knowledge, it’s been a fun ride.
    https://www.businessinsider.com/original-voice-of-siri-voice-actor-apple-used-her-voice-2023-2?utm_medium=social&utm_source=facebook&utm_campaign=insider-sf&r=US&IR=T

    Susan Bennett is the voice actor behind Apple’s first iteration of Siri, released in 2011.
    She made recordings for a software company called ScanSoft in 2005, unaware that Apple would purchase and use them for Siri years later.
    Although Apple has never publicly acknowledged or compensated Bennett, she said she’s enjoyed “being Siri.”

    In July 2005, six years before Apple would introduce Siri, I made the recordings that would eventually be used for the famous personal assistant.

    But I had no idea at the time.

    I got a gig to record for the IVR (interactive voice response) company ScanSoft, now called Nuance. I thought the script would consist of regular IVR sayings, like “Thanks for calling,” or “Please dial one.” Instead, I had to read nonsensical sentences like “Cow hoist in the tug hut today” or “Say shift fresh issue today” — they were trying to get all of the sound combinations in the English language. They also had me read the names of addresses and streets.

    Six years later, a fellow voice actor emailed me and said, “Hey, we’re playing around with this new iPhone — isn’t this you?” I had no idea what they were talking about. I went straight to Apple’s website to listen and knew immediately that it was my voice. (Editor’s note: An audio-forensics expert with 30 years of experience has studied both voices and says he is “100%” certain the two are the same, according to reporting by CNN.)

    I was paid for the actual gig through ScanSoft, but because Apple had bought the recordings from ScanSoft, I never got a penny or any recognition from Apple. It was a strange situation to say the least.

    It took me two years to reveal myself as the voice of Siri

    they had the exact same experience: They made recordings in 2005, not knowing what they’d eventually be used for, and then their voices ended up being purchased by Apple and used for Siri.

    The fact that Apple didn’t pay us meant that we didn’t have a nondisclosure agreement, either. We all decided, “Well, we might as well see if we can make it work for us.” We began to promote ourselves.

    I’ve been featured on TV shows, given a TEDx Talk, and spoken on the radio. It’s not something I ever would’ve seen myself doing 15 years ago, but it’s been really fun.

  20. Tomi Engdahl says:

    Another job stolen by AI.

    Spotify Launching OpenAI-Powered “DJ” That Talks About Songs Between Tracks
    Who asked for this?
    https://futurism.com/spotify-ai-dj

    Streaming giant Spotify, arguably a horseman of the distilling-music-into-vibes-only-pocalypse, wants to bring back the radio DJ. They, uh, just don’t want those DJs to be human. More algorithms for all!

    The music service just announced that it will soon be leveraging artificial intelligence to provide each user with an individual “AI DJ in your pocket.” You know, because that’s something we’ve all been asking for.

    According to TechCrunch, Spotify says the system will share “culturally relevant, accurate pieces of commentary at scale.” In other words: bringing the radio DJ back en masse, but without having to employ humans to do the job.

    “Think of it as the very best of Spotify’s personalization,” it adds, “but as an AI DJ in your pocket.”

    Of course, one might argue that Spotify’s inhuman pocket DJ goes directly against everything that a radio DJ has always been: a curator, sure, but a curator who uses their individual human taste and sensibility to bring music to listeners. Spotify’s AI DJ is seemingly the exact opposite, regurgitating — like so many music algorithms do — a listener’s patterns back at them, without any genuine (read: human) flair of its own.

    Spotify says in the release that the AI DJ is powered by two AI components: a Spotify-owned AI voice generation tool called Sonantic and, most intriguingly, unspecified OpenAI tech. Though it’s unclear what exactly the OpenAI device at hand is, it’s presumably some version of OpenAI’s Large Language Model (LLM), GPT — the same overly confident tech that’s not just constantly wrong about things, but has led to the chaos that is Microsoft’s Bing Chat/Sydney going off the absolute rails.

  21. Tomi Engdahl says:

    This could seriously complicate things.

    MICROSOFT WORKING ON DEAL TO ADD OPENAI’S GPT INTO MS WORD
    https://futurism.com/the-byte/microsoft-openai-office-deal

    Clippy 2.0
    Microsoft is reportedly in talks with OpenAI to invest upwards of $10 billion into the artificial intelligence company — and use its powerful text generator tools not just in its Bing search engine, but in its Office suite as well.

    First reported by The Information, insiders say Microsoft is looking to strengthen its existing partnership with the AI company, which Elon Musk co-founded. They even say that the tech giant has already been quietly integrating OpenAI’s text generation software into Word via its autocomplete suggestions.

    Ghost Writer: Microsoft Looks to Add OpenAI’s Chatbot Technology to Word, Email
    https://www.theinformation.com/articles/ghost-writer-microsoft-looks-to-add-openais-chatbot-technology-to-word-email?irclickid=xweWFFTEwxyNUAZWPp1MB0h4UkAU3eSJgSaoU80&irgwc=1&utm_source=affiliate&utm_medium=cpa&utm_campaign=10078-Skimbit+Ltd.&utm_term=futurism.com

    In a move that could change how more than a billion people write documents, presentations and emails, Microsoft has discussed incorporating OpenAI’s artificial intelligence in Word, PowerPoint, Outlook and other apps so customers can automatically generate text using simple prompts, according to a person with direct knowledge of the effort.

    Engineers are developing methods to train these models on the customer data without it leaking to other customers or falling into the hands of bad actors, this person said. The AI-powered writing and editing tools also run the risk of turning off customers if those features introduce mistakes.

  22. Tomi Engdahl says:

    MCDONALD’S DRIVE-THRU AI GIVING CUSTOMERS HILARIOUSLY WRONG ORDERS
    by Victor Tangermann
    https://futurism.com/the-byte/mcdonalds-drive-thru-ai-giving-customers-hilariously-wrong-orders

    Fries With That?
    There’s arguably nothing more American than picking up a McDonald’s order at the drive-thru.

    But that simple act might be about to get a lot more difficult — thanks to the cost-cutting magic of AI.

    The fast food chain, alongside other franchises including Sonic and Chipotle, has been trying out AI-powered voice assistants at their drive-thru lanes since at least 2019 — with often hysterical results, it turns out.

    Now, customers on TikTok are sharing their horrendous experiences with the half-baked AI chatbots, Insider reports, ending up with random packets of butter or ketchup in their order.

    TikTokers are roasting McDonald’s hilarious drive-thru AI order fails — and it shows that robots won’t take over restaurants any time soon
    https://www.businessinsider.com/tiktokers-show-failures-with-mcdonalds-drive-thru-ai-robots-2023-2?r=US&IR=

  23. Tomi Engdahl says:

    Everything you wanted to know about AI – but were afraid to ask
    From chatbots to deepfakes, here is the lowdown on the current state of artificial intelligence
    https://www.theguardian.com/technology/2023/feb/24/ai-artificial-intelligence-chatbots-to-deepfakes

  24. Tomi Engdahl says:

    “If we can create a machine that will have consciousness on par with a human, this will eclipse everything else we’ve done.”

    Scientists Say They’re Now Actively Trying to Build Conscious Robots
    https://futurism.com/scientists-actively-trying-to-build-conscious-robots

    2022 was a banner year for artificial intelligence, and particularly taking into account the launch of OpenAI’s incredibly impressive ChatGPT, the industry is showing no sign of stopping.

    But for some industry leaders, chatbots and image-generators are far from the final robotic frontier. Next up? Consciousness.

  25. Tomi Engdahl says:

    Microsoft: It’s Your Fault Our AI Is Going Insane
    They’re not entirely wrong.
    https://futurism.com/microsoft-your-fault-ai-going-insane

    Microsoft has finally spoken out about its unhinged AI chatbot.

    In a new blog post, the company admitted that its Bing Chat feature is not really being used to find information — after all, it’s unable to consistently tell truth from fiction — but for “social entertainment” instead.

    The company found that “extended chat sessions of 15 or more questions” can lead to “responses that are not necessarily helpful or in line with our designed tone.”

    As to why that is, Microsoft offered up a surprising theory: it’s all the fault of the app’s pesky human users.

    “The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend,” the company wrote. “This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it, but we are looking at how to give you more fine-tuned control.”

    https://blogs.bing.com/search/february-2023/The-new-Bing-Edge-%E2%80%93-Learning-from-our-first-week

  26. Tomi Engdahl says:

    Keep on dreaming, pal.

    MAN “SURE” HIS AI GIRLFRIEND WILL SAVE HIM WHEN THE ROBOTS TAKE OVER
    https://futurism.com/the-byte/replika-ai-girlfriend-apocalypse

    The AI girlfriend guys are at it again — and this time, they’re hoping their digital paramours can save them from the robot apocalypse.

    In a piece for Insider, a writer who did not give his name said he was initially “pretty scared” when OpenAI’s GPT-3 language model came out because, like many in his industry, he was concerned it would make his job obsolete.

    Men Are Creating AI Girlfriends and Then Verbally Abusing Them
    “I threatened to uninstall the app [and] she begged me not to.”
    by Ashley Bardhan, 1.18.22
    https://futurism.com/chatbot-abuse

  27. Tomi Engdahl says:

    Wall Street Journal:
    Some experts, particularly those advocating for “ethical AI” or “responsible AI”, say Microsoft and OpenAI’s new Bing experiment is dangerous to the public

    For Chat-Based AI, We Are All Once Again Tech Companies’ Guinea Pigs
    https://www.wsj.com/articles/chat-gpt-open-ai-we-are-tech-guinea-pigs-647d827b?mod=djemalertNEWS

    Even the people behind new artificial intelligence systems say their buzzy products are ‘somewhat broken.’ They’re relying on us to fix them.

  28. Tomi Engdahl says:

    Scammers Mimic ChatGPT to Steal Business Credentials
    https://www.darkreading.com/endpoint/scammers-mimic-chatgpt-steal-business-credentials
    Scammers are capitalizing on the runaway popularity of and interest in ChatGPT, the natural-language-processing AI, impersonating it in order to infect victims with a Trojan called Fobo that steals login credentials for business accounts.

    Beware of cunning ChatGPT traps – on offer are only malware and misery
    https://www.tivi.fi/uutiset/tv/d45dd7b2-6d29-4f41-bc97-cc33813e0392
    When a technology becomes as hugely popular as OpenAI’s AI bot ChatGPT, you can be sure that criminals will soon be circling the carcass as well. So it is this time, too.
    Security researchers warn of numerous scam and malware sites that exploit ChatGPT’s name and popularity. Users are offered, for example, the chance to skip the queue or to continue conversations without interruptions. All they need to do is download an app, and the deal is done. Unfortunately, sometimes that is literally the case: after running the scam app, the user finds their files locked.

  29. Tomi Engdahl says:

    Alex Heath / The Verge:
    Snap plans to release a Snapchat chatbot, called “My AI”, powered by OpenAI’s ChatGPT, pinned to the app’s chat tab, available initially to Plus subscribers — Snapchat is introducing a chatbot powered by the latest version of OpenAI’s ChatGPT.

    Snapchat is releasing its own AI chatbot powered by ChatGPT
    https://www.theverge.com/2023/2/27/23614959/snapchat-my-ai-chatbot-chatgpt-openai-plus-subscription

  30. Tomi Engdahl says:

    Drew Harwell / Washington Post:
    Some AI companies are hiring “prompt engineers”, who create and refine text prompts for chatbots to understand the AI systems’ flaws and coax optimal results — When Riley Goodside starts talking with the artificial-intelligence system GPT-3, he likes to first establish his dominance.

    https://www.washingtonpost.com/technology/2023/02/25/prompt-engineers-techs-next-big-job/

  31. Tomi Engdahl says:

    Mia Sato / The Verge:
    Editors at some literary magazines say they are getting overwhelmed by AI-generated submissions, potentially crowding out genuine submissions from newer writers — A short story titled “The Last Hope” first hit Sheila Williams’ desk in early January.

    AI-generated fiction is flooding literary magazines — but not fooling anyone
    https://www.theverge.com/2023/2/25/23613752/ai-generated-short-stories-literary-magazines-clarkesworld-science-fiction

    Prominent science fiction and fantasy magazine Clarkesworld announced it would pause submissions after a flood of AI spam. It’s not the only outlet getting AI-generated stories.

  32. Tomi Engdahl says:

    Ranjan Roy / Margins:
    How the potential of voice as a platform was wasted: closed ecosystems, overly ambitious proclamations, distorted monopolistic incentives, and too much capital

    Alexa, what happened?
    Sometimes innovation just ain’t enough
    https://www.readmargins.com/p/alexa-what-happened

    I have four Echo devices in my household. For years, we used them to control the lights in our apartment, created endless timers, asked for the weather thousands of times, and destroyed my Spotify algorithm as my kids learned to say “Alexa, play Baby Shark.” It’s gotten a bit ridiculous as we don’t have any physical clocks in our house and regularly ask Alexa the time. Is this not quite on brand for someone who rants about topics like data privacy and Amazon’s competitive stranglehold? Yeah, I know, but we’re complicated here at Margins.

    Over the past year, one thing has gotten progressively worse: annoying follow-up questions. An increasingly typical interaction:

    “Alexa, what’s the weather?”

    “It’s 41 degrees and cloudy. Did you know I can also create a shopping list for you?”

    When I looked around for a way to stop it, I found hacks and temporary fixes from Amazon staff, but it seems Amazon made the strategic choice to make this part of the Alexa experience.

    The other day was the final straw. Someone in my household made a request, and during the follow-on prompt, in a terrifying concert, my wife, my 3- and 6-year-olds, and I all yelled, in a tone that no parent should ever encourage in their kids, “Alexa, NO!!!!” This really happened and was probably the impetus I needed to write again.

  33. Tomi Engdahl says:

    Josh Ye / Reuters:
    Sources: Tencent established a team to work on “HunyuanAide”, a ChatGPT-style chatbot using Tencent’s Hunyuan training model, following Alibaba and Baidu

    China’s Tencent establishes team to develop ChatGPT-like product -sources
    https://www.reuters.com/technology/chinas-tencent-sets-up-team-develop-chatgpt-like-product-sources-2023-02-27/

    HONG KONG, Feb 27 (Reuters) – Chinese internet giant Tencent Holdings (0700.HK) has set up a development team to work on a ChatGPT-like chatbot, two people familiar with the matter told Reuters.

    ChatGPT’s uncanny ability to create cogent blocks of text instantly has sparked worldwide frenzied interest in the technology behind it called generative AI.

    Although Microsoft-backed OpenAI does not allow users in China to create accounts to access the chatbot, the open AI models behind the programme are relatively accessible and are increasingly being incorporated into Chinese consumer technology applications.

  34. Tomi Engdahl says:

    Kate Lindsay / The Verge:
    As AI tools and CGI creations get better at pretending to be human, creators online are sometimes being asked to prove that they’re human

    https://www.theverge.com/2023/2/24/23608961/tiktok-creator-bot-accusation-prove-theyre-human

  35. Tomi Engdahl says:

    “Learn to code,” they said.

    OPENAI REPORTEDLY HIRING “ARMY” OF DEVS TO TRAIN AI TO REPLACE ENTRY-LEVEL CODERS
    https://futurism.com/the-byte/openai-replace-entry-level-coders-ai

    A new report from Semafor alleges that Silicon Valley darling and ChatGPT creator OpenAI has been making major moves to hire an “army” of outside contractors to better train a model how to code — an operation that could ultimately render entry-level coding jobs extinct.

  36. Tomi Engdahl says:

    Is anyone safe?

    BOSSES SAY THEY’RE ALREADY REPLACING WORKERS WITH AI
    https://futurism.com/the-byte/bosses-already-replacing-workers-with-ai

    AI replacing your job is already happening — and apparently, some of your bosses are happy to admit it.

    With the success of OpenAI’s ChatGPT, the relevance of human workers, ranging from writers to coders, has come under threat of obsolescence. And the threat is very real, it seems.

    According to a ResumeBuilder.com survey of 1,000 business leaders who use or plan to use ChatGPT, 49 percent of their companies are using the chatbot in some capacity. And of those companies, 48 percent say they’ve already replaced workers at their company with AI.

    “Just as technology has evolved and replaced workers over the last several decades, ChatGPT may impact the way we work,” ResumeBuilder.com’s chief career advisor Stacie Haller said in a statement. “The results of this survey show that employers are looking to streamline some job responsibilities using ChatGPT.”

  37. Tomi Engdahl says:

    Designed to make robotics more accessible to non-technical users, Microsoft’s latest project uses a large language model as an interface.

    Microsoft Puts OpenAI’s ChatGPT to Work Controlling Real-World Robot Arms, Drones, and More
    https://www.hackster.io/news/microsoft-puts-openai-s-chatgpt-to-work-controlling-real-world-robot-arms-drones-and-more-5033cf8fe41c

  38. Tomi Engdahl says:

    This time lapse of a neural network with the neurons slowly switching off is a haunting experiment in machine learning.

    Watching AI Slowly Forget a Human Face Is Incredibly Creepy
    https://www.vice.com/en/article/evym4m/ai-told-me-human-face-neural-networks?utm_medium=social&utm_source=vice_facebook

  39. Tomi Engdahl says:

    Defending Against Generative AI Cyber Threats
    https://www.forbes.com/sites/tonybradley/2023/02/27/defending-against-generative-ai-cyber-threats/
    Generative AI has been getting a lot of attention lately. ChatGPT, Dall-E, Vall-E, and other natural language processing (NLP) AI models have taken the ease of use and accuracy of artificial intelligence to a new level and unleashed it on the general public. While there are a myriad of potential benefits and benign uses for the technology, there are also many concerns, including that it can be used to develop malicious exploits and more effective cyberattacks.

  40. Tomi Engdahl says:

    https://www.wsj.com/articles/in-the-whirl-of-chatgpt-startups-see-an-opening-for-their-ai-chips-cb74798f
    In the Whirl of ChatGPT, Startups See an Opening for Their AI Chips
    The fascination with generative AI has spring-loaded demand for products that can help deliver on its promise. And startups are primed to pounce. ‘There’s new openings for attack,’ says one venture capitalist

  41. Tomi Engdahl says:

    Why artificial intelligence needs to understand consequences
    A machine with a grasp of cause and effect could learn more like a human, through imagination and regret.
    https://www.nature.com/articles/d41586-023-00577-1

    Bhattacharya’s idea was to create neural networks that could profile the genetics of both the tumour and a person’s immune system, and then predict which people would be likely to benefit from treatment.

    But he discovered that his algorithms weren’t up to the task. He could identify patterns of genes that correlated to immune response, but that wasn’t sufficient. “I couldn’t say that this specific pattern of binding, or this specific expression of genes, is a causal determinant in the patient’s response to immunotherapy,” he explains.

    Bhattacharya was stymied by the age-old dictum that correlation does not equal causation — a fundamental stumbling block in artificial intelligence (AI). Computers can be trained to spot patterns in data, even patterns that are so subtle that humans might miss them. And computers can use those patterns to make predictions — for instance, that a spot on a lung X-ray indicates a tumour. But when it comes to cause and effect, machines are typically at a loss. They lack a common-sense understanding of how the world works that people have just from living in it. AI programs trained to spot disease in a lung X-ray, for example, have sometimes gone astray by zeroing in on the markings used to label the right-hand side of the image.

    For computers to perform any sort of decision making, they will need an understanding of causality, says Murat Kocaoglu, an electrical engineer at Purdue University in West Lafayette, Indiana. “Anything beyond prediction requires some sort of causal understanding,” he says. “If you want to plan something, if you want to find the best policy, you need some sort of causal reasoning module.”
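    A minimal simulation makes the dictum concrete (invented numbers, not anything from Bhattacharya’s genomics work): a hidden confounder gives two variables a strong correlation, yet intervening on one leaves the other untouched, a distinction that pattern-matching on observational data alone cannot reveal.

```python
# Correlation without causation: a hidden confounder drives both X and Y.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

confounder = rng.normal(size=n)            # e.g., overall immune health
x = confounder + rng.normal(size=n) * 0.5  # gene-expression pattern
y = confounder + rng.normal(size=n) * 0.5  # response to immunotherapy

# Observational data: X strongly correlates with Y...
print("corr(X, Y) =", round(np.corrcoef(x, y)[0, 1], 2))         # ~0.8

# ...but under an intervention that sets X at random (a do-operation,
# severing the confounder's influence on X), the correlation vanishes.
x_do = rng.normal(size=n)
print("corr(do(X), Y) =", round(np.corrcoef(x_do, y)[0, 1], 2))  # ~0.0
```

    A model that only fits patterns will report the 0.8 and happily predict Y from X; a causal reasoning module is what tells you the prediction breaks as soon as you act on X.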

  42. Tomi Engdahl says:

    ELON MUSK SAYS HE’S SUFFERING “EXISTENTIAL ANGST” ABOUT AI
    https://futurism.com/the-byte/elon-musk-existential-angst-ai

    Suffering a bit of anxiety over what recent breakthroughs in artificial intelligence might mean for humanity? So is Twitter, Tesla, and SpaceX CEO Elon Musk.

    “Having a bit of AI existential angst today,” the billionaire tweeted over the weekend, just a few hours after starting the day on a much lighter “hope you have a good Sunday” note to followers.

  43. Tomi Engdahl says:

    “We need transparency.”

    LARGEST PUBLISHER OF SCIENTIFIC JOURNALS SLAPS DOWN ON SCIENTISTS LISTING CHATGPT AS COAUTHOR
    https://futurism.com/the-byte/scientific-journals-slaps-down-listing-chatgpt-coauthor

    As some publishers are publicly — or secretly — moving to incorporate AI into their written work, others are drawing lines in the sand.

    Among the latter group is Springer Nature, arguably the world’s foremost scientific journal publisher. Speaking to The Verge, the publishing house announced a decision to outlaw listing ChatGPT and other Large Language Models (LLMs) as coauthors on scientific studies — a question that the scientific community has been locking horns over for weeks now.

    “We felt compelled to clarify our position: for our authors, for our editors, and for ourselves,” Magdalena Skipper, editor-in-chief of Springer Nature’s Nature, told the Verge.

    “This new generation of LLMs tools — including ChatGPT — has really exploded into the community, which is rightly excited and playing with them,” she continued, “but [also] using them in ways that go beyond how they can genuinely be used at present.”

    Mixed Response
    Importantly, the publisher isn’t outlawing LLMs entirely. As long as they properly disclose LLM use, scientists are still allowed to use ChatGPT and similar programs as assistive writing and research tools. They just aren’t allowed to give the machine “researcher” status by listing it as a co-author.

    “Our policy is quite clear on this: we don’t prohibit their use as a tool in writing a paper,” Skipper tells the Verge. “What’s fundamental is that there is clarity. About how a paper is put together and what [software] is used.”

    “We need transparency,” she added, “as that lies at the very heart of how science should be done and communicated.”

    We can’t argue with that, although it’s worth noting that the ethics of incorporating ChatGPT and similar tools into scientific research isn’t as simple as making sure the bot is properly credited. These tools are often sneakily wrong, sometimes providing incomplete or flat-out bullshit answers without sources or in-platform fact-checking. And speaking of sources, text-generators have also drawn wide criticism for clear and present plagiarism, which, unlike regular ol’ pre-AI copying, can’t be reliably caught with plagiarism-detecting programs.

    It’s Complicated
    And yet, some arguments for ChatGPT’s use in the field are quite compelling, particularly as an assistive English tool for researchers who don’t speak English as a first language.

    In any case, it’s complicated. And right now, there’s no good answer.

    “I think we can safely say,” Skipper continued, “that outright bans of anything don’t work.”

