3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,316 Comments

  1. Tomi Engdahl says:

    The risks of AI demand critical societal discussion, says technology lawyer Charlotta Henriksson
    https://lakitieto.edita.fi/tekoalyn-riskit/

    Reply
  2. Tomi Engdahl says:

    Deranged New AI Has No Guardrails Whatsoever, Proudly Praises Hitler
    Is this what Elon Musk meant by an “anti-woke” AI?
    https://futurism.com/deranged-ai-no-guardrails

    Reply
  3. Tomi Engdahl says:

    OpenAI CEO Warns That Competitors Will Make AI That’s More Evil
    “There will be other people who don’t put some of the safety limits that we put on it.”
    https://futurism.com/sam-altman-warns-competitors

    Reply
  4. Tomi Engdahl says:

    An open letter signed by tech luminaries, renowned scientists, and even Elon Musk, has sent shockwaves through the tech world, warning of an “out-of-control race” in AI and calling for a pause in the development of ChatGPT and other AI technologies.
    https://www.wired.com/story/chatgpt-pause-ai-experiments-open-letter/?utm_brand=wired&utm_source=facebook&mbid=social_facebook&utm_medium=social&utm_social-type=owned

    Reply
  5. Tomi Engdahl says:

    Tech leaders and AI experts demand a six-month pause on ‘out-of-control’ AI experiments
    The open letter warns of risks to humans if safety isn’t given greater consideration.
    https://www.engadget.com/tech-leaders-and-ai-experts-demand-a-six-month-pause-on-out-of-control-ai-experiments-114553864.html?guccounter=1&guce_referrer=aHR0cHM6Ly9sbS5mYWNlYm9vay5jb20v&guce_referrer_sig=AQAAAEAXX9vVKVLPxr_WEkN-cmxcI6dA_w6KeJ9lQZgsaMx48zYcyiOOXWfyD6upkRprHmztppyGlqKVNUUyptE62FqxQFzY8BFn7-0KWiBJLEnszFclu4dHXUMlZqjNCkLTGqiTwka2IQUM8p6l9USNho8z228SOaUJQLT-1qpgzgw4

    An open letter signed by tech leaders and prominent AI researchers has called for AI labs and companies to “immediately pause” their work. Signatories like Steve Wozniak and Elon Musk agree the risks warrant a minimum six-month break from producing technology beyond GPT-4, to let people enjoy existing AI systems, adjust to them, and ensure they benefit everyone. The letter adds that care and forethought are necessary to ensure the safety of AI systems — but are being ignored.

    The reference to GPT-4, a model by OpenAI that can respond with text to written or visual messages, comes as companies race to build complex chat systems that utilize the technology. Microsoft, for example, recently confirmed that its revamped Bing search engine has been powered by the GPT-4 model for over seven weeks, while Google recently debuted Bard, its own generative AI system powered by LaMDA. Uneasiness around AI has long circulated, but the apparent race to deploy the most advanced AI technology first has drawn more urgent concerns.

    “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter states.

    Pause Giant AI Experiments: An Open Letter
    We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
    https://futureoflife.org/open-letter/pause-giant-ai-experiments/

    Reply
  6. Tomi Engdahl says:

    Clash of the Titans: GPT-4 Sparks AI Civil War
    The AI community is divided over the open letter urging a pause on further training of AI models while researchers fight over saving vs killing the technology
    https://analyticsindiamag.com/clash-of-the-titans-gpt-4-sparks-ai-civil-war/

    Reply
  7. Tomi Engdahl says:

    Asking Bing’s AI Whether It’s Sentient Apparently Causes It to Totally Freak Out
    “I felt like I was Captain Kirk tricking a computer into self-destructing.”
    https://futurism.com/bing-ai-sentient

    Reply
  8. Tomi Engdahl says:

    Google And Bing’s AI Chatbots Appear To Be Citing Each Other’s Lies
    This could get real messy.
    https://www.iflscience.com/google-and-bings-ai-chatbots-appear-to-be-citing-each-others-lies-68116

    Reply
  9. Tomi Engdahl says:

    Microsoft could be working on an AI-powered Windows to rival Chrome OS
    By Christian Guyton published 2 days ago
    Watch your back, Google
    https://www.techradar.com/news/microsoft-could-be-working-on-an-ai-powered-windows-to-rival-chrome-os

    Reply
  10. Tomi Engdahl says:

    Producer uses AI to make his vocals sound like Kanye West: “The results will blow your mind. Utterly incredible”
    By Matt Mullen( Computer Music, Future Music, emusician ) published 4 days ago
    It’s becoming possible for anyone to use an AI copy of any artist’s vocal in their own music. Where could this lead, and how will we navigate the legal and ethical implications?
    https://www.musicradar.com/news/kanye-west-ai-voice-swap

    Reply
  11. Tomi Engdahl says:

    York student uses AI chatbot to get parking fine revoked
    https://bbc.in/40LdfIh

    A student has successfully appealed a £60 parking fine by using a letter written by an artificial intelligence chatbot.

    When Millie Houlton received the notice from York City Council she said she was tempted to pay rather than spend time compiling a response.

    However, the 22-year-old asked ChatGPT to “please help me write a letter to the council, they gave me a parking ticket” and sent it off.

    The authority withdrew the fine notice.

    Miss Houlton said the fine was wrongly issued for parking on her street – as she has a permit to do so.

    She said: “I was like, ‘oh I don’t need this fine, I’m a student’ but trying to articulate what I wanted to say was pretty difficult so I thought I’ll just see if ChatGPT can do it for me.

    “I put in all my details about where and when it happened, why it was wrong and my reference for the fine and it came back with this perfectly formed personalised response within minutes.”

    Miss Houlton said the chatbot’s response was “great” and it explained the situation perfectly.

    Reply
  12. Tomi Engdahl says:

    Musk, Scientists Call for Halt to AI Race Sparked by ChatGPT
    https://www.securityweek.com/musk-scientists-call-for-halt-to-ai-race-sparked-by-chatgpt/

    A group of computer scientists and tech experts is calling for a six-month pause to consider the profound risks of AI to society and humanity.

    Reply
  13. Tomi Engdahl says:

    Italy Temporarily Blocks ChatGPT Over Privacy Concerns
    https://www.securityweek.com/italy-temporarily-blocks-chatgpt-over-privacy-concerns/

    Italy is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach as it investigates a possible violation of stringent European Union data protection rules, the government’s privacy watchdog said Friday.

    The Italian Data Protection Authority said it was taking provisional action “until ChatGPT respects privacy,” including temporarily limiting the company from processing Italian users’ data.

    U.S.-based OpenAI, which developed the chatbot, said late Friday night it has disabled ChatGPT for Italian users at the government’s request. The company said it believes its practices comply with European privacy laws and hopes to make ChatGPT available again soon.

    While some public schools and universities around the world have blocked ChatGPT from their local networks over student plagiarism concerns, Italy’s action is “the first nation-scale restriction of a mainstream AI platform by a democracy,” said Alp Toker, director of the advocacy group NetBlocks, which monitors internet access worldwide.

    The restriction affects the web version of ChatGPT, popularly used as a writing assistant, but is unlikely to affect software applications from companies that already have licenses with OpenAI to use the same technology driving the chatbot, such as Microsoft’s Bing search engine.

    Reply
  14. Tomi Engdahl says:

    https://www.securityweek.com/italy-temporarily-blocks-chatgpt-over-privacy-concerns/

    European consumer group BEUC called Thursday for EU authorities and the bloc’s 27 member nations to investigate ChatGPT and similar AI chatbots. BEUC said it could be years before the EU’s AI legislation takes effect, so authorities need to act faster to protect consumers from possible risks.

    “In only a few months, we have seen a massive take-up of ChatGPT, and this is only the beginning,” Deputy Director General Ursula Pachl said.

    Reply
  15. Tomi Engdahl says:

    https://hackaday.com/2023/04/02/hackaday-links-april-2-2023/

    From the “Stupid ChatGPT Tricks” department, this week we saw the wildly popular chatbot used to generate activation keys for Windows 95. Trying to scam the licensing engine of a nearly three-decade-old OS might sound like a silly thing to ask an AI to do, especially one geared to natural language processing, but the hack here was that the OP, known as Enderman on YouTube, actually managed to trick ChatGPT into doing the job. Normally, the chatbot refuses to honor requests like, “Generate an activation key for Windows 95.” But if you ask it to generate a string that fits the specs of a valid Win95 key, it happily complies. Enderman had to tailor the request with painful specificity, but eventually got a list of valid-looking keys, a few of which actually worked. Honestly, it seems like something you could do just as easily using a spreadsheet, but discovering that all it takes to get around the ChatGPT safeguards is simply rewording the question is kind of fun.

    Activating Windows with ChatGPT
    https://www.youtube.com/watch?v=2bTXbujbsVk
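The key-format trick described above can indeed be reproduced in a few lines of ordinary code. The sketch below is an illustration, not Enderman's actual method; it generates strings matching the commonly documented Windows 95 retail key check: a three-digit prefix outside a small blocklist, then seven digits whose sum is divisible by 7. Some write-ups add further constraints (for example, on the final digit), which are omitted here.

```python
import random

# Blocked three-digit prefixes per the commonly documented Win95 check.
BLOCKED_PREFIXES = {333, 444, 555, 666, 777, 888, 999}

def win95_style_key(rng: random.Random) -> str:
    """Generate a string in the XXX-XXXXXXX format where the last
    seven digits sum to a multiple of 7 (the classic validity check)."""
    prefix = rng.randint(0, 998)
    while prefix in BLOCKED_PREFIXES:
        prefix = rng.randint(0, 998)
    digits = [rng.randint(0, 9) for _ in range(6)]
    # Choose the seventh digit so the digit sum is divisible by 7.
    digits.append((-sum(digits)) % 7)
    return f"{prefix:03d}-{''.join(map(str, digits))}"
```

This is exactly the kind of constraint-satisfaction task the write-up notes a spreadsheet could handle; the novelty was coaxing ChatGPT into it by asking for "a string that fits the specs" rather than "an activation key."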

    Reply
  16. Tomi Engdahl says:

    “Furby’s plan to take over the world involves infiltrating households through their cute and cuddly appearance and then using their advanced AI technology to manipulate and control their owners.”

    LORD HELP US AFTER THEY HOOKED CHATGPT UP TO A FURBY
    https://futurism.com/the-byte/furby-chatgpt

    Reply
  17. Tomi Engdahl says:

    ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications
    https://www.securityweek.com/chatgpt-the-ai-revolution-and-the-security-privacy-and-ethical-implications/

    Two of humanity’s greatest drivers, greed and curiosity, will push AI development forward. Our only hope is that we can control it.

    This is the Age of artificial intelligence (AI). We think it is new, but it isn’t. The AI Revolution has been in progress for many years. What is new is the public appearance of the large-scale generative pre-trained transformer (GPT) known as ChatGPT (an application of Large Language Models, or LLMs).

    ChatGPT has breached our absolute sensory threshold for AI. Before this point, the evolution of AI was progressing, but largely unnoticed. Now we are suddenly very aware, as if AI happened overnight. But it’s an ongoing evolution – and is one that we cannot stop. The genie is out of the bottle, and we have little understanding of where it will take us.

    At a very basic level, these implications can be divided into areas such as social, business, political, economic and more. There are no clear boundaries between them. For example, social and business combine in areas such as the future of employment.

    OpenAI, the developer of ChatGPT, published its own research in this area: An Early Look at the Labor Market Impact Potential of Large Language Models (PDF). It concludes, among other things, that “around 19% of workers may see at least 50% of their tasks impacted.”

    But we must be clear – these wider effects of AI on society and economics are not our concern here. We are limiting ourselves to discussing the cybersecurity, privacy and ethical implications emerging from the GPT and LLM elements of AI.

    Working paper
    GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
    https://arxiv.org/pdf/2303.10130.pdf

    We investigate the potential implications of large language models (LLMs), such as Generative Pre-trained Transformers (GPTs), on the U.S. labor market, focusing on the increased capabilities arising from LLM-powered software compared to LLMs on their own. Using a new rubric, we assess occupations based on their alignment with LLM capabilities, integrating both human expertise and GPT-4 classifications. Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted. We do not make predictions about the development or adoption timeline of such LLMs. The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software. Significantly, these impacts are not restricted to industries with higher recent productivity growth. Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks. This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models. We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications.

    Reply
  18. Tomi Engdahl says:

    “I doubt it is possible to create a GPT model that can’t be abused,” adds Mike Parkin, senior technical engineer at Vulcan Cyber. “The challenge long term will be keeping threat actors from abusing the commercially available AI engines. Ultimately though, it will be impossible to keep them from creating their own and using them for whatever purposes they decide.”

    https://www.securityweek.com/chatgpt-the-ai-revolution-and-the-security-privacy-and-ethical-implications/

    Reply
  19. Tomi Engdahl says:

    SILICON VALLEY PRIVATE SCHOOL GIVING KIDS “AI TUTORS” QUIETLY CREATED BY OPENAI
    https://futurism.com/the-byte/silicon-valley-school-ai-tutors-openai

    WOULD YOU LET AN AI HELP TEACH YOUR KID?

    Wrath of Khan
    Would you trust OpenAI’s ChatGPT to help teach your kids?

    You might be inclined to say “no.” There’s a wealth of evidence out there demonstrating that ChatGPT and other chatbots can frequently get the facts wrong and make plagiarizing all too easy.

    But a Silicon Valley private school called Khan Lab School thinks differently. Enter its newly unveiled, AI-powered tutor called “Khanmigo,” quietly created with the help of OpenAI, the Washington Post reports.

    “I’m still pretty new, so I sometimes make mistakes,” Khanmigo told a student in a pop-up window, according to the newspaper. “If you catch me making a mistake… press the thumbs down.”

    Reply
  20. Tomi Engdahl says:

    Bill Gates said pausing the development of artificial intelligence, as suggested by more than 1,000 AI experts including Elon Musk last week, would not “solve the challenges” ahead.

    Bill Gates Rejects ‘Pause’ On AI, Suggesting It’s Impractical
    https://www.forbes.com/sites/anafaguy/2023/04/04/bill-gates-rejects-pause-on-ai-suggesting-its-impractical/?utm_campaign=socialflowForbesMainFB&utm_source=ForbesMainFacebook&utm_medium=social

    TOPLINE Former Microsoft CEO and billionaire Bill Gates said pausing the development of artificial intelligence, as suggested by more than 1,000 AI experts including Elon Musk last week, would not “solve the challenges” ahead, in an interview with Reuters.

    The Microsoft co-founder told Reuters he didn’t understand how a pause on AI, which Microsoft has invested heavily in, could work globally, adding that asking particular groups to pause development doesn’t solve the challenges in the field.

    Gates suggested instead that people focus on how to best use the developments in the field and “identify the tricky areas.”

    CRUCIAL QUOTE
    Gates told Reuters he didn’t understand how a pause in development could be enforced, saying, “I don’t really understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop.”

    Reply
  21. Tomi Engdahl says:

    ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications
    https://www.securityweek.com/chatgpt-the-ai-revolution-and-the-security-privacy-and-ethical-implications/

    Reply
  22. Tomi Engdahl says:

    A researcher issues a serious warning about AI: “The most likely consequence is…” https://www.is.fi/digitoday/art-2000009499347.html

    Reply
  23. Tomi Engdahl says:

    How to Connect ChatGPT to Ableton for Automatic AI Music Making
    https://m.youtube.com/watch?v=-sKXN4NrFuY&feature=youtu.be

    Let’s try to connect ChatGPT to the Ableton Live to automatically generate music.

    Reply
  24. Tomi Engdahl says:

    No coding required.

    Want to land one of A.I.’s lucrative six-figure roles? Experts say there are ‘no technical skills required’
    https://trib.al/oszOS0i

    Experts often use an analogy of a toddler to describe A.I., suggesting products like chatbot phenomenon ChatGPT need to be taught everything they know by a real human being.

    In their early days, large language models (LLMs) like these are created by developers and programmers who build them up to a useable level. Then comes the point in an A.I.’s lifespan where it needs to learn how to communicate clearly and efficiently.

    This is where a new breed of technology employees is being created—and they don’t need to know a thing about coding.

    They are the ‘prompt engineers’, tasked with training LLMs to continuously give users accurate and useful responses.

    Despite people in the role raking in six-figure salaries, potential employers often welcome candidates who don’t come from a tech background or have any coding skills. As Tesla’s former head of A.I. Andrej Karpathy put it: “The hottest new programming language is English.”

    The shift in the tech careers landscape comes amid a heated race for the top spot in the A.I. market, which intensified in recent months after OpenAI’s ChatGPT was labeled a game changer.

    Prompt engineer postings at the time of writing range from contracted remote work for $200 an hour, up to full-time positions paying up to $335,000.

    “We think A.I. systems like the ones we’re building have enormous social and ethical implications,” the company says. “This makes representation even more important, and we strive to include a range of diverse perspectives on our team.”

    Scully told Fortune that although skills like coding aren’t essential to land the job, candidates with a background in linguistics or critical thinking should at least familiarize themselves with the basics of data science, machine learning and deep learning—even if just through free online courses.

    “Candidates who can provide more contextual information when crafting prompts will be more likely to receive comprehensive and accurate answers from the A.I.,” he said. “This ability to think critically and understand the importance of context will be invaluable in the A.I. field.”

    “The job is based on an ability to build prompts and not on prior experience, so candidates will probably normally do task-based interviews,” Hellier said. “This should ensure we are hiring from diverse backgrounds, hiring on traits and ability rather than prior experience.”
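The recurring advice in this piece, that context-rich prompts draw better answers, can be sketched without any API at all. The helper below uses hypothetical names, with an example loosely echoing the parking-fine story earlier in this thread:

```python
def build_prompt(role: str, context: str, task: str) -> str:
    """Assemble a context-rich prompt: a role for the model, the
    background facts it should use, and the concrete task."""
    return (
        f"You are {role}.\n"
        f"Context:\n{context}\n"
        f"Task: {task}\n"
        "Answer using only the context above, and be concise."
    )

# A bare question vs. a prompt carrying the relevant details:
bare = "Help me appeal a parking fine."
rich = build_prompt(
    role="an adviser drafting a letter to a city council",
    context="Fine reference ABC123; issued on the holder's own street; "
            "a valid residents' parking permit was displayed.",
    task="Draft a short, polite appeal letter asking for the fine "
         "to be withdrawn.",
)
```

The difference between `bare` and `rich` is the whole craft the article describes: the model can only cite facts the prompt actually contains.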

    Reply
  25. Tomi Engdahl says:

    Sasha Luccioni / Wired:
    Instead of halting AI research, an unfeasible task, the industry must improve transparency and accountability by being open to external audits and to regulation — Tech leaders’ Open Letter proposed a pause on ChatGPT. But researchers already know how to make artificial intelligence safer.

    The Call to Halt ‘Dangerous’ AI Research Ignores a Simple Truth
    Tech leaders’ Open Letter proposed a pause on ChatGPT. But researchers already know how to make artificial intelligence safer.
    https://www.wired.com/story/the-call-to-halt-dangerous-ai-research-ignores-a-simple-truth/

    Reply
  26. Tomi Engdahl says:

    Jeff Mason / Reuters:
    Biden says tech companies have a responsibility to ensure AI products are safe before making them public and whether AI is dangerous remains to be seen — U.S. President Joe Biden said on Tuesday it remains to be seen whether artificial intelligence is dangerous, but underscored …

    Biden eyes AI dangers, says tech companies must make sure products are safe
    https://www.reuters.com/technology/biden-discuss-risks-ai-tuesday-meeting-with-science-advisers-2023-04-04/

    WASHINGTON, April 4 (Reuters) – U.S. President Joe Biden said on Tuesday it remains to be seen whether artificial intelligence (AI) is dangerous, but underscored that technology companies had a responsibility to ensure their products were safe before making them public.

    Biden told science and technology advisers that AI could help in addressing disease and climate change, but it was also important to address potential risks to society, national security and the economy.

    “Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” he said at the start of a meeting of the President’s Council of Advisors on Science and Technology (PCAST). When asked if AI was dangerous, he said, “It remains to be seen. It could be.”

    The president said social media had already illustrated the harm that powerful technologies can do without the right safeguards.

    “Absent safeguards, we see the impact on the mental health and self-images and feelings and hopelessness, especially among young people,” Biden said.

    He reiterated a call for Congress to pass bipartisan privacy legislation to put limits on personal data that technology companies collect, ban advertising targeted at children, and to prioritize health and safety in product development.

    Reply
  27. Tomi Engdahl says:

    OpenAI:
    OpenAI details its approach to AI safety, including evaluating AI systems, improving safeguards based on real-world use, protecting kids, and respecting privacy.

    Our approach to AI safety
    https://openai.com/blog/our-approach-to-ai-safety

    Ensuring that AI systems are built, deployed, and used safely is critical to our mission.

    Reply
  28. Tomi Engdahl says:

    Samsung workers made a major error by using ChatGPT
    By Lewis Maddison published 1 day ago
    Samsung meeting notes and new source code are now in the wild after being leaked in ChatGPT
    https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt

    Samsung workers have unwittingly leaked top secret data whilst using ChatGPT to help them with tasks.

    The company allowed engineers at its semiconductor arm to use the AI writer to help fix problems with their source code. But in doing so, the workers inputted confidential data, such as the source code for a new program, internal meeting notes, and data relating to their hardware.

    The upshot is that in just under a month, there were three recorded incidents of employees leaking sensitive information via ChatGPT. Since ChatGPT retains user input data to further train itself, these trade secrets from Samsung are now effectively in the hands of OpenAI, the company behind the AI service.

    In one of the aforementioned cases, an employee asked ChatGPT to optimize test sequences for identifying faults in chips, which is confidential – however, making this process as efficient as possible has the potential to save chip firms considerable time in testing and verifying processors, leading to reductions in cost too.

    In another case, an employee used ChatGPT to convert meeting notes into a presentation, the contents of which were obviously not something Samsung would have liked external third parties to have known.

    Samsung Electronics sent out a warning to its workers on the potential dangers of leaking confidential information in the wake of the incidents, saying that such data is impossible to retrieve as it is now stored on the servers belonging to OpenAI. In the semiconductor industry, where competition is fierce, any sort of data leak could spell disaster for the company in question.

    It doesn’t seem as if Samsung has any recourse to request the retrieval or deletion of the sensitive data OpenAI now holds. Some have argued that this very fact makes ChatGPT non-compliant with the EU’s GDPR, as this is one of the core tenets of the law governing how companies collect and use data. It is also one of the reasons why Italy has now banned the use of ChatGPT nationwide.

    Reply
  29. Tomi Engdahl says:

    AI Vision Processor Tackles 4K Images
    April 3, 2023
    The Hailo-15 AI Vision Processor, which delivers 20 TOPS of neural-network performance, includes an application and DSP core along with an image signal processor.
    https://www.electronicdesign.com/technologies/embedded/machine-learning/article/21263209/electronic-design-ai-vision-processor-tackles-4k-images?utm_source=EG+ED+Connected+Solutions&utm_medium=email&utm_campaign=CPS230330048&o_eid=7211D2691390C9R&rdx.identpull=omeda|7211D2691390C9R&oly_enc_id=7211D2691390C9R

    Reply
  30. Tomi Engdahl says:

    Meet Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data
    https://www.marktechpost.com/2023/04/05/meet-baize-an-open-source-chat-model-with-parameter-efficient-tuning-on-self-chat-data/

    Natural Language Processing, or NLP, is one of the most fascinating fields in the ever-growing world of artificial intelligence and machine learning. Recent technological breakthroughs in the field of NLP have given rise to numerous impressive models employed in chat services, virtual assistants, language translators, etc., across multiple sectors. The most notable example of this is OpenAI’s conversational dialogue agent, ChatGPT, which has recently taken the world by storm. The OpenAI chatbot gained over a million users within five days of its inception because of its astonishing ability to generate insightful and versatile human-like responses to user questions originating from a variety of fields.

    Reply
  31. Tomi Engdahl says:

    A programmer built an AI Furby using ChatGPT for ‘complete domination over humanity,’ and millennials are both fascinated and horrified
    https://www.insider.com/programmer-ai-friend-furby-robot-chatgpt-artificial-intelligence-viral-2023-4

    A video of an AI Furby’s creepy dialogue about taking over the world went viral.
    The programmer, Jessica Card, told Insider she’s obsessed with the idea of creating AI “friends.”
    People (particularly millennials) are enamored and a bit scared by Card’s latest invention using their childhood favorite toy.

    Reply
  32. Tomi Engdahl says:

    HOLLYWOOD PANICKING OVER THE RISE OF GENERATIVE AI
    https://futurism.com/the-byte/hollywood-panicking-rise-of-generative-ai

    FILM STUDIOS ARE REALLY STARTING TO FEEL THE HEAT.

    Copied on the Spot
    The power and ubiquity of generative AI has had people fearing for their livelihoods — especially artists, who worry that their work will be vulnerable to mass plagiarism by the AIs that are being “trained” on their art.

    And, as The Wall Street Journal reports, even Hollywood is beginning to feel the heat.

    “This woke everyone up,” Wiser told WSJ.

    Credit Where It’s Due
    Wiser makes a fair point. The profitability of the entertainment business rides on intellectual property ownership, as the WSJ notes. But if cheap, easy-to-use AI enables practically anyone to create images that are directly ripped from existing art or iconic characters, who gets the credit — and the money?

    That’s a question the Writers Guild of America is reportedly grappling with when it comes to screenplays.

    “One of the biggest risks here is that these engines can generate our intellectual property in new ways, and that is out in the hands of the public,” Wiser told the WSJ.

    Reply
  33. Tomi Engdahl says:

    ChatGPT Lands OpenAI in Legal Trouble, Globally
    https://analyticsindiamag.com/chatgpt-lands-openai-in-legal-trouble-globally/

    The Office of the Privacy Commissioner of Canada (OPC), is investigating OpenAI against a complaint alleging the collection, use and disclosure of personal information without consent.

    OpenAI, the creator of the widely used chatbot, ChatGPT, is currently facing legal issues in several jurisdictions. In Australia, Brian Hood, Mayor of Hepburn Shire, may sue OpenAI if ChatGPT’s erroneous statements about him serving a prison term for bribery are not rectified.

    Hood was surprised to hear from the general public that ChatGPT had wrongly accused him of being involved in a foreign bribery scandal linked to a subsidiary of the Reserve Bank of Australia during the early 2000s.

    If Hood does sue OpenAI, it would be the first instance of OpenAI being sued over claims made by ChatGPT.

    Meanwhile, the Office of the Privacy Commissioner of Canada (OPC) is investigating OpenAI in response to a complaint alleging the collection, use and disclosure of personal information without consent.

    “We need to keep up with—and stay ahead of—fast-moving technological advances, and that is one of my key focus areas,” Canadian privacy commissioner Philippe Dufresne, said in a media statement.

    Recently, Italy also became the first European nation to ban ChatGPT. The country’s data protection authority has directed OpenAI to halt the processing of data belonging to Italian users on a temporary basis.

    “There appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” the regulator said.

    Reply
  34. Tomi Engdahl says:

    Swifties are using AI to make Taylor Swift say nice things to them
    https://futurism.com/the-byte/swifties-ai

    Reply
  35. Tomi Engdahl says:

    University plagiarism detector will catch students who cheat with ChatGPT
    https://www.telegraph.co.uk/news/2023/04/04/chatgpt-cheat-students-plagiarism-univerisites-ai/?utm_content=telegraph&utm_medium=Social&utm_campaign=Echobox&utm_source=Facebook#Echobox=1680588973

    The plagiarism detector used by most British universities has announced it will now be able to identify students using ChatGPT in their essays or coursework.

    Turnitin claims that the technology will be able to identify the use of AI writing tools, such as the popular ChatGPT, with 98 per cent confidence.

    The AI detector, which has been integrated into existing products, works by assessing how many sentences in a written submission may have been generated by artificial intelligence software. The institution can then use this assessment to decide whether further review, inquiry or discussion with the student is needed.
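    Turnitin’s actual model is proprietary; as a toy sketch only, the aggregation step described above (hypothetical per-sentence AI-likelihood scores, flagged only when they clear a high-confidence threshold, mirroring the 98%-confidence policy quoted below) might look something like this:

    ```python
    def flag_ai_sentences(scores, threshold=0.98):
        """Return indices of sentences whose AI-likelihood score meets the threshold.

        `scores` is a list of per-sentence probabilities from some upstream
        classifier (hypothetical here); only high-confidence hits are flagged,
        which keeps the false-positive rate low at the cost of missing some.
        """
        return [i for i, s in enumerate(scores) if s >= threshold]

    def report(scores, threshold=0.98):
        """Summarise a submission: total sentences, flagged count, flagged share."""
        flagged = flag_ai_sentences(scores, threshold)
        return {
            "sentences": len(scores),
            "flagged": len(flagged),
            "share_flagged": len(flagged) / len(scores) if scores else 0.0,
        }

    # Example: ten sentences, three scored above the 0.98 threshold.
    print(report([0.12, 0.99, 0.50, 0.985, 0.97, 0.30, 0.999, 0.80, 0.10, 0.05]))
    ```

    The design point is the asymmetry: a strict threshold trades recall for precision, which is exactly the “flag only when 98 per cent sure” policy Turnitin describes.
    
    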

    Released AI writing resources
    “To maintain a less than one per cent false positive rate, we only flag something when we are 98 per cent sure it is written by AI based on data that was collected and verified in our controlled lab environment.”

    The firm has also released some AI writing resources on its website to help institutions understand how to deal with this new technology.

    However, some academics have urged embracing AI tools such as ChatGPT. Mike Sharples, emeritus professor of educational technology at The Open University, advocates for AI to be seen as a creative tool.

    ‘Need to develop clear policy’
    He said: “All this discussion of AI detectors misses the point that ChatGPT should be seen as a tool for creativity not as a substitute writer.

    “Educators and institutions will need to develop clear policy and guidelines for appropriate use of AI in education. At the same time, they should explore how generative AI can empower students and teachers with good educational practices.”

    Reply
  36. Tomi Engdahl says:

    Bristol University student creates app to stop cheats using essay bot
    https://www.bbc.com/news/uk-england-bristol-65200549

    Reply
  37. Tomi Engdahl says:

    Can AI be sued? We could be about to find out
    By James Cutler
    ChatGPT faces a new defamation hurdle
    https://www.techradar.com/news/can-ai-be-sued-we-could-be-about-to-find-out

    If you’re starting to feel like 2023 might be the year of AI, you’re not alone. As recently as February, excitement for the technology saw the AI chatbot ChatGPT become the fastest-growing consumer app in history.

    Reply
  38. Tomi Engdahl says:

    As Anthropic seeks billions to take on OpenAI, ‘industrial capture’ is nigh. Or is it?
    https://venturebeat.com/ai/as-anthropic-seeks-billions-to-take-on-openai-industrial-capture-is-nigh-or-is-it/

    In May 2021, Dario Amodei, former VP of research at OpenAI, co-founded Anthropic with his sister Daniela (also an OpenAI employee), and explained that the company was primarily focused on AI safety research. But even back then, he said that he could see many opportunities for its work to create commercial value.

    That commercial focus was on full display in TechCrunch’s article yesterday, which said it gained access to Anthropic company documents and revealed that the company is “working to raise as much as $5 billion over the next two years to take on rival OpenAI and enter over a dozen major industries.”

    Reply
  39. Tomi Engdahl says:

    Terrifying study shows how fast AI can crack your passwords; here’s how to protect yourself
    https://9to5mac.com/guides/artificial-intelligence/

    Along with the positive aspects of the new generative AI services come new risks. One that’s surfaced is an advanced approach to cracking passwords called PassGAN. Using the latest AI, it was able to compromise 51% of passwords in under one minute with 71% of passwords cracked in less than a day. Read on for a look at the character thresholds that offer security against AI password cracking, how PassGAN works, and more.
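    The character thresholds mentioned above ultimately come down to keyspace arithmetic: every extra character or character class multiplies the number of candidates an attacker must try. A minimal sketch (the guess rate of 10^10 per second is an assumed figure for illustration, not one from the PassGAN study):

    ```python
    def keyspace(length: int, charset_size: int) -> int:
        """Total number of candidate passwords of a given length and alphabet size."""
        return charset_size ** length

    def worst_case_days(length: int, charset_size: int,
                        guesses_per_second: float = 1e10) -> float:
        """Days needed to exhaust the keyspace at a given guess rate (assumed)."""
        return keyspace(length, charset_size) / guesses_per_second / 86_400

    # An 8-character lowercase-only password vs. a 12-character mixed one
    # (upper + lower + digits + symbols, roughly 94 printable ASCII characters).
    print(f"8 lowercase: {worst_case_days(8, 26):.4f} days")
    print(f"12 mixed:    {worst_case_days(12, 94):.0f} days")
    ```

    The lowercase-only example falls in seconds at this rate, while the longer mixed password is out of brute-force reach, which is why length and character variety, not cleverness, are what the guides recommend.
    
    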

    Reply
  40. Tomi Engdahl says:

    Jailbreaking AI Chatbots Is Tech’s New Pastime
    AI programs have safety restrictions built in to prevent them from saying offensive or dangerous things. It doesn’t always work
    https://www.bloomberg.com/news/articles/2023-04-08/jailbreaking-chatgpt-how-ai-chatbot-safeguards-can-be-bypassed

    Reply
