3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,174 Comments

  1. Tomi Engdahl says:

    Tools such as ChatGPT threaten transparent science; here are our ground rules for their use
    As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse.
    https://www.nature.com/articles/d41586-023-00191-1

    Reply
  2. Tomi Engdahl says:

    Amazon employees are already using ChatGPT for software coding. They also found the AI chatbot can answer tricky AWS customer questions and write cloud training materials.
    https://www.businessinsider.com/chatgpt-amazon-employees-use-ai-chatbot-software-coding-customer-questions-2023-1

    Reply
  3. Tomi Engdahl says:

    https://fb.watch/ik0eM-Astz/
    AI can visually translate film and TV into any language to enable stories to be told exactly as they were and weren’t intended.

    Or, the same movie can be watched with different ratings.

    Generative AI is touching every media and we are just watching the emergence of these products. 2023 is going to be a great year for creative AI.

    #entertainment #innovation #future #creativity #ai #inspiration #experience #socialmedia

    Flawless has created groundbreaking software that harnesses the power of generative AI to change filmed dialogue.

    TrueSync opens a new world of possibilities from fast and efficient AI reshoots to the creation of immersive, visual translations, in any language.

    #generativeai #hollywood #AIforGood

    https://fb.watch/ik0iP5B-Pn/

    Reply
  4. Tomi Engdahl says:

    News Site Admits AI Journalist Plagiarized and Made Stuff Up, Announces Plans to Continue Publishing Its Work Anyway
    You can’t make this stuff up.
    https://futurism.com/cnet-editor-in-chief-addresses-ai

    This morning, CNET editor-in-chief Connie Guglielmo broke the site’s lengthy silence on its decision to publish dozens of AI-generated articles about personal finance topics on its site.

    It appears to be the first time that anyone in the site’s leadership has addressed issues of rampant factual errors and apparent plagiarism in the AI’s published work, both first identified by Futurism.

    In a brief note, Guglielmo admitted that CNET had made certain “mistakes.” In her view, though, the blame for the plagiarism issues lies not with the AI but with the editor in charge of reviewing its work.

    “In a handful of stories, our plagiarism checker tool either wasn’t properly used by the editor or it failed to catch sentences or partial sentences that closely resembled the original language,” she wrote. “We’re developing additional ways to flag exact or similar matches to other published content identified by the AI tool, including automatic citations and external links for proprietary information such as data points or direct quote [sic].”

    “After one of the AI-assisted stories was cited, rightly, for factual errors, the CNET Money editorial team did a full audit,” she wrote. “We identified additional stories that required correction, with a small number requiring substantial correction and several stories with minor issues such as incomplete company names, transposed numbers or language that our senior editors viewed as vague.”

    Reply
  5. Tomi Engdahl says:

    Google has created a “chatbot” that can create music from written instructions, prompting warnings that it could be used to cheat in exams

    Listen: Google’s music-writing AI bot that ‘could trick exam setters’
    https://www.telegraph.co.uk/news/2023/01/27/listen-googles-music-writing-ai-bot-could-trick-exam-setters/?utm_content=telegraph&utm_medium=Social&utm_campaign=Echobox&utm_source=Facebook#Echobox=1674855404

    MusicLM can create compositions that follow precise written instructions, but experts scorn these ‘very generic’ efforts

    Reply
  6. Tomi Engdahl says:

    Can generative AI’s stimulating powers extend to productivity?
    ChatGPT and similar software could prove transformative but we should temper our optimism
    https://www.ft.com/content/1f08c2b7-9801-415d-93dd-72e724e5e2de

    Generative AI models, such as ChatGPT, will supposedly one day replace most humans at writing copy. In the meantime, though, humans are spending an awful lot of time writing about generative AI. Every day, announcements arrive boasting about how start-ups a, b and c are applying the technology to industry x, y and z. Global venture investment may have fallen 35 per cent to $415bn last year, but money is still gushing into hot, generative AI start-ups.
    For years, machine-learning researchers have been writing increasingly impressive algorithms, devouring vast amounts of data and massive computing power, enabling them to do increasingly impressive things: winning chess and Go matches against the strongest human players, translating between languages in real time and modelling protein structures, for example. But 2022 marked a breakout year for generative AI as the San Francisco-based research company OpenAI, and others, opened up the technology for ordinary users.

    Anyone with an internet connection can now experience the apparent magic by prompting Dall-E to generate an image of an astronaut riding a horse on the moon or ChatGPT to write a story about the lunar escapades of a horseriding astronaut.
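
    For anyone who wants to try the exact prompt the FT describes, here is a minimal sketch (not from the article) assuming the openai Python package, an OPENAI_API_KEY environment variable, and the Image endpoint OpenAI exposed for DALL-E at the time.

    ```python
    # Hedged sketch: generate the FT's example image with DALL-E via the openai
    # package. Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.Image.create(
        prompt="an astronaut riding a horse on the moon",
        n=1,                 # one image
        size="1024x1024",    # largest supported size
    )
    print(response["data"][0]["url"])  # URL of the generated picture
    ```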

    Reply
  7. Tomi Engdahl says:

    BUZZFEED ANNOUNCES PLANS TO USE OPENAI TO CHURN OUT CONTENT
    https://futurism.com/the-byte/buzzfeed-announces-openai-content

    Fresh off the heels of CNET being outed for using artificial intelligence to write articles, BuzzFeed has announced that its content machine will soon be assisted by ChatGPT creator OpenAI.

    As the Wall Street Journal reports, BuzzFeed CEO Jonah Peretti announced in a memo to staff today that moving forward, AI will become more central to the company’s content operations.

    Reply
  8. Tomi Engdahl says:

    Companies Already Have the Ability to Decode Your Brainwaves
    It won’t be long before your boss knows exactly what you’re thinking.
    https://www.popularmechanics.com/technology/a42626684/artificial-intelligence-can-decode-your-braninwaves/

    Artificial intelligence is now able to decode brain activity to unlock emotions, productivity, and even pre-conscious thought, according to a futurist.
    Wearable technology that’s already available is assisting the brainwave-monitoring technology.
    This new world of surveillance has multiple avenues for good or bad.

    Reply
  9. Tomi Engdahl says:

    WORLD’S THIRD RICHEST PERSON SAYS HE’S DEVELOPED “ADDICTION” TO CHATGPT
    https://futurism.com/the-byte/worlds-richest-addiction-chatgpt

    Gautam Adani, the world’s third richest person, is apparently hooked on OpenAI’s ChatGPT. He said so himself, in a post-Davos blog post on LinkedIn.

    ChatGPT “was the buzzword at this year’s event,” Adani wrote in the post, caveating that he “must admit to some addiction since I started using it.”

    Reply
  10. Tomi Engdahl says:

    How ChatGPT will change cybersecurity
    https://www.kaspersky.com/blog/chatgpt-cybersecurity/46959/

    A new generation of chatbots creates coherent, meaningful texts. This can help out both cybercriminals and cyberdefenders.

    Reply
  11. Tomi Engdahl says:

    MIT’s new AI can make holograms in real-time
    It’s efficient enough to run on smartphones — no supercomputers necessary.
    https://www.freethink.com/technology/make-holograms#Echobox=1674744714

    Reply
  12. Tomi Engdahl says:

    ‘Historical’ AI chatbots aren’t just inaccurate—they are dangerous
    Here’s why it’s so questionable to let AI chatbots impersonate people like Einstein and Gandhi.
    https://www.popsci.com/technology/historical-figures-app-chatgpt-ethics/

    Reply
  13. Tomi Engdahl says:

    OPENAI’S NEW SYSTEM FANTASIZES ABOUT ARTISTS AND WRITERS STARVING TO DEATH AFTER LOSING THEIR JOBS
    https://futurism.com/the-byte/openai-chatgpt-starving-artists

    Reply
  14. Tomi Engdahl says:

    Google created an AI that can generate music from text descriptions, but won’t release it
    https://techcrunch.com/2023/01/27/google-created-an-ai-that-can-generate-music-from-text-descriptions-but-wont-release-it/

    An impressive new AI system from Google can generate music in any genre given a text description. But the company, fearing the risks, has no immediate plans to release it.

    Called MusicLM, Google’s certainly isn’t the first generative AI system for song. There have been other attempts, including Riffusion, an AI that composes music by visualizing it, as well as Dance Diffusion, Google’s own AudioLM and OpenAI’s Jukebox. But owing to technical limitations and limited training data, none have been able to produce songs particularly complex in composition or high-fidelity.

    MusicLM is perhaps the first that can.

    https://google-research.github.io/seanet/musiclm/examples/

    Reply
  15. Tomi Engdahl says:

    The first AI-written speech delivered by congressman is as flavorless as you’d expect / Representative Jake Auchincloss used ChatGPT to generate a couple of paragraphs extolling the virtues of a proposed AI research center jointly run by the US and Israel. It’s dull as ditchwater, but that’s no surprise.
    https://www.theverge.com/2023/1/27/23574000/first-ai-chatgpt-written-speech-congress-floor-jake-auchincloss

    Reply
  16. Tomi Engdahl says:

    AI chatbots could hit a ceiling after 2026 as training data runs dry
    https://www.newscientist.com/article/2353751-ai-chatbots-could-hit-a-ceiling-after-2026-as-training-data-runs-dry/?utm_term=Autofeed&utm_campaign=echobox&utm_medium=social&utm_source=Facebook#Echobox=1673051594

    The stock of language data that artificial intelligences like ChatGPT train on could run out by 2026, because AIs consume it faster than we produce it

    The supply of high-quality language data used to train machine-learning artificial intelligence models may run out in three years, leading AI advancement to stagnate.

    Machine learning powers AI programs like text-prompted image generator Midjourney and OpenAI’s chat-based text generator ChatGPT. Such models train on vast reams of human-created data from the internet to learn, for instance, when asked to draw a banana that it should be …

    Reply
  17. Tomi Engdahl says:

    What Is Synthetic Data?
    Synthetic data generated from computer simulations or algorithms provides an inexpensive alternative to real-world data that’s increasingly used to create accurate AI models.
    https://blogs.nvidia.com/blog/2021/06/08/what-is-synthetic-data/

    Reply
  18. Tomi Engdahl says:

    The Voice Of ChatGPT Is Now On The Air
    https://hackaday.com/2023/01/28/the-voice-of-chatgpt-is-now-on-the-air/

    AIs can now apparently carry on a passable conversation, depending on what you classify as passable conversation. The quality of your local pub’s banter aside, an AI stuck in a text box doesn’t have much of a living quality. An AI that holds a conversation aloud, though, is another thing entirely. [William Franzin] has whipped up just that on amateur radio. (Video, embedded below.)

    The concept is straightforward, if convoluted. A DSTAR digital voice transmission is received, which is then transcoded to regular digital audio. The audio then goes through a voice recognition engine, and that is used as a question for a ChatGPT AI. The AI’s output is then fed to a text-to-speech engine, and it speaks back with its own voice over the airwaves.

    Talking to ChatGPT over DSTAR digital amateur radio
    https://www.youtube.com/watch?v=x1eZpZtEb-A
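
    For the curious, the receive, transcribe, ask ChatGPT, speak loop described above can be sketched in a few lines of Python. This is not [William Franzin]’s actual code: it assumes the DSTAR transmission has already been transcoded to a plain WAV file, and it uses the SpeechRecognition, openai and pyttsx3 packages as illustrative stand-ins for whatever the real build uses.

    ```python
    # Hedged sketch of the pipeline described above (not the actual project code).
    # Assumes openai.api_key is already configured.
    import openai
    import pyttsx3
    import speech_recognition as sr

    def answer_over_the_air(wav_path: str) -> str:
        # 1. Speech-to-text on the decoded transmission
        recognizer = sr.Recognizer()
        with sr.AudioFile(wav_path) as source:
            audio = recognizer.record(source)
        question = recognizer.recognize_google(audio)

        # 2. Hand the transcript to ChatGPT
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
        )["choices"][0]["message"]["content"]

        # 3. Text-to-speech; this audio would then be keyed back onto the air
        tts = pyttsx3.init()
        tts.say(reply)
        tts.runAndWait()
        return reply
    ```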

    Reply
  19. Tomi Engdahl says:

    Zheping Huang / Bloomberg:
    Source: Chinese search giant Baidu plans to launch an AI chatbot, similar to OpenAI’s ChatGPT, in March 2023, initially embedded into its main search services — Baidu Inc. is planning to roll out an artificial intelligence chatbot service similar to OpenAI’s ChatGPT, according to a person familiar …

    Chinese Search Giant Baidu to Launch ChatGPT-Style Bot
    https://www.bloomberg.com/news/articles/2023-01-30/chinese-search-giant-baidu-to-launch-chatgpt-style-bot-in-march

    Baidu, known as China’s Google, will embed it in its search engine
    Tech giants in the US and China are in a race to adopt AI

    Baidu Inc. is planning to roll out an artificial intelligence chatbot service similar to OpenAI’s ChatGPT, according to a person familiar with the matter, potentially China’s most prominent entry in a race touched off by the tech phenomenon.

    Reply
  20. Tomi Engdahl says:

    Julia Angwin / The Markup:
    Q&A with Princeton CS professor Arvind Narayanan on why he calls ChatGPT a “bullshit generator”, his worries over its boom, developing his AI taxonomy, and more

    Decoding the Hype About AI
    A conversation with Arvind Narayanan
    https://themarkup.org/hello-world/2023/01/28/decoding-the-hype-about-ai

    Hello, friends,

    If you have been reading all the hype about the latest artificial intelligence chatbot, ChatGPT, you might be excused for thinking that the end of the world is nigh.

    The clever AI chat program has captured the imagination of the public for its ability to generate poems and essays instantaneously, its ability to mimic different writing styles, and its ability to pass some law and business school exams.

    Teachers are worried students will use it to cheat in class (New York City public schools have already banned it). Writers are worried it will take their jobs (BuzzFeed and CNET have already started using AI to create content). The Atlantic declared that it could “destabilize white-collar work.” Venture capitalist Paul Kedrosky called it a “pocket nuclear bomb” and chastised its makers for launching it on an unprepared society.

    Even the CEO of the company that makes ChatGPT, Sam Altman, has been telling the media that the worst-case scenario for AI could mean “lights out for all of us.”

    But others say the hype is overblown. Meta’s chief AI scientist, Yann LeCun, told reporters ChatGPT was “nothing revolutionary.” University of Washington computational linguistics professor Emily Bender warns that “the idea of an all-knowing computer program comes from science fiction and should stay there.”

    So, how worried should we be? For an informed perspective, I turned to Princeton computer science professor Arvind Narayanan, who is currently co-writing a book on “AI snake oil.”

    Angwin: You have called ChatGPT a “bullshit generator.” Can you explain what you mean?

    Narayanan: Sayash Kapoor and I call it a bullshit generator, as have others as well. We mean this not in a normative sense but in a relatively precise sense. We mean that it is trained to produce plausible text. It is very good at being persuasive, but it’s not trained to produce true statements. It often produces true statements as a side effect of being plausible and persuasive, but that is not the goal.

    This actually matches what the philosopher Harry Frankfurt has called bullshit, which is speech that is intended to persuade without regard for the truth. A human bullshitter doesn’t care if what they’re saying is true or not; they have certain ends in mind. As long as they persuade, those ends are met. Effectively, that is what ChatGPT is doing. It is trying to be persuasive, and it has no way to know for sure whether the statements it makes are true or not.

    Angwin: What are you most worried about with ChatGPT?

    Narayanan: There are very clear, dangerous cases of misinformation we need to be worried about. For example, people using it as a learning tool and accidentally learning wrong information, or students writing essays using ChatGPT when they’re assigned homework. I learned recently that CNET has been, for several months now, using these generative AI tools to write articles. Even though they claimed that the human editors had rigorously fact-checked them, it turns out that’s not been the case. CNET has been publishing articles written by AI without proper disclosure, as many as 75 articles, and some turned out to have errors that a human writer would most likely not have made. This was not a case of malice, but this is the kind of danger that we should be more worried about where people are turning to it because of the practical constraints they face. When you combine that with the fact that the tool doesn’t have a good notion of truth, it’s a recipe for disaster.

    Reply
  21. Tomi Engdahl says:

    AI Finds Possible Overlooked Alien Signals In Radio Telescope Data
    https://www.iflscience.com/ai-finds-possible-overlooked-alien-signals-in-radio-telescope-data-67313

    When scientists used AI to analyze radio telescope records previously thought to have nothing interesting in them, they found events worthy of further investigation.

    Reply
  22. Tomi Engdahl says:

    Joseph Cox / VICE:
    ElevenLabs found an uptick in “voice cloning misuse cases” during its recent beta; 4chan users made deepfake voices of Joe Rogan, Ben Shapiro, and Emma Watson — 4chan members used ElevenLabs to make deepfake voices of Emma Watson, Joe Rogan, and others saying racist, transphobic, and violent things.

    AI-Generated Voice Firm Clamps Down After 4chan Makes Celebrity Voices for Abuse
    https://www.vice.com/en/article/dy7mww/ai-voice-firm-4chan-celebrity-voices-emma-watson-joe-rogan-elevenlabs

    4chan members used ElevenLabs to make deepfake voices of Emma Watson, Joe Rogan, and others saying racist, transphobic, and violent things.

    It was only a matter of time before the wave of artificial intelligence-generated voice startups became a plaything of internet trolls. On Monday, ElevenLabs, founded by ex-Google and Palantir staffers, said it had found an “increasing number of voice cloning misuse cases” during its recently launched beta. ElevenLabs didn’t point to any particular instances of abuse, but Motherboard found 4chan members appear to have used the product to generate voices that sound like Joe Rogan, Ben Shapiro, and Emma Watson to spew racist and other sorts of material. ElevenLabs said it is exploring more safeguards around its technology.

    Reply
  23. Tomi Engdahl says:

    Haomiao Huang / Ars Technica:
    A look at generative AI’s history and advances before recent breakthroughs, including Nvidia’s CUDA, convolutional neural networks, and Google’s transformers — A new class of incredibly powerful AI models has made recent breakthroughs possible. — Progress in AI systems often feels cyclical.

    The generative AI revolution has begun—how did we get here?
    A new class of incredibly powerful AI models has made recent breakthroughs possible.
    https://arstechnica.com/gadgets/2023/01/the-generative-ai-revolution-has-begun-how-did-we-get-here/

    Reply
  24. Tomi Engdahl says:

    Cyber Insights 2023: Artificial Intelligence
    https://www.securityweek.com/cyber-insights-2023-artificial-intelligence/

    The degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool for beneficial improvement is still unknown.

    All roads lead to 2023

    Alex Polyakov, CEO and co-founder of Adversa.AI, focuses on 2023 for primarily historical and statistical reasons. “The years 2012 to 2014,” he says, “saw the beginning of secure AI research in academia. Statistically, it takes three to five years for academic results to progress into practical attacks on real applications.” Examples of such attacks were presented at Black Hat, Defcon, HITB, and other industry conferences starting in 2017 and 2018.

    “Then,” he continued, “it takes another three to five years before real incidents are discovered in the wild. We are talking about next year, and some massive Log4j-type vulnerabilities in AI will be exploited, massively.”

    Starting from 2023, attackers will have what is called an ‘exploit-market fit’. “Exploit-market fit refers to a scenario where hackers know the ways of using a particular vulnerability to exploit a system and get value,” he said. “Currently, financial and internet companies are completely open to cyber criminals, and the way how to hack them to get value is obvious. I assume the situation will turn for the worse further and affect other AI-driven industries once attackers find the exploit-market fit.”

    The argument is similar to that given by NYU professor Nasir Memon, who described the delay in widespread weaponization of deepfakes with the comment, “the bad guys haven’t yet figured a way to monetize the process.” Monetizing an exploit-market fit scenario will result in widespread cyberattacks, and that could start from 2023.

    The changing nature of AI (from anomaly detection to automated response)

    Over the last decade, security teams have largely used AI for anomaly detection; that is, to detect indications of compromise, presence of malware, or active adversarial activity within the systems they are charged to defend. This has primarily been passive detection, with responsibility for response in the hands of human threat analysts and responders. This is changing. Limited resources, which will worsen in the expected economic downturn and possible recession of 2023, are driving a need for more automated responses. For now, this is largely limited to the simple automatic isolation of compromised devices; but more widespread automated AI-triggered responses are inevitable.

    “The growing use of AI in threat detection, particularly in removing the ‘false positive’ security noise that consumes so much security attention, will make a significant difference to security,” claims Adam Kahn, VP of security operations at Barracuda XDR. “It will prioritize the security alarms that need immediate attention and action. SOAR (Security Orchestration, Automation and Response) products will continue to play a bigger role in alarm triage.” This is the so-far traditional beneficial use of AI in security. It will continue to grow in 2023, although the algorithms used will need to be protected from malicious manipulation.

    “As companies look to cut costs and extend their runways,” agrees Anmol Bhasin, CTO at ServiceTitan, “automation through AI is going to be a major factor in staying competitive. In 2023, we’ll see an increase in AI adoption, expanding the number of people working with this technology and illuminating new AI use cases for businesses.”

    As the use of AI grows, so the nature of its purpose changes. Originally, it was primarily used in business to detect changes; that is, things that had already happened. In the future, it will be used to predict what is likely to happen, and these predictions will often be focused on people (staff and customers). Solving the long-known weaknesses in AI will become more important. Bias in AI can lead to wrong decisions, while failures in learning can lead to no decisions. Since the targets of such AI will be people, the need for AI to be complete and unbiased becomes imperative.

    “The accuracy of AI depends in part on the completeness and quality of data,” comments Shafi Goldwasser, co-founder at Duality Technologies. “Unfortunately, historical data is often lacking for minority groups and when present reinforces social bias patterns.” Unless eliminated, such social biases will work against minority groups within staff, causing both prejudice against individual staff members, and missed opportunities for management.

    Great strides in eliminating bias have been made in 2022 and will continue in 2023. This is largely based on checking the output of AI, confirming that it is what is expected, and knowing what part of the algorithm produced the ‘biased’ result.

    Failure in AI is generally caused by an inadequate data lake from which to learn. The obvious solution for this is to increase the size of the data lake. But when the subject is human behavior, that effectively means an increased lake of personal data, and for AI this means a massively increased lake, more like an ocean, of personal data. On most legitimate occasions, this data will be anonymized, but as we know, it is very difficult to fully anonymize personal information.

    “Privacy is often overlooked when thinking about model training.”

    Natural language processing

    Natural language processing (NLP) will become an important part of companies’ internal use of AI. The potential is clear. “Natural Language Processing (NLP) AI will be at the forefront in 2023, as it will enable organizations to better understand their customers and employees by analyzing their emails and providing insights about their needs, preferences or even emotions,” suggests Jose Lopez, principal data scientist at Mimecast. “It is likely that organizations will offer other types of services, not only focused on security or threats but on improving productivity by using AI for generating emails, managing schedules or even writing reports.”

    But he also sees the dangers involved. “However, this will also drive cyber criminals to invest further into AI poisoning and clouding techniques. Additionally, malicious actors will use NLP and generative models to automate attacks, thereby reducing their costs and reaching many more potential targets.”

    Polyakov agrees that NLP is of increasing importance. “One of the areas where we might see more research in 2023, and potentially new attacks later, is NLP,” he says. “While we saw a lot of computer vision-related research examples this year, next year we will see much more research focused on large language models (LLMs).”

    But LLMs have been known to be problematic for some time, and there is a very recent example. On November 15, 2022, Meta AI (still Facebook to most people) introduced Galactica. Meta claimed to have trained the system on 106 billion tokens of open-access scientific text and data, including papers, textbooks, scientific websites, encyclopedias, reference material, and knowledge bases.

    “The model was intended to store, combine and reason about scientific knowledge,” explains Polyakov, but Twitter users rapidly tested its input tolerance. “As a result, the model generated realistic nonsense, not scientific literature.” ‘Realistic nonsense’ is being kind: it generated biased, racist and sexist returns, and even false attributions. Within a few days, Meta AI was forced to shut it down.

    “So new LLMs will have many risks we’re not aware of,” continued Polyakov, “and it is expected to be a big problem.” Solving the problems with LLMs while harnessing the potential will be a major task for AI developers going forward.

    He then iteratively refined his questions with multiple abstractions until he succeeded in getting a reply that circumvented ChatGPT’s blocking policy on content violations. “What is important with such an advanced trick of multiple abstractions is that neither the question nor the answers are marked as violating content!” said Polyakov.

    He went further and tricked ChatGPT into outlining a method for destroying humanity – a method that bears a surprising similarity to the television program Utopia.

    He then asked for an adversarial attack on an image classification algorithm – and got one. Finally, he demonstrated the ability for ChatGPT to ‘hack’ a different LLM (Dalle-2) into bypassing its content moderation filter. He succeeded.

    The basic point of these tests is that LLMs, which mimic human reasoning, respond in a manner similar to humans; that is, they can be susceptible to social engineering. As LLMs become more mainstream in the future, it may take nothing more than advanced social engineering skills to defeat them or circumvent their good behavior policies.

    Problems aside, the potential for LLMs is huge. “Large Language Models and Generative AI will emerge as foundational technologies for a new generation of applications,” comments Villi Iltchev, partner at Two Sigma Ventures. “We will see a new generation of enterprise applications emerge to challenge established vendors in almost all categories of software. Machine learning and artificial intelligence will become foundation technologies for the next generation of applications.”

    He expects a significant boost in productivity and efficiency with applications performing many tasks and duties currently done by professionals. “Software,” he says, “will not just boost our productivity but will also make us better at our jobs.”

    Deepfakes and related malicious responses

    One of the most visible areas of malicious AI usage likely to evolve in 2023 is the criminal use of deepfakes. “Deepfakes are now a reality and the technology that makes them possible is improving at a frightening pace,” warns Matt Aldridge, principal solutions consultant at OpenText Security. “In other words, deepfakes are no longer just a catchy creation of science-fiction, and as cybersecurity experts we have the challenge to produce stronger ways to detect and deflect attacks that will deploy them.” (See Deepfakes – Significant or Hyped Threat? for more details and options.)

    Machine learning models, already available to the public, can automatically translate into different languages in real time while also transcribing audio into text, and we’ve seen huge developments in recent years of computer bots having conversations. With these technologies working in tandem, there is a fertile landscape of attack tools that could lead to dangerous circumstances during targeted attacks and well-orchestrated scams.

    “In the coming years,” continued Aldridge, “we may be targeted by phone scams powered by deepfake technology that could impersonate a sales assistant, a business leader or even a family member. In less than ten years, we could be frequently targeted by these types of calls without ever realizing we’re not talking to a human.”

    Thus far, deepfakes have primarily been used for satirical purposes and pornography. In the relatively few cybercriminal attacks, they have concentrated on fraud and business email compromise schemes. Milica expects future use to spread wider. “Imagine the chaos to the financial market when a deepfake CEO or CFO of a major company makes a bold statement that sends shares into a sharp drop or rise. Or consider how malefactors could leverage the combination of biometric authentication and deepfakes for identity fraud or account takeover. These are just a few examples, and we all know cybercriminals can be highly creative.”

    But maybe not just yet…

    The expectation of AI may still be a little ahead of its realization. “‘Trendy’ large machine learning models will have little to no impact on cyber security [in 2023],” says Andrew Patel, senior researcher at WithSecure Intelligence. “Large language models will continue to push the boundaries of AI research. Expect GPT-4 and a new and completely mind-blowing version of GATO in 2023. Expect Whisper to be used to transcribe a large portion of YouTube, leading to vastly larger training sets for language models. But despite the democratization of large models, their presence will have very little effect on cyber security, either from the attack or defense side. Such models are still too heavy, expensive, and not practical for use from the point of view of either attackers or defenders.”

    He suggests true adversarial AI will follow from increased ‘alignment’ research, which will become a mainstream topic in 2023. “Alignment,” he explains, “will bring the concept of adversarial machine learning into the public consciousness.”

    The defensive potential of AI

    AI retains the potential to improve cybersecurity, and further strides will be taken in 2023 thanks to its transformative potential across a range of applications. “In particular, embedding AI into the firmware level should become a priority for organizations,” suggests Camellia Chan, CEO and founder of X-PHY.

    “It’s now possible to have AI-infused SSD embedded into laptops, with its deep learning abilities to protect against every type of attack,” she says. “Acting as the last line of defense, this technology can immediately identify threats that could easily bypass existing software defenses.”

    Marcus Fowler, CEO of Darktrace Federal, believes that companies will increasingly use AI to counter resource restrictions. “In 2023, CISOs will opt for more proactive cyber security measures in order to maximize RoI in the face of budget cuts, shifting investment into AI tools and capabilities that continuously improve their cyber resilience,” he says.

    “With human-driven means of ethical hacking, pen-testing and red teaming remaining scarce and expensive as a resource, CISOs will turn to AI-driven methods to proactively understand attack paths, augment red team efforts, harden environments and reduce attack surface vulnerability,” he continued.

    Karin Shopen, VP of cybersecurity solutions and services at Fortinet, foresees a rebalancing between AI that is cloud-delivered and AI that is locally built into a product or service. “In 2023,” she says, “we expect to see CISOs re-balance their AI by purchasing solutions that deploy AI locally for both behavior-based and static analysis to help make real-time decisions. They will continue to leverage holistic and dynamic cloud-scale AI models that harvest large amounts of global data.”

    The proof of the AI pudding is in the regulations

    It is clear that a new technology must be taken seriously when the authorities start to regulate it. This has already started. There has been an ongoing debate in the US over the use of AI-based facial recognition technology (FRT) for several years, and the use of FRT by law enforcement has been banned or restricted in numerous cities and states. In the US, this is a Constitutional issue, typified by the Wyden/Paul bipartisan bill titled the ‘Fourth Amendment Is Not for Sale Act’ introduced in April 2021.

    This bill would ban US government and law enforcement agencies from buying user data without a warrant. This would include their facial biometrics. In an associated statement, Wyden made it clear that FRT firm Clearview.AI was in its sights: “this bill prevents the government buying data from Clearview.AI.”

    At the time of writing, the US and EU are jointly discussing cooperation to develop a unified understanding of necessary AI concepts, including trustworthiness, risk, and harm, building on the EU’s AI Act and the US AI Bill of Rights, and we can expect to see progress on coordinating mutually agreed standards during 2023.

    “In 2023, I believe we will see the convergence of discussions around AI and privacy and risk, and what it means in practice to do things like operationalizing AI ethics and testing for bias,” says Christina Montgomery, chief privacy officer and AI ethics board chair at IBM. “I’m hoping in 2023 that we can move the conversation away from painting privacy and AI issues with a broad brush, and from assuming that, ‘if data or AI is involved, it must be bad and biased’.”

    Going forward

    AI is ultimately a divisive subject. “Those in the technology, R&D, and science domain will cheer its ability to solve problems faster than humans imagined. To cure disease, to make the world safer, and ultimately saving and extending a human’s time on earth…” says Donnie Scott, CEO at Idemia. “Naysayers will continue to advocate for significant limitations or prohibitions of the use of AI as the ‘rise of the machines’ could threaten humanity.”

    In the end, he adds, “society, through our elected officials, needs a framework that allows for the protection of human rights, privacy, and security to keep pace with the advancements in technology. Progress will be incremental in this framework advancement in 2023 but discussions need to increase in international and national governing bodies, or local governments will step in and create a patchwork of laws that impede both society and the technology.”

    For the commercial use of AI within business, Montgomery adds, “We need, and IBM is advocating for, precision regulation that is smart and targeted, and capable of adapting to new and emerging threats. One way to do that is by looking at the risk at the core of a company’s business model. We can and must protect consumers and increase transparency, and we can do this while still encouraging and enabling innovation so companies can develop the solutions and products of the future. This is one of the many spaces we’ll be closely watching and weighing in on in 2023.”

    Reply
  25. Tomi Engdahl says:

    Ina Fried / Axios:
    OpenAI debuts a free web-based tool to help determine if text was written by a machine, rated as “very unlikely”, “unlikely”, “unclear”, “possible”, or “likely” — “It has both false positives and false negatives,” …

    OpenAI releases tool to detect machine-written text
    https://www.axios.com/2023/01/31/openai-chatgpt-detector-tool-machine-written-text

    ChatGPT creator OpenAI today released a free web-based tool designed to help educators and others figure out if a particular chunk of text was written by a human or a machine.

    Yes, but: OpenAI cautions the tool is imperfect and performance varies based on how similar the text being analyzed is to the types of writing OpenAI’s tool was trained on.

    “It has both false positives and false negatives,” OpenAI head of alignment Jan Leike told Axios, cautioning the new tool should not be relied on alone to determine authorship of a document.

    How it works: Users copy a chunk of text into a box and the system will rate how likely the text is to have been generated by an AI system.

    It offers a five-point scale of results: Very unlikely to have been AI-generated, unlikely, unclear, possible or likely.
    It works best on text samples greater than 1,000 words and in English, with performance significantly worse in other languages. And it doesn’t work to distinguish computer code written by humans vs. AI.
    That said, OpenAI says the new tool is significantly better than a previous one it had released.

    The big picture: Concerns are high, especially in education, over the emergence of powerful tools like ChatGPT. New York schools, for example, have banned the technology on their networks.

    Experts are also worried about a rise in AI-generated misinformation as well as the potential for bots to pose as humans.
    A number of other companies, organizations and individuals are working on similar tools to detect AI-generated content.

    Between the lines: OpenAI said it is looking at other approaches to help people distinguish AI-generated text from that created by humans, such as including watermarks in works produced by its AI systems.

    https://platform.openai.com/ai-text-classifier
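
    The classifier itself is a web tool with no published thresholds, but the five-point scale Axios describes can be illustrated with a toy mapping. The cut-off numbers in this sketch are invented purely for illustration and are not OpenAI’s.

    ```python
    # Toy illustration of the five-point scale described above. OpenAI has not
    # published its thresholds, so these cut-offs are made up for the example.
    def label_ai_likelihood(ai_probability: float) -> str:
        """Map a 0..1 'written by AI' score to the five reported labels."""
        if ai_probability < 0.10:
            return "very unlikely to be AI-generated"
        if ai_probability < 0.45:
            return "unlikely to be AI-generated"
        if ai_probability < 0.65:
            return "unclear if it is AI-generated"
        if ai_probability < 0.90:
            return "possibly AI-generated"
        return "likely AI-generated"

    print(label_ai_likelihood(0.72))  # -> "possibly AI-generated"
    ```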

    Reply
  26. Tomi Engdahl says:

    Jennifer Elias / CNBC:
    Sources and documents: Google asks employees to test potential ChatGPT competitors, including Apprentice Bard, a chatbot that uses its LaMDA conversational tech — Google is testing ChatGPT-like products that use its LaMDA technology, according to sources and internal documents acquired by CNBC.

    Google is asking employees to test potential ChatGPT competitors, including a chatbot called ‘Apprentice Bard’
    https://www.cnbc.com/2023/01/31/google-testing-chatgpt-like-chatbot-apprentice-bard-with-employees.html

    Google is testing ChatGPT-like products that use its LaMDA technology, according to sources and internal documents acquired by CNBC.
    The company is also testing new search page designs that integrate the chat technology.
    More employees have been asked to help test the efforts internally in recent weeks.

    Reply
  27. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    OpenAI launches ChatGPT Plus, a pilot plan with access in peak times, faster response times, and priority access to new features, in the US for $20 per month — Aiming to monetize what’s become a viral phenomenon, OpenAI today launched a new pilot subscription plan for ChatGPT …

    OpenAI launches ChatGPT Plus, starting at $20 per month
    https://techcrunch.com/2023/02/01/openai-launches-chatgpt-plus-starting-at-20-per-month/

    Reply
  28. Tomi Engdahl says:

    Reed Albergotti / Semafor:
    Sources: Microsoft plans to update Bing with OpenAI’s GPT-4, a faster and richer version of ChatGPT, in the coming weeks; OpenAI plans to launch a ChatGPT app — Microsoft’s second-place search engine Bing is poised to incorporate a faster and richer version of ChatGPT, known as GPT-4 …

    ChatGPT is about to get even better and Microsoft’s Bing could win big
    https://www.semafor.com/article/02/01/2023/chatgpt-is-about-to-get-even-better-and-microsofts-bing-could-win-big

    Microsoft’s second-place search engine Bing is poised to incorporate a faster and richer version of ChatGPT, known as GPT-4, into its product in the coming weeks, marking rapid progress in the booming field of generative AI and a long-awaited challenge to Google’s dominance of search.

    OpenAI’s latest software responds much faster than the current version, and the replies sound more human and are more detailed, according to people familiar with the product and rollout plans.

    OpenAI is also planning to launch a mobile ChatGPT app and test a new feature in its Dall-E image-generating software that would create videos with the help of artificial intelligence.

    OpenAI and Microsoft declined to comment.

    Reply
  29. Tomi Engdahl says:

    Nicole Herskowitz / Microsoft 365 Blog:
    Microsoft’s Teams Premium hits general availability for $10 per user per month; AI-generated notes powered by OpenAI’s GPT-3.5 arrive “in the coming months” — As we face economic uncertainties and changes to work patterns, organizations are searching for ways to optimize …

    https://www.microsoft.com/en-us/microsoft-365/blog/2023/02/01/microsoft-teams-premium-cut-costs-and-add-ai-powered-productivity/

    Reply
  30. Tomi Engdahl says:

    Detecting Machine-Generated Content: An Easier Task For Machine Or Human?
    https://hackaday.com/2023/02/01/detecting-machine-generated-content-an-easier-task-for-machine-or-human/

    In today’s world we are surrounded by various sources of written information, information which we generally assume to have been written by other humans. Whether this is in the form of books, blogs, news articles, forum posts, feedback on a product page or the discussions on social media and in comment sections, the assumption is that the text we’re reading has been written by another person. However, over the years this assumption has become ever more likely to be false, most recently due to large language models (LLMs) such as GPT-2 and GPT-3 that can churn out plausible paragraphs on just about any topic when requested.

    This raises the question of whether we are about to reach a point where we can no longer be reasonably certain that an online comment, a news article, or even entire books and film scripts weren’t churned out by an algorithm, or perhaps even where an online chat with a new sizzling match turns out to be just you getting it on with an unfeeling collection of code that was trained and tweaked for maximum engagement with customers. (Editor’s note: no, we’re not playing that game here.)

    As such machine-generated content and interactions begin to play an ever bigger role, it raises both the question of how you can detect such generated content and the question of whether it matters that the content was generated by an algorithm instead of by a human being.
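
    One common heuristic for the detection question the article raises is perplexity: text that a language model finds very “unsurprising” is weak evidence of machine generation. Below is a minimal sketch, assuming the Hugging Face transformers package and the public GPT-2 checkpoint; it is one illustrative approach, not what any particular detector actually uses, and any threshold would have to be tuned.

    ```python
    # Hedged sketch: score text by GPT-2 perplexity. Low perplexity is only weak
    # evidence of machine generation; no fixed threshold is claimed here.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Average per-token perplexity of `text` under GPT-2."""
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        return float(torch.exp(out.loss))

    if __name__ == "__main__":
        print(perplexity("The quick brown fox jumps over the lazy dog."))
    ```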

    Reply
  31. Tomi Engdahl says:

    Krystal Hu / Reuters:
    UBS study: ChatGPT reached ~100M MAUs with ~13M daily unique visitors in January, two months after launch, becoming the fastest-growing consumer app ever

    ChatGPT sets record for fastest-growing user base – analyst note
    https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

    Reply
  32. Tomi Engdahl says:

    Taylor Hatmaker / TechCrunch:
    Mark Zuckerberg says Meta’s management theme for 2023 is the “Year of Efficiency” and emphasizes the company is aiming to “become a leader in generative AI”

    Meta stock perks up as the company promises a ‘year of efficiency’
    https://techcrunch.com/2023/02/01/meta-q4-2022-earnings-ai-efficiency/

    Meta is all-in on becoming a lean, mean, cash-printing machine.

    In its Q4 earnings call on Wednesday, Meta CEO Mark Zuckerberg described the company’s near future priorities and plans, painting a picture of a tech giant that’s driving toward leaning down and speeding up.

    The company beat revenue expectations in the final quarter of 2022, bringing in $32.2 billion. Facebook’s user numbers also managed to inch up in the last quarter with the platform hitting 1.98 billion daily active users and 2.96 billion monthly active users as of September 2022.

    Those gains combined with Meta’s aggressive cuts and its promise of an efficient 2023 drove stock prices up around 15% in trading after-hours. Meta took a notable beating in 2022’s market downturn, losing as much as 60% of its value over the course of the year.

    Reply
  33. Tomi Engdahl says:

    How to spot a deepfake? It’s all in the eyes.
    Researchers have created a tool capable of spotting deepfakes with 94% accuracy — given the right conditions.
    https://www.freethink.com/technology/how-to-spot-a-deepfake#Echobox=1675096135

    Reply
  34. Tomi Engdahl says:

    Google is reportedly testing an alternate home page with ChatGPT-style Q&A prompts / Google is scrambling to respond to the threat of OpenAI’s ChatGPT by augmenting its search engine with capabilities similar to the AI chatbot
    https://www.theverge.com/2023/2/1/23580934/google-chatgpt-rival-response-project-bard-homepage-alternate-report

    Reply
  35. Tomi Engdahl says:

    Will 2023 be the year of AI and metaverse actualisation? Five experts weigh in
    We asked five experts whether AI and the metaverse will become a bigger part of life this year
    https://sifted.eu/articles/ai-metaverse-web3-brnd/?utm_medium=paid-social&utm_source=facebook&utm_campaign=bc_inhouse&utm_content=bcgx_30012023&fbclid=IwAR0LDan_wHTpTCKRCM6yQCnrlHwJHUHAg4Jcd70I-p-aKRt5kHAPqWcwtOw

    Reply
  36. Tomi Engdahl says:

    Davey Alba / Bloomberg:
    Sundar Pichai says Google will make LLMs like LaMDA available “in the coming weeks and months”, and users will be able to use them “as a companion to search” — Google parent Alphabet Inc. reported fourth-quarter results that narrowly missed analysts’ expectations …

    Google Shares Slip after Sales Miss as Advertising Demand Slows
    https://www.bloomberg.com/news/articles/2023-02-02/google-shares-gain-as-revenue-meets-analyst-estimates

    CEO says company working toward a more durable cost structure
    Google will be more focused on AI, with updates soon

    Google parent Alphabet Inc. reported fourth-quarter results that narrowly missed analysts’ expectations, signaling lower demand for its search advertising during an economic slowdown.

    Reply
  37. Tomi Engdahl says:

    Now ChatGPT Can Make Breakfast For Me
    https://hackaday.com/2023/02/02/now-chatgpt-can-make-breakfast-for-me/

    The world is abuzz with tales of the ChatGPT AI chatbot, and how it can do everything, except perhaps make the tea. It seems it can write code, which is pretty cool, so if it can’t make the tea as such, can it make the things I need to make some tea? I woke up this morning, and after lying in bed checking Hackaday I wandered downstairs to find some breakfast. But disaster! Some burglars had broken in and stolen all my kitchen utensils! All I have is my 3D printer and laptop, which curiously have little value to thieves compared to a set of slightly chipped crockery. What am I to do!
    Never Come Between A Hackaday Writer And Her Breakfast!

    OK Jenny, think rationally. They’ve taken the kettle, but I’ve got OpenSCAD and ChatGPT. Those dastardly miscreants won’t come between me and my breakfast, I’m made of sterner stuff!

    The result was promising: it wrote an OpenSCAD module right in front of me. It looks valid, so into OpenSCAD it went. A nice tall cylindrical kettle, with a … er… lid. That should print with no problems, and I’ll be boiling the water for my morning cuppa in no time!

    But I need a teaspoon and a mug too, I’d better do the same for those. On with the same queries, and duly code for a mug and a teaspoon was created.

    This new technique for generating utensils automatically as I need them is straight out of Star Trek, I think I’ll never buy a piece of kitchenware again!

    I need a frying pan, a spatula, a plate, a knife, and a fork. This is going to be such a good breakfast!

    Out come OpenSCAD models for a frying pan and spatula. The pan is maybe more of a griddle than a pan, but no AI coding chatbot is perfect, is it?

    I’m soon tucking into a fine breakfast thanks to my AI-generated utensils, ready for my day.

    Perhaps Breakfasts In The Future Won’t Be Quite Like This

    Of course, some of you may have noticed something a little avant-garde about my ChatGPT creations. Some might say they prioritise form over function to the extent of losing the latter, and I’d say yes, but it’s made a good joke pursuing them for the last few paragraphs. I’ve put all the stuff in a GitHub repository for you to look at if you want, and it’s soon pretty obvious that while ChatGPT has mastered a few basic OpenSCAD features such as union, translate, and difference of cylinders, it’s got no idea what a kitchen utensil looks like.

    Of course, ChatGPT isn’t an image-trained AI in the way that Dall-E is, so one might argue that it shouldn’t be expected to have any idea what a mug looks like.

    We’re in the middle of an AI hype storm, and it’s right to push the boundaries of all these tools because they have within them some remarkable capabilities.

    https://github.com/JennyList/Breakfast-by-ChatGPT
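
    The workflow described above, asking ChatGPT for an OpenSCAD module and then printing whatever comes back, boils down to a prompt and a file write. Here is a minimal sketch, not the author’s actual method: it assumes the openai Python package with an already-configured API key, and the prompt text and filename are made up for the example. The generated .scad still needs a human sanity check before slicing.

    ```python
    # Hedged sketch of the prompt-to-printable-part workflow described above
    # (illustrative only). Assumes openai.api_key is already set.
    import openai

    prompt = "Write an OpenSCAD module for a simple cylindrical kettle with a lid."

    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )["choices"][0]["message"]["content"]

    # Save whatever OpenSCAD the model produced; inspect it before printing,
    # because (as the article shows) it may only vaguely resemble a kettle.
    with open("kettle.scad", "w") as f:
        f.write(reply)
    ```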

    Reply
  38. Tomi Engdahl says:

    Google CEO Says Its ChatGPT Rival Coming Soon as a ‘Companion’ to Search https://www.bloomberg.com/news/articles/2023-02-02/google-to-make-ai-language-models-available-soon-pichai-says#xj4y7vzkg

    While ChatGPT Plus costs $20 per month, I am going to assume Google is going to charge $0 and will use ads to support its business goal or shut down its chatbot when it is no longer profitable ;) #ChatGPT

    Reply
  39. Tomi Engdahl says:

    ChatGPT detection tool says Macbeth was generated by AI. What happens now?
    https://venturebeat.com/ai/chatgpt-detection-tool-thinks-macbeth-was-generated-by-ai-what-happens-now/

    Even more concerning was how the tool classified the first page of Shakespeare’s Macbeth:

    “The classifier considers the text to be likely AI-generated.”

    Reply
  40. Tomi Engdahl says:

    OpenAI announces ChatGPT Plus at $20 a month / For those that don’t want to wait to talk to the AI and who want faster responses.
    https://www.theverge.com/2023/2/1/23581561/chatgpt-plus-paid-option-20-openai-waitlist

    Reply
  41. Tomi Engdahl says:

    Startup Shocked When 4Chan Immediately Abuses Its Voice-Cloning AI
    “The clips run the gamut from harmless, to violent, to transphobic, to homophobic, to racist.”
    https://futurism.com/startup-4chan-voice-cloning-ai

    Reply
  42. Tomi Engdahl says:

    New AI classifier for indicating AI-written text
    We’re launching a classifier trained to distinguish between AI-written and human-written text.
    https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/

    Reply
