3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” Scriffignano says.

5,574 Comments

  1. Tomi Engdahl says:

    Reuters:
    Brazil’s government hires OpenAI to expedite the screening and analysis of thousands of lawsuits using AI, providing insights to the solicitor general’s office — Brazil’s government is hiring OpenAI to expedite the screening and analysis of thousands of lawsuits using artificial intelligence …

    Brazil hires OpenAI to cut costs of court battles
    https://www.reuters.com/technology/artificial-intelligence/brazil-hires-openai-cut-costs-court-battles-2024-06-11/

  2. Tomi Engdahl says:

    Josh Tyrangiel / Washington Post:
    Q&A with Tim Cook on how AI will help users “save time”, sticking to Apple’s values, the Apple Intelligence name, hallucinations, OpenAI, journalism, and more — In an interview, Tim Cook explains how Apple’s new AI will enhance your work and life, with guardrails.

    https://www.washingtonpost.com/opinions/2024/06/11/tim-cook-apple-interview/

  3. Tomi Engdahl says:

    “It’s wild how much everyone has the same vision for AI.” https://trib.al/vCcSLCo

  4. Tomi Engdahl says:

    APPLE’S HUGE AI ANNOUNCEMENT IS A CHATBOT AND AN IMAGE GENERATOR, WHICH IS THE EXACT SAME BORING OFFERING AS MICROSOFT, GOOGLE AND META
    https://futurism.com/the-byte/apple-ai-announcement-new-ideas?fbclid=IwZXh0bgNhZW0CMTEAAR1JPIC8rz7Uzp8N2BnTTBa8Twji4jKJoIU-NJjwbPl3oHABtQNi4RWe3aQ_aem_AaSWZ_hpGDURXHBEE0r3ZOV6dzPA8byQYe3O1dQqx9Nh79Wr9VOV3ssjvSZc5SyJGf84dD0lrT4LGpRTM61Nf48C

    Tech giant Apple has finally shown off its particular take on artificial intelligence tech — and we can’t shake the feeling that we’ve seen this all before.

    During today’s kickoff of the company’s Worldwide Developers Conference, Apple unveiled what it’s calling “Apple Intelligence,” a set of machine learning features to be integrated across its desktop and mobile platforms.

    The system’s “new” capabilities are a very familiar mishmash of AI stuff we’ve seen before from other companies, including integrating generative AI into Siri and a surprisingly basic image generator that can create “Genmojis” while chatting.

    In other words, most of Apple’s huge and much-hyped AI play amounts to a chatbot and an image generator — the exact two products we’ve already seen from Microsoft, Google, Meta, and pretty much any other tech outfit caught up in the AI hype game.

  5. Tomi Engdahl says:

    Benjamin Hoffman / New York Times:
    How “slop” has caught on as a descriptor for low-grade AI material, after the term emerged in reaction to the release of AI art generators in 2022 — A new term has emerged to describe dubious A.I.-generated material. — You may not know exactly what “slop” means in relation to artificial intelligence.

    First Came ‘Spam.’ Now, With A.I., We’ve Got ‘Slop’
    A new term has emerged to describe dubious A.I.-generated material.
    https://www.nytimes.com/2024/06/11/style/ai-search-slop.html?unlocked_article_code=1.zE0.r6qY.VuSWO62oB07C&smid=url-share

    You may not know exactly what “slop” means in relation to artificial intelligence. But on some level you probably do.

    Slop, at least in the fast-moving world of online message boards, is a broad term that has developed some traction in reference to shoddy or unwanted A.I. content in social media, art, books and, increasingly, in search results.

    Google suggesting that you could add nontoxic glue to make cheese stick to a pizza? That’s slop. So is a low-price digital book that seems like the one you were looking for, but not quite. And those posts in your Facebook feed that seemingly came from nowhere? They’re slop as well.

    The term became more prevalent last month when Google incorporated its Gemini A.I. model into its U.S.-based search results. Rather than pointing users toward links, the service attempts to solve a query directly with an “A.I. Overview” — a chunk of text at the top of a results page that uses Gemini to form its best guess at what the user is looking for.

    The change was a reaction to Microsoft having incorporated A.I. into its search results on Bing, and it had some immediate missteps, leading Google to declare it would roll back some of its A.I. features until the problems could be ironed out.

    But with the dominant search engines having made A.I. a priority, it appears that vast quantities of information generated by machines, rather than largely curated by humans, will be served up as a daily part of life on the internet for the foreseeable future.

    Hence the term slop, which conjures images of heaps of unappetizing food being shoveled into troughs for livestock. Like that type of slop, A.I.-assisted search comes together quickly, but not necessarily in a way that critical thinkers can stomach.

  6. Tomi Engdahl says:

    When Vendors Overstep – Identifying the AI You Don’t Need

    AI models are nothing without vast data sets to train them, and vendors will be increasingly tempted to harvest as much data as they can and answer any questions later.

    https://www.securityweek.com/when-vendors-overstep-identifying-the-ai-you-dont-need/

    A year so far of data policy fails

    Microsoft recently announced an AI-powered feature, ‘Windows Recall’, which effectively captures everything a user has ever done on a PC. While Microsoft promised heavy encryption of the captured data and that the info would never leave the device, security professionals were less than impressed. Former Microsoft exec Kevin Beaumont remarked: “In essence, a keylogger is being baked into Windows as a feature.”

    But it’s far from an isolated incident. In March, DocuSign updated its FAQ to state that “if you have given contractual consent, we may use your data to train DocuSign’s in-house, proprietary AI models.”

    Then, in May, a similar AI scare had Slack users up in arms, with many criticizing vague data policies after it emerged that, by default, their data — including messages, content, and files — was being used to train Slack’s global AI models. Reports remarked that ‘it felt like there was no benefit to opting in for anyone but Slack.’

    After conducting our own research we found that such cases are just the tip of the iceberg and even the likes of LinkedIn, X, Pinterest, Grammarly, and Yelp have potentially risky content training declarations.

    Vendors are increasingly tempted to harvest data now and respond to complaints later

    We are likely to see more and more instances of vendors pushing boundaries as the global arms race to get ahead in AI accelerates. AI models are nothing without vast data sets to train them, and vendors will be increasingly tempted to harvest as much data as they can and answer any questions later. This could take the form of feature updates that are, in effect, a ‘data grab’ and deliver little value to the end user, or of vague policies, deliberately designed to confuse, that businesses unwittingly sign up to. So how can businesses be on their guard against the AI features they don’t need?

  7. Tomi Engdahl says:

    Narcissus 12.0
    This project will explore a number of issues that are involved in creating a personality for an interactive chat-bot or actual robot.
    https://hackaday.io/project/196012-narcissus-120

  8. Tomi Engdahl says:

    Mark Gurman / Bloomberg:
    Sources: Apple isn’t paying OpenAI as part of their partnership and aims to eventually make money from AI by striking revenue-sharing deals with chatbot owners — The iPhone maker isn’t paying OpenAI to use the chatbot — Apple announced the OpenAI agreement as part of its AI push this week

    Apple to ‘Pay’ OpenAI for ChatGPT Through Distribution, Not Cash

    The iPhone maker isn’t paying OpenAI to use the chatbot
    Apple announced OpenAI agreement as part of AI push this week

    https://www.bloomberg.com/news/articles/2024-06-12/apple-to-pay-openai-for-chatgpt-through-distribution-not-cash

  9. Tomi Engdahl says:

    Reed Albergotti / Semafor:
    AI search engine Perplexity says it was working on revenue-sharing deals with publishers when Forbes criticized it for misusing content from Forbes and others — The Scoop — Perplexity, the AI search startup that recently came under fire from Forbes for allegedly misusing its content …

    Perplexity was planning revenue-sharing deals with publishers when it came under media fire
    https://www.semafor.com/article/06/12/2024/perplexity-was-planning-revenue-sharing-deals-with-publishers

  10. Tomi Engdahl says:

    Reuters:
    Samsung announces plans to speed up the delivery of AI chips for clients by integrating its memory chip, foundry, and chip packaging services — Samsung Electronics (005930.KS) said its contract manufacturing business plans to offer a one-stop shop for clients to get their AI chips made faster …

    https://www.reuters.com/technology/artificial-intelligence/samsung-announces-turnkey-approach-ai-chipmaking-2024-06-12/

  11. Tomi Engdahl says:

    Samuel K. Moore / IEEE Spectrum:
    MLCommons shares results from its MLPerf 4.0 training benchmarks, which added Google’s and Intel’s AI accelerators; Nvidia H100 GPUs topped all nine benchmarks — For years, Nvidia has dominated many machine learning benchmarks, and now there are two more notches in its belt.

    Nvidia Conquers Latest AI Tests

    GPU maker tops new MLPerf benchmarks on graph neural nets and LLM fine-tuning
    https://spectrum.ieee.org/mlperf-nvidia-conquers

  12. Tomi Engdahl says:

    I tried Google’s new AI alphabet generator, and it’s way more fun than it sounds
    GenType Alphabet Creator is free to use, and here’s why you should try it.
    https://www.zdnet.com/article/i-tried-googles-new-ai-alphabet-generator-and-its-way-more-fun-than-it-sounds/

  13. Tomi Engdahl says:

    Running Large Language Models on Raspberry Pi at the Edge
    Transform a Raspberry Pi into a powerful AI hub, running LLMs for real-time, on-site data analysis and insights using Ollama and Python.
    https://www.hackster.io/mjrobot/running-large-language-models-on-raspberry-pi-at-the-edge-63bb11
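
    As a concrete illustration of that setup, here is a minimal sketch in Python, assuming an Ollama server is already running on the Pi, a small model has been pulled, and the ollama package is installed; the model name and sensor readings are placeholders, not details from the project:

    import ollama  # pip install ollama; client for a local Ollama server

    # Hypothetical on-site data to analyze at the edge
    readings = "CPU temp 62C, ambient 24C, humidity 41%, door sensor closed"

    # Ask a small quantized model (placeholder name; fetch it first with
    # `ollama pull llama3.2:1b`) to summarize the readings locally
    response = ollama.chat(
        model="llama3.2:1b",
        messages=[{
            "role": "user",
            "content": f"Summarize these sensor readings in one line: {readings}",
        }],
    )
    print(response["message"]["content"])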

  14. Tomi Engdahl says:

    Generative AI Is Not Going To Build Your Engineering Team For You
    It’s easy to generate code, but not so easy to generate good code.
    https://stackoverflow.blog/2024/06/10/generative-ai-is-not-going-to-build-your-engineering-team-for-you/

    The software industry is growing up
    To some extent, this is just what happens as an industry matures. The early days of any field are something of a Wild West, where the stakes are low, regulation nonexistent, and standards nascent. If you look at the early history of other industries—medicine, cinema, radio—the similarities are striking.

    There is a magical moment with any young technology where the boundaries between roles are porous and opportunity can be seized by anyone who is motivated, curious, and willing to work their asses off.

    It never lasts. It can’t; it shouldn’t. The amount of prerequisite knowledge and experience you must have before you can enter the industry swells precipitously. The stakes rise, the magnitude of the mission increases, the cost of mistakes soars. We develop certifications, trainings, standards, legal rites. We wrangle over whether or not software engineers are really engineers.

  15. Tomi Engdahl says:

    But generative AI is not a member of your team
    In that particular sense—generating code that you know is untrustworthy—GenAI is a bit like a junior engineer. But in every other way, the analogy fails. Because adding a person who writes code to your team is nothing like autogenerating code. That code could have come from anywhere—Stack Overflow, Copilot, whatever. You don’t know, and it doesn’t really matter. There’s no feedback loop, no person on the other end trying iteratively to learn and improve, and no impact to your team vibes or culture.

    To state the supremely obvious: giving code review feedback to a junior engineer is not like editing generated code. Your effort is worth more when it is invested into someone else’s apprenticeship. It’s an opportunity to pass on the lessons you’ve learned in your own career. Even just the act of framing your feedback to explain and convey your message forces you to think through the problem in a more rigorous way, and has a way of helping you understand the material more deeply.

    https://stackoverflow.blog/2024/06/10/generative-ai-is-not-going-to-build-your-engineering-team-for-you/

  16. Tomi Engdahl says:

    Researchers claim GPT-4 passed the Turing test
    https://bgr.com/science/researchers-claim-gpt-4-passed-the-turing-test/

    OpenAI’s GPT-4 has become the first AI to pass the Turing Test. At least, that’s what a group of researchers claim in a new study. The study, which is currently available on the preprint server arXiv, has yet to be peer-reviewed. Still, the results here are intriguing, to say the least.

    The Turing test, which was first proposed by Alan Turing in 1950, seeks to judge whether a machine can show intelligence well enough to make it indistinguishable from a human. In order for an AI to pass the Turing test, it must be able to talk to someone and fool them into thinking that they are talking to a human.
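
    To make the pass criterion concrete, here is a minimal sketch in Python of the scoring behind such a study; the verdict data is invented for illustration (the actual experiment used live, time-limited text chats with human interrogators):

    # Each trial: an interrogator chats with a witness (AI or human), then
    # issues a verdict -- True if they judged the witness to be human.
    def judged_human_rate(verdicts):
        return sum(verdicts) / len(verdicts)

    ai_verdicts = [True, False, True, True, False, True]    # placeholder data
    human_verdicts = [True, True, False, True, True, True]  # human baseline

    print(f"AI witness judged human:    {judged_human_rate(ai_verdicts):.0%}")
    print(f"Human witness judged human: {judged_human_rate(human_verdicts):.0%}")

    # The AI 'passes' when interrogators label it human at a rate comparable
    # to the human baseline, i.e. they cannot reliably pick out the machine.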

  17. Tomi Engdahl says:

    The world’s most popular web framework is going AI native
    On today’s episode we chat with Jared Palmer, VP of AI at Vercel, who says the company has three key goals. First, support AI-native web apps like ChatGPT and Claude. Second, use GenAI to make it easier to build. Third, provide an SDK so that developers have the tools they need to easily add GenAI to their websites.
    https://stackoverflow.blog/2024/06/14/vercel-next-node-js-ai-sdk/

  18. Tomi Engdahl says:

    GPT-4 autonomously hacks zero-day security flaws with 53% success rate
    https://newatlas.com/technology/gpt4-autonomously-hack-zero-day-security-flaws/

  19. Tomi Engdahl says:

    The day the AI dream died
    Unveiling tech pessimism
    https://iai.tv/articles/the-day-the-ai-dream-died-auid-2850

    AI’s promise was to solve problems, not only of the day but all future obstacles as well. Yet the hype has started to wear off. Amidst the growing disillusionment, Nolen Gertz challenges the prevailing optimism, suggesting that our reliance on AI might be less about solving problems and more about escaping the harsh realities of our time. He questions whether AI is truly our saviour or just a captivating distraction, fuelling capitalist gains and nihilistic diversions from the global crises we face.

  20. Tomi Engdahl says:

    Yet while the hype surrounding AI seems to have disappeared, the AI itself has not.
