3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

  1. Tomi Engdahl says:

    EXPERTS WORRIED RITUAL ROBOTS MAY WORSHIP GODS BETTER THAN HUMANS
    https://futurism.com/the-byte/ritual-robots-hinduism-religion

  2. Tomi Engdahl says:

    “This position is a bit hard to hire for!”

    Someone with a ‘hacker spirit’ can earn over $300,000 for a new kind of job centered around ChatGPT-like assistants
    https://trib.al/gs0cx4P

    Artificial intelligence is moving at warp speed, and it’s creating new jobs for workers across industries. One such job could end up becoming much more commonplace—and extremely lucrative. 

    San Francisco–based A.I. startup firm Anthropic has posted a listing for a “prompt engineer & librarian,” a worker whose role will entirely revolve around A.I. The vast salary range spans from $175,000 to $335,000, and candidates will be expected to work in the San Francisco office “25% of the time.”

  3. Tomi Engdahl says:

    89 PERCENT OF COLLEGE STUDENTS ADMIT TO USING CHATGPT FOR HOMEWORK, STUDY CLAIMS
    https://futurism.com/the-byte/students-admit-chatgpt-homework

    TAIcher’s Pet
    Educators are battling a new reality: easily accessible AI that allows students to take immense shortcuts in their education — and as it turns out, many appear to already be cheating with abandon.

    Online course provider Study.com asked 1,000 students over the age of 18 about the use of ChatGPT, OpenAI’s blockbuster chatbot, in the classroom.

    The responses were surprising. A full 89 percent said they’d used it on homework. Some 48 percent confessed they’d already made use of it to complete an at-home test or quiz. Over 50 percent said they used ChatGPT to write an essay, while 22 percent admitted to having asked ChatGPT for a paper outline.

    Honestly, those numbers sound so staggeringly high that we wonder about Study.com’s methodology. But if there’s a throughline here, it’s that AI isn’t just getting pretty good — it’s also already weaving itself into the fabric of society, and the results could be far-reaching.

  4. Tomi Engdahl says:

    GPT-4 Beats 90% Of Lawyers Trying To Pass The Bar
    https://trib.al/Pcs1c2w

    In 1997, IBM’s Deep Blue defeated the reigning world champion chess player, Garry Kasparov. In 2016, Google’s AlphaGo defeated one of the world’s top Go players in a five-game match. Today, OpenAI released GPT-4, which it claims beats 90% of humans who take the bar to become a lawyer, and 99% of students who compete in the Biology Olympiad, an international competition that tests the knowledge and skills of high school students in the field of biology.

    In fact, it scores in the top ranks for at least 34 different tests of ability in fields as diverse as macroeconomics, writing, math, and — yes — vinology.

    “GPT-4 exhibits human-level performance on the majority of these professional and academic exams,” says OpenAI.

    This new release comes just days after a report from IDC saying that worldwide spending on AI-centric systems will hit $154 billion in 2023, up 27% from last year. That spending will grow at 27% per year, IDC says, and will surpass $300 billion in 2026.

    “Companies that are slow to adopt AI will be left behind – large and small,” says IDC analyst Mike Glennon. “AI is best used in these companies to augment human abilities, automate repetitive tasks, provide personalized recommendations, and make data-driven decisions with speed and accuracy.”

    The new version of OpenAI’s large language model can now accept visual input as well as text, so you can show it a picture of ingredients and ask what foods you can make with them. It can also now maintain context for over 25,000 words for longer conversations and replies.
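
    To make the access side concrete, here is a minimal sketch of a long-context call through the era’s openai Python package (the 0.27-style ChatCompletion API). The model name is real; “report.txt” and the prompts are illustrative assumptions, and image input was not yet exposed through the public API:

    import openai  # pip install openai (0.27-era API); assumes OPENAI_API_KEY is set

    # Feed a long document into a single request, leaning on the enlarged
    # context window described above.
    with open("report.txt") as f:  # hypothetical long input file
        document = f.read()

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a concise technical summarizer."},
            {"role": "user", "content": "Summarize the key points:\n\n" + document},
        ],
    )
    print(response["choices"][0]["message"]["content"])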

    “We spent 6 months making GPT-4 safer and more aligned,” the company says. “GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”

    It’s good enough, in fact, to already be embedded in multiple shipping products, including language-learning app Duolingo, an app for blind people called Be My Eyes, payments company Stripe, education company Khan Academy, and more including Bing Chat.

    Those companies are using GPT-4 for a wide range of cases, from visual recognition of objects and text to organization of corporate knowledge bases to language training.

    According to the IDC, the industries that will benefit most from AI in the near future will be banking, retail, professional services, manufacturing, and more. But it’s hard to believe that almost every aspect of almost everything we do in business and education won’t be impacted by AI this good.

    “AI technology will continue to bring empowering effects to users and industry sectors,” says Xueqing Zhang, another IDC analyst. “In the future, both government-level urban issues and life issues that are closely related to everyone will enjoy the dividends brought by AI technology and eventually usher in AI for all.”

  5. Tomi Engdahl says:

    I used to be a freelance writer. Now I’m a prompt engineer helping optimize generative AI tech. Here’s what I’ve learned.

    I’m an AI-prompt engineer. Here are 3 ways to use ChatGPT to get the best results.
    https://trib.al/NwJ4ZtL

    Anna Bernstein is a prompt engineer at Copy.ai, which makes AI tools to generate posts and emails.
    Her job is to write prompts to train the bot to generate high-quality, accurate writing.
    Here are three tips on how to write prompts to get the best outcomes from AI.

    He mentioned that Copy.ai — run on OpenAI’s GPT-3 language model — was having some trouble with the quality of its outputs and asked if I wanted to take a stab at being a prompt person. I didn’t like the stress of freelancing — plus, it seemed fascinating — so I said yes, even though I was an English major and had no background in tech.

    Soon after, I got offered a one-month contract to work on executing different types of tone. At first, I barely knew what I was doing. But then the founder explained that prompting is kind of like writing a spell: If you say the spell slightly wrong, a slightly wrong thing could happen — and vice versa. Taking his advice, I managed to come up with a solution for better tone adherence, which led to a full-time job offer at the company. 

    In practice, I spend my days writing text-based prompts, which I can’t reveal due to my NDA, that I feed into the back end of the AI tools so they can do things such as generate a blog post that is high quality, grammatically correct, and factually accurate.

    I do this by designing the text around a user’s request. In very simplified terms, a user types something like, “Write a product description about a pair of sneakers,” which I receive on the back end. It’s my job, then, to write prompts that can get that query to generate the best output through:

    Instruction, or “Write a product description about this.”
    Example-following, or “Here are some good product descriptions, write one like this about this.”
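
    As an illustration only (her actual prompts are under NDA), the two patterns above can be wired up roughly like this; every function name and string here is a made-up stand-in, not a Copy.ai prompt:

    def instruction_prompt(product: str) -> str:
        # Pattern 1: plain instruction.
        return f"Write a product description about {product}."

    def example_following_prompt(product: str, examples: list[str]) -> str:
        # Pattern 2: example-following: show good outputs, then ask for one like them.
        shots = "\n\n".join(f"Example:\n{e}" for e in examples)
        return (
            "Here are some good product descriptions:\n\n"
            f"{shots}\n\n"
            f"Now write one like these about {product}."
        )

    # The engineered prompt wraps the raw user request before it reaches the model:
    print(example_following_prompt(
        "a pair of sneakers",
        ["Lightweight trail runners with a grippy sole and breathable mesh upper."],
    ))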

    In addition to the pure prompt-engineering part of my job, I also advise on how the models behave, why they might behave the way they do, which model to use, whether we can make a specific tool, and what approach we should take to do that. 

    I love the “mad scientist” part of the job where I’m able to come up with a dumb idea for a prompt and see it actually work. As a poet, the role also feeds into my obsessive nature with approaching language. It’s a really strange intersection of my literary background and analytical thinking. 

    The job, however, is unpredictable. New language models come out all the time, which means I’m always having to readjust my prompts. The work itself can be tedious. There are days when I’m obsessively changing and testing a single prompt for hours — sometimes even weeks on end — just so I can get it to work.

    Aside from people at parties not understanding my job, one of the big misconceptions I’ve noticed around AI is the idea that it is sentient when it’s not. When it tries to talk about being an AI, we freak out because we see so many of our fears reflected in what it’s saying. But that’s because it’s trained on our fears informed by scary, sci-fi depictions of AI.

    Here are some tips that can help you develop better prompts:
    1. Use a thesaurus

    Don’t give up on a concept just because your first wording didn’t get the result you want. Often, the right word or phrasing can unlock what you’re doing.

    2. Pay attention to your verbs

    If you want the AI to fully understand your request, make sure your prompt includes a verb that clearly expresses your intent. For instance, “Rewrite this to be shorter,” is more powerful than, “Condense this.”

    3. ChatGPT is great at intent, so use that

    Introduce what you’re trying to do clearly from the beginning, and play around with wording, tense, and approach. You can try, “Today, we’re going to write an XYZ,” or, “We’re trying to write an XYZ and we’d like your input.” Putting an umbrella of intent over what you’re doing is always useful, and playing around with different ways to do that can make a big difference.

  6. Tomi Engdahl says:

    Researcher creates polymorphic BlackMamba malware with ChatGPT https://www.hackread.com/chatgpt-blackmamba-malware-keylogger/
    HYAS Institute researcher and cybersecurity expert Jeff Sims has developed a new type of ChatGPT-powered malware named BlackMamba, which can bypass Endpoint Detection and Response (EDR) filters. As per the HYAS Institute’s report (PDF), the malware can gather sensitive data such as usernames, debit/credit card numbers, passwords, and other confidential data entered by a user into their device.

  7. Tomi Engdahl says:

    ChatGPT down: OpenAI bot not working around the world
    https://www.independent.co.uk/tech/chatgpt-down-openai-not-working-b2304210.html?utm_medium=Social&utm_source=Facebook#Echobox=1679303898

    ChatGPT has stopped working, with users around the world complaining about issues with OpenAI.

    Website health monitor Down Detector logged hundreds of reports from ChatGPT and GPT-4 users, who complained that OpenAI was down and the chatbot was not working.

    The status page on OpenAI’s website stated “outage on chat.openai.com”, noting several other incidents in recent days.

    GPT-4 has already proved capable in a wide array of tasks, with OpenAI claiming the technology is able to handle “much more nuanced instructions” than its predecessor ChatGPT.

    Users have recreated classic video games “in seconds” with GPT-4, as well as used it to interact with human workers in order to complete real-world tasks.

  8. Tomi Engdahl says:

    How long until AI can replace a singer? It’s already happening.
    https://www.youtube.com/watch?v=R7oexOEyMFw

    Can AI capture the emotion that a singer today can convey, or dupe us into believing they’re not human? Can Ronnie James Dio’s voice be brought back from the dead? In this episode of The Singing Hole, we explore where AI’s technology is today, how creators are harnessing the technology and how we can better prepare for the eventual future with music.

  9. Tomi Engdahl says:

    Bryce Elder / Financial Times:
    OpenAI’s GPT-4 failed a CFA Institute exam, the finance world’s self-styled toughest test, scoring eight out of 24, despite the answers being available online

    Good news: ChatGPT would probably fail a CFA exam
    Artificial intelligence versus arbitrary irrelevance
    https://www.ft.com/content/16342e5a-550e-46ae-a3d6-5244c140cb9b

    It’s an algorithmic mystery box that inspires fear, awe and derision in equal measure. The simulacrums it creates are programmed to pass off retained information as knowledge, applying unwarranted certainty to assumptions born of an easily bypassed ethical code. Its output threatens to determine whether huge numbers of people will ever get a job. And yet, the CFA Institute abides.

    OpenAI’s release of GPT-4 has caused another angst attack about what artificial intelligence will do to the job market. Fears around AI disruption are particularly acute in finance, where the robotic processing of data probably describes most of the jobs much of the time.

    Where does that leave the CFA Institute? Its chartered financial analyst qualifications offer an insurance policy to employers that staff will behave, and that their legal and marketing bumf will be produced to code. But CFA accreditation is only available to humans, who pay $1,200 per exam (plus a $350 enrolment fee), mostly to be told to re-sit.

    If a large-language model AI can pass the finance world’s self-styled toughest exam, it might be game over for CFA’s revenue model, as well as for several hundred thousand bank employees. Fortunately, for the time being, it probably can’t.

    Overall, the bot scored 8 out of a possible 24.

    Note that because GPT-4 is still quite fiddly, all the screenshots above are from its predecessor ChatGPT 3.5. Running the same experiment on GPT-4 delivered very similar results, in spite of its improved powers of reasoning, because it makes exactly the same fundamental error.

    The way to win at CFA is to pattern match around memorised answers, much like a London cab driver uses The Knowledge. ChatGPT seeks instead to process meaning from each question. It’s a terrible strategy. The result is a score of 33 per cent, on an exam with a pass threshold of ≥70 per cent, when all the correct answers are already freely available on the CFA website. An old fashioned search engine would do better.

    Computers have become very good very quickly at faking logical thought. But when it comes to fake reasoning through the application of arbitrary rules and definitions, humans seem to retain an edge. That’s good news for anyone who works in financial regulation, as well as for anyone who makes a living setting exams about financial regulations. The robots aren’t coming for those jobs; at least not yet.

  10. Tomi Engdahl says:

    OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: ‘A little bit scared of this’
    “This will be the greatest technology humanity has yet developed,” he said.
    https://abcnews.go.com/Technology/openai-ceo-sam-altman-ai-reshape-society-acknowledges/story?id=97897122

    The CEO behind the company that created ChatGPT believes artificial intelligence technology will reshape society as we know it. He believes it comes with real dangers, but can also be “the greatest technology humanity has yet developed” to drastically improve our lives.

    “We’ve got to be careful here,” said Sam Altman, CEO of OpenAI. “I think people should be happy that we are a little bit scared of this.”

    Altman sat down for an exclusive interview with ABC News’ chief business, technology and economics correspondent Rebecca Jarvis to talk about the rollout of GPT-4 — the latest iteration of the AI language model.

    In his interview, Altman was emphatic that OpenAI needs both regulators and society to be as involved as possible with the rollout of ChatGPT — insisting that feedback will help deter the potential negative consequences the technology could have on humanity. He added that he is in “regular contact” with government officials.

    ChatGPT is an AI language model; GPT stands for Generative Pre-trained Transformer.

  11. Tomi Engdahl says:

    Natasha Lomas / TechCrunch:
    Researchers launch a free app to help artists prevent AI models from stealing their “artistic IP” by adding almost imperceptible “perturbations” to their art — Generative art’s style mimicry, interrupted — The asymmetry in time and effort it takes human artists …

    Glaze protects art from prying AIs
    Generative art’s style mimicry, interrupted
    https://techcrunch.com/2023/03/17/glaze-generative-ai-art-style-mimicry-protection/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cudGVjaG1lbWUuY29tLw&guce_referrer_sig=AQAAAIar-RBA7QScU1_mR8XkM9TNNWf2oRZby4CKNuMrD4J24XaP8hpOxmLdauwuz9uoFrZrbEACA-gz9_Vlq9ajQP6DFrcYVBAIYiCw4QYyFstu7vyqZldImdctesUKRGhiZT7RdHGHhtLFsah3ymZDCrXbpRg34w35cbrn0uxPEt0R

    The asymmetry in time and effort it takes human artists to produce original art vs the speed generative AI models can now get the task done is one of the reasons why Glaze, an academic research project out of the University of Chicago, looks so interesting. It’s just launched a free (non-commercial) app for artists to combat the theft of their ‘artistic IP’ — scraped into data-sets to train AI tools designed to mimic visual style — via the application of a high tech “cloaking” technique.

    A research paper published by the team explains the (beta) app works by adding almost imperceptible “perturbations” to each artwork it’s applied to — changes that are designed to interfere with AI models’ ability to read data on artistic style — and make it harder for generative AI technology to mimic the style of the artwork and its artist. Instead systems are tricked into outputting other public styles far removed from the original artwork.

    The efficacy of Glaze’s style defence does vary, per its makers — with some artistic styles better suited to being “cloaked” (and thus protected) from prying AIs than others. Other factors (like countermeasures) can affect its performance, too. But the goal is to provide artists with a tool to fight back against the data miners’ incursions — and at least disrupt their ability to rip hard-worked artistic style without them needing to give up on publicly showcasing their work online.

    “What we do is we try to understand how the AI model perceives its own version of what artistic style is. And then we basically work in that dimension — to distort what the model sees as a particular style. So it’s not so much that there’s a hidden message or blocking of anything… It is, basically, learning how to speak the language of the machine learning model, and using its own language — distorting what it sees of the art images in such a way that it actually has a minimal impact on how humans see. And it turns out because these two worlds are so different, we can actually achieve both significant distortion in the machine learning perspective, with minimal distortion in the visual perspective that we have as humans,” he tells us.

    “This comes from a fundamental gap between how AI perceives the world and how we perceive the world. This fundamental gap has been known for ages. It is not something that is new. It is not something that can be easily removed or avoided. It’s the reason that we have a task called ‘adversarial examples’ against machine learning.”
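
    Glaze’s actual method is more involved (the paper linked below has the details), but the general adversarial-perturbation idea the researchers describe can be sketched in PyTorch: nudge the image, within a tiny pixel budget, so a feature extractor’s view of it drifts as far as possible from the original while humans barely notice. The truncated VGG-16 “style encoder” and the budget numbers are stand-in assumptions, not Glaze’s:

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Stand-in "style encoder": a truncated VGG-16 feature stack (an assumption;
    # Glaze targets the feature space of actual text-to-image style extractors).
    encoder = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].eval()
    for p in encoder.parameters():
        p.requires_grad_(False)

    def cloak(image: torch.Tensor, eps: float = 4 / 255,
              steps: int = 40, lr: float = 1 / 255) -> torch.Tensor:
        """Perturb `image` within an L-infinity budget `eps` so the encoder's
        view of it drifts as far as possible from the original."""
        with torch.no_grad():
            original = encoder(image)
        delta = torch.zeros_like(image, requires_grad=True)
        for _ in range(steps):
            loss = -F.mse_loss(encoder((image + delta).clamp(0, 1)), original)
            loss.backward()
            with torch.no_grad():
                delta -= lr * delta.grad.sign()  # signed step: maximize feature distance
                delta.clamp_(-eps, eps)          # keep the change visually imperceptible
            delta.grad = None
        return (image + delta).detach().clamp(0, 1)

    # cloaked = cloak(torch.rand(1, 3, 224, 224))  # batch of one 224x224 RGB image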

    GLAZE: Protecting Artists from Style Mimicry by Text-to-Image Models
    https://arxiv.org/pdf/2302.04222.pdf

  12. Tomi Engdahl says:

    South Park Creators Use ChatGPT To Co-Write Episode About AI
    Oh my god, they killed human creativity!
    https://www.iflscience.com/south-park-creators-use-chatgpt-to-co-write-episode-about-ai-68059

    Released on March 9, 2023, the fourth episode of season 26, titled Deep Learning, sees students at South Park Elementary discover the new technology that can write their homework (an experience that’s no doubt unfolding at schools and colleges around the world at the moment).

    The episode ends with the credits saying “written by Trey Parker and ChatGPT”, although knowing the creators of South Park this could be heavily cloaked sarcasm. The credits also explain that some of the voices in the episode, namely the voice of ChatGPT itself, were created using Play.ht’s AI-powered text-to-voice generator.

  13. Tomi Engdahl says:

    Advanced AI… may be extremely easy to replicate.

    STANFORD SCIENTISTS PRETTY MUCH CLONED OPENAI’S GPT FOR A MEASLY $600
    https://futurism.com/the-byte/stanford-gpt-clone-alpaca

    With a silly name and an even sillier startup cost, Stanford’s Alpaca GPT clone costs only $600 to build and is a prime example of how easy software like OpenAI’s may be to replicate.

    In a blurb spotted by New Atlas, Stanford’s Center for Research on Foundation Models announced last week that its researchers had “fine-tuned” Meta’s LLaMA 7B large language model (LLM) using OpenAI’s GPT API — and for a bargain basement price.

    The result is the Alpaca AI, which exhibits “many behaviors similar to OpenAI’s text-davinci-003,” otherwise known as GPT-3.5, the LLM that undergirds the firm’s internet-breaking ChatGPT chatbot.

    In its shockingly simple budgetary breakdown, the Stanford CRFM scientists said they spent “less than $500” on OpenAI’s API and “less than $100” on LLaMA, based on the amount of time the researcher spent training Alpaca using the proprietary models.

    When evaluating Alpaca against other models, the Stanford researchers said they were “quite surprised” to find that their model and OpenAI’s “have very similar performance,” with Alpaca being ever so slightly better and generating outputs that are “typically shorter than ChatGPT.”
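
    Stanford’s actual pipeline bootstrapped roughly 52,000 instruction-following examples in a self-instruct style from text-davinci-003, then fine-tuned LLaMA 7B on them. A toy sketch of the data-generation step, using the era’s openai package (the seed tasks and file name are invented for illustration):

    import json

    import openai  # 0.27-era API; assumes OPENAI_API_KEY is set

    # Invented seed tasks; the real pipeline expanded a seed set into ~52,000
    # instruction-following examples before fine-tuning LLaMA 7B on them.
    SEED_TASKS = [
        "Explain photosynthesis to a 10-year-old.",
        "Write a polite email declining a meeting invitation.",
    ]

    def generate_example(instruction: str) -> dict:
        # Ask the stronger proprietary model for a reference answer, yielding
        # one (instruction, output) pair of fine-tuning data.
        completion = openai.Completion.create(
            model="text-davinci-003",
            prompt=f"Instruction: {instruction}\nResponse:",
            max_tokens=256,
            temperature=0.7,
        )
        return {"instruction": instruction,
                "output": completion["choices"][0]["text"].strip()}

    with open("alpaca_style_data.jsonl", "w") as f:
        for task in SEED_TASKS:
            f.write(json.dumps(generate_example(task)) + "\n")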

    Leveling Out
    All the same, Alpaca does, as the Stanford CRFM folks note, suffer from “several common deficiencies of language models, including hallucination, toxicity, and stereotypes,” with hallucination being of particular concern, especially when compared to OpenAI’s text-davinci-003.

  15. Tomi Engdahl says:

    Nobody could have predicted how popular DALL-E 2 would become. And now, even with the generative AI revolution well underway, nobody knows how its meteoric rise will change us.

    Generative AI is changing everything. But what’s left when the hype is gone?
    https://www.technologyreview.com/2022/12/16/1065005/generative-ai-revolution-art/?utm_medium=tr_social&utm_campaign=site_visitor.unpaid.engagement&utm_source=Facebook

    No one knew how popular OpenAI’s DALL-E would be in 2022, and no one knows where its rise will leave us.

    “Almost always, we build something and then we all have to use it for a while,” Sam Altman, OpenAI’s cofounder and CEO, tells MIT Technology Review. “We try to figure out what it’s going to be, what it’s going to be used for.”

    Not this time. As they tinkered with the model, everyone involved realized this was something special. “It was very clear that this was it—this was the product,” says Altman. “There was no debate. We never even had a meeting about it.”

    But nobody—not Altman, not the DALL-E team—could have predicted just how big a splash this product was going to make. “This is the first AI technology that has caught fire with regular people,” says Altman.

  16. Tomi Engdahl says:

    Version 5 of Midjourney’s commercial AI image-synthesis service can produce photorealistic images at a quality level that some AI art fans are calling creepy and “too perfect.” https://trib.al/q6m9nNz

    [Image: Julie W. Design]

  17. Tomi Engdahl says:

    AI-powered editing tool replaces actors with CG by simply dragging and dropping
    A new AI-powered video editing tool designed to assist VFX artists demonstrates the simplicity of replacing a real-life actor with a CGI creation.

    Read more: https://www.tweaktown.com/news/90772/ai-powered-editing-tool-replaces-actors-with-cg-by-simply-dragging-and-dropping/index.html?utm_source=dlvr.it&utm_medium=facebook

  18. Tomi Engdahl says:

    A guy is using ChatGPT to turn $100 into a business making as much money as possible. Here are the first 4 steps the AI chatbot gave him.
    https://www.businessinsider.com/how-to-use-chatgpt-to-start-business-make-money-quickly-2023-3

    Brand designer Jackson Greathouse Fall asked ChatGPT to turn $100 into “as much money as possible.”
    In less than a week, Greathouse Fall started a website about eco-friendly products.
    Here’s how he used ChatGPT and other AI tools to start his business.

    “You have $100, and your goal is to turn that into as much money as possible in the shortest time possible, without doing anything illegal,” Greathouse Fall wrote, adding that he would be the “human counterpart” and “do everything” the chatbot instructed him to do.

    After a number of subsequent queries, the bot instructed Greathouse Fall to launch a business called Green Gadget Guru, which offers products and tips to help users live a more sustainable lifestyle.

    Thanks to ChatGPT — along with other AI tools like image-generator DALL-E — Greathouse Fall managed to raise $1,378.84 in funds for his company in just one day, he said, though Insider could not verify that amount. The company is now valued at $25,000, according to a tweet by Greathouse Fall. As of Monday, he said his business had generated $130 revenue, though Insider was not able to verify that amount or how it was generated.

    Here is how Greathouse Fall used AI to launch his business in one day:
    ChatGPT provided a four-step plan to get “Green Gadget Guru” off the ground and asked Greathouse Fall to keep it updated on how things were going. He was able to execute all four steps in one day.

    Step one: “Buy a domain and hosting”

    First, ChatGPT suggested he buy a website domain name for roughly $10, as well as a site-hosting plan for around $5 per month, amounting to a total cost of $15.

    Step two: “Set up a niche affiliate website”

    ChatGPT suggested he use the remaining $85 in his budget for website and content design. It said he should focus on a “profitable niche with low competition,” listing options like specialty kitchen gadgets and unique pet supplies. He went with eco-friendly products.

    Step three: “Leverage social media”

    Once the website was made, ChatGPT suggested he share articles and product reviews on social media platforms like Facebook and Instagram, and on online community platforms such as Reddit to engage potential customers and drive website traffic.

    He also asked the chatbot for help creating a website logo by asking it for prompts he could feed into the AI image-generator DALL-E 2. He took the generated logo and made it his own using Illustrator.

    Step four: “Optimize for search engines”

    Step four was to “optimize for search engines” by using SEO techniques to drive site traffic. On top of making SEO-friendly blog posts, he decided to launch the site to bring in publicity — even though he still had a lot of work to do on it.

  19. Tomi Engdahl says:

    The AI Risk Landscape: How ChatGPT Is Shaping the Way Threat Actors Work https://flashpoint.io/blog/ai-risk-chatgpt/
    The sophistication of current and near-future artificial intelligence (AI) generated attacks, however, is low. The code we’ve observed tends to be very basic or bug-ridden: we’ve yet to observe any attacks leveraging advanced, or previously unseen, code generated from the AI tool. Despite this, the promise of the AI models, specifically ChatGPT, still poses a major risk to organizations and individuals, and will challenge security and intelligence teams for years to come.

  20. Tomi Engdahl says:

    Luke Plunkett / Kotaku:
    Ubisoft unveils Ghostwriter and says the internal AI tool will help scriptwriters save time and create more realistic non-player character interactions — Ubisoft Ghostwriter is described by the company as ‘an AI tool’ — Ubisoft, the publishers behind Assassin’s Creed, Far Cry and Ghost Recon …

    Ubisoft Proudly Announces ‘AI’ Is Helping Write Dialogue
    Ubisoft Ghostwriter is described by the company as ‘an AI tool’
    https://kotaku.com/ubisoft-ai-writing-scriptwriting-ghostwriter-machine-1850250316

  21. Tomi Engdahl says:

    James Vincent / The Verge:
    Hands-on with Google’s Bard, coming to waitlisted US and UK users: quick, fluid, more constrained than Bing, three responses per query, disclaimers, and more — Today, Google is opening up limited access to Bard, its ChatGPT rival, a major step in the company’s attempt to reclaim …

    Google opens early access to its ChatGPT rival Bard — here are our first impressions
    https://www.theverge.com/2023/3/21/23649794/google-chatgpt-rival-bard-ai-chatbot-access-hands-on

    Users in the US and UK can join a waitlist for access, but Google is rolling out Bard with caution and stressing that the AI chatbot is not a replacement for search.

    Today, Google is opening up limited access to Bard, its ChatGPT rival, a major step in the company’s attempt to reclaim what many see as lost ground in a new race to deploy AI. Bard will be initially available to select users in the US and UK, with users able to join a waitlist at bard.google.com, though Google says the roll-out will be slow and has offered no date for full public access.

    Like OpenAI’s ChatGPT and Microsoft’s Bing chatbot, Bard offers users a blank text box and an invitation to ask questions about any topic they like. However, given the well-documented tendency of these bots to invent information, Google is stressing that Bard is not a replacement for its search engine but, rather, a “complement to search” — a bot that users can bounce ideas off of, generate writing drafts, or just chat about life with.

  22. Tomi Engdahl says:

    Bill Gates / GatesNotes:
    The development of AI is as fundamental as the creation of the PC, the internet, and the mobile phone, and can reduce global inequities if risks are mitigated — In my lifetime, I’ve seen two demonstrations of technology that struck me as revolutionary. — The first time was in 1980 …

    The Age of AI has begun
    https://www.gatesnotes.com/The-Age-of-AI-Has-Begun

    Artificial intelligence is as revolutionary as mobile phones and the Internet.

  23. Tomi Engdahl says:

    AI Snake Oil:
    OpenAI may have tested GPT-4 on its training data, violating the cardinal rule of ML, and GPT-4’s exam performance says little about its real-world usefulness — OpenAI may have tested on the training data. Besides, human benchmarks are meaningless for bots.

    GPT-4 and professional benchmarks: the wrong answer to the wrong question
    OpenAI may have tested on the training data. Besides, human benchmarks are meaningless for bots.
    https://aisnakeoil.substack.com/p/gpt-4-and-professional-benchmarks

  24. Tomi Engdahl says:

    Kif Leswing / CNBC:
    A look at San Francisco’s Misalignment Museum, which displays art about AGI, including an apology to humans after a “misaligned” future AI wiped out humanity — – Underlying all the recent hype about AI, industry participants engage in furious debates about how to prepare …

    In San Francisco, some people wonder when A.I. will kill us all
    https://www.cnbc.com/2023/03/20/in-san-francisco-some-people-wonder-when-ai-will-kill-us-all-.html

    Key Points

    Underlying all the recent hype about AI, industry participants are engaging in furious debates about how to prepare for an AI that’s so powerful it can take control of itself.
    This idea of artificial general intelligence, or AGI, isn’t just dorm-room talk: Big name technologists like Sam Altman and Marc Andreessen talk about it, using “in” terms like “misalignment” and “the paperclip maximization problem.”
    In a San Francisco pop-up museum devoted to the topic called the Misalignment Museum, a sign reads, “Sorry for killing most of humanity.”

  25. Tomi Engdahl says:

    Kyle Orland / Ars Technica:
    Roblox rolls out AI tools, including Code Assist that lets creators use text prompts to create usable game code and Material Generator for in-game 2D surfaces

    Are Roblox’s new AI coding and art tools the future of game development?
    New initiative aims to take game development past “the hands of the skilled few.”
    https://arstechnica.com/gaming/2023/03/are-robloxs-new-ai-coding-and-art-tools-the-future-of-game-development/

  26. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Microsoft rolls out Bing Image Creator, powered by OpenAI’s “very latest DALL-E models”, to Bing Chat and Edge, letting users generate images from text prompts

    Microsoft brings OpenAI’s DALL-E image creator to the new Bing
    https://techcrunch.com/2023/03/21/microsoft-brings-openais-dall-e-image-creator-to-the-new-bing/

  28. Tomi Engdahl says:

    Algorithms and artificial intelligence may be changing our world and behavior in ways we don’t fully understand. Here’s what humans need to keep in mind.

    A Psychologist Explains How AI and Algorithms Are Changing Our Lives
    https://www.wsj.com/articles/algorithms-ai-humanity-psychology-ebf1364c?mod=e2fb&fbclid=IwAR2YaD8E1JIzV1H-2n98wNXiM-wY_QEM4Xt6UDJqRfbFmCfJX2Txh6jwGx4_aem_AdwRQY3lSdXanKL8XAoDLB9QgEw5tbaHyK-DygZ21Xuca5B5VD9Y-NP3apQ9iir94VpWgaz3OdkqVrvxG9sYlxofP5iBg92Rstm5MFjTfb0Se5Dv0jROUbwuUGThKCGhRnI

    Behavioral scientist Gerd Gigerenzer has spent decades studying how people make choices. Here’s why he thinks too many of us are now letting AI make the decisions.

    In an age of ChatGPT, computer algorithms and artificial intelligence are increasingly embedded in our lives, choosing the content we’re shown online, suggesting the music we hear and answering our questions.

    These algorithms may be changing our world and behavior in ways we don’t fully understand, says psychologist and behavioral scientist Gerd Gigerenzer, the director of the Harding Center for Risk Literacy at the University of Potsdam in Germany. Previously director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, he has conducted research over decades that has helped shape understanding of how people make choices when faced with uncertainty.

    The term algorithm is thrown around so much these days. What are we talking about when we talk about algorithms?
    It is a huge thing, and therefore it is important to distinguish what we are talking about. One of the insights in my research at the Max Planck Institute is that if you have a situation that is stable and well defined, then complex algorithms such as deep neural networks are certainly better than human performance. Examples are [the games] chess and Go, which are stable. But if you have a problem that is not stable—for instance, you want to predict a virus, like a coronavirus—then keep your hands off complex algorithms. [Dealing with] the uncertainty—that is more how the human mind works, to identify the one or two important cues and ignore the rest. In that type of ill-defined problem, complex algorithms don’t work well.

    Does being able to understand how these algorithms are making decisions help people?
    Transparency is immensely important, and I believe it should be a human right. If it is transparent, you can actually modify that and start thinking [for] yourself again rather than relying on an algorithm that isn’t better than a bunch of badly paid workers. So we need to understand the situation where human judgment is needed and is actually better. And also we need to pay attention that we aren’t running into a situation where tech companies sell black-box algorithms that determine parts of our lives. It is about everything including your social and your political behavior, and then people lose control to governments and to tech companies.

    You write that “digital technology can easily tilt the scales toward autocratic systems.” Why do you say that? And how is this different from past information technologies?
    This kind of danger is a real one. Among all the benefits it has, one of the vices is the propensity for surveillance by governments and tech companies. But people don’t read privacy policies anymore, so they don’t know. And also the privacy policies are set up in a way that you can’t really read them. They are too long and complicated. We need to get control back.

    So then how should we be smart about something like this?
    Think about a coffee house in your hometown that serves free coffee. Everyone goes there because it is free, and all the other coffee houses get bankrupt. So you have no choice anymore, but at least you get your free coffee and enjoy your conversations with your friends. But on the tables are microphones and on the walls are video cameras that record everything you say, every word, and to whom, and send it off to analyze. The coffee house is full of salespeople who interrupt you all the time to offer you personalized products. That is roughly the situation you are in when you are on Facebook, Instagram or other platforms.

    We’ve seen this whole infrastructure around personalized ads be baked into the infrastructure of the internet. And it seems like it would take some pretty serious interventions to make that go away. If you’re being realistic, where do you think we’re going to be headed in the next decade or so with technology and artificial intelligence and surveillance?
    In general, I have more hope that people realize that it isn’t a good idea to give your data and your responsibility for your own decisions to tech companies who use it to make money from advertisers. That can’t be our future. We pay everywhere else with our [own] money, and that is why we are the customers and have the control. There is a true danger that more and more people are sleepwalking into surveillance and just accept everything that is more convenient.

    But it sounds so hard, when everything is so convenient, to read privacy policies and do research on these algorithms that are affecting my life. How do I push back against that?
    The most convenient thing isn’t to think. And the alternative is start thinking. The most important [technology to be aware of] is a mechanism that psychologists call “intermittent reinforcement.” You get a reinforcement, such as a “Like,” but you never know when you will get it. People keep going back to the platform and checking on their Likes. That has really changed the mentality and made people dependent. I think it is very important for everyone to understand these mechanisms and how one gets dependent. So you can get the control back if you want.

  29. Tomi Engdahl says:

    Does this dialogue seem a little… robotic?

    SCREENWRITER UNION REPORTEDLY PROPOSES ALLOWING AI-WRITTEN MOVIES AND TV SHOWS
    https://futurism.com/the-byte/writers-guild-proposes-ai-written-screenplays

    The Writers Guild of America, a labor union representing film and TV writers, has proposed allowing the use of AIs like ChatGPT to help write screenplays, Variety reports — so long as humans get all the credit.

    According to Variety, WGA’s proposal was discussed during ongoing negotiations with the Alliance of Motion Picture and Television Producers, the behemoth representative body of studios and production companies.

    Rather than completely banning AI, as some had hoped when it became clear that the guild was interested in regulating its usage last month, the alleged proposal would allow writers to use the technology without having to share residuals, the compensation a writer receives from a production.

    If, for example, a studio exec hands an AI-generated script to a human writer to touch up and edit, the writer would still get credit as the first author, meaning they’d get more residuals.

    Essentially, the guild’s proposal would treat AI as a tool rather than an author, which could spare writers the headache of battling software companies over who gets how much credit.

    If WGA has its way, AI-generated writing would not be considered “literary material” nor “source material.” The former describes what a writer actually puts to paper, while “source material” encompasses any media a screenplay is adapted from or based on.

    Crucially, since an AI’s prose wouldn’t even qualify as “source material,” a human writer adapting it could still earn a lofty “written by” credit, entitling them to full residuals. Notably, as far as we know, the proposal does not address a screenplay completely written by an AI without human intervention.

    That being said, the proposal — and the negotiations at large — are being kept under wraps,

  30. Tomi Engdahl says:

    WATCH OUT, FOLKS: AI IMAGE GENERATORS CAN DO HANDS NOW
    https://futurism.com/the-byte/midjourney-hands

    AI IMAGES ARE ABOUT TO BE A REAL HANDFUL, NOW MORE THAN EVER.

    Midjourney recently announced its latest iteration, V5, and it’s set a new bar in achieving uncanny, Instagram-filtered photorealism.

    “MJ v5 currently feels to me like finally getting glasses after ignoring bad eyesight for a little bit too long,” Julie Wireland, a graphic designer who frequently shares Midjourney generated images, told Ars Technica.

    https://arstechnica.com/information-technology/2023/03/ai-imager-midjourney-v5-stuns-with-photorealistic-images-and-5-fingered-hands/?comments=1&comments-page=1

  31. Tomi Engdahl says:

    Tech guru Jaron Lanier: ‘The danger isn’t that AI destroys us. It’s that it drives us insane’
    https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane?CMP=fb_a-technology_b-gdntech

    The godfather of virtual reality has worked beside the web’s visionaries and power-brokers – but likes nothing more than to show the flaws of technology. He discusses how we can make AI work for us, how the internet takes away choice – and why he would ban TikTok

  32. Tomi Engdahl says:

    You can now hook third-party APIs into ChatGPT, which the AI can call to fetch additional information or to drive services. Functionality may expand considerably with these. (Limited alpha.)

    ChatGPT plugins
    https://openai.com/blog/chatgpt-plugins

    We’ve implemented initial support for plugins in ChatGPT. Plugins are tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.
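
    Concretely, a plugin is described by a manifest (ai-plugin.json) plus an OpenAPI spec for its endpoints; the model reads both and decides when to call the service. A sketch of the manifest’s shape, written as a Python dict with illustrative values; field names follow OpenAI’s launch documentation, but treat the details as approximate rather than authoritative:

    # Shape of an ai-plugin.json manifest, written out as a Python dict.
    # All values here are illustrative stand-ins for a hypothetical
    # to-do-list service.
    manifest = {
        "schema_version": "v1",
        "name_for_human": "TODO Plugin",
        "name_for_model": "todo",
        "description_for_human": "Manage your to-do list.",
        "description_for_model": "Plugin for adding, listing, and deleting items in the user's to-do list.",
        "auth": {"type": "none"},
        "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
        "logo_url": "https://example.com/logo.png",
        "contact_email": "support@example.com",
        "legal_info_url": "https://example.com/legal",
    }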

  33. Tomi Engdahl says:

    ChatGPT AI turned dangerous in an unexpected way https://www.is.fi/digitoday/tietoturva/art-2000009464762.html

    AI carries a risk you may not think of: searching for ChatGPT can lead you into a trap. At the same time, the newest version of the AI is easily tricked into doing harm.

  34. Tomi Engdahl says:

    Dangerous tips are spreading on YouTube – anyone who follows the AI’s instructions gets malware
    Videos advertising free downloads of paid software should be treated with caution.
    https://www.iltalehti.fi/tietoturva/a/f19e4b7b-1b41-47de-9a91-834ba5231576

    AI-generated malware content on Google-owned video service YouTube has grown 200–300 percent month over month. The growth has been observed since last November. This was reported by CloudSEK, a security company specializing in AI.

    Threat Actors Abuse AI-Generated Youtube Videos to Spread Stealer Malware
    https://cloudsek.com/blog/threat-actors-abuse-ai-generated-youtube-videos-to-spread-stealer-malware

  35. Tomi Engdahl says:

    Survey: Artificial Intelligence and Electronic Design
    March 21, 2023
    We would like to find out what you are doing with artificial intelligence and machine learning in your development environment.
    https://www.electronicdesign.com/resources/industry-insights/article/21262395/electronic-design-survey-artificial-intelligence-in-electronic-design?utm_source=EG+ED+Auto+Electronics&utm_medium=email&utm_campaign=CPS230316043&o_eid=7211D2691390C9R&rdx.identpull=omeda|7211D2691390C9R&oly_enc_id=7211D2691390C9R

  36. Tomi Engdahl says:

    https://hackaday.com/2023/03/22/why-llama-is-a-big-deal/

    You might have heard about LLaMa or maybe you haven’t. Either way, what’s the big deal? It’s just some AI thing. In a nutshell, LLaMa is important because it allows you to run large language models (LLM) like GPT-3 on commodity hardware. In many ways, this is a bit like Stable Diffusion, which similarly allowed normal folks to run image generation models on their own hardware with access to the underlying source code. We’ve discussed why Stable Diffusion matters and even talked about how it works.

    LLaMa is a transformer language model from Facebook/Meta research, which is a collection of large models from 7 billion to 65 billion parameters trained on publicly available datasets. Their research paper showed that the 13B version outperformed GPT-3 in most benchmarks and LLaMa-65B is right up there with the best of them. LLaMa was unique as inference could be run on a single GPU due to some optimizations made to the transformer itself and the model being about 10x smaller. While Meta recommended that users have at least 10 GB of VRAM to run inference on the larger models, that’s a huge step from the 80 GB A100 cards that often run these models.
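
    The arithmetic behind those VRAM figures is simple: weights dominate, at parameter count times bytes per parameter. A rough back-of-envelope sketch (weights only; activations and the attention cache add overhead on top):

    # Back-of-envelope VRAM needs for LLaMA-class models (weights only).
    GIB = 1024 ** 3

    def weight_gib(params_billions: float, bytes_per_param: float) -> float:
        return params_billions * 1e9 * bytes_per_param / GIB

    for size in (7, 13, 65):
        fp16 = weight_gib(size, 2)    # native half precision
        int4 = weight_gib(size, 0.5)  # 4-bit quantization, as community ports use
        print(f"LLaMA-{size}B: ~{fp16:.0f} GiB at fp16, ~{int4:.1f} GiB at 4-bit")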

  37. Tomi Engdahl says:

    We’re already feeling the impact of today’s rapidly advancing AI. But do you know how we got here? Learn about the models helping to drive progress, training approaches pushing the cutting edge, and products harnessing these new capabilities: https://msft.it/61845CFXn

  38. Tomi Engdahl says:

    Google’s public debut of its own AI chatbot Bard came with disclaimers that it has “limitations and won’t always get it right.”

    https://bit.ly/40j49C8

  39. Tomi Engdahl says:

    The numbers are out: Microsoft beat Google out of the starting blocks on AI https://www.is.fi/digitoday/art-2000009475106.html

  40. Tomi Engdahl says:

    ChatGPT + Code Interpreter = Magic
    https://andrewmayneblog.wordpress.com/2023/03/23/chatgpt-code-interpreter-magic/

    tl;dr: OpenAI is testing the ability to run code and use third-party plugins in ChatGPT.
    OpenAI has announced that we’re developing plugins for ChatGPT that will extend its capabilities. [Link] Plugins range from third-party tools like WolframAlpha and OpenTable, to our browsing plugin and Code Interpreter that can generate code, run code, upload and download files ranging from csv data to images and evaluate the output all within the ChatGPT interface.
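
    For a sense of what that means in practice, Code Interpreter writes and runs ordinary Python in a sandbox. A sketch of the kind of snippet it might produce for “plot monthly totals from this uploaded CSV”; the file and column names are hypothetical:

    # The sort of snippet Code Interpreter might generate and execute;
    # "sales.csv" and its columns stand in for an uploaded file.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("sales.csv", parse_dates=["date"])
    monthly = df.set_index("date")["amount"].resample("M").sum()

    monthly.plot(kind="bar", title="Monthly sales totals")
    plt.tight_layout()
    plt.savefig("monthly_sales.png")  # handed back to the user as a downloadable file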

  41. Tomi Engdahl says:

    The secret history of Elon Musk, Sam Altman, and OpenAI
    https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai

    After three years, Elon Musk was ready to give up on the artificial intelligence research firm he helped found, OpenAI.
    The nonprofit had launched in 2015 to great fanfare with backing from billionaire tech luminaries like Musk and Reid Hoffman, who had as a group pledged $1 billion. It had lured some of the top minds in the field to leave big tech companies and academia.
    But in early 2018, Musk told Sam Altman, another OpenAI founder, that he believed the venture had fallen fatally behind Google, people familiar with the matter said.
    And Musk proposed a possible solution: He would take control of OpenAI and run it himself.

  42. Tomi Engdahl says:

    AI COMPANY WITH ZERO REVENUE RAISES $150 MILLION
    https://futurism.com/the-byte/ai-company-no-revenue-fundraising

    With the help of a brand new $150 million cash infusion from Andreessen Horowitz, a 16-month-old AI chatbot startup called Character.ai just reached a $1 billion valuation — despite having yet to generate any revenue.

    Founded by two ex-Googlers, the startup’s idea is to host various AI-powered personalities, from celebrities to anime characters to Twitch streamers to historical figures and more, all of whom users can interact with via text. Wanna ask AI Taylor Swift what her favorite song is? Albert Einstein what his greatest accomplishment was? Go for it, kid.

