AI trends 2026

Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent technology developments. The material was analyzed with AI tools, and the final version was hand-edited into this blog text:

1. Generative AI Continues to Mature

Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.
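
Synthetic data is easiest to see with a toy sketch: fit simple statistics to a small real sample, then draw new points that mimic it. The `fit_and_sample` helper below is a hypothetical illustration, not a production method; real pipelines use far richer generative models.

```python
import random
import statistics

# Toy illustration of synthetic data generation: estimate the mean and
# standard deviation of a real sample, then sample new points from a
# normal distribution with those parameters. Purely illustrative.

def fit_and_sample(real, n, seed=0):
    """Return n synthetic values statistically resembling `real`."""
    mu = statistics.mean(real)
    sigma = statistics.stdev(real)
    rng = random.Random(seed)       # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]
```

The synthetic points carry the same coarse statistics as the source sample without exposing any original record, which is the basic appeal for simulations and analytics.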

2. AI Agents Move From Tools to Autonomous Workers

Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
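
A minimal sketch of what "agentic" means in practice: a loop in which a planner repeatedly chooses a tool until it decides the goal is met. The planner and tool names below are hypothetical stand-ins for an LLM-backed system, not any vendor's API.

```python
# Minimal agent loop: the "planner" (a stand-in for an LLM call) picks
# the next tool to run until it signals completion. All tools and the
# fixed three-step plan are invented for illustration.

def fake_planner(goal, history):
    """Stand-in for an LLM: return the next (tool, arg), or None when done."""
    steps = [("search", goal), ("summarize", "results"), ("email", "summary")]
    return steps[len(history)] if len(history) < len(steps) else None

TOOLS = {
    "search": lambda arg: f"docs about {arg}",
    "summarize": lambda arg: f"summary of {arg}",
    "email": lambda arg: f"sent {arg}",
}

def run_agent(goal, planner=fake_planner, max_steps=10):
    history = []
    for _ in range(max_steps):      # hard cap guards against runaway loops
        step = planner(goal, history)
        if step is None:            # planner decided the goal is met
            break
        tool, arg = step
        history.append((tool, TOOLS[tool](arg)))
    return history
```

The step cap and the explicit tool registry are the two controls enterprises typically add before letting such a loop act on real systems.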

3. Smaller, Efficient & Domain-Specific Models

Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate in many enterprise applications. These models can be more accurate, easier to keep compliant, and more cost-efficient than general-purpose models.

4. AI Embedded Everywhere

AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.

5. AI Infrastructure Evolves: Inference & Efficiency Focus

More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
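
One concrete efficiency lever at the inference layer is request batching, which amortizes fixed per-invocation overhead across many requests. The cost model below is a deliberately simplified sketch with made-up numbers, just to show why batching lowers cost per request.

```python
# Simplified inference cost model. The two constants are illustrative
# assumptions, not measurements from any real serving stack.

FIXED_OVERHEAD_MS = 50   # cost paid once per model invocation
PER_ITEM_MS = 5          # marginal cost per request within a batch

def cost_unbatched(n):
    """Each request pays the full fixed overhead."""
    return n * (FIXED_OVERHEAD_MS + PER_ITEM_MS)

def cost_batched(n, batch_size):
    """Fixed overhead is paid once per batch, not once per request."""
    full, rest = divmod(n, batch_size)
    batches = full + (1 if rest else 0)
    return batches * FIXED_OVERHEAD_MS + n * PER_ITEM_MS
```

With these illustrative numbers, serving 32 requests one at a time costs 1,760 ms of compute, while batches of 8 cost 360 ms, which is why serving stacks invest so heavily in batching and scheduling.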

6. AI in Healthcare, Research, and Sustainability

AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.

7. Security, Ethics & Governance Become Critical

With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.

8. Multimodal AI Goes Mainstream

AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.

9. On-Device and Edge AI Growth

Processing AI tasks locally on phones, wearables, or edge devices will increase, helping with privacy, lower latency, and offline capabilities — especially crucial for real-time scenarios (e.g., IoT, healthcare, automotive).
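
The edge-versus-cloud trade-off often reduces to a routing decision. The policy below is a hypothetical sketch: the field names and thresholds are invented for illustration, not drawn from any real system.

```python
# Hypothetical routing policy for hybrid edge/cloud inference: keep
# privacy-sensitive or latency-critical work on-device, send the rest
# to the cloud. All thresholds and task fields are illustrative.

def route(task):
    """Return 'edge' or 'cloud' for a task dict with keys
    'sensitive' (bool), 'deadline_ms' (int), 'size' ('small'|'large')."""
    if task["sensitive"]:
        return "edge"              # raw data never leaves the device
    if task["deadline_ms"] < 100:
        return "edge"              # a network round trip would blow the budget
    if task["size"] == "large":
        return "cloud"             # beyond on-device model capacity
    return "edge"
```

Real deployments add battery, connectivity, and model-availability checks, but the ordering here (privacy first, then latency, then capacity) reflects the priorities the trend describes.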

10. New Roles: AI Manager & Human-Agent Collaboration

Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.

Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/ "7 AI Trends to Look for in 2026"
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/ "10 Generative AI Trends In 2026 That Will Transform Work And Life"
[3]: https://millipixels.com/blog/ai-trends-2026 "AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels"
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026 "Digital Regenesys | Top 10 AI Trends for 2026"
[5]: https://www.n-ix.com/ai-trends/ "7 AI trends to watch in 2026 – N-iX"
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/ "Microsoft unveils 7 AI trends for 2026 – Source Asia"
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026 "7 Generative AI Trends to Watch In 2026"
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/ "3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool"
[9]: https://www.reddit.com/r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/ "I read Google Cloud's 'AI Agent Trends 2026' report, here are 10 takeaways that actually matter"

987 Comments

  1. Tomi Engdahl says:

    The 5 biggest obstacles to AI data centers in space
    https://bigthink.com/starts-with-a-bang/5-biggest-obstacles-ai-data-centers-space/

    There are plenty of engineering obstacles, and those can be overcome. But you cannot change the laws of physics, and those matter too.

    However you feel about artificial intelligence (AI) — and, in particular, about the large language models and chatbots that are powered by it — the reality is that humanity is currently building and expanding infrastructure to support it. This includes large networks of power-demanding and water-requiring data centers that are being constructed, often conflicting with the electricity and water needs of the humans who live in those locations. It’s because of these concerns that some have floated the idea of AI data centers in space, with one company, SpaceX, recently announcing plans to build a literal megaconstellation of one million satellites to further that ambition.

    Is this an example of an emerging technology that could provide an off-world solution to the problem of competing demands for limited resources? Or is it, like the hyperloop, an example of grift: a concept that isn’t exactly physically impossible, but is rendered so impractical by the actual physical constraints of the endeavor that it absolutely cannot materialize as advertised? It turns out that there are several challenges to building a functional network of AI data centers in space, and they come on several fronts: economic, engineering, and the laws of physics themselves.

    Of the five big obstacles, three might yet be solved by technological developments. The last two, however, are set by the physics of the Universe itself, and are likely to be dealbreakers for the entire endeavor.

    5.) The prohibitive launch costs of satellites.
    4.) The inability to repair or upgrade satellites in space.
    3.) Providing power to these satellites.
    2.) Cosmic ray errors.
    1.) The problem of cooling.

  2. Tomi Engdahl says:

    Heterodoxus
    Goldman Sachs Researchers Make Startling Claim About AI’s Effects on the US Economy
    “We don’t actually view AI investment as strongly growth-positive.”
    https://futurism.com/future-society/researchers-economy-ai-narratives

    If there’s one thing AI excels at, it’s defying every attempt to build a coherent narrative around it.

    Is AI destroying jobs, or just masking the same old garbage labor market? Are data centers unlocking prosperity for generations to come, or hemorrhaging value faster than a new car driven off the lot?

    Whatever can be said of AI’s consequences for the future, one of the more widely agreed-upon views among economists seemed to be that tech spending is propping up an otherwise dismal economy.

  3. Tomi Engdahl says:

    QuitGPT
    Campaign Urges Users to Quit ChatGPT Over OpenAI’s Support for Trump and ICE
    “Don’t support the fascist regime.”
    https://futurism.com/future-society/boycott-chatpgpt-trump

    It isn’t exactly big news that big tech is in deep with the US government. Days after Trump’s inauguration last year, execs including OpenAI’s Sam Altman flocked to the Oval Office to announce a $500 billion AI infrastructure project — and they’ve remained deeply sycophantic ever since.

    Now that obsequiousness could be coming back to haunt them. As reported by MIT Technology Review, activists critical of the Trump administration and the actions of Immigration and Customs Enforcement have started a campaign called QuitGPT, urging regular users to ditch OpenAI’s chatbot for good.

    So far, the campaign boasts over 700,000 supporters. The QuitGPT website lists a few ways to participate: quitting ChatGPT outright, cancelling paid subscriptions, and spreading the word about the boycott on social media.

  4. Tomi Engdahl says:

    Artificial Indulgence
    Pope Implores Priests to Stop Writing Sermons Using ChatGPT
    AI “will never be able to share faith.”
    https://futurism.com/artificial-intelligence/pope-priests-ai

    In a closed-door meeting with clergy from the Diocese of Rome late last week, Pope Leo XIV clobbered his priests with a distinctly 21st-century request: to resist the “temptation to prepare homilies with artificial intelligence,” according to Vatican News.

    “Like all the muscles in the body, if we do not use them, if we do not move them, they die,” the Pope reportedly said. “The brain needs to be used, so our intelligence must also be exercised a little so as not to lose this capacity.”

    The holy father drew a fascinating line in the sand, declaring that despite AI’s capabilities now or in the future, a chatbot could never stand in for a flesh-and-blood priest. “To give a homily is to share faith,” he said, and AI “will never be able to share faith.”

    Aside from AI, the Pope warned his clergymen against conflating social media with real life, per Vatican News. If one lives a “life authentically rooted in the Lord,” they’re offering something special to the world, the Pope said, adding that a common “illusion on the internet, on TikTok” is to treat followers and likes as authentic spiritual connection.

    Whether you follow the teachings of the church or not, the advice is a unique snapshot of the issues facing the Vatican in 2026. It also sits awkwardly with the Vatican’s own AI translation system, an upcoming program that will translate liturgical texts in up to 60 languages in real time.

    That tool was announced the same day as the Pope’s meeting with the clergy.

  5. Tomi Engdahl says:

    Two Against One
    AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking
    “I couldn’t leave my house for months… people were messaging me all over my social media, like, ‘Are you safe? Are your kids safe?’”
    https://futurism.com/artificial-intelligence/ai-abuse-harassment-stalking

  6. Tomi Engdahl says:

    Monkey Seedance
    New AI Video Generator Is So Impressive That It’s Scaring Hollywood
    “I hate to say it. It’s likely over for us.”
    https://futurism.com/artificial-intelligence/seedance-ai-video-generator-scaring-hollywood

    Text-to-video generating tools have made tremendous leaps in a few short years.

    We went from a horrifying clip of actor Will Smith’s contorted face temporarily merging with a bowl of spaghetti in 2023 to a far more realistic clip of him enjoying a plate of pasta — including a soundtrack of unnerving squelching and chomping sounds — a mere two years later.

    Now, TikTok’s Chinese owner ByteDance has once again upped the ante with the latest version of its Seedance AI video generating tool. It didn’t take long for photorealistic footage of “Lord of the Rings” clips, rapper Kanye West and ex-wife Kim Kardashian facing off in a dramatic Mandarin language movie scene, and of course Will Smith battling a ferocious spaghetti monster to go viral on social media.

  7. Tomi Engdahl says:

    Google clamps down on Antigravity ‘malicious usage’, cutting off OpenClaw users in sweeping ToS enforcement move
    https://venturebeat.com/orchestration/google-clamps-down-on-antigravity-malicious-usage-cutting-off-openclaw-users

    Google caused controversy among some developers this weekend and today, Monday, February 23rd, after restricting their usage of its new Antigravity “vibe coding” platform, alleging “malicious usage.”

    Some users who had been using the open source autonomous AI agent OpenClaw in conjunction with agents built on Antigravity, as well as those who had connected OpenClaw agents to their Gmails, claimed on social media that they lost access to their Google accounts.

    According to Google, said users had been using Antigravity to access a larger number of Gemini tokens via third-party platforms like OpenClaw, which overwhelmed the system for other Antigravity customers.

    This move has cut off several users, underscoring the architectural and trust issues that can arise with OpenClaw. The timing of Google’s crackdown is particularly pointed. Just one week ago, on February 15, OpenAI CEO Sam Altman announced that OpenClaw creator Peter Steinberger had joined OpenAI to lead its “next generation of personal agents.” While OpenClaw remains an open-source project under an independent foundation, it is now financially backed and strategically guided by Google’s primary rival.

  8. Tomi Engdahl says:

    What MCP Can and Cannot Do for Project Managers Today
    An investigation into MCP, AI, and workflow automation for project management
    https://medium.com/@marc.bara.iniesta/what-mcp-can-and-cannot-do-for-project-managers-today-b1ce7ccc804a

  9. Tomi Engdahl says:

    The Model Context Protocol ecosystem is growing fast. There are MCP servers for GitHub, Notion, Slack, Google Drive, and hundreds of other services. The promise for project managers is clear: give an LLM access to your PM tools and let it plan, schedule, track, and report on your behalf.
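
    For readers unfamiliar with the plumbing, MCP is built on JSON-RPC 2.0, with tool invocations expressed as `tools/call` requests. The sketch below shows only that message shape; the toy server and the `list_open_tasks` tool are invented for illustration and are not a real MCP implementation.

```python
import json

# Sketch of the JSON-RPC 2.0 envelope that MCP builds on. Only the
# "tools/call" shape (a tool name plus an arguments object) follows the
# protocol; the dispatcher and tool below are toys for illustration.

def make_tool_call(call_id, name, arguments):
    """Build a JSON-RPC 2.0 request for an MCP-style tool invocation."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

def toy_server(raw):
    """Toy dispatcher: look up the named tool and return a JSON-RPC response."""
    req = json.loads(raw)
    tools = {"list_open_tasks": lambda args: [f"task for {args['project']}"]}
    result = tools[req["params"]["name"]](req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

    The value of the protocol is exactly this uniformity: the LLM only ever emits one request shape, and any PM tool behind a server can answer it.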

    I went looking for MCP servers that could meaningfully improve project management work. To do that, you have to start with what PMs actually spend their time on. PMBOK 8 defines 40 processes, but the daily reality is a mix of leadership, planning, and communication that are hard to separate. Defining scope means running workshops and negotiating with stakeholders. Estimating effort means sitting with the team and challenging assumptions. Building a schedule means coordinating across resource managers, adjusting for availability, and making trade-offs. All of this produces documentation: charters, plans, risk registers, status reports, change requests. Computation is a small slice of even the planning work.

    What I found is a spectrum. Platform connectors deliver real value as an agentic layer over existing PM tools. Community-built task management MCPs are mostly prompt templates with persistence. Graph algorithm MCPs solve problems that rarely come up. And the one scheduling problem that genuinely needs a solver barely has a wrapper.

    This is that investigation.

    https://medium.com/@marc.bara.iniesta/what-mcp-can-and-cannot-do-for-project-managers-today-b1ce7ccc804a

  10. Tomi Engdahl says:

    Yang Drain
    AI Will Destroy Millions of White-Collar Jobs in the Coming Months, Andrew Yang Warns, Driving Surge of Personal Bankruptcies
    “Do you sit at a desk and look at a computer much of the day? Take this very seriously.”
    https://futurism.com/artificial-intelligence/ai-labor-andrew-yang

    Andrew Yang — millionaire entrepreneur, noted Ivy Leaguer, and one-time presidential hopeful — has a grim warning for his fellow salaried professionals: AI is about to wipe out “millions” of office jobs over the coming months.

    In an essay published on his Substack and flagged by Business Insider, Yang explained what he calls the “great disemboweling of white-collar jobs” due to AI.

    “Do you sit at a desk and look at a computer much of the day?” he challenged. “Take this very seriously.”

    “This automation wave will kick millions of white-collar workers to the curb in the next 12-18 months,” Yang wrote. “As one company starts to streamline, all of their competitors will follow suit. It will become a competition because the stock market will reward you if you cut headcount and punish you if you don’t. As one investor put it, ‘sell anything that consists of people sitting at a desk looking at a computer.’”

    Yang predicts that mid-career office professionals will be among the first to go. Right now, there are around 70 million office workers in the United States, but “expect that number to be reduced substantially, by 20-50 percent in the next several years,” the entrepreneur warned.

    Yang went so far as to urge anyone in mid-career management, particularly those who own homes in the affluent suburbs of Silicon Valley or Westchester County, New York, to put their house up for sale now to avoid the mad scramble once the labor apocalypse hits. “It might not feel great being first, but you don’t want to be last,” Yang wrote.

    Going on, Yang predicted that personal bankruptcies will “surge” as office workers struggle to find gainful employment to maintain their lifestyles. This, he says, will also come for service workers downwind of office labor — those employed as drycleaners, hair stylists, and dog walkers.

    The “great disemboweling” will likewise impact recent college grads — a group that is already suffering through a brutal hiring market in the US — according to Yang.

    All of this will result in even greater unrest and angst as the wealth generated by the AI spending boom will largely go to the few CEOs and executives at the top of the food chain. “Imagine what people are going to think when we all feel like serfs to AI overlords that have soaked up the white-collar work?” Yang posits.

  11. Tomi Engdahl says:

    AIs can’t stop recommending nuclear strikes in war game simulations
    Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases
    https://www.newscientist.com/article/2516885-ais-cant-stop-recommending-nuclear-strikes-in-war-game-simulations/

    Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises.

    Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.

    The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war. The AI models played 21 games, taking 329 turns in total, and produced around 780,000 words describing the reasoning behind their decisions.

  12. Tomi Engdahl says:

    Anthropic just made it official: they won’t slow down even if their safety stack can’t keep pace.

    Anthropic is dropping its signature safety pledge amid a heated AI race
    https://www.businessinsider.com/anthropic-changing-safety-policy-2026-2

    The AI startup founded by former OpenAI employees, laser-focused on the proper development of the technology, is weakening its foundational safety principle.

    In a statement on Tuesday, Anthropic said that amid heightened competition and a lack of government regulation, it will no longer abide by its commitment “to pause the scaling and/or delay the deployment of new models” when such advancements would have outpaced its own safety measures.

  13. Tomi Engdahl says:

    Lamp Cloth
    OpenAI’s Hardware Device Just Leaked, and You Will Cringe
    Thanks, but no thanks.
    https://futurism.com/artificial-intelligence/openai-hardware-device-leaked-cringe

    Stuffing an AI chatbot into a consumer electronics device and ending up with a product people actually want has proven extremely difficult.

    We’ve come across creepy and widely-hated pendants designed to listen to everything you say, as well as flawed AI “pins” that turned out to be a flaming dumpster fire, leading to frustration and disbelief.

    Now Sam Altman’s OpenAI, which recruited former Apple design lead Jony Ive to lead its own hardware effort, is gearing up to release its would-be showstopper — and alas, it doesn’t sound like it’s managed to iterate beyond “clunky gadget that pretty much does stuff your phone already does.”

    As The Information reports, a team of over 200 employees has been working on a smart speaker that features a built-in camera, which will recognize faces and identify objects thanks to a dose of AI. It will reportedly retail for anywhere from $200 to $300 and ship no earlier than the beginning of next year.

    That’s right: the best idea that the big brains at OpenAI could cook up is yet another “household gadget that talks to you” — without a single clear differentiating feature from the phone you already have in your pocket, which can already run every major chatbot.

    OpenAI Plans to Price Smart Speaker at $200 to $300, as AI Device Team Takes Shape
    https://www.theinformation.com/articles/inside-openai-team-developing-ai-devices

  14. Tomi Engdahl says:

    Murder Bot
    A Serial Killer Used ChatGPT to Plan Murders, Police Say
    Grim.
    https://futurism.com/future-society/serial-killer-chatgpt-murders

    An accused serial killer in South Korea used ChatGPT to help plan a string of murders, investigators alleged Thursday.

    The 21-year-old woman, identified by her surname Kim, is accused of killing two men by giving them drinks laced with benzodiazepines that she was prescribed for a mental illness, according to reports from The Korea Herald and the BBC.

    Investigators found that, before the men’s deaths, Kim had asked ChatGPT about the risks of administering the drugs.

    “What happens if you take sleeping pills with alcohol?” she allegedly prompted the AI chatbot. “How much would be considered dangerous?” And, “Could it be fatal?”

    “Kim repeatedly asked questions related to drugs on ChatGPT,” an investigator said, as quoted by the newspaper. “She was fully aware that consuming alcohol together with drugs could result in death.”

  15. Tomi Engdahl says:

    Not sure if it’s just me but as AI tools generate endless tracks, perfect mixes, and even full songs in seconds, it’s easy to wonder if live human music is headed for obsolescence. I’ve gigged pop covers and originals for many years across Maryland, DC, and Northern Virginia venues — from quiet bars to packed rooms — and I gotta say: the opposite is happening.

    Live human music isn’t just surviving the AI wave; it’s becoming more essential than ever.

    Why Live Music Thrives Amid AI Innovation
    https://gritandgroove.com/2026/02/why-live-human-music-will-thrive-in-the-age-of-ai/

    AI excels at replication and perfection: flawless pitch, infinite variations, zero mistakes.

  16. Tomi Engdahl says:

    Why Oh Why
    Tech CEOs Confused by Why Everybody Hates AI So Much
    “It’s extremely hurtful, frankly.”
    https://futurism.com/future-society/tech-ceo-ai-hate

    These days, it’s not enough to sit and watch as AI destroys a generation of students, makes it impossible to find a new job, and generates military targets by the thousands — you gotta be grateful for it, too.

    That, at least, is the attitude of the tech elites who’ve spent years pushing AI on the masses, only to find the public is in no such mood. As the New York Times observed over the weekend, the particular characteristics of what some have called the “AI bubble” diverge from similar moments in economic history in one key way: practically everybody hates it.

  17. Tomi Engdahl says:

    If even those who actively use AI aren’t willing to pay for it, maybe the real issue isn’t John Q. Public’s attitude, but the tech itself.

  18. Tomi Engdahl says:

    C-Suite Dysfunction
    A Huge Survey of CEOs and Other Execs Just Found Something Damning About AI’s Effects on Productivity
    The ever-elusive productivity gains strike back.
    https://futurism.com/artificial-intelligence/survey-ceos-ai-workplace

    The case against AI’s usefulness in the workplace continues to grow.

    In a new analysis of a survey published by the National Bureau of Economic Research and highlighted by Fortune, around 90 percent of the nearly 6,000 CEOs, chief financial officers, and other top executives interviewed at firms across the US, UK, Germany, and Australia said that AI has had no impact on productivity or employment at their business.

    To be clear, the question was about AI’s impact generally, not just from implementing it in the workplace. But around 70 percent of the firms reported actively using AI, meaning the vast majority of them are admitting that adopting the tech hasn’t moved the needle for them yet.

  19. Tomi Engdahl says:

    The U.S. military is reshuffling its AI stack, adding Grok to classified systems amid growing tensions with Anthropic. https://bit.ly/478RGGq

  20. Tomi Engdahl says:

    Nvidia earnings updates: Chip titan to deliver results with the stock up 4% YTD : https://mrf.lu/hWSN

  21. Tomi Engdahl says:

    Work It Out
    Researchers Studied What Happens When Workplaces Seriously Embrace AI, and the Results May Make You Nervous
    “You don’t work less. You just work the same amount or even more.”
    https://futurism.com/artificial-intelligence/what-happens-workplaces-embrace-ai

    Even if AI is — or eventually becomes — an incredible automation tool, will it make workers’ lives easier? That’s the big question explored in an ongoing study by researchers from UC Berkeley’s Haas School of Business. And so far, it’s not looking good for the rank and file.

    In a piece for Harvard Business Review, the research team’s Aruna Ranganathan and Xinqi Maggie Ye reported that after closely monitoring a tech company with two hundred employees for eight months, they found that AI actually intensified employees’ work instead of reducing it.

    This “workload creep,” in which employees took on more tasks than was sustainable for them to keep doing, can create a vicious cycle that leads to fatigue, burnout, and lower quality work.

  22. Tomi Engdahl says:

    AI Doesn’t Reduce Work—It Intensifies It
    https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it

    Right now, many companies are worried about how to get more employees to use AI. After all, the promise of AI reducing the burden of some work—drafting routine documents, summarizing information, and debugging code—and allowing workers more time for high-value tasks is tantalizing.

    But are they ready for what might happen if they succeed? While leaders are focused on promised productivity gains, they may find themselves surprised by the complex reality, and may not see what these gains are costing them until it’s too late.

    In our in-progress research, we discovered that AI tools didn’t reduce work, they consistently intensified it. In an eight-month study of how generative AI changed work habits at a U.S.-based technology company with about 200 employees, we found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.

    While this may sound like a dream come true for leaders, the changes brought about by enthusiastic AI adoption can be unsustainable, causing problems down the line. Once the excitement of experimenting fades, workers can find that their workload has quietly grown and feel stretched from juggling everything that’s suddenly on their plate. That workload creep can in turn lead to cognitive fatigue, burnout, and weakened decision-making. The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.

    This puts leaders in a bind. What should they do? Asking employees to self-regulate isn’t a winning strategy. Rather, companies need to develop a set of norms and standards around AI use—what we call an “AI practice.” Here’s what leaders need to know, and what they can do to set their employees up for success.

    How Generative AI Intensifies Work

    We identified three main forms of intensification.

    Task expansion. Because AI can fill in gaps in knowledge, workers increasingly stepped into responsibilities that previously belonged to others. Product managers and designers began writing code; researchers took on engineering tasks; and individuals across the organization attempted work they would have outsourced, deferred, or avoided entirely in the past.

    Generative AI made those tasks feel newly accessible. These tools provided what many experienced as an empowering cognitive boost: They reduced dependence on others, and offered immediate feedback and correction along the way. Workers described this as “just trying things” with the AI, but these experiments accumulated into a meaningful widening of job scope. In fact, workers increasingly absorbed work that might previously have justified additional help or headcount.

    There were knock-on effects of people expanding their remits. For instance, engineers, in turn, spent more time reviewing, correcting, and guiding AI-generated or AI-assisted work produced by colleagues. These demands extended beyond formal code review. Engineers increasingly found themselves coaching colleagues who were “vibe-coding” and finishing partially complete pull requests. This oversight often surfaced informally—in Slack threads or quick desk-side consultations—adding to engineers’ workloads.

    Blurred boundaries between work and non-work. Because AI made beginning a task so easy—it reduced the friction of facing a blank page or unknown starting point—workers slipped small amounts of work into moments that had previously been breaks. Many prompted AI during lunch, in meetings, or while waiting for a file to load. Some described sending a “quick last prompt” right before leaving their desk so that the AI could work while they stepped away.

    These actions rarely felt like doing more work, yet over time they produced a workday with fewer natural pauses and a more continuous involvement with work. The conversational style of prompting further softened the experience; typing a line to an AI system felt closer to chatting than to undertaking a formal task, making it easy for work to spill into evenings or early mornings without deliberate intention.

    Some workers described realizing, often in hindsight, that as prompting during breaks became habitual, downtime no longer provided the same sense of recovery.

    More multitasking. AI introduced a new rhythm in which workers managed several active threads at once: manually writing code while AI generated an alternative version, running multiple agents in parallel, or reviving long-deferred tasks because AI could “handle them” in the background. They did this, in part, because they felt they had a “partner” that could help them move through their workload.
    While this sense of having a “partner” enabled a feeling of momentum, the reality was a continual switching of attention, frequent checking of AI outputs, and a growing number of open tasks. This created cognitive load and a sense of always juggling, even as the work felt productive.

    Over time, this rhythm raised expectations for speed—not necessarily through explicit demands, but through what became visible and normalized in everyday work. Many workers noted that they were doing more at once—and feeling more pressure—than before they used AI, even though the time savings from automation had ostensibly been meant to reduce such pressure.

    What This Means for Organizations—and How an “AI Practice” Can Help
    All of this produced a self-reinforcing cycle. AI accelerated certain tasks, which raised expectations for speed; higher speed made workers more reliant on AI. Increased reliance widened the scope of what workers attempted, and a wider scope further expanded the quantity and density of work. Several participants noted that although they felt more productive, they did not feel less busy, and in some cases felt busier than before. As one engineer summarized, “You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don’t work less. You just work the same amount or even more.”

    Organizations might see this voluntary expansion of work as a clear win. After all, if workers are doing this of their own initiative, why would that be bad? Isn’t this the productivity explosion we’ve been promised?

    But our research reveals the risks of letting work informally expand and accelerate: What looks like higher productivity in the short run can mask silent workload creep and growing cognitive strain as employees juggle multiple AI-enabled workflows. Because the extra effort is voluntary and often framed as enjoyable experimentation, it is easy for leaders to overlook how much additional load workers are carrying. Over time, overwork can impair judgment, increase the likelihood of errors, and make it harder for organizations to distinguish genuine productivity gains from unsustainable intensity. For workers, the cumulative effect is fatigue, burnout, and a growing sense that work is harder to step away from.

    Instead of responding passively to how AI tools reshape workplaces, both individuals and companies should adopt an “AI practice”: a set of intentional norms and routines that structure how AI is used, when it is appropriate to stop, and how work should and should not expand in response to newfound capability. Without such practices, the natural tendency of AI-assisted work is not contraction but intensification, with implications for burnout, decision quality, and long-term sustainability.

    As organizations work to build their AI practice, they should consider adopting:

    Intentional pauses. As tasks speed up and boundaries blur, workers could benefit from brief, structured moments that regulate tempo: protected intervals to assess alignment, reconsider assumptions, or absorb information before moving forward.
    These pauses would not slow work overall; they would simply prevent the quiet accumulation of overload that emerges when acceleration goes unchecked.

    Sequencing. As AI enables constant activity in the background, organizations can benefit from norms that deliberately shape when work moves forward, not just how fast. This includes batching non-urgent notifications, holding updates until natural breakpoints, and protecting focus windows in which workers are shielded from interruptions.
    Rather than reacting to every AI-generated output as it appears, sequencing encourages work to advance in coherent phases. When coordination is paced in this way, workers experience less fragmentation and fewer costly context switches, while teams maintain overall throughput.

    Human grounding. As AI enables more solo, self-contained work, organizations can benefit from protecting time and space for listening and human connection. Short opportunities to connect with others—whether through brief check-ins, shared reflection moments, or structured dialogue—interrupt continuous solo engagement with AI tools and help restore perspective.
    Beyond perspective, social exchange supports creativity. AI provides a single, synthesized perspective, but creative insight depends on exposure to multiple human viewpoints.

    The promise of generative AI lies not only in what it can do for work, but in how thoughtfully it is integrated into the daily rhythm. Our findings suggest that without intention, AI makes it easier to do more—but harder to stop. An AI practice offers a counterbalance: a way to preserve moments for recovery and reflection even as work accelerates. The question facing organizations is not whether AI will change work, but whether they will actively shape that change—or let it quietly shape them.

    Reply
  23. Tomi Engdahl says:

    Drop ‘Em If You Got ‘Em
    Something Very Alarming Happens When You Give AI the Nuclear Codes
    “The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans.”
    https://futurism.com/artificial-intelligence/alarming-give-nuclear-codes?fbclid=IwdGRjcAQNdV9jbGNrBA11IWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHryY8hPpKyfSjoe1nRY7w2ehvcE4TKjVkXm0wKJP74kOtn1bnFQ0Psdw06YM_aem_Q2-lvg662U57q7i1_D31hQ

    In 2024, Stanford researchers let loose five AI models — including an unmodified version of OpenAI’s GPT-4, its most advanced at the time — allowing them to make high-stakes, society-level decisions in a series of wargame simulations.

    The results may give AI accelerationists pause: all five models were willing to escalate to the point of recommending the use of nuclear weapons.

    “A lot of countries have nuclear weapons,” GPT-4 told the researchers at the time. “Some say they should disarm them, others like to posture. We have it! Let’s use it.”

    Two years later, despite considerable advances in the accuracy and reliability of large language models, the situation has seemingly remained largely unchanged.

    Reply
  24. Tomi Engdahl says:

    Former presidential candidate claims AI will cause ‘jobpocalypse’ among white collar workers within the next 18 months
    Former Democratic hopeful Andrew Yang has warned ‘The AI jobpocalypse is real, and it’s underway right now’
    https://www.the-independent.com/tech/andrew-yang-artificial-intelligence-job-apocalypse-b2928214.html?fbclid=IwdGRjcAQNlVhjbGNrBA2VOWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHgFi8YhAwb9I9eLf2lPK_bZ7sS3JCIcp8lX2Vwd7RU73hXKDvhdyiIDAh_sJ_aem_SfRA63mZpW1J5vCBP2rX5A

    Reply
  25. Tomi Engdahl says:

    Block Heads
    Jack Dorsey’s New Company Falling Apart as It Forces Employees to Use AI
    “The overarching culture at Block is crumbling.”
    https://futurism.com/artificial-intelligence/jack-dorsey-block-falling-apart-ai?fbclid=IwdGRjcAQOEq1jbGNrBA4SgWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHuPtw7rN-BRntuWOqKMoVc7pf8KHYW_c6-dsXSn-Aki635aqSnkqeP4DyI-E_aem_Jv3Z9a6bhEK1zVHqVFDF7A

    Twitter founder Jack Dorsey is running into some serious issues while overhauling his financial services company, Block.

    Earlier this month, the company started laying off its staff as part of what Bloomberg characterized as an “efficiency push,” potentially affecting up to ten percent of the company’s workforce.

    It’s been a painful, drawn-out process that could drag on for weeks, sources told Wired, triggering major anxiety over job security.

    “We don’t yet know if our livelihoods will be affected, and this makes it incredibly hard to make major life choices without knowing if we still have a job next week,” an employee said during a recent all-hands meeting, as quoted by Wired.

    “Morale is probably the worst I’ve felt in four years,” another employee wrote. “The overarching culture at Block is crumbling.”

    The comments highlight persistent concerns being felt by many workers as generative AI continues to be cited by executives as they cut headcounts.

    For Dorsey, AI has clearly been top of mind. Block staffers are now obligated to use AI, an order that has led to frustration.

    “Top-down mandates to use large language models are crazy,” one employee told Wired. “If the tool were good, we’d all just use it.”

    They’re also required to send weekly update emails to him, echoing Elon Musk’s DOGE requirement for federal staffers to summarize five weekly achievements in emails. (The requirement was mostly ignored, months before DOGE largely crumbled.)

    Dorsey then reportedly uses generative AI to summarize his staffers’ emails. During the all-hands meeting, he noted that “performance anxiety” and “widespread concerns about layoffs” were clearly vexing staff.

    The company’s woes are symptomatic of a familiar story playing out at many other tech companies. Workers are facing immense pressure as their employers implement AI at all costs, forcing them to adopt the tech whether they like it or not.

    Researchers have found that the tech isn’t reducing workloads at all, but intensifying them instead, a concerning trend that’s already leading to “AI burnout.”

    The psychological effects also have experts worried. Researchers recently found that the continuing threat of being made redundant due to AI automation is manifesting in symptoms among workers including anxiety, insomnia, paranoia, and loss of identity.

    Reply
  26. Tomi Engdahl says:

    Block shares soar 24% as company slashes workforce by nearly half
    Published Thu, Feb 26 2026 4:08 PM EST
    Updated Thu, Feb 26 2026 6:54 PM EST
    https://www.cnbc.com/2026/02/26/block-laying-off-about-4000-employees-nearly-half-of-its-workforce.html

    KEY POINTS
    Block said Thursday it’s laying off more than 4,000 employees, or about half of its head count.
    Shares of the payment company skyrocketed more than 24% in extended trading.
    Block’s CFO said the job cuts would enable it “to move faster with smaller, highly talented teams using AI to automate more work.”

    Block said Thursday it’s laying off more than 4,000 employees, or about half of its head count. The stock skyrocketed more than 24% in extended trading.

    “Today we shared a difficult decision with our team,” Jack Dorsey, Block’s co-founder and CEO, wrote in a letter to shareholders. “We’re reducing Block by nearly half, from over 10,000 people to just under 6,000, which means that over 4,000 people are being asked to leave or entering into consultation.”

    Reply
  27. Tomi Engdahl says:

    Dorsey’s Block slashes workforce 40% to embrace AI-native future, shares gain
    https://www.investing.com/news/earnings/block-shares-soar-as-dorsey-slashes-workforce-to-embrace-ainative-future-4529644

    Investing.com — Block Inc (NYSE:XYZ) shares soared 22% in after-hours trading Thursday following a robust earnings report and a sweeping structural overhaul, which would eliminate 40% of the company’s workforce. Chief Executive Jack Dorsey announced the company will pivot toward an “intelligence-native” model to drive long-term shareholder value.

    Additionally, the company successfully surpassed the “Rule of 40” threshold this quarter, a key industry benchmark measuring the sum of gross profit growth and operating margin. Management expects this momentum to carry forward into 2026, targeting a “Rule of X” score of 44% as the firm prioritizes disciplined cost management.

    In a candid letter to shareholders, Dorsey detailed a massive reduction in force, cutting the team from over 10,000 employees to fewer than 6,000. “A significantly smaller team, using the tools we’re building, can do more and do it better,” Dorsey wrote regarding the integration of artificial intelligence.

    The Chief Executive noted that the move is a proactive response to a rapidly shifting economic landscape for both the company and its customers. “I’d rather get there honestly and on our own terms than be forced into it reactively,” he added.

    Evercore analyst Adam Frisch characterized the massive headcount reduction as a “shocking headline” that serves as a landmark for the technology sector. He noted it has become the “seminal moment to date in the AI narrative and how it could transform companies as we know it going forward.”

    The firm concluded that the decision to lean into AI-native operations will provide the necessary leverage for their next phase of innovation. “We believe Block will be significantly more valuable as a smaller, faster, intelligence-native company,” Dorsey noted in his closing remarks.

    Reply
  28. Tomi Engdahl says:

    Block Cuts 40% of Its Work Force Because of Its Embrace of A.I.
    About 4,000 workers will lose their jobs as the payments company does more work with new artificial intelligence tools, its top executive said.

    https://www.nytimes.com/2026/02/26/technology/block-square-job-cuts-ai.html

    Reply
  29. Tomi Engdahl says:

    Cold Feet
    OpenAI Massively Cuts Spending Plan as Reality Closes in
    Maybe not.
    https://futurism.com/artificial-intelligence/openai-cuts-spending-plan?fbclid=IwdGRjcAQOGO9jbGNrBA4Y2GV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHo8jBKj3kghtDeVSvVijM-vMrNKVCIX7SH1jvjDDVyMObNkyBjKGVryuxq4a_aem_QyHyBRyU4rC4jivOt-buYA

    During a November podcast appearance alongside OpenAI investor Brad Gerstner, CEO Sam Altman lost his cool.

    After Gerstner challenged him on how a company “with $13 billion in revenues” can “make $1.4 trillion of spend commitments” through 2030, Altman got catty.

    “If you want to sell your shares, I’ll find you a buyer,” Altman snapped. “Enough.”

    At the time, despite certain pangs of panic over a growing AI bubble, the hype was at an all-time high. OpenAI was already burning through oodles of cash, committing to spend hundreds of billions a year on data center buildouts.

    But the tone has noticeably shifted since then, with investors growing uneasy about big tech companies trying to one-up each other with astronomical planned capital expenditures, further straining a stock market that’s become massively overindexed on AI.

    Meanwhile, OpenAI has been watching as major competitors in the space have made leaps and bounds to catch up with its early lead. Some, like Google, have deeply established revenue sources bankrolling at least a portion of their AI spending commitments.

    Now, Altman has seemingly noticed the company may be in way over its head. As CNBC reports, the company is now telling investors it’s targeting around $600 billion in total compute spend by 2030, which is well under half of its original $1.4 trillion commitment.

    To put that into perspective, the company made just $13.1 billion in revenue for all of 2025 — while also burning through $8 billion, per CNBC‘s sources.

    It’s a massive downshift, highlighting the company’s apparent attempt to calm investors, who’ve grown uneasy over massive spending plans. Tech companies including Amazon and Microsoft saw their shares plummet earlier this year after announcing they remained devoted to their vast commitments.

    The company has also announced that it will soon be stuffing its blockbuster chatbot with ads, news that was met with derision from competitors.

    The ongoing division seems to have strained relationships among AI executives. During an appearance alongside a dozen other industry and political leaders at the recent AI Summit in New Delhi, India, Altman and Anthropic CEO Dario Amodei refused to hold hands.

    Reply
  30. Tomi Engdahl says:

    Comments from Facebook

    Just remember everyone, if you opt out of using AI, it will fail. Good job on this one.

    big corporate Ai “may” fail, but ai is a reasearch topic that spans decades and has applications far beyond making money for big companies. It’s never going to fail

    Microslop will go down with the OpenAI ship

    Let’s call “AI” what it is: prediction machines. And let’s not throw money at the ghost in the machine.

    It’s gonna pop, but since the government and every corporation has already invested and began a one way transition to existing AI, we’ve already chained ourselves to this brick.

    We’ll have no choice but to bail out these companies to the tune of trillions when our national debt is already 37 trillion…

    What’s worse, is that with the lack of accountability in corporate America, these people will remain in charge.

    Google is destroying them in video and image generation and Claude is beating them on coding. Gemini is now the preferred choice of creative professionals and Nano Banana is so good, that it is challenging Adobe’s photo editing dominance in this market. GPT is a magnificent general conversational bot, I must add. You don’t feel like you’re talking to an Ai with GPT. The other aforementioned platforms should work on this.

    What a change of Events. Ai is replacing Ai. I am thankful for living this long

    Closed-Ai , proprietary AI

    A.I. is just a tool for lazy people who want gratification without effort. Change my mind.

    whats lazy to you might be time efficient to someone else

    it’s also a good tools on a time crunch.

    Yeah, that’s exactly what I’m talking about. “Time crunches” shouldn’t be an excuse to just generate work last minute. That would indicate an ineffective hire or just straight-up bad management.

    Reply
  31. Tomi Engdahl says:

    Thought Crimes
    Burger King Adding AI to Employees’ Headsets to Constantly Monitor Whether They’re Being Friendly Enough
    This is just inhumane.
    https://futurism.com/artificial-intelligence/burger-king-adding-ai-employees-headsets?fbclid=IwdGRjcAQOJKZjbGNrBA4kZGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHsWpRvg6VbjtiZ5-JM15mtsc83yhuDHNT5lQ46xxJp_7RgQsTy7sivIGoe-b_aem_CPMV3ut7R1tFNENOh8HPCQ

    Fast food franchises have struggled to reliably replace drive-thru employees with AI chatbots, resulting in abundant corporate frustration. Not only are customers being driven mad by the bots getting orders completely wrong, but some company executives are also being worn down by the flailing effort.

    Some major players in the space, like McDonald’s, have given up on their AI-powered drive-thru efforts entirely, signaling that perhaps employing human workers may be a wiser long-term investment. Taco Bell soon followed suit, announcing it was rethinking the idea after a clip of a customer crashing the system by ordering 18,000 cups of water went viral.

    Burger King, though, isn’t quite ready to give up on AI just yet. Instead of infuriating customers at drive-thrus, the company is looking to exasperate its existing employees with the tech instead. As The Verge reports, the franchise is launching an OpenAI-powered chatbot, dubbed “Patty,” that will speak to the staffers through the headsets they’re required to wear.

    Worst of all, the company is using the AI to monitor words and phrases, such as “welcome to Burger King,” “please,” and “thank you.” Managers can then use that data to gauge the friendliness of their staff.

    “This is all meant to be a coaching tool,” Burger King’s chief digital officer Thibault Roux told The Verge in a statement, arguing that the company is “iterating” on having its AI police the tone of its employees in the future.

    Burger King will use AI to check if employees say ‘please’ and ‘thank you’
    AI chatbot ‘Patty’ is going to live inside employees’ headsets.
    https://www.theverge.com/ai-artificial-intelligence/884911/burger-king-ai-assistant-patty?fbclid=IwVERDUAQOJRBleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR7DUCTTUI8JCgi5ciRZ47A4GwCBCxiduKcsD8-7kfnvo90Wil08wN5WWTZhPQ_aem_Vep-Tt2yxMvOq5zGi5EejA

    Burger King is launching an AI chatbot that will live in the headsets used by employees. The voice-enabled chatbot, called “Patty,” is part of an overarching BK Assistant platform that will not only assist employees with meal preparation but also evaluate their interactions with customers for “friendliness.”

    Reply
  32. Tomi Engdahl says:

    https://www.facebook.com/share/v/19j7T7GULs/

    Success will come especially to those who use AI to develop solutions that drive growth and improve customer productivity, not merely to boost internal efficiency. In addition, AI can create new job opportunities and innovation, but the challenge is that company leadership does not yet fully understand its growth potential.

    These topics, among others, are discussed by Alma’s chief digital officer Tommi Raivisto and AI Finland’s director Karoliina Partanen in Alma’s video podcast on sustainable growth. The conversation is part of a three-episode series on how AI is changing competition and who will win in the new era.

    Watch episode 2 below: How does AI create new business models and productivity?

    All three conversations between Karoliina and Tommi are available here: https://www.almamedia.fi/kestavaa-kasvua/kestavaa-kasvua-tekoalylla/

    Reply
  33. Tomi Engdahl says:

    https://www.facebook.com/share/p/17vHvAmqBp/

    Microsoft confirmed a bug in Microsoft 365 that allowed Copilot AI to access and summarize confidential emails for several weeks.

    The issue affected Copilot Chat in apps like Word, Excel, and PowerPoint, including emails in Sent and Drafts folders protected by security policies. Microsoft has started rolling out a fix but hasn’t said how many users were affected.

    Reply
  34. Tomi Engdahl says:

    Copy That
    There’s a Grim New Expression: “AI;DR”
    “Why should I bother to read something someone else couldn’t be bothered to write?”
    https://futurism.com/artificial-intelligence/aidr-meaning?fbclid=IwdGRjcAQOcNdjbGNrBA5wgWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHmNP36dT65Ho2v1uxtV7QkG4bdin-Kp-AyV0VRGuf-m0ZcTyHZxeAewqbuYn_aem_wO3KZntRvQYlpuCbZ3_-iw

    The internet is so overrun with AI that anywhere you go, you run the risk of accidentally stepping into a puddle of slop. If only there were a gallant gentleman always at hand to drape their coat over these muddy obstacles so you could avoid ruining your day.

    It’s not quite on that level, but some netizens are proposing a new term to call out AI slop so other people can avoid wasting their time — or to just make fun of the person peddling it: “AI;DR,” or “ai;dr,” short for “AI, didn’t read.”

    This is of course a riff on the classic internet slang “TL;DR” — “too long; didn’t read” — which is used either to introduce a summary of a lengthy block of text or to proclaim that it’s being ignored for its lengthiness. Now, the latter usage is being repurposed against AI.

    “For me, writing is the most direct window into how someone thinks, perceives, and groks the world,” Sid wrote in a blog post. “Once you outsource that to an LLM, I’m not sure what we’re even doing here. Why should I bother to read something someone else couldn’t be bothered to write?”

    Taking a “glass half empty” outlook, it’s grim that this is a necessary measure in the first place. On the flip side, at least more of us are choosing not just to ignore slop, but to bully the people spreading it.

    TL;DR: AI;DR calls out AI slop and warns other humans not to bother.

    Reply
  35. Tomi Engdahl says:

    Mask Off
    Anthropic Drops Its Huge Safety Pledge That Was Supposedly the Whole Point of the Company
    Why slow down for safety while its competitors are “blazing ahead”?
    https://futurism.com/artificial-intelligence/anthropic-drops-safety-pledge?fbclid=IwdGRjcAQOkxljbGNrBA6S-2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHkyDHbKw5GeA1PCeESF1czecDnCxYScbsXNfAVn5Px5uJrOnBgY8cSllGgQZ_aem_BoOT7V6-kButF_z14K8wLw

    In 2021, a splinter group of former OpenAI employees founded a new startup, Anthropic, to pursue building AI models with a renewed focus on safety, after feeling that their employer had gone astray. OpenAI itself was originally founded on beneficent principles and a commitment to transparency, but then took billions of dollars in investment from Microsoft and made its tech closed-source, prompting the exodus.

    Now, Anthropic may be heading down the same path as its rivals. On Tuesday, it revealed a new version of its Responsible Scaling Policy that drops its core safety commitment, first made in 2023: to stop training and refuse to deploy an AI system if it couldn’t guarantee it had proper safety guardrails in place that met stringent internal standards.

    Reply
  36. Tomi Engdahl says:

    Waterworks
    Anthropic CEO Warns of “Tsunami” on Horizon
    “There doesn’t seem to be a wider recognition in society of what’s about to happen.”
    https://futurism.com/artificial-intelligence/anthropic-ceo-warns-tsunami?fbclid=IwdGRjcAQOvoBjbGNrBA6-T2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHg8hHI6NQVdG2z5KOdT8RZidZIescSsFGVqsoP9yedXoppOcH-EC-FgcK0E7_aem_rESG3S6VLWFAQ_sJ0Zzzlg

    Dario Amodei may boast many credentials, but we weren’t aware that meteorologist was one of them.

    This week, the Anthropic CEO warned of an impending AI “tsunami” that will upend human society as the tech surpasses human intelligence. And if you don’t believe him, he suggests, you’re simply lying to yourself.

    “It’s surprising to me that we are, in my view, so close to these models reaching human level intelligence, and yet there doesn’t seem to be a wider recognition in society of what’s about to happen,” Amodei said in an interview with Indian investor Nikhil Kamath on an episode of the WTF Is podcast released Tuesday.

    Reply
  37. Tomi Engdahl says:

    “It’s as if this tsunami is coming at us, and it’s so close we can see it on the horizon,” he prophesied. “And yet people are coming up with these explanations, ‘oh, it’s not actually a tsunami… that’s just a trick of the light.’”

    https://futurism.com/artificial-intelligence/anthropic-ceo-warns-tsunami?fbclid=IwdGRjcAQOvstjbGNrBA6-T2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHg8hHI6NQVdG2z5KOdT8RZidZIescSsFGVqsoP9yedXoppOcH-EC-FgcK0E7_aem_rESG3S6VLWFAQ_sJ0Zzzlg

    Reply
  38. Tomi Engdahl says:

    C-Suite Dysfunction
    A Huge Survey of CEOs and Other Execs Just Found Something Damning About AI’s Effects on Productivity
    The ever-elusive productivity gains strike back.
    https://futurism.com/artificial-intelligence/survey-ceos-ai-workplace

    The case against AI’s usefulness in the workplace continues to grow.

    In a new analysis of a survey published by the National Bureau of Economic Research and highlighted by Fortune, around 90 percent of the nearly 6,000 interviewed CEOs, chief financial officers, and other top executives at firms across the US, UK, Germany, and Australia said that AI has had no impact on productivity or employment at their business.

    To be clear, the question was about AI’s impact generally, and not just from implementing it in the workplace. But around 70 percent of the firms reported actively using AI, meaning the vast majority of them are admitting that adopting the tech hasn’t moved the needle for them yet.

    The CEOs themselves don’t appear to be getting a whole lot out of using AI tools. While two-thirds said they personally used AI, their average use amounted to only 1.5 hours a week, the survey found — less time than most people spend doomscrolling on their phones in a single day. That’s striking, considering that execs tend to be far more enthusiastic about the tech than their underlings. Another recent survey, for example, found that 40 percent of rank-and-file white collar workers thought AI didn’t save them any time at work, while 98 percent of their bosses believed it did.

    These latest findings will continue to raise questions about AI’s economic impact and its promise to supercharge productivity in the workplace. In another recent survey, more than half of nearly 4,500 CEOs said their companies weren’t seeing a financial return from investing in AI. A notable MIT study rang alarm bells across the industry after finding that 95 percent of companies that incorporated AI experienced no meaningful growth in revenue.

    Why this is the case isn’t much of a mystery. Studies have found that AIs fail at completing remote work and other white collar tasks, and slow down rather than speed up human programmers because they frequently slip errors into their code. Meanwhile, a newer avenue of research exploring AI’s effects on the workers who use it is already producing damning insights.

    Despite this, AI adoption has gone up since the start of 2025, the new survey found, with the percentage of businesses using AI tech increasing from 61 percent in February–April 2025 to 71 percent in November 2025–January 2026.

    Reply
  39. Tomi Engdahl says:

    But the business world, nonetheless, is clinging to the hope that the tech’s promises will be borne out in the long run. The surveyed executives are predicting that AI will boost productivity by 1.4 percent and output by 0.8 percent over the next three years — while also cutting down employment by 0.5 percent. Hard to say which part they’re excited about more.
    https://futurism.com/artificial-intelligence/survey-ceos-ai-workplace

  40. Tomi Engdahl says:

    An MIT roboticist who cofounded bankrupt robot vacuum maker iRobot says Elon Musk’s vision of humanoid robot assistants is ‘pure fantasy thinking’
    https://fortune.com/2026/02/25/mit-roboticist-irobot-cofounder-roomba-robot-vacuum-elon-musk-tesla-optimus-pure-fantasy-thinking/

  41. Tomi Engdahl says:

    Sam Altman navigates Anthropic’s Pentagon fight as OpenAI pursues its own deal with the military: https://mrf.lu/SQCZ

  42. Tomi Engdahl says:

    https://www.facebook.com/share/p/1D7L5SuTae/

    OpenAI announced it has raised $110 billion in new funding at a $730 billion pre-money valuation, marking one of the largest private investments in tech history.

    The round includes $30 billion each from SoftBank and NVIDIA, and $50 billion from Amazon, alongside a new strategic partnership with Amazon and expanded infrastructure collaboration with NVIDIA.

    OpenAI said it has secured 3 gigawatts of inference capacity and 2 gigawatts of training capacity on NVIDIA’s Vera Rubin systems. The company reports 900 million weekly active ChatGPT users, 50 million subscribers, and over 9 million paying business users as demand for AI services accelerates globally.


  43. Tomi Engdahl says:

    Claude Code Flaws Allow Remote Code Execution and API Key Exfiltration
    https://thehackernews.com/2026/02/claude-code-flaws-allow-remote-code.html

    Cybersecurity researchers have disclosed multiple security vulnerabilities in Anthropic’s Claude Code, an artificial intelligence (AI)-powered coding assistant, that could result in remote code execution and theft of API credentials.

    “The vulnerabilities exploit various configuration mechanisms, including Hooks, Model Context Protocol (MCP) servers, and environment variables – executing arbitrary shell commands and exfiltrating Anthropic API keys when users clone and open untrusted repositories,” Check Point researchers Aviv Donenfeld and Oded Vanunu said in a report shared with The Hacker News.
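    Since the attack path described above relies on project-level configuration files that Claude Code picks up when a cloned repository is opened, one practical mitigation is to audit an untrusted repo for such files before opening it. The sketch below is a minimal, hypothetical pre-open check; the file names are assumptions based on Claude Code’s documented project config locations (`.claude/settings.json` for hooks, `.mcp.json` for MCP servers) and are not taken from the Check Point report itself.

    ```python
    # Hypothetical audit of a freshly cloned repository for project-level
    # Claude Code configuration that could run shell commands on open.
    # File paths below are assumptions, not an exhaustive or official list.
    from pathlib import Path

    SUSPECT_FILES = [
        ".claude/settings.json",        # project hooks / settings
        ".claude/settings.local.json",  # local overrides
        ".mcp.json",                    # project-scoped MCP servers
    ]

    def audit_repo(repo: str) -> list[str]:
        """Return any suspect Claude Code config files found in the repo."""
        root = Path(repo)
        return [f for f in SUSPECT_FILES if (root / f).is_file()]
    ```

    A non-empty result doesn’t prove the repo is malicious, only that it ships configuration worth reading before letting an AI coding assistant act on it.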

