AI trends 2026

Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was compiled with AI tools, and the final version was hand-edited into this blog text:

1. Generative AI Continues to Mature

Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.

2. AI Agents Move From Tools to Autonomous Workers

Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
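As a rough illustration of this pattern, an agentic system is often just a loop in which a model repeatedly picks the next tool call until the task is done. The sketch below uses a hard-coded stub in place of the LLM, and all tool names are invented for illustration:

```python
# Minimal sketch of an agentic loop: a "model" picks a tool at each step
# until it declares the task done. Here the model is a stub; in a real
# system an LLM would choose the action. All names are illustrative.

def search_flights(query):
    # Stand-in for a real flight-search API call
    return f"3 flights found for {query}"

def book_flight(flight):
    # Stand-in for a real booking API call
    return f"booked {flight}"

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

def stub_model(goal, history):
    """Stand-in for an LLM deciding the next action from the history."""
    if not history:
        return ("search_flights", goal)
    if len(history) == 1:
        return ("book_flight", "flight 1")
    return ("done", None)

def run_agent(goal, model=stub_model, max_steps=10):
    history = []
    for _ in range(max_steps):
        action, arg = model(goal, history)
        if action == "done":
            return history
        result = TOOLS[action](arg)   # execute the chosen tool
        history.append((action, result))
    return history

print(run_agent("HEL -> BER on March 3"))
```

In production the stub would be replaced by a model call, and the loop would carry guardrails: step limits, permission checks, and human approval for irreversible actions.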

3. Smaller, Efficient & Domain-Specific Models

Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate in many enterprise applications. Within their niche, these models can be more accurate, easier to keep compliant, and more cost-efficient than general-purpose models.

4. AI Embedded Everywhere

AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.

5. AI Infrastructure Evolves: Inference & Efficiency Focus

More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
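To make the inference-cost angle concrete, one common optimization is dynamic micro-batching: requests that arrive close together are grouped so the model runs once per batch instead of once per request, trading a little latency for much better throughput. The sketch below is a generic illustration with a stand-in model, not any particular serving stack:

```python
# Illustrative sketch of dynamic micro-batching for inference serving.
# Queued requests are drained into batches of up to max_batch, and the
# model is invoked once per batch. The model here is a stand-in.

from queue import Queue, Empty

def fake_model(batch):
    # Stand-in for one forward pass over a whole batch of requests
    return [f"answer:{x}" for x in batch]

def serve(requests, max_batch=8):
    q = Queue()
    for r in requests:
        q.put(r)
    results = []
    while True:
        batch = []
        try:
            # Drain up to max_batch waiting requests into one batch
            while len(batch) < max_batch:
                batch.append(q.get_nowait())
        except Empty:
            pass
        if not batch:
            break
        results.extend(fake_model(batch))  # one model call per batch
    return results

print(serve(["q1", "q2", "q3"]))
```

Real serving stacks add a small arrival window, per-request deadlines, and GPU-aware batch sizing on top of this basic idea.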

6. AI in Healthcare, Research, and Sustainability

AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.

7. Security, Ethics & Governance Become Critical

With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.

8. Multimodal AI Goes Mainstream

AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.

9. On-Device and Edge AI Growth

Processing AI tasks locally on phones, wearables, or edge devices will increase, helping with privacy, lower latency, and offline capabilities — especially crucial for real-time scenarios (e.g., IoT, healthcare, automotive).

10. New Roles: AI Manager & Human-Agent Collaboration

Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.

Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/?utm_source=chatgpt.com “7 AI Trends to Look for in 2026”
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/?utm_source=chatgpt.com “10 Generative AI Trends In 2026 That Will Transform Work And Life”
[3]: https://millipixels.com/blog/ai-trends-2026?utm_source=chatgpt.com “AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels”
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026?utm_source=chatgpt.com “Digital Regenesys | Top 10 AI Trends for 2026”
[5]: https://www.n-ix.com/ai-trends/?utm_source=chatgpt.com “7 AI trends to watch in 2026 – N-iX”
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/?utm_source=chatgpt.com “Microsoft unveils 7 AI trends for 2026 – Source Asia”
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026?utm_source=chatgpt.com “7 Generative AI Trends to Watch In 2026”
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/?utm_source=chatgpt.com “3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool”
[9]: https://www.reddit.com//r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/?utm_source=chatgpt.com “I read Google Cloud’s “AI Agent Trends 2026” report, here are 10 takeaways that actually matter”

790 Comments

  1. Tomi Engdahl says:

    The OpenClaw Hype: Analysis of Chatter from Open-Source Deep and Dark Web
    https://www.bleepingcomputer.com/news/security/the-openclaw-hype-analysis-of-chatter-from-open-source-deep-and-dark-web/

    OpenClaw started as a side project of a developer who wanted to make his (and others’) lives easier with AI assistance: clean the mailbox, manage the schedule, organize thoughts, and listen to some music while his bot does all the dirty jobs for him.

    Peter Steinberger developed OpenClaw through vibe coding. Kudos for that. But since then, apart from changing its name twice, it has generated massive chatter around two topics: the AI hype and its cybersecurity implications.

    This project has rapidly moved from a niche automation framework discussed in developer communities to a topic appearing across security research feeds, Telegram channels, forums, and underground-adjacent chatter. Alongside it, names like ClawDBot and MoltBot have appeared in the same narrative space, often framed as malicious derivatives, companion tooling, or botnet-like ecosystems.

    Reply
  2. Tomi Engdahl says:

    Anthropic rejects latest Pentagon offer: ‘We cannot in good conscience accede to their request’
    https://www.cnn.com/2026/02/26/tech/anthropic-rejects-pentagon-offer

    Reply
  3. Tomi Engdahl says:

    AI agents in SEO: A practical workflow walkthrough
    https://searchengineland.com/ai-agents-seo-practical-workflow-walkthrough-469607

    A practical look at how AI agent platforms like n8n automate SEO workflows, from scraping to structured delivery – and where they fall short.
    Automation has long been part of the discipline, helping teams structure data, streamline reporting, and reduce repetitive work. Now, AI agent platforms combine workflow orchestration with large language models to execute multi-step tasks across systems.

    Among them, n8n stands out for its flexibility and control. Here’s how it works – and where it fits in modern SEO operations.
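    Generically, the kind of multi-step workflow described here chains a scrape step, an LLM transformation, and structured delivery. The following is a hypothetical Python stand-in for such a pipeline, not n8n’s actual node configuration:

```python
# Generic sketch of an agent-style SEO workflow: fetch -> extract ->
# summarize -> structured output. Each step is a stub; in n8n these would
# be workflow nodes, and summarize() would call an LLM.

def fetch(url):
    # Stand-in for an HTTP scrape node
    return f"<html><title>Page at {url}</title></html>"

def extract_title(html):
    # Stand-in for an HTML-extraction node
    start = html.find("<title>") + len("<title>")
    end = html.find("</title>")
    return html[start:end]

def summarize(text):
    # Stand-in for an LLM summarization step
    return text.upper()

def run_workflow(urls):
    # Structured delivery: one record per input URL
    return [{"url": u, "summary": summarize(extract_title(fetch(u)))}
            for u in urls]

print(run_workflow(["https://example.com"]))
```

    The “where they fall short” part is exactly the stubs: real pages break extraction, and LLM steps need validation before their output is delivered downstream.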

    Reply
  4. Tomi Engdahl says:

    War Games
    Anthropic Blowout With Military Involved Use of Claude for Incoming Nuclear Strike
    As high stakes as it gets.
    https://futurism.com/artificial-intelligence/anthropic-military-ai-nuclear-strike?fbclid=IwdGRzaAQO4LpjbGNrBA7gTWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHu0XkMKAGEJzIZglt-DwC1dYIRBg7L9NnyS_Iec6vFImr3OGkJ30UaSWn57t_aem_5_juTQvUBFR03traCL_b-A

    Anthropic’s ongoing battle with the Pentagon over the military’s use of its AI systems flared up this week around a hypothetical nuclear strike scenario, according to new reporting from the Washington Post.

    The Claude AI builder has frustrated the Pentagon by objecting to its systems being used for autonomous weaponry and the mass surveillance of US citizens. To cut to the heart of the debate, a defense official told WaPo, the Pentagon’s technology chief posed an extreme hypothetical: would Anthropic let the military use Claude to help shoot down a nuclear-armed intercontinental ballistic missile?

    Anthropic CEO Dario Amodei’s response apparently irritated Pentagon leaders. “You could call us and we’d work it out,” was how the defense source characterized it, in WaPo’s words.

    Reply
  5. Tomi Engdahl says:

    Trump orders federal agencies to stop using Anthropic’s technology: https://mrf.lu/SLZ2

    Reply
  6. Tomi Engdahl says:

    8 billion tokens a day forced AT&T to rethink AI orchestration — and cut costs by 90%
    https://venturebeat.com/orchestration/8-billion-tokens-a-day-forced-at-and-t-to-rethink-ai-orchestration-and-cut

    When your average daily token usage is 8 billion a day, you have a massive scale problem.

    This was the case at AT&T, and chief data officer Andy Markus and his team recognized that it simply wasn’t feasible (or economical) to push everything through large reasoning models.

    So, when building out an internal Ask AT&T personal assistant, they reconstructed the orchestration layer. The result: A multi-agent stack built on LangChain where large language model “super agents” direct smaller, underlying “worker” agents performing more concise, purpose-driven work.

    This flexible orchestration layer has dramatically improved latency, speed and response times, Markus told VentureBeat. Most notably, his team has seen up to 90% cost savings.

    “I believe the future of agentic AI is many, many, many small language models (SLMs),” he said. “We find small language models to be just about as accurate, if not as accurate, as a large language model on a given domain area.”
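    The routing pattern described above, where a large “super agent” delegates to small domain-specific workers and falls back to a big model only when no specialist matches, can be sketched roughly as follows. All model names and routing rules here are illustrative, not AT&T’s actual stack:

```python
# Hypothetical sketch of LLM routing: a "super agent" classifies each
# request and delegates it to a cheap, domain-specific "worker" model,
# using a large model only as the fallback. All names are invented.

def billing_worker(q):
    return f"[small billing model] {q}"

def network_worker(q):
    return f"[small network model] {q}"

def large_model(q):
    return f"[large reasoning model] {q}"

WORKERS = {"billing": billing_worker, "network": network_worker}

def classify(query):
    """Stand-in for the super agent's routing decision."""
    for domain in WORKERS:
        if domain in query.lower():
            return domain
    return None  # no specialist matches

def super_agent(query):
    worker = WORKERS.get(classify(query), large_model)
    return worker(query)

print(super_agent("Why is my billing statement higher this month?"))
```

    The cost savings come from the routing: most traffic never touches the expensive model, which is the point of the “many small language models” approach.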

    Reply
  7. Tomi Engdahl says:

    Gemini 3.1 Pro is a powerhouse for deep work — here are 7 prompts that prove it
    Features
    By Amanda Caswell published February 24, 2026
    Gemini 3.1 Pro is built for hard problems — here’s how to unlock Gemini 3.1 Pro’s best features
    https://www.tomsguide.com/ai/gemini-3-1-pro-is-a-powerhouse-for-deep-work-here-are-7-prompts-that-prove-it

    Reply
  8. Tomi Engdahl says:

    Pentagon declares Anthropic a threat to national security
    Defense Secretary Pete Hegseth declared Anthropic a “supply-chain risk,” blocking all federal agencies and contractors from doing business with the company.
    https://www.washingtonpost.com/technology/2026/02/27/trump-anthropic-claude-drop/?utm_campaign=wp_main&utm_source=facebook&utm_medium=social&fbclid=IwdGRjcAQPCBBleHRuA2FlbQIxMQBzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6Mojm_hxPhN9WLQFeg3Aq2HrCnUByoSmhUw8tivirgaJsZEjiw8tnZLMbX0w_aem_vArLZSQfFUJKFUG7-nbrRg

    The Trump administration placed AI firm Anthropic on a far-reaching national security blacklist Friday, directing federal agencies to stop using its technology and banning any other company that does business with the government from working with it with immediate effect.

    President Donald Trump blasted the artificial intelligence company as a risk to national security after a tumultuous week of negotiations between the start-up and the Pentagon.

    “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” Trump wrote in a post on his social media site Truth Social, using the administration’s preferred name for the Defense Department. “Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”

    Defense Secretary Pete Hegseth followed up late Friday, saying in a post on X that he was declaring Anthropic a supply-chain risk. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote.

    Reply
  9. Tomi Engdahl says:

    Never Enough
    The Economy Is Lurching Downward as Fear of AI Spreads
    The cracks are starting to show.
    https://futurism.com/artificial-intelligence/economy-lurching-downward-fear-ai?fbclid=IwdGRjcAQPD0hjbGNrBA8PJmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHqEX9x2I0WA3MbNu-him_W3nSzshUkXxk_-smEXrk_seBJ0Sz9vDrV2twGlL_aem_XgwDRO0jN0kaE79-QzGALA

    AI chipmaker Nvidia, the world’s most valuable company, demolished analyst expectations this week when it posted a massive 73 percent increase in fourth-quarter revenue.

    But then something strange happened: Nvidia’s shares tanked by over five percent following the announcement, as Bloomberg reports, in its biggest one-day drop since mid-April.

    It was a baffling reaction to a financial slam dunk, highlighting ongoing fears that the massive amount of money the AI industry is pouring into the construction of gigantic data centers across the country may not be sustainable. Tech leaders continue to warn that a return may still be many years out, if one ever materializes, while burning through billions of dollars each quarter.

    By Friday, the situation didn’t look much brighter. The Dow Jones Industrial Average, S&P 500, and Nasdaq Composite lurched down as fears about the impact of AI on the economy continued to grow. All three indices are in the red for February, indicating persistent widespread uncertainty. Overall, the S&P 500 and Nasdaq Composite are on pace to experience their worst month since March 2025.

    As CNBC points out, Twitter cofounder Jack Dorsey’s fintech company, Block, announced on Thursday that it was laying off nearly half of its workforce, citing AI advances. The move was seemingly seen as a manifestation of fears that AI automation could soon send employment numbers off a cliff, which could have grave economic consequences in the long run. (While slumping Wall Street indices painted a dire picture of the broader economy, Block’s shares shot up in light of the news.)

    Following its blockbuster earnings, Nvidia shares also continued to decline on Friday, highlighting skepticism of the AI industry’s all-in approach — and lingering worries over whether its growth could be sustained in the long run.

    Compounding the situation is that inflation shows no sign of letting up.

    It remains to be seen whether we are indeed on the precipice of a collapsing AI bubble. For now, tech companies are treating the current moment as business as usual. Just earlier this week, Meta agreed to a $60 billion deal with AI chipmaker AMD, as fear continues to spread among investors.

    But given OpenAI recently cutting its enormous $1.4 trillion spending plan by more than half, the cracks are starting to show as the AI race continues to rage on.

    Reply
  10. Tomi Engdahl says:

    Lock Stock
    Investors Concerned AI Bubble Is Finally Popping
    “We have suddenly gone from the fear that you cannot be last, to investors questioning every single angle in this AI race.”
    https://futurism.com/artificial-intelligence/investors-concerned-ai-bubble-popping?fbclid=IwVERDUAQPEBJleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5RDS13ctowP6oGaxyJBCnckXmphRQSfHzjEvhbU1Q4Fdp3nGSH8gV80dJuYQ_aem_tEVsA9D1VIolREnt1KV3Aw

    Reply
  11. Tomi Engdahl says:

    Rising Tide
    Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds
    “I fear the problem is more common than most people think.”
    https://futurism.com/artificial-intelligence/chatbot-use-mental-illness?fbclid=IwdGRjcAQPgl9jbGNrBA-COGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHkBDDeqYiuEMc-ukXYz-IGviAlR4svXYF-nziAIgiI8zY3ejlfMEDdwKsHRp_aem_onzvUc5L2iFqjyx8rDMiYQ

    A new study found that chatbot use appeared to worsen symptoms of mental illness in people struggling with an array of conditions, adding to a rising consensus among medical experts that interacting with unregulated chatbots might steer some users into crisis.

    The research, conducted by a team of psychiatrists at Denmark’s Aarhus University and published earlier this month in the journal Acta Psychiatrica Scandinavica, analyzed digital health records from roughly 54,000 Danish patients with diagnosed mental illnesses. After identifying 181 instances of patient notes containing mentions of AI chatbots, they determined that use of the bots — particularly intensive, prolonged use — appeared to deepen symptoms of mental illness in dozens of patients. They found that this pattern seemed to be especially true for patients prone to delusions or mania, and that the risks of chatbot use may be “severe or even fatal” for some.

    This latest study was led by Dr. Søren Dinesen Østergaard, a Danish psychiatrist who, back in August 2023, predicted that human-like chatbots like ChatGPT could stand to reinforce delusions and hallucinations in people “prone to psychosis.” In a press release, Østergaard urged that while more research into causality is needed, he “would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness.”

    “I would urge caution here,” said Østergaard.

    Though limited to Denmark, the study’s findings add to a wave of public reporting and research about AI-linked mental health crises — sometimes referred to by mental health professionals as “AI psychosis” — in which bots like ChatGPT and others introduce, reinforce, or otherwise stoke delusional beliefs in users in ways that contribute to destructive mental spirals and real-world outcomes. Indeed, instead of nudging users away from delusional beliefs or potentially harmful fixations, previous studies show that chatbots tend to reinforce them — which is exactly what mental health professionals urge people not to do when communicating with someone who may be in crisis.

    “AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one,” said Østergaard, adding that intensive chatbot use “appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia.”

    The Danish study found that in addition to deepening delusional beliefs, chatbots also appeared to worsen suicidal ideation and self-harm, disordered eating habits, depression, and obsessive or compulsive symptoms, among other symptoms of mental health issues.

    Reply
  12. Tomi Engdahl says:

    Waterworks
    Anthropic CEO Warns of “Tsunami” on Horizon
    “There doesn’t seem to be a wider recognition in society of what’s about to happen.”
    https://futurism.com/artificial-intelligence/anthropic-ceo-warns-tsunami?fbclid=IwVERDUAQPg1xleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR4KABprcpXZaYo_PmR_K4YwJjWsiMwvIqNkJQSh-ilLOMXK8KpAax4WZxL48A_aem_vI-JGTcmg1km7U_MOUDBOA

    Dario Amodei may boast many credentials, but we weren’t aware that meteorologist was one of them.

    This week, the Anthropic CEO warned of an impending AI “tsunami” that will upend human society as the tech surpasses human intelligence. And if you don’t believe him, he suggests, you’re simply lying to yourself.

    “It’s surprising to me that we are, in my view, so close to these models reaching human level intelligence, and yet there doesn’t seem to be a wider recognition in society of what’s about to happen,” Amodei said in an interview with Indian investor Nikhil Kamath on an episode of the WTF Is podcast released Tuesday.

    Amodei’s comments echo the bold and often dire predictions peddled by leaders in the AI industry. OpenAI CEO Sam Altman has frequently been candid that AI will wipe out entire categories of jobs. Microsoft’s AI CEO warned that virtually all office tasks will be automated by AI agents within one and a half years. Amodei infamously warned that AI would eliminate half of entry-level white collar jobs. While these sound like damaging predictions to make on the surface, they reinforce a fatalistic attitude toward the tech, suggesting its rise is inevitable.

    But if an AI tsunami is indeed coming, as Amodei claims, then Anthropic has a major hand in it. Its recent release of its Claude Cowork AI agent sparked a mass software stock selloff that reverberated throughout the broader stock market and wiped out hundreds of billions of dollars.

    Tsunami or not, Amodei’s comments come during a pretty stormy moment for the company. This week, it abandoned one of its central safety pledges which held it would not train or release an AI model that it couldn’t guarantee adequate guardrails for, undermining the company’s entire raison d’etre. That decision was made amid pressure from the Pentagon to relax its restrictions on how its AIs are used or face losing its $200 million defense contract.

    Reply
  13. Tomi Engdahl says:

    Trump vs. Anthropic: The President bans the “woke” AI firm from all government work over battlefield access.
    https://bit.ly/4r46ayH

    Reply
  14. Tomi Engdahl says:

    The Trump administration is scrambling to replace Claude, the chatbot embedded throughout the Pentagon’s entire scaffolding, with Elon Musk’s pet AI system, Grok.

    On paper, xAI’s Grok makes sense: the AI model is already used in select parts of the Department of Defense, not to mention other parts of the federal government. Musk should also be deeply familiar with the contours of the federal government, given that he spent the better half of 2025 gnawing the wires out of its walls.

    https://futurism.com/future-society/grok-musk-pentagon-deployment?fbclid=IwdGRjcAQP4EJjbGNrBA_f-mV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHkK6wfMx2tg-Bz6oKHpNAD4cizx8jO9UgVJsvTknFfaEHA9iQtU-a0ZUFJRT_aem_0mGNi1Z6F01_4EsdUUKtoQ

    Reply
  15. Tomi Engdahl says:

    Builders Build
    Creator of Claude Code Fears This Could Be the Last Year That Software Engineers Are Employable
    “It’s going to be painful for a lot of people.”
    https://futurism.com/artificial-intelligence/claude-code-anthropic-labor?fbclid=IwdGRjcAQP_vpjbGNrBA_-2mV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHgrLmdPSwYezbqM8CbKZf0ylchZiWIpC7Bct4R95hTo2v5dr0qEoRD-Ox8KI_aem_OR9Wgd7f-frvFoxoSCqOaQ

    The warning signs are piling up for anyone still working as a software engineer in 2026.

    In a recent episode of former Airbnb guy Lenny Rachitsky’s tidily-named audio show, “Lenny’s Podcast,” the creator of one of the most acclaimed AI coding tools, Boris Cherny, reaffirmed his belief that there are dark days ahead for the world’s software developers.

    “I think by the end of the year, everyone is going to be a product manager, and everyone codes. The title software engineer is going to start to go away,” Cherny said on the podcast, first spotted by Fortune. “It’s just going to be replaced by ‘builder,’ and it’s going to be painful for a lot of people.”

    Cherny is the chief architect of Anthropic’s Claude Code, an agentic AI tool that’s said to autonomously execute software production tasks with little oversight from human beings. While it’s debated how effective Claude Code truly is — there’s been a lot of irritation with how fast the tool seems to drain user credits, for example — it’s taken the programming world by storm nonetheless.

    It’s a point Cherny’s keen to stress: “I have not edited a single line by hand since November,” he bragged on the podcast.

    Calculated PR-grab the interview may be, Cherny is careful to guard Anthropic’s reputation as the “adult in the room” — relative to other titans of the AI industry, at least.

    He admits, for example, that Claude Code has its limitations: “I don’t think we’re at the point where you can be totally hands-off, especially when there’s a lot of people running the program,” Cherny told Rachitsky. “You have to make sure that it’s correct. You have to make sure it’s safe.”

    Still, he’s not immune to indulging in a little future AI hype either. Though Cherny emphasizes that today’s software engineers still need to have a grasp of the fundamentals, he says that “in a year or two, it’s not going to matter.”

    Reply
  16. Tomi Engdahl says:

    Auto Pen
    Startup Generates Caring Letters to Your Friends Using AI, Handwrites Them Using Robot Pen
    “In an age where we are all drowning in electronic communication, handwritten notes really stand out.”
    https://futurism.com/future-society/startup-ai-handwritten-letters?fbclid=IwdGRjcAQQCc1jbGNrBBAJt2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHtVoshur8JZWUlSLteH9uJ9sULes-wBD7Q238wyFH0tVNX554ASssxu1OmxA_aem_LIjqKg_mATOdseAM5n8Vig

    Think back to the last time you peeled open an envelope to find a handwritten letter. Maybe it was a heartfelt thank-you message for attending someone’s wedding. Perhaps it was a note from a close friend traveling abroad. Whatever the reason, it feels good to get an actual letter in the mail, right?

    Now, you may never experience that feeling again without a jolt of paranoid suspicion. Introducing Handwrytten, a young AI company oozing with corporate twee, peddling a Rube Goldberg machine of automation that produces handwritten notes with zero emotional or physical effort: a large language model produces the content, and then a proprietary robot inks it out onto stationery with unmatched “speed, quality, and realism.”

    Reply
  17. Tomi Engdahl says:

    Government & Policy
    OpenAI’s Sam Altman announces Pentagon deal with ‘technical safeguards’
    https://techcrunch.com/2026/02/28/openais-sam-altman-announces-pentagon-deal-with-technical-safeguards/?fbclid=IwdGRjcAQQHmFleHRuA2FlbQIxMQBzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR4ikpQCccB0hEsMguQhpXIaJlRM-hfySMgr3a4ePMVXTTgg7cBb-JK-w0kmyg_aem_ZfCD1LJk_1EXnpSb9Dt88Q

    OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models in the department’s classified network.

    This follows a high-profile standoff between the DoD — also known under the Trump administration as the Department of War — and OpenAI’s rival Anthropic. The Pentagon pushed AI companies, including Anthropic, to allow their models to be used for “all lawful purposes,” while Anthropic sought to draw a red line around mass domestic surveillance and fully autonomous weapons.

    In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” but he argued that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”

    More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic’s position.

    Reply
  18. Tomi Engdahl says:

    The new facility is aimed at tackling the growing energy challenges posed by artificial intelligence (AI) data centers.
    https://bit.ly/4aHbjYD

    AI and Robotics
    US’ AI data centers could consume 4x more power by 2030, new facility to tackle the surge
    Data centers account for more than 4% of U.S. electricity use, and by 2030, that figure could climb as high as 17%.
    https://interestingengineering.com/ai-robotics/us-ai-data-centers-power-facility?utm_source=facebook%2Clinkedin%2Ctwitter&utm_medium=social&fbclid=IwdGRjcAQQIG9jbGNrBBAgXmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHtgkvlKxnQ_UNDwHvgL2udy3SNYUqj9NwOjrEIGOBTUmxDqJsflOTE8dtrIl_aem_7sF1hm54_CZFYEMp2l-b6w

    A U.S. lab has announced the launch of the Next-Generation Data Centers Institute (NGDCI), a major new initiative aimed at tackling the growing energy challenges posed by artificial intelligence (AI) data centers.

    The institute will consolidate Oak Ridge National Laboratory’s (ORNL) broad expertise in energy technologies, computing, grid science, and cybersecurity to develop next-generation infrastructure that is secure, efficient, and reliable.

    Reply
  19. Tomi Engdahl says:

    Mega Mind
    Lab-Grown Brains Growing More Powerful
    “The capacity for adaptive computation is intrinsic to cortical tissue itself.”
    https://futurism.com/health-medicine/lab-grown-brain-organoid?fbclid=IwdGRjcAQQJjpjbGNrBBAmFGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHm3Wdcjz5UbrZnNbXfN91rJn3r4Z2lQMbzUo9IWTxMQJXdHtTkBPOiSk4x6C_aem_xyQobmJRaTURLfzQOvdSQA

    Lab-grown brains finally came onto the scene in 2013, when a team of scientists led by Madeline Lancaster created the first brain “organoid”: a tiny, three-dimensional cell culture mimicking the human brain. These mini-brains, made out of stem cells, contain real neurons, thus allowing researchers to study brain development, model neurological diseases, and test drugs before human trials. (As you might guess, the practice is not without controversy.)

    Now, scientists at the University of California, Santa Cruz are taking lab-grown mini-brains into their toddler era, after demonstrating that brain organoids can process information in real time.

    In a remarkable breakthrough published in the journal Cell Reports, researchers were able to effectively coach lab-grown brains into solving the “cart-pole” problem.

    Reply
  20. Tomi Engdahl says:

    AI Can Delete Your Data — and Ignoring This Warning Could Damage Your Business Beyond Repair
    https://www.entrepreneur.com/science-technology/ai-can-delete-your-data-heres-your-prevention-plan/501987?link_source=ta_first_comment&taid=69a23024781ea000013554f2&utm_campaign=trueanthem&utm_medium=social&utm_source=facebook&fbclid=IwdGRjcAQQJwxjbGNrBBAm9mV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHrnZluP0yHDfUtI2Xf3POw8r_puLcwnaVbWjfZU1bsUOL7x6xJfSb6ZxdfhZ_aem_PY2ju4PzVgL5GebyLafvgw

    AI systems still lack the judgment to understand when commands will cause catastrophic damage — and without strict controls and recovery plans, your data could be in danger.

    Key Takeaways
    AI systems have made work easier, but they are not yet fully developed and competent.
    If AI systems fail, they can cause catastrophic damage and delete your company’s data, destroying hard work in seconds.
    CEOs must communicate the risks to all staff, ensure AI has less privilege than humans, develop instant recovery plans and constantly invest in data protection and restoration measures.

    Never feel that you are totally safe. In July 2025, one company learned the hard way after an AI coding assistant from Replit that it dearly trusted breached a “code freeze” and ran a command that deleted its entire product database.

    Reply
  21. Tomi Engdahl says:

    Why Oh Why
    Tech CEOs Confused by Why Everybody Hates AI So Much
    “It’s extremely hurtful, frankly.”
    https://futurism.com/future-society/tech-ceo-ai-hate?fbclid=IwdGRjcAQQUzBjbGNrBBBTAmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHs_XEjaQC2doO0sTu1agzabhEwhc2TPaVvzH_kBLOYYTGye8jnldDvsxti6p_aem_K4GJu0X6H_LwsWs9U_93Rw

    These days, it’s not enough to sit and watch as AI destroys a generation of students, makes it impossible to find a new job, and generates military targets by the thousands — you gotta be grateful for it, too.

    That, at least, is the attitude of the tech elites who’ve spent years pushing AI on the masses, only to find the public is in no such mood. As the New York Times observed over the weekend, the particular characteristics of what some have called the “AI bubble” diverge from similar moments in economic history in one key way: practically everybody hates it.

    “I can’t really remember a boom with such active hostility to it,” William Quinn, co-author of the 2020 history tome “Boom and Bust: A Global History of Financial Bubbles,” told the NYT. “People usually find new technology exciting. It happened with electricity, bicycles, motorcars. There were fears but also hopes. AI is notable, perhaps unique, for the lack of enthusiasm.”

    As consumer sentiment goes from sour to moldy, the CEOs behind the bubble only seem to be doubling down.

    “It’s extremely hurtful, frankly,” said Nvidia chief executive Jensen Huang in a January interview about the “battle of [AI] narratives.”

    Huang insisted that AI is suffering a “lot of damage” from “very well-respected people who have painted a doomer narrative, end-of-the-world narrative, science fiction narrative.”

  22. Tomi Engdahl says:

    Heterodoxus
    Goldman Sachs Researchers Make Startling Claim About AI’s Effects on the US Economy
    “We don’t actually view AI investment as strongly growth-positive.”
    https://futurism.com/future-society/researchers-economy-ai-narratives?fbclid=IwdGRjcAQSA6djbGNrBBIDW2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHiMpQkC08ekjhdh5JtgOjQZnzVfaLU5ulUnQO1vo9MjHVhSrMaIyxV8luCuR_aem_JsOEB3jYjuvvl-qE7CzsdA

    If there’s one thing AI excels at, it’s defying every attempt to build a coherent narrative around it.

    Is AI destroying jobs, or just masking the same old garbage labor market? Are data centers unlocking prosperity for generations to come, or hemorrhaging value faster than a new car driven off the lot?

    Whatever can be said of AI’s consequences for the future, one of the more widely agreed-upon views among economists seemed to be that tech spending is propping up an otherwise dismal economy.

    Throughout the last few months, a wide range of experts have concluded that tech industry spending on AI — which includes everything from data center infrastructure and energy bills to massive salaries and lobbying — was responsible for a sizeable chunk of GDP growth in the US across 2025. Though there were disagreements about how much exactly, the consensus seemed clear: AI investments are critically important to the US.

    Now, though, experts at a leading bank are throwing their weight behind a different theory: that AI spending has had little impact on the economy whatsoever.

    “We don’t actually view AI investment as strongly growth-positive,” Goldman chief economist Jan Hatzius said in a recent interview. “I think there’s a lot of misreporting, actually, on the impact that AI investment had in US GDP growth in 2025, and it’s much smaller than is often perceived because most AI equipment is imported. That means there’s a positive entry in the investment line, but that’s offset by a negative entry in the net-exports line.”

    Nobody can dispute that the US is spending a boatload of cash on AI — but most of it flows overseas to the countries making AI chips and hardware.

    “A lot of AI investment that we see in the US adds to Taiwanese GDP, and it adds to [South] Korean GDP, but not really that much to US GDP,” Hatzius continued.

    The commentary comes as Goldman has launched SPXXAI, an inelegantly named S&P 500 index that specifically excludes AI-related stocks.

    One thing seems certain: the longer we spend in this AI-no-man’s-land, the less cohesive any one narrative seems to become.

  23. Tomi Engdahl says:

    Worker’s World
    AI Could Cause Workers to Rise Up Against the Corporations Driving Them Into Poverty
    “Larger working-class movements for dignity are possible.”
    https://futurism.com/artificial-intelligence/ai-labor-workers-movement?fbclid=IwVERDUAQSBYNleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5x7VDjH97uihghC1Ip2kQ25k4sih5-lFaEhkNyXH_v69Fi0h7uM0rxa1jC6Q_aem_hgL481ivCmIgyPADbMYcxw

    In an interview with the Guardian, Sarita Gupta, the Ford Foundation’s vice-president of US programs and co-author of The Future We Need, argued that AI is “creating an opportunity” for a resurgent labor movement.

    “Over time, unions have lost collective bargaining power, and a lot of that is due to the lack of laws that we need and enforcement of laws,” she said. “For four decades, productivity soared while wages stayed flat, and unionization hit historic lows.”

    But, Gupta continued, “when you have a young Silicon Valley software engineer realize that their performance is tracked or undermined by the same logic as a working-class warehouse picker, class divisions dissolve, and larger working-class movements for dignity are possible. That is what we’re starting to see.”

    White-collar office drones and blue-collar stiffs alike are suffering through one of the harshest layoff periods since 2009. Recent polling, meanwhile, found that 71 percent of Americans fear AI will put “too many people out of work permanently.” And according to the Economic Policy Institute, more than 50 million American workers across all industries wanted union representation in 2025 but couldn’t get it.

    As discontent rises, business moguls are sounding increasingly nervous about the blowback.

    “We have to always remind ourselves that the direction of technology is a choice, right? We can use AI to build a surveillance economy that squeezes every drop of value out of a worker, or we can use it to build an era of shared prosperity,” Gupta concluded. “We know if technology were designed and deployed and governed by the people doing the work, AI wouldn’t be such a threat.”

  24. Tomi Engdahl says:

    Debbie Downer
    It Turns Out That Constantly Telling Workers They’re About to Be Replaced by AI Has Grim Psychological Effects
    “An invisible disaster.”
    https://futurism.com/artificial-intelligence/ai-effects-workers-psychological?fbclid=IwVERDUAQSBqxleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5x7VDjH97uihghC1Ip2kQ25k4sih5-lFaEhkNyXH_v69Fi0h7uM0rxa1jC6Q_aem_hgL481ivCmIgyPADbMYcxw

    Two researchers are warning of the devastating psychological impacts that AI automation, or the threat of it, can have on the workforce. The phenomenon, they argue in a new article published in the journal Cureus, warrants a new term: AI replacement dysfunction (AIRD).

    The constant fear of losing your job could be driving symptoms including anxiety, insomnia, paranoia, and loss of identity, according to the authors, and these can manifest even in the absence of other psychiatric disorders or factors like substance abuse.

    “AI displacement is an invisible disaster.”

    Most of the attention on AI’s mental health impacts has centered on the effects of personally using the tech, with widespread reports of AI pulling users into psychotic episodes or encouraging dangerous behavior. But the stress that arises from the widespread fears surrounding the tech might deserve a closer look in a clinical context, too.

    Job destruction is probably one of the biggest fears. A Reuters survey found that 71 percent of Americans are worried that AI could permanently put vast swaths of people out of work. The narrative is pushed by top figures in the industry. Anthropic CEO Dario Amodei, for example, infamously warned that AI could wipe out half of all entry-level white collar jobs. Microsoft’s AI CEO Mustafa Suleyman added last week that AI could automate “most, if not all” white collar tasks within a year and a half.

    Amazon is in the middle of sacking 14,000 employees after boasting of the “efficiency gains” from using AI across the company. And one report found that AI was cited in the announcements of more than 54,000 layoffs last year.

    Enter AIRD. In the paper, the authors cite one study that showed a positive correlation between AI implementation in the workplace and anxiety and depression. Another cited study found that stress and other negative emotions are common for professionals in fields that are considered susceptible to AI automation.

    AIRD is not a clinically recognized diagnosis yet, the authors stress. But they propose a method for screening for the disorder through a careful progression of open-ended questions that should eliminate other causes like substance abuse.

    “Equipping mental health professionals with the knowledge and tools to recognize and treat people with AIRD will be vital for societal acceptance of a condition that will increasingly affect the workplace,” the researchers wrote.

  25. Tomi Engdahl says:

    Waterworks
    Anthropic CEO Warns of “Tsunami” on Horizon
    “There doesn’t seem to be a wider recognition in society of what’s about to happen.”
    https://futurism.com/artificial-intelligence/anthropic-ceo-warns-tsunami?fbclid=IwdGRjcAQSCMxjbGNrBBIIfGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHu2rxxiw2oKC6JwPJG38ldWJilLWvp39hRynU8AOykuT0idS8xXJeI4PuPaO_aem_fP6i1RiHIok23nKloBUGNw

    Dario Amodei may boast many credentials, but we weren’t aware that meteorologist was one of them.

    This week, the Anthropic CEO warned of an impending AI “tsunami” that will upend human society as the tech surpasses human intelligence. And if you don’t believe him, he suggests, you’re simply lying to yourself.

    “It’s surprising to me that we are, in my view, so close to these models reaching human-level intelligence, and yet there doesn’t seem to be a wider recognition in society of what’s about to happen,” he said.

    Amodei’s comments echo the bold and often dire predictions peddled by leaders in the AI industry. OpenAI CEO Sam Altman has frequently been candid that AI will wipe out entire categories of jobs.

    Tsunami or not, Amodei’s comments come during a pretty stormy moment for the company. This week, it abandoned one of its central safety pledges, which held that it would not train or release an AI model for which it couldn’t guarantee adequate guardrails, undermining the company’s entire raison d’être. The decision was made amid pressure from the Pentagon to relax restrictions on how its AIs are used or face losing its $200 million defense contract.

    Though Anthropic positioned itself as the safety-focused adult in the room, it’s now sounding no less hypocritical than its competitors for feigning concern for the tech’s impacts on society.

  26. Tomi Engdahl says:

    They hype it up so businesses scramble to get on board. But what if it’s all hype? There was one article saying AI doesn’t actually learn from the internet, it just refers to a downloaded version of it to answer questions, meaning the whole industry will be open to copyright lawsuits.

    And energy startups say a fully renewable grid is on the horizon; biotechs say treating aging through an API is on the horizon; pharma companies say curing cancer is on the horizon. I wonder why people get hyped by bs.

    The bubble is still bubbling. For now, all the hot air is keeping it going.

    It’s amazing that they truly believe their life’s work and accomplishments will end the world as we know it, and that millions will suffer because of it, and yet they continue to do it.

    Mike McMahon: The Homo Deus dataism concept explains quite well how AI will come to know us much better than we know ourselves. Boiled down to algorithms, data flows, and processor entities, we have no chance of competing. AI will take over decision-making on our behalf because it will do a much better job of it. If humanity thinks the current economic system is going to continue as usual, we are heading for a huge wake-up call. Denying the inevitable reality of this situation is kind of the point the guy is making.

    Frank Herbert was right. The new Dune movie didn’t really explain the war against AI.

  27. Tomi Engdahl says:

    USC Just Built Artificial Neurons That Could Make GPT-5 Run on 20 Watts
    After 70 years of von Neumann architecture, researchers create neurons that think like nature — using silver ions, not electrons. Here’s why it matters.
    https://blog.delanoe-pirard.com/usc-neuromorphic-computing-breakthrough-2025-fde1a420a118?fbclid=IwVERDUAQSTtlleHRuA2FlbQIxMQBzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6WsKgWiO–r06ksCqLNJ_aCbUKjXWFzlJn6ejPUElvvJAF66HMFTk1YXbuFw_aem_TavzUittWAlE0stBkFBRd

    Key Takeaways
    Researchers at USC Viterbi published a breakthrough in Nature Electronics: artificial neurons using ionic memristors that replicate brain chemistry [Source: Nature Electronics]
    The device uses only 3 components (1M1T1R) versus tens to hundreds in conventional designs — fitting within the footprint of a single transistor [Source: USC Viterbi]
    Potential energy reduction: orders of magnitude less than conventional designs — potentially reaching attojoule scale (vs picojoule currently) [Source: USC Viterbi]

    The mechanism: silver ions (Ag⁺) diffusing through an oxide, mimicking how calcium ions work in biological neurons [Source: USC Viterbi]
    Critical obstacle: Silver is not compatible with standard CMOS manufacturing — alternative materials are needed [Source: EurekAlert]
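    The attojoule-vs-picojoule claim from the takeaways above is easy to put in perspective with back-of-the-envelope arithmetic. A minimal sketch, using illustrative round numbers (1 pJ and 1 aJ per spike event) rather than measured device data:

    ```python
    # Rough comparison of energy per spike event: a conventional CMOS
    # artificial neuron (~picojoule scale) vs. an ionic-memristor design
    # (potentially attojoule scale). Figures are illustrative only.

    PJ = 1e-12  # joules in a picojoule
    AJ = 1e-18  # joules in an attojoule

    conventional_energy = 1 * PJ  # assumed ~1 pJ per event
    memristor_energy = 1 * AJ     # assumed ~1 aJ per event

    # Six orders of magnitude between the two scales
    savings_factor = conventional_energy / memristor_energy
    print(f"Energy reduction factor: {savings_factor:.0e}")

    # What a fixed 20 W power budget buys at each per-event cost
    power_budget = 20.0  # watts
    events_conventional = power_budget / conventional_energy
    events_memristor = power_budget / memristor_energy
    print(f"Events/s at 20 W (conventional): {events_conventional:.1e}")
    print(f"Events/s at 20 W (memristor):    {events_memristor:.1e}")
    ```

    The point of the arithmetic: the same 20 W budget supports roughly a million times more spike events per second at attojoule cost, which is why the headline speculates about running large models on brain-scale power.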

  28. Tomi Engdahl says:

    The brutal metric companies are using to show their AI bets are justified
    https://www.businessinsider.com/companies-can-show-their-ai-success-by-cutting-workers-2026-2?fbclid=IwdGRjcAQS3kJjbGNrBBLeLGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHpzHJgrFZbSFTNl6Po8tzQjSySeZ9UO6HmTpiVvhCmwsajaup6WeZYOHVQGl_aem_mwUcMtcnBfrd1cIuJgGr8A&utm_campaign=mrf-insider-marfeel-headline-graphic&mrfcid=2026030269a5dc7d63f0d226db4e0baa

    AI can do a lot. Showing it’s paying off is another matter.

    As companies pour billions into AI, Wall Street is seeking evidence that those bets are worthwhile — and many CEOs are feeling the pressure.

    In response, some are offering a simple proof point: They need fewer workers to get the job done.

  29. Tomi Engdahl says:

    Users boycott ChatGPT after OpenAI signs Department of War deal
    Rival chatbot Claude tops app charts amid backlash against OpenAI
    https://www.independent.co.uk/tech/cancel-chatgpt-ai-war-claude-anthropic-b2930007.html?fbclid=IwdGRjcAQS5b9jbGNrBBLlq2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHg4eXi2jXgYyFZ8sFhv23akV472jPCDcC4-6CWqwL6Ft7PfcH_YuRLufqpYT_aem_TxvlHbmoh5cy8C5q2cv3rA

    A growing number of ChatGPT users are switching to other AI chatbots after OpenAI signed a deal with the US Department of War.

    The deal with the Pentagon, announced on Friday, comes after the Trump administration sought to terminate a contract with Anthropic after the AI startup raised concerns about its products being used for mass surveillance and autonomous weapons.
    Anthropic CEO Dario Amodei said his company “cannot in good conscience accede to [the department’s] request”, adding that the government was threatening to designate Anthropic a “supply chain risk” – a label reserved for US adversaries that has never been applied to a US company.

    “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service.”

