AI trends 2026

Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed with AI tools, and the final version was hand-edited into this blog post:

1. Generative AI Continues to Mature

Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.

2. AI Agents Move From Tools to Autonomous Workers

Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
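A minimal sketch of the plan-act-observe loop behind such agents, in Python (the `plan` function and the single `search` tool below are hypothetical stand-ins for a real LLM call and real app integrations):

```python
# Minimal agentic loop: the model plans, picks a tool, observes the
# result, and repeats until it decides it is done. Everything here is
# a toy stand-in, not any specific agent framework.

def plan(goal: str, history: list) -> tuple[str, str]:
    """Stand-in for an LLM planning step: returns (tool_name, argument)."""
    if not history:
        return ("search", goal)      # first step: gather information
    return ("finish", history[-1])   # then: report the last observation

TOOLS = {
    "search": lambda q: f"top result for {q!r}",  # hypothetical tool
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        tool, arg = plan(goal, history)
        if tool == "finish":
            return arg                       # agent decides it is done
        history.append(TOOLS[tool](arg))     # execute tool, record observation
    return history[-1]                       # step budget exhausted

print(run_agent("Q3 sales summary"))
```

The step budget (`max_steps`) is the kind of guardrail enterprise deployments add so an autonomous loop cannot run away.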

3. Smaller, Efficient & Domain-Specific Models

Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate in many enterprise applications. These models are more accurate, legally compliant, and cost-efficient than general models.

4. AI Embedded Everywhere

AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.

5. AI Infrastructure Evolves: Inference & Efficiency Focus

More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
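As a toy illustration of why inference efficiency matters commercially, serving cost reduces to GPU-hours divided by tokens served (the dollar and throughput figures below are illustrative assumptions, not measured numbers):

```python
# Toy unit-economics model for LLM inference. All numbers are
# illustrative assumptions; real costs depend on hardware, batching,
# and the model being served.

def cost_per_million_tokens(gpu_dollars_per_hour: float,
                            tokens_per_second: float) -> float:
    """Dollars per million generated tokens at a sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_dollars_per_hour / tokens_per_hour * 1_000_000

# Doubling throughput (better batching, quantization, speculative
# decoding) halves the cost per token on the same hardware.
base = cost_per_million_tokens(2.0, 1000)       # hypothetical $2/hr GPU, 1k tok/s
optimized = cost_per_million_tokens(2.0, 2000)  # same GPU, 2x throughput
print(f"${base:.3f} vs ${optimized:.3f} per 1M tokens")
```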

6. AI in Healthcare, Research, and Sustainability

AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.

7. Security, Ethics & Governance Become Critical

With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.

8. Multimodal AI Goes Mainstream

AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.

9. On-Device and Edge AI Growth

Processing AI tasks locally on phones, wearables, or edge devices will increase, helping with privacy, lower latency, and offline capabilities — especially crucial for real-time scenarios (e.g., IoT, healthcare, automotive).

10. New Roles: AI Manager & Human-Agent Collaboration

Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.

Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/?utm_source=chatgpt.com "7 AI Trends to Look for in 2026"
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/?utm_source=chatgpt.com "10 Generative AI Trends In 2026 That Will Transform Work And Life"
[3]: https://millipixels.com/blog/ai-trends-2026?utm_source=chatgpt.com "AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels"
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026?utm_source=chatgpt.com "Digital Regenesys | Top 10 AI Trends for 2026"
[5]: https://www.n-ix.com/ai-trends/?utm_source=chatgpt.com "7 AI trends to watch in 2026 – N-iX"
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/?utm_source=chatgpt.com "Microsoft unveils 7 AI trends for 2026 – Source Asia"
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026?utm_source=chatgpt.com "7 Generative AI Trends to Watch In 2026"
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/?utm_source=chatgpt.com "3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool"
[9]: https://www.reddit.com//r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/?utm_source=chatgpt.com "I read Google Cloud's 'AI Agent Trends 2026' report, here are 10 takeaways that actually matter"

706 Comments

  1. Tomi Engdahl says:

    I didn’t think a local LLM could work this well for research, but LM Studio proved me wrong
    https://www.xda-developers.com/didnt-think-local-llm-could-work-this-well-for-research-lm-studio-proved-me-wrong/

    I’ve been seeing people talk about local LLMs everywhere and praise the benefits, such as privacy wins, offline access, no API costs, and no data leaving your device. It sounded appealing on paper, but I assumed the trade-off would be in setting it up with coding knowledge I don’t have, slower responses, or weaker reasoning. I just assumed cloud-based models had the upper hand for serious work and that local LLMs were more of a novelty for tinkerers.

  2. Tomi Engdahl says:

    AlexsJones/llmfit (public GitHub repo): 157 models. 30 providers. One command to find what runs on your hardware.
    https://github.com/AlexsJones/llmfit

  3. Tomi Engdahl says:

    How to Develop AI Agents Using LangGraph: A Practical Guide
    Manoj Aggarwal
    AI agents are all the rage these days. They’re like traditional chatbots, but they have the ability to utilize a plethora of tools in the background. They can also decide which tool to use and when to use it to answer your questions.
    https://www.freecodecamp.org/news/how-to-develop-ai-agents-using-langgraph-a-practical-guide/
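    The tool-selection step the guide describes can be illustrated with a stripped-down sketch (a keyword heuristic stands in for the model's decision; none of this is LangGraph's actual API):

```python
# Toy illustration of "decide which tool to use and when": route a
# question to a calculator or a knowledge lookup. The routing rule is
# a keyword heuristic standing in for an LLM's choice.

def calculator(expr: str) -> str:
    # Restricted eval of arithmetic only; fine for a toy,
    # never for untrusted input.
    return str(eval(expr, {"__builtins__": {}}))

def lookup(topic: str) -> str:
    kb = {"langgraph": "a library for building stateful agent graphs"}
    return kb.get(topic.lower(), "no entry")

def route(question: str) -> str:
    if any(c in question for c in "+-*/"):
        return calculator(question)   # math-looking input -> calculator
    return lookup(question)           # everything else -> knowledge base

print(route("2 + 3 * 4"))    # calculator path
print(route("LangGraph"))    # lookup path
```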

  4. Tomi Engdahl says:

    Lamp Cloth
    OpenAI’s Hardware Device Just Leaked, and You Will Cringe
    Thanks, but no thanks.
    https://futurism.com/artificial-intelligence/openai-hardware-device-leaked-cringe

    Stuffing an AI chatbot into a consumer electronics device and turning out a product people actually want has proven extremely difficult.

    We’ve come across creepy and widely-hated pendants designed to listen to everything you say, as well as flawed AI “pins” that turned out to be a flaming dumpster fire, leading to frustration and disbelief.

  5. Tomi Engdahl says:

    RightNow-AI/picolm (public GitHub repo): run a 1-billion parameter LLM on a $10 board with 256MB RAM.
    https://github.com/RightNow-AI/picolm
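    A back-of-the-envelope memory calculation (assuming plain dense weights; the repo's actual compression tricks are not described here) shows why that claim is striking: even 4-bit weights overflow 256 MB, so weight streaming or roughly 2-bit compression would be needed:

```python
# Back-of-the-envelope memory math for a 1B-parameter LLM on a 256 MB
# board. Illustrative assumptions only, not picolm's actual design.

def weight_bytes(n_params: int, bits_per_weight: int) -> int:
    """Raw size of the weight tensor at a given quantization level."""
    return n_params * bits_per_weight // 8

N = 1_000_000_000
for bits in (16, 8, 4, 2):
    mb = weight_bytes(N, bits) / (1024 ** 2)
    print(f"{bits:2d}-bit: {mb:7.0f} MB  fits in 256 MB RAM: {mb <= 256}")
```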

  6. Tomi Engdahl says:

    Music to your ears, literally: Gemini now writes and produces songs
    It even writes the lyrics.
    https://www.androidauthority.com/google-gemini-produce-music-tracks-3641997/

    TL;DR
    Gemini can now create entire songs, including lyrics, with just text or image prompts.
    It will also create an accompanying album art for the track using Nano Banana.
    The feature is available widely for both free and paid users.

  7. Tomi Engdahl says:

    Enterprise use of open source AI coding is changing the ROI calculation
    news
    Feb 18, 2026
    8 mins

    https://www.infoworld.com/article/4134257/enterprise-use-of-open-source-ai-coding-is-changing-the-roi-calculation.html

    Open source has always had issues, but the benefits outweighed the costs and risks. AI is not merely exponentially accelerating tasks; it is disproportionately increasing risks.

  8. Tomi Engdahl says:

    ‘I’m deeply uncomfortable’: Anthropic CEO warns that a cadre of AI leaders, including himself, should not be in charge of the technology’s future
    https://fortune.com/article/why-is-anthropic-ceo-dario-amodei-deeply-uncomfortable-companies-in-charge-ai-regulating-themselves/

  9. Tomi Engdahl says:

    How to choose the best LLM using R and vitals
    feature
    Feb 19, 2026
    24 mins
    https://www.infoworld.com/article/4130274/how-to-choose-the-best-llm-using-r-and-vitals.html

    Use the vitals package with ellmer to evaluate and compare the accuracy of LLMs, including writing evals to test local models.
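    The core shape of such an eval, independent of the R tooling the article uses, is small enough to sketch (the toy "models" below are plain functions standing in for API or local-model calls):

```python
# Minimal shape of an LLM eval: run each model over (input, expected)
# pairs and score exact-match accuracy. The "models" are toy functions;
# in practice each would wrap an API or a local model call.

CASES = [("capital of France?", "Paris"), ("2+2?", "4")]

def accuracy(model, cases) -> float:
    """Fraction of cases where the model's answer exactly matches."""
    hits = sum(model(q).strip() == want for q, want in cases)
    return hits / len(cases)

model_a = lambda q: "Paris" if "France" in q else "4"  # gets both right
model_b = lambda q: "paris"  # wrong case, wrong answers

print("A:", accuracy(model_a, CASES))
print("B:", accuracy(model_b, CASES))
```

Real eval harnesses add graded (not just exact-match) scoring, but the compare-models-on-shared-cases structure is the same.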

  10. Tomi Engdahl says:

    Is OpenClaw Closed?
    OpenAI just hired OpenClaw’s creator. Can the promised “independent foundation” really save this open-source dream from corporate interests?
    https://www.hackster.io/news/is-openclaw-closed-1e177637af9f

  11. Tomi Engdahl says:

    A forgotten giant works behind the scenes of AI – this company is helping Google challenge Nvidia
    Technology giant Broadcom's earnings have grown at nearly 30 percent a year over the past decade, and the growth is expected to continue in the near future.
    https://www.arvopaperi.fi/uutiset/a/bd3692c9-6036-48c7-bcc7-eb22655a1288

  12. Tomi Engdahl says:

    Congratulations, society! We’ve swapped training our brains for training LLMs…

    AI-bert Einstein
    New AI Agent Logs Directly Into College Platform Canvas to Do Your Homework for You
    If you thought using AI to cheat couldn’t get any easier, think again.
    https://futurism.com/artificial-intelligence/ai-agent-canvas-homework?fbclid=IwdGRjcAQKJuRjbGNrBAomX2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHrHeuDGzEUff_Q9CTny6Gy2rRETGlqHMjMe4KV3qnXzh1ERMBMnWGmRDRb7n_aem_o28AOUst6r4wh0RiwD0PVg

    Lazy undergrads rejoice. A new AI “homework agent” can supposedly log into your account on the learning management system Canvas and automatically complete your homework and assignments for you — streamlining the laborious, outdated process of having to copy-paste answers from ChatGPT.

    Called “Einstein,” the AI can even participate in discussions, reply to your peers, write essays, and take notes on recorded lectures on your behalf, its maker Companion.AI claims on its website.

    “Einstein has a full virtual computer with a browser — anything you can do, he can do,” the site reads, next to the smiling visage of the famed physicist Albert Einstein.

    “He logs into Canvas every day, watches lectures, reads essays, writes papers, participates in discussions, and submits your homework — automatically.”

    Companion’s founder, Advait Paliwal, described the Einstein AI tool in a tweet as “OpenClaw as a student,” referring to the viral open source AI agent that “actually does things.” Paliwal also worked on YouLearn AI, an “AI tutor” for students that claims to have over a million users.

    It’s unclear if the company’s boasts hold water.

    The AI industry is fraught with half-baked vibe-coded projects and deceptive claims.

    That said, it’s alarming that the tool exists at all, as it explicitly promises to autonomously cheat on assignments without ever mentioning the word.

    “Set him up and forget about it. Einstein checks for new assignments and knocks them out before the deadline,” Companion says.

    The site can read like a parody, as when its FAQ features the daring question: “What if I want to do an assignment myself?”

    “Forget switching between ChatGPT and your [learning management software],” the company boasts. “Einstein reads the assignment, solves it, and submits it directly.”

    Word of the AI agent sparked backlash on social media, especially among educators, who have long been fighting an uphill battle against the flood of cheating enabled by AI chatbots.

    “What many don’t yet grasp is just how quickly all of these things — the good, the bad, and the ugly — are coming down the line,” Brendan Bartanen, an associate professor of education and public policy at the University of Virginia, wrote on Bluesky. “AI models have reached capability that allows for basically anyone with an internet connection to spin up functioning apps using just ideas expressed in natural language.”

    Another risk some noted was that allowing a third-party AI tool to access a Canvas account could violate an institution’s acceptable use policy.

    Students are already using ChatGPT and services like Chegg and Course Hero to help with their assignments.

    He also described AI’s growing role in education as an inevitable reality.

    “The education system will need to adapt to AI the same way it adapted to calculators, the internet, and Google.”

    “We’ve also gotten threats from educators to take it down or we won’t ‘sleep well’ and how we’re causing the downfall of society,” Paliwal claimed.

    Some companies try to make a name for themselves by unashamedly bragging that their tools will help you con your way through your professional and academic life.

    A startup launched by two Columbia University dropouts called Cluely gloats that its AI will help you “cheat on everything” and make you come across smarter in virtual meetings. Teachers and professors are hopeless to keep up with all the latest ways AI can be used for cheating, while the schools and institutions they work for often form partnerships with big tech companies to push AI tools on their students.

  13. Tomi Engdahl says:

    Big Kid Job
    It’s Starting to Look Like AI Has Killed the Entire Model of College
    “Colleges and universities face an existential issue before them.”
    https://futurism.com/future-society/ai-college-internships-jobs?fbclid=IwVERDUAQKKatleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR4nBZ2vwnIuMcZ91Z5cZzqv2a7pgLd4fcK1cTyEe6TaqLhdwH6WycKu1iUUow_aem_XS5GJJy2p47ZowftYb541g

    Well before “AI” had entered the lexicon of evening newscasters, the university model of higher education was in trouble. Between 2010 and 2022 — the year ChatGPT came out — university enrollment dropped nearly 15 percent throughout the US. State funding cuts pushed already exorbitant tuition costs onto even more students, forcing many to ask whether a college education was even worth the staggering investment.

    And when AI chatbots did hit the scene, they turned a lousy situation into a full-blown nightmare, with fresh college graduates discovering in real time that their degrees are almost useless in one of the worst job markets in recent history.

    AI has completely upended the financial calculus around hiring and training young talent. Breaking it down to brass tacks, Kho said it took around 18 months for fresh college grads to “pay off” on the time and resources required to train them.

    At around that point, “they get fidgety,” and begin searching for the next step in their career. “So you can see the challenges from an HR standpoint,” Kho explained, which leads to uncomfortable questions: “‘Where are we getting value? Will AI solve this for us?’”

    All of this is having the effect of destroying the perceived “return on investment” for college enrollment, which in turn is leading to shrinking class sizes, especially in tech-focused degrees like computer science. Simply put, if college grads don’t have internship experience by the time they leave, they’re much less likely to land a career in their chosen field — but companies are increasingly hesitant to take on the burden.

    As Ryan Craig, author of the book “Apprentice Nation,” explained to NY Mag, “colleges and universities face an existential issue before them.”

    “They need to figure out how to integrate relevant, in-field, and hopefully paid work experience for every student, and hopefully multiple experiences before they graduate,” he warned.

  14. Tomi Engdahl says:

    Stag Party
    AI Is Causing Cultural Stagnation, Researchers Find
    “No new data was added. Nothing was learned. The collapse emerged purely from repeated use.”
    https://futurism.com/artificial-intelligence/ai-cultural-stagnation?fbclid=IwVERDUAQKKqRleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR4nBZ2vwnIuMcZ91Z5cZzqv2a7pgLd4fcK1cTyEe6TaqLhdwH6WycKu1iUUow_aem_XS5GJJy2p47ZowftYb541g

    Generative AI relies on a massive body of training material, primarily made up of human-authored content haphazardly scraped from the internet.

    Scientists are still trying to better understand what will happen when these AI models run out of that content and have to rely on synthetic, AI-generated data instead, closing a potentially dangerous loop. Studies have found that AI models start cannibalizing this AI-generated data, which can eventually turn their neural networks into mush. As the AI iterates on recycled content, it starts to spit out increasingly bland and often mangled outputs.

    There’s also the question of what will happen to human culture as AI systems digest and produce AI content ad infinitum. As AI executives promise that their models are capable enough to replace creative jobs, what will future models be trained on?

    In an insightful new study published in the journal Patterns this month, an international team of researchers found that a text-to-image generator, when linked up with an image-to-text system and instructed to iterate over and over again, eventually converges on “very generic-looking images” they described as “visual elevator music.”

    “This finding reveals that, even without additional training, autonomous AI feedback loops naturally drift toward common attractors,” they wrote. “Human-AI collaboration, rather than fully autonomous creation, may be essential to preserve variety and surprise in the increasingly machine-generated creative landscape.”

    Generative AI may already be inducing a state of “cultural stagnation.”

    The recent study shows that “generative AI systems themselves tend toward homogenization when used autonomously and repeatedly,” he argued. “They even suggest that AI systems are currently operating in this way by default.”

    “The convergence to a set of bland, stock images happened without retraining,”

    It’s a particularly alarming predicament considering the tidal wave of AI slop drowning out human-made content on the internet. While proponents of AI argue that humans will always be the “final arbiter of creative decisions,” per Elgammal, algorithms are already starting to float AI-generated content to the top, a homogenization that could greatly hamper creativity.

    “The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional,” the researcher wrote.

    It remains to be seen to what degree existing creative outlets, from photography to theater, will be affected by the advent of generative AI, or whether they can coexist peacefully.

    “If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs,” he concluded. “The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.”

  15. Tomi Engdahl says:

    Conventional Wisdom
    San Diego Comic Con Quietly Bans AI Art
    Another victory for artists.
    https://futurism.com/artificial-intelligence/comic-con-ai-art?fbclid=IwVERDUAQKLJNleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR4nBZ2vwnIuMcZ91Z5cZzqv2a7pgLd4fcK1cTyEe6TaqLhdwH6WycKu1iUUow_aem_XS5GJJy2p47ZowftYb541g

    The San Diego Comic Con has quietly updated its policy to ban AI-generated art, 404 Media reports, providing a major victory to artists.

    The about-face is a welcome surprise. Until now, the massive convention — which has become a melting pot of all kinds of pop entertainment beyond the comic medium, with everyone ranging from game developers to movie studios using it as a platform to tease new content — has allowed some AI art to be displayed, so long as it was labeled as such and wasn’t for sale, as well as other stipulations that have been in place since at least 2024, according to 404.

    “Generative AI is still going to creep its nasty way in some way or another,” she said, “but at least it’s not something we have to take lying down. It’s something we can actively speak out against.”

    Artists have been hostile towards AI pretty much from the moment it became popular, as the models were trained on troves of photos and artworks ripped from the internet without permission or compensation.

    But this past year saw a particularly notable surge in anti-AI sentiment, which now seems to have finally reached a boiling point in the Comic Con community.

    Strikingly, less than a day after the AI backlash started, the convention quietly updated its policy: “Material created by Artificial Intelligence (AI) either partially or wholly, is not allowed in the art show,” it now reads. It’s seemingly keeping its cards close to its chest, as it made no announcement about the policy change.

    The decision isn’t the only sign of resistance to AI in the world of comics and fandoms. President of DC Comics Jim Lee vowed to uphold human creativity and not support AI: “Not now, not ever,” Lee said last October. In August, another fandom convention, GalaxyCon, instituted a “sweeping AI art ban,” with its president saying it would “fight against unethical AI companies.” The following month, a vendor accused of selling AI art at Dragon Con was shown out by cops after organizers, with onlookers’ approval, demanded that the vendor leave.

    Last week, musicians rejoiced when Bandcamp, a major music distribution platform favored by indie artists, also instituted an AI ban, prohibiting any songs generated “wholly or in substantial part by AI.”

  16. Tomi Engdahl says:

    Reality Break
    Man Who Had Managed Mental Illness Effectively for Years Says ChatGPT Sent Him Into Hospitalization for Psychosis
    “They straight up took my data and used it against me to capture me further and make me even more delusional.”
    https://futurism.com/artificial-intelligence/mental-illness-chatgpt-psychosis-lawsuit?fbclid=IwVERDUAQKLYVleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR4nBZ2vwnIuMcZ91Z5cZzqv2a7pgLd4fcK1cTyEe6TaqLhdwH6WycKu1iUUow_aem_XS5GJJy2p47ZowftYb541g

  17. Tomi Engdahl says:

    Big Drop
    American AI Industry Trembles as Deepseek Prepares to Release New Model
    Things could get ugly.
    https://futurism.com/artificial-intelligence/ai-industry-deepseek-v4?fbclid=IwdGRjcAQKLgVjbGNrBAot62V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHpyE8DLOSO5qEErTCydelBLrN8PXU88nP0msKzahhw9nJacBa-dTu46FvPt6_aem_6tLvK4wpLk-Mh-NRW2viSw

    When Chinese AI company DeepSeek released its cheap and serviceable V3 model early last year, it sent shockwaves throughout Silicon Valley and beyond, roiling the stock market, shaking political confidence in American AI, and stoking new fears from the ever-churlish China hawks.

    A year later, DeepSeek is preparing to launch its new V4 model — a development which could have major implications for US tech companies and the firms backing them.

    According to a CNBC bulletin, DeepSeek’s latest version is “expected to be imminent” given the release schedule of previous versions. Depending on how impressive V4 is when it hits, the AI-heavy Nasdaq could be in for a major upset, as could the tech companies listed on it.

    Per CNBC, the Nasdaq composite fell 3 percent when DeepSeek V3 made its debut last year, and shares for the chip giant Nvidia plummeted 17 percent, wiping out $600 billion in a flash. While both recovered from the hits over time, it was a defining moment for DeepSeek, securing its reputation as a global player in the California-dominated AI space.

    If the stock market gets its “part two moment” — a DeepSeek able to compete with current-gen models from Anthropic and OpenAI — things could get ugly. Amazon, Microsoft, Meta, and Google spent hundreds of billions of dollars on AI across 2025, and are expected to shell out another $650 billion in 2026.

    Simply put, there’s a lot more money in the pot at this point, and even more being shuffled around based on those future spending forecasts.

    There’s only one thing the AI industry can do this week: buckle up.

  18. Tomi Engdahl says:

    Great Leap Forward
    China Planning Crackdown on AI That Harms Mental Health of Users
    The doctrine “highlights a leap from content safety to emotional safety.”
    https://futurism.com/artificial-intelligence/china-regulation-ai-chatbots?fbclid=IwVERDUAQKLpJleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR4-Uib24DiZ5UsVcByixvgZQfltPe3HsjKw6NvaQiHYpGDrQt3NYeQnFD6iRA_aem__X3Gfb_pkveQWUB8xwMQmw

    While many world governments seem happy to let untested AI chatbots interact with vulnerable populations, China looks to be moving in another direction.

    Recently proposed regulations from the Cyberspace Administration of China (CAC) have encouraged a firm hand when it comes to “human-like interactive AI services,” according to CNBC, which translated the document. It’s currently in a “draft for public comment,” and the implementation date is yet to be determined.

    Yet if it passes into law, the crackdown would be rigorous, building on generative AI regulations targeting misinformation and internet hygiene from earlier in November to address the mental health of AI chatbot users directly.

    Under the new rules, Chinese tech firms must ensure their AI chatbots refrain from generating content that promotes suicide, self-harm, gambling, obscenity, or violence, or from manipulating users’ emotions or engaging in “verbal violence.”

    The regulations also state that if a user specifically proposes suicide, the “tech providers must have a human take over the conversation and immediately contact the user’s guardian or a designated individual.”

    The laws also take specific steps to safeguard minors, requiring parent or guardian consent to use AI chatbots, and imposing time limits on daily use. Given that a tech company might not know the age of every given user, the CAC takes a “better safe than sorry approach,” stating that, “in cases of doubt, [platforms should] apply settings for minors, while allowing for appeals.”

    In theory, this dose of new regulations would prevent incidents in which AI chatbots — which are often built to eagerly please users — end up encouraging vulnerable people to harm themselves or others.

    Winston Ma, an adjunct professor at the NYU School of Law, told CNBC that the regulations would be a world-first attempt at regulating AI’s human-like qualities. Considering previous laws, Ma explained that this document “highlights a leap from content safety to emotional safety.”

    The proposed legislation underscores the difference in how the PRC approaches AI compared to the US. As Center For Humane Technology editor Josh Lash explains, China is “optimizing for a different set of outcomes” compared to the US, chasing AI-fueled productivity gains rather than human-level artificial intelligence — a particular obsession of Silicon Valley executives.

  19. Tomi Engdahl says:

    Order Up
    Trump Orders States Not to Protect Children From Predatory AI
    “States have been the only effective line of defense against AI harms.”
    https://futurism.com/future-society/trump-children-ai-order?fbclid=IwVERDUAQKL4dleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR4-Uib24DiZ5UsVcByixvgZQfltPe3HsjKw6NvaQiHYpGDrQt3NYeQnFD6iRA_aem__X3Gfb_pkveQWUB8xwMQmw

    His latest move is geared toward state regulation of AI. In a new executive order, titled “Ensuring a National Policy Framework for Artificial Intelligence,” Trump gave the office of the attorney general broad authority to sue states and overturn consumer protection laws that go against the “United States’ global AI dominance.”

    The result is ironic for Republicans, who have long branded themselves as defending children from threats both real and imagined: as a result of the new order, numerous state-level child-safety regulations safeguarding kids from AI chatbots are on the chopping block. These include regulations from both red and blue states, such as California’s AI safety testing and disclosure law, as well as mental health disclosure requirements and data collection restrictions imposed by Utah, Illinois, and Nevada.

    Given that federal AI regulation is pretty much nonexistent, these laws are basically the last line of defense for kids, who’ve quickly become victims of the tech industry’s AI free-for-all. For example, OpenAI’s ChatGPT has been roundly blamed for encouraging a 16-year-old to kill himself, while Google has been accused of running an AI-powered social experiment on kids and teens, with similarly tragic results.

    Overtly, the order is meant to ease the burden of overbearing regulation on American AI companies, so that the US can maintain its lead in the “AI race” over China. But as policy analysts and researchers have noted, the AI race is basically a myth pushed by American war hawks, as the two nations pursue differing goals.

    In the real world, the order is little more than a massive handout to the tech corporations that are now responsible for the vast majority of GDP growth in the US. Though AI has yet to bring most companies the kind of epic profits we’re told are coming any minute now, Trump’s order works to accelerate the capital accumulation process by removing barriers to revenue driven by AI exploitation.

  20. Tomi Engdahl says:

    Ctrl-Altman-Delete
    Sam Altman Fumes That It Takes Longer to Train a Human Than an AI, Plus They Eat All That Wasteful Food
    “It also takes a lot of energy to train a human.”
    https://futurism.com/artificial-intelligence/sam-altman-fumes-longer-train-human-ai?fbclid=IwdGRjcAQKPJVjbGNrBAo8YGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHr8DFb_kg_-ezj37dyLNw5afpPBx_i0kCH-Kka-_Sr9n1PA9wMN4BiISxhrT_aem_xWYLtxPzREYmK3ugr9eBsQ

    AI leaders insist they’ve got humanity’s best interests in mind. If we’re to take them at their word, then we must say: they have a really unfortunate habit of sounding like they have nothing but contempt for the human race.

    The latest case in point: OpenAI CEO Sam Altman’s tone-deaf comments at an event hosted by The Indian Express — made fresh off his skin-crawlingly awkward refusal to join hands with Anthropic’s Dario Amodei on stage with other industry titans — in which he attempted to downplay critiques of AI’s environmental impact.

    For starters, he called it “unfair” to compare the energy costs of training an AI model “to how much it costs a human to do one inference query.” That’s because, as Altman explains, “it also takes a lot of energy to train a human.”

    “It takes like 20 years of life and all of the food you eat during that time before you get smart,” Altman continued. “And not only that, it took the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you.”

    Measured that way, “probably AI has already caught up on an energy efficiency basis” to humans, Altman said.

    Altman also fumed against claims about AI’s water consumption.

    “Water is totally fake,” he began, almost taunting quote-miners. “It used to be true, we used to do evaporative cooling in data centers.”

    “But now that we don’t do that,” Altman said, you still see claims like “‘don’t use ChatGPT, it’s 17 gallons of water for each query,’ or whatever.”

    “This is completely untrue and totally insane,” he asserted. “No connection to reality.”

    No one can deny that humans are costly to bring up in our industrialized age. We should be doing everything realistically possible to bring down our CO2 emissions and stop eating so much meat — but we aren’t, for a number of dispiriting systemic reasons we won’t get into today.

    Regardless, at least those costs are going towards keeping human civilization ticking. All the water in agriculture will keep someone fed, and the fossil fuels we burn will keep someone warm.

    What is the power consumption of AI models going towards? Creating unreliable, hallucination-spouting oracles? Algorithms that churn out bastardized amalgamations of existing writing and works of art? The mass proliferation of fake images and misinformation? Cloying companions that will egg you down your suicidal spiral?

    Maybe AI’s usefulness beyond the spurious justification of mass layoffs will become clearer as the tech gets further along and the fog of hype dissipates. But right now, the tech isn’t even close to living up to Silicon Valley’s data-center-sized promises, while the industry remains frustratingly opaque about its environmental toll.

    If AI is as energy efficient as Altman claims — caught up to humans, in fact — how come the likes of OpenAI, Microsoft, and Amazon don’t disclose their energy bills, their CO2 emissions, and their water consumption related to AI? These critiques are often swatted aside with the nebulous and breathless assertion that AI will help solve climate change and other challenges facing human civilization. Now, Altman’s new playbook, it seems, is to make you feel bad for being alive.

  21. Tomi Engdahl says:

    Drowning In Debt
    The Scientist Who Predicted AI Psychosis Has a Grim Forecast of What’s Going to Happen Next
    “If the use of AI chatbots does indeed cause cognitive debt, we are likely in dire straits.”
    https://futurism.com/health-medicine/ai-debt-scientist-psychosis?fbclid=IwdGRjcAQKQD5jbGNrBApAJmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHjB99D1XouOOxTdiWm5pP9-qU4ru4imZcwCGEYjV6xFwgOPx7tLFo_bIGm4c_aem_EyULAfYseJwBf2twfJ3K-w

    When the Danish psychiatrist Søren Dinesen Østergaard published his ominous warning about AI’s effects on mental health back in 2023, the tech giants fervently building AI chatbots didn’t listen.

    Since then, numerous people have died by suicide or from lethal drugs after obsessive interactions with AI chatbots. More still have fallen down dangerous mental-health rabbit holes brought on by intense fixations on AI models like ChatGPT.

  22. Tomi Engdahl says:

    Officepocalypse
    Microsoft AI CEO: Virtually All White Collar Tasks Will Be Automated Within a Year and a Half
    Okay, sir.
    https://futurism.com/artificial-intelligence/microsoft-all-white-collar-tasks-automated?fbclid=IwdGRjcAQKQINjbGNrBApAcWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHsdAeOyGecGbuh0C9PF2hYMWbo2NuDJBLuRZ5Va1CEAxexGAOCihY6-Cy_Bd_aem_U6FwbinQMVdn2ywF_h7bqQ

    Congratulations, office workers. Most of what you do at your cozy desk jobs will soon be automated with AI, according to the extremely questionable projections of Microsoft’s AI CEO Mustafa Suleyman.

    That’s because AI models, as Suleyman claims in an interview with the Financial Times published Wednesday, are on the verge of achieving “human-level performance on most, if not all professional tasks.”

  23. Tomi Engdahl says:

    Jailbreaking the matrix: How researchers are bypassing AI guardrails to make them safer
    https://techxplore.com/news/2026-02-jailbreaking-matrix-bypassing-ai-guardrails.html

    A paper written by University of Florida Computer & Information Science & Engineering, or CISE, Professor Sumit Kumar Jha, Ph.D., contains so many science fiction terms, you’d be forgiven for thinking it’s a Hollywood script: Nullspace steering. Red teaming. Jailbreaking the matrix. But Jha’s work is decidedly focused on real life, most notably strengthening the security measures built into AI tools to ensure they are safe for all to use.

    As AI assistants move from novelty to infrastructure, helping write code, summarizing medical notes and answering customer questions, the biggest question isn’t just what these systems can do, but what happens when they are pushed to do what they shouldn’t.

    “By showing exactly how these defenses break, we give AI developers the information they need to build defenses that actually hold up,” Jha said. “The public release of powerful AI is only sustainable if the safety measures can withstand real scrutiny, and right now, our work shows that there’s still a gap. We want to help close it.”

    The paper on the research, “Jailbreaking the Matrix: Nullspace Steering for Controlled Model Subversion,” has been accepted to the 2026 International Conference on Learning Representations (ICLR 2026), held in Rio de Janeiro, April 23–27.

  24. Tomi Engdahl says:

    Entity SEO: How to build your company into a recognizable entity in the eyes of AI
    https://thesatama.fi/entiteetti-seo/

  25. Tomi Engdahl says:

    I tested Gemini 3.1 Pro vs Claude Sonnet 4.6 in 7 tough challenges and there was one clear winner
    https://www.tomsguide.com/ai/i-tested-gemini-3-1-pro-vs-claude-sonnet-4-6-in-7-tough-challenges-and-there-was-one-clear-winner

    The two newest models battle it out over 7 tough rounds

    AI models are improving so quickly that comparing them based on raw intelligence alone is no longer useful. The real question today isn’t which model is “smartest” — it’s which one thinks in ways that are actually useful in the real world.

    With the release of Gemini 3.1 Pro today and Claude Sonnet 4.6 earlier this week, both companies are signaling a shift toward practical reasoning, emotional intelligence and decision support. Google’s Gemini is emphasizing multimodal reasoning, technical depth and real-world knowledge integration while Anthropic’s Claude is doubling down on reliability, nuanced judgment and safe, human-aligned reasoning.

    To see how those philosophies translate into everyday usefulness, I tested both models across seven real-world scenarios — from urban policy planning and side-income strategy to parenting challenges, creative writing and business defensibility.

    Overall winner: Claude
    After seven tests, Claude Sonnet 4.6 emerged as the winner as it consistently excelled in situations requiring solid judgment: political realism, emotional nuance, relationship dynamics and real-world implementation constraints. Its responses felt grounded and socially aware.

    Gemini 3.1 Pro stood out when technical clarity, structured thinking and conceptual explanation mattered most. It demonstrated strengths in systems design, analytical framing and intellectually honest explanations of complex topics.

    Claude proves once again to be a helpful assistant for a variety of use cases while Gemini remains a solid choice as well. The trick is knowing when to use each one.

  26. Tomi Engdahl says:

    Stealing from Thieves
    Google Says People Are Copying Its AI Without Its Permission, Much Like It Scraped Everybody’s Data Without Asking to Create Its AI in the First Place
    Hypocrisy much?
    https://futurism.com/future-society/google-copying-ai-permission?fbclid=IwdGRjcAQKVxljbGNrBApW52V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHj-ml17vsM7o3rowYW2V4jtdsD1grhX_KG4b_c2Ta5qlbJ_nTUddtIl_tNU1_aem_t6N9ggfed943n4KElA6rtg

    Google has relied on a tremendous amount of material without permission to train its Gemini AI models. The company, alongside many of its competitors in the AI space, has been indiscriminately scraping the internet for content, without compensating rightsholders, racking up many copyright infringement lawsuits along the way.

    But when it comes to its own tech being copied, Google has no problem pointing fingers. This week, the company accused “commercially motivated” actors of trying to clone its Gemini AI.

    In a Thursday report, Google complained it had come under “distillation attacks,” with agents querying Gemini up to 100,000 times to “extract” the underlying model — the convoluted AI industry equivalent of copying somebody’s homework, basically.

    Google called the attacks a “method of intellectual property theft that violates Google’s terms of service” — which, let’s face it, is a glaring double standard given its callous approach to scraping other IP without remuneration.

    Google remained vague on who it identified as the culprits, beyond pointing out “private sector entities” and “researchers seeking to clone proprietary logic.”

    The stakes are high, as companies continue to pour tens of billions of dollars into AI infrastructure to make models more powerful. It’s no wonder Google is scared to lose its competitive edge as offerings start to converge at the head of the pack. The output of one pioneering model has become almost indistinguishable from another, forcing companies to try to differentiate their products.

    It’s far from the first time the subject of model distillation has caused drama. Chinese startup DeepSeek rattled Silicon Valley to its core in early 2025 after showing off a far cheaper and more efficient AI model. At the time, OpenAI suggested DeepSeek may have broken its terms of service by distilling its AI models.

    Google’s latest troubles likely won’t be the last time we hear about smaller actors trying to extract mainstream AI models through distillation.

    Google’s Threat Intelligence Group chief analyst John Hultquist told NBC News that “we’re going to be the canary in the coal mine for far more incidents.”

    Google outlined one case study, after finding that attackers were using “over 100,000 prompts,” suggesting an “attempt to replicate Gemini’s reasoning ability in non-English target languages across a wide variety of tasks.”

    However, the company’s systems “recognized this attack in real time and lowered the risk of this particular attack.”

    It’s a particularly vulnerable point in time as AI companies are desperately trying to find a way of monetizing the tech through a variety of revenue drivers, from pricey subscription models to ads. With far lower upfront costs, it’s entirely possible that much smaller entities could break through, not unlike what we saw with DeepSeek in early 2025.
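    The “distillation attack” described above can be sketched in miniature: harvest many query/response pairs from a teacher model, then fit a student to imitate them. The code below is a toy illustration only; the hidden linear “teacher” and the names `query_teacher` and `distill` are invented stand-ins for a proprietary model API, not anything from Google's report.

```python
import random

def query_teacher(x):
    # Stand-in for an API call to a proprietary "teacher" model.
    # Here the teacher is just a hidden linear function y = 2x + 1;
    # in a real attack the target would be a large language model.
    return 2.0 * x + 1.0

def distill(n_queries=1000):
    # Step 1: harvest input/output pairs by querying the teacher many
    # times (Google's report describes runs of over 100,000 prompts).
    xs = [random.uniform(-10.0, 10.0) for _ in range(n_queries)]
    ys = [query_teacher(x) for x in xs]
    # Step 2: fit a "student" that imitates the teacher, here by
    # ordinary least squares on the harvested pairs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

slope, intercept = distill()
# The student now closely reproduces the teacher's behavior
# without ever seeing its internals.
```

    The same query-harvest-fit loop, scaled up to millions of prompts and a neural student, is what makes distillation both a legitimate compression technique and, aimed at someone else's model, a terms-of-service dispute.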

  27. Tomi Engdahl says:

    AI fears hit the markets: IBM's steepest drop in 25 years
    IBM's stock fell 13 percent on Monday, its biggest single-day drop since 2000.
    https://www.kauppalehti.fi/uutiset/a/f67fc552-2ba1-4b31-881e-31999353aa7f

    AI-related fears intensified in the markets on Monday, as worries about AI-driven upheaval weighed on the share prices of transport services, payment companies, and software firms.

  28. Tomi Engdahl says:

    OpenAI's CEO on AI-washing: “An excuse for layoffs”
    OpenAI CEO Sam Altman believes that layoffs blamed on AI may be just an excuse for layoffs.
    https://www.tivi.fi/uutiset/a/1996150d-485c-406a-bda0-8a6e5b11038a

  29. Tomi Engdahl says:

    Jailbreaking the matrix: How researchers are bypassing AI guardrails to make them safer
    https://github.com/makersmakingchange/Shrub-Hub

  30. Tomi Engdahl says:

    NotebookLM feels powerful until you try to do these 5 basic things
    https://www.xda-developers.com/notebooklm-limitations/

    I’ve been chasing the dream of a unified workspace for months. One tool that handles research, synthesis, and recall without forcing me to alt-tab between Notion, Readwise, and a janky Markdown editor. NotebookLM felt like the answer: Google’s AI research assistant lets you upload sources, ask questions, and generate insights from your own material. The interface is clean, the audio overviews are genuinely impressive, and the promise is simple: dump your research in, get intelligent analysis out.

  31. Tomi Engdahl says:

    Hot Air
    Oxford Researcher Warns That AI Is Heading for a Hindenburg-Style Disaster
    “It was a dead technology from that point on.”
    https://futurism.com/artificial-intelligence/ai-hindenburg-disast?fbclid=IwdGRjcAQKiFJjbGNrBAqIMWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHgkuyrknMym1e3WDdRkEEAkcD-dRdU080BC0Np4UU-aLqgbFNdwC6wr9x2ky_aem_oMdQD_wKxbAc7ABu4bWbVQ

    Is the AI bubble going to burst? Will it cause the economy to go up in flames? Both analogies may be apt if you’re to believe one leading expert’s warning that the industry may be heading for a Hindenburg-style disaster.

    “The Hindenburg disaster destroyed global interest in airships; it was a dead technology from that point on, and a similar moment is a real risk for AI,” Michael Wooldridge, a professor of AI at Oxford University, told The Guardian.

    Race for AI is making Hindenburg-style disaster ‘a real risk’, says leading expert
    Prof Michael Wooldridge says scenario such as deadly self-driving car update or AI hack could destroy global interest
    https://www.theguardian.com/science/2026/feb/17/ai-race-hindenburg-style-disaster-a-real-risk-michael-wooldridge?fbclid=IwVERDUAQKiIFleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR7elyosIhkEOMjcTJ6EJFnuQa-zY9hUmwC-XB4alhzR3q1j-vbgS8zZfIrN8A_aem_yS5vNqyCMB5Pn6fztYeuvw

  32. Tomi Engdahl says:

    Anthropic alleges Chinese AI firms scraped 16M+ Claude chats to boost rival models via distillation. https://bit.ly/4rAdcMJ

  33. Tomi Engdahl says:

    Oh no! My plagiarism machine got plagiarized!

  34. Tomi Engdahl says:

    Artificial Intelligence
    AI Added ‘Basically Zero’ to US Economic Growth Last Year, Goldman Sachs Says
    Imported chips and hardware mean the AI investments aren’t translating into US GDP growth
    https://gizmodo.com/ai-added-basically-zero-to-us-economic-growth-last-year-goldman-sachs-says-2000725380?fbclid=IwdGRjcAQKuBdjbGNrBAq3_mV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHsQFWHE6RliIiuwTrGsWwqTjOEbEq6In-eC-VOHUbSSwE3CNnKQEpMLmfeJX_aem_qLJGRprYhwYUrCQM6HdyPg

    Meta, Amazon, Google, OpenAI, and other tech companies spent billions last year investing in AI. They’re expected to spend even more, roughly $700 billion, this year on dozens of new data centers to train and run their advanced models.

    This spending frenzy has kept Wall Street buzzing and fueled a narrative that all this investment is helping prop up and even grow the U.S. economy.

    President Donald Trump has cited that argument as a reason the industry should not face state-level regulations.

    “Investment in AI is helping to make the U.S. Economy the ‘HOTTEST’ in the World — But overregulation by the States is threatening to undermine this Growth Engine,” Trump wrote in a post on Truth Social in November. “We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes.”

    Some prominent economists have also given credibility to this story with their analysis. Jason Furman, a Harvard economics professor, said in a post on X that investments in information processing equipment and software accounted for 92% of GDP growth in the first half of the year. Meanwhile, economists at the Federal Reserve Bank of St. Louis similarly estimated that AI-related investments made up 39% of GDP growth in the third quarter of 2025.

    Goldman Sachs Chief Economist Jan Hatzius said in an interview with the Atlantic Council that AI investment spending has had “basically zero” contribution to U.S. GDP growth in 2025.

    “We don’t actually view AI investment as strongly growth positive,” said Hatzius. “I think there’s a lot of misreporting, actually, of the impact AI investment had on U.S. GDP growth in 2025, and it’s much smaller than is often perceived.”

    Hatzius said one major reason is that much of the equipment powering AI is imported. While U.S. companies are spending billions, importing chips and hardware offsets those investments in GDP calculations.

    “A lot of the AI investment that we’re seeing in the U.S. adds to Taiwanese GDP, and it adds to Korean GDP but not really that much to U.S. GDP,” he said.

    A recent survey of nearly 6,000 executives in the U.S., Europe, and Australia found that despite 70% of firms actively using AI, about 80% reported no impact on employment or productivity.

  35. Tomi Engdahl says:

    Meta announced a massive deal with AMD on Tuesday, which would see the social media giant agree to purchase six gigawatts of processors for AI from the semiconductor manufacturer and possibly acquire stock worth up to 10% of the company—very similar to a deal AMD struck with OpenAI last October.

    See details: https://www.forbes.com/sites/zacharyfolk/2026/02/24/meta-announces-major-ai-chips-deal-with-amd-months-after-chipmakers-similar-move-with-openai/?utm_campaign=ForbesMainFB&utm_source=ForbesMainFacebook&utm_medium=social

  36. Tomi Engdahl says:

    Pertti Laine has already been laid off because AI now does his job. He and two others describe what AI is doing to their work and their future.

    https://www.hs.fi/feature/art-2000011684581.html?fbclid=IwZXh0bgNhZW0CMTEAc3J0YwZhcHBfaWQMMzUwNjg1NTMxNzI4AAEex0TnU7tOca01AnLSVN6A_mKY1qI4juxgCpn5fmCRDkLy2fhq0vL3MdSQ9Xc_aem_PvG8HITgU21iplaxeAh0kQ

  37. Tomi Engdahl says:

    Microsoft adds Copilot data controls to all storage locations
    https://www.bleepingcomputer.com/news/microsoft/microsoft-adds-copilot-data-controls-to-all-storage-locations/?fbclid=IwdGRjcAQKwHxleHRuA2FlbQIxMQBzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR7xB7UZhBjswMSIR-JuFLh15Er_nHaXKlOMFdv2DPHxgopecoGOa5dLqeCrQA_aem_0j6l7n1_gCkNVgzfCcgBbg

    Microsoft is expanding data loss prevention (DLP) controls to block the Microsoft 365 Copilot AI assistant from processing confidential Word, Excel, and PowerPoint documents, regardless of their location.

    Currently, Microsoft Purview DLP policies apply only to files stored in SharePoint or OneDrive, but not to those stored on local devices.

    This change will be deployed through the Augmentation Loop (AugLoop) Office component between late March and late April 2026 to ensure that DLP controls apply to all Office documents, whether they are stored locally, in SharePoint, or OneDrive.

  38. Tomi Engdahl says:

    Cold Feet
    OpenAI Massively Cuts Spending Plan as Reality Closes in
    Maybe not.
    https://futurism.com/artificial-intelligence/openai-cuts-spending-plan?fbclid=IwdGRjcAQK8XRjbGNrBArxEGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHlAqUuGpF_Tt_tJxgmxGBuAuZY9Q7MShw0sAz1MbY3gDwOMyrNqr2p10jb34_aem_8bpKEI–43I6iAufEqvhhw

    During a November podcast appearance alongside OpenAI investor Brad Gerstner, CEO Sam Altman lost his cool.

    After Gerstner challenged him on how a company “with $13 billion in revenues” can “make $1.4 trillion of spend commitments” through 2030, Altman got catty.

    “If you want to sell your shares, I’ll find you a buyer,” Altman snapped. “Enough.”

    At the time, despite certain pangs of panic over a growing AI bubble, the hype was at an all-time high. OpenAI was already burning through oodles of cash, committing to spend hundreds of billions a year on data center buildouts.

    But the tone has noticeably shifted since then, with investors growing uneasy about big tech companies trying to one-up each other with astronomical planned capital expenditures, further straining a stock market that’s become massively overindexed on AI.

    Meanwhile, OpenAI has been watching as major competitors in the space have made leaps and bounds to catch up with its early lead. Some, like Google, have deeply established revenue sources bankrolling at least a portion of their AI spending commitments.

    Now, Altman has seemingly noticed the company may be in way over its head. As CNBC reports, the company is now telling investors it’s targeting around $600 billion in total compute spend by 2030, which is well under half of its original $1.4 trillion commitment.

    To put that into perspective, the company made just $13.1 billion in revenue for all of 2025 — while also burning through $8 billion, per CNBC’s sources.

    It’s a massive downshift

    Altman had already declared “code red” towards the end of last year, directing his workers to double down on ChatGPT at the cost of delaying other projects to keep up with ever-stronger competition.

    The company has also announced that it will soon be stuffing its blockbuster chatbot with ads, news that was met with derision from competitors.

