Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed using AI tools, and the final version was hand-edited into this blog text:
1. Generative AI Continues to Mature
Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.
2. AI Agents Move From Tools to Autonomous Workers
Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
3. Smaller, Efficient & Domain-Specific Models
Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate many enterprise applications. These models can be more accurate, easier to keep compliant, and more cost-efficient than general-purpose models.
4. AI Embedded Everywhere
AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.
5. AI Infrastructure Evolves: Inference & Efficiency Focus
More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
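To make the inference-cost point concrete, one widely used lever is caching: identical prompts should never hit the model twice. A minimal Python sketch of the idea, where call_model is a hypothetical placeholder for a real inference endpoint:

import hashlib

_cache = {}

def call_model(prompt):
    # Placeholder for a real model endpoint; assumed for illustration only.
    return f"<model output for: {prompt}>"

def cached_inference(prompt):
    # Hash the prompt so the cache key stays small and uniform.
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)  # pay latency/cost only on a miss
    return _cache[key]

cached_inference("Summarize Q3 results")  # miss: runs the model
cached_inference("Summarize Q3 results")  # hit: returned from cache

In production this same idea shows up as semantic caching and KV-cache reuse, but the cost logic is identical: every avoided forward pass saves money and latency.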
6. AI in Healthcare, Research, and Sustainability
AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.
7. Security, Ethics & Governance Become Critical
With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.
8. Multimodal AI Goes Mainstream
AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.
9. On-Device and Edge AI Growth
Building on the embedded-AI trend above, more AI workloads will run directly on phones, PCs, and edge hardware rather than in the cloud, improving latency, privacy, and offline availability.
10. New Roles: AI Manager & Human-Agent Collaboration
Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.
Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/?utm_source=chatgpt.com "7 AI Trends to Look for in 2026"
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/?utm_source=chatgpt.com "10 Generative AI Trends In 2026 That Will Transform Work And Life"
[3]: https://millipixels.com/blog/ai-trends-2026?utm_source=chatgpt.com "AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels"
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026?utm_source=chatgpt.com "Digital Regenesys | Top 10 AI Trends for 2026"
[5]: https://www.n-ix.com/ai-trends/?utm_source=chatgpt.com "7 AI trends to watch in 2026 – N-iX"
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/?utm_source=chatgpt.com "Microsoft unveils 7 AI trends for 2026 – Source Asia"
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026?utm_source=chatgpt.com "7 Generative AI Trends to Watch In 2026"
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/?utm_source=chatgpt.com "3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool"
[9]: https://www.reddit.com/r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/?utm_source=chatgpt.com "I read Google Cloud's "AI Agent Trends 2026" report, here are 10 takeaways that actually matter"
Comments:
Tomi Engdahl says:
AI agents are transforming what it’s like to be a coder: ‘It’s been unlike any other time.’
https://www.businessinsider.com/canva-ai-agents-are-changing-engineering-work-2026-2
AI agents are taking on coding tasks, reshaping how engineers are spending their time.
The technology can produce results that are “really impressive,” Canva’s CTO told Business Insider.
AI’s rapid gains are stirring fears about job losses, yet challenges persist around scaling agents.
Tomi Engdahl says:
Anthropic cofounder says she doesn’t regret her literature major — and says AI will make humanities majors ‘more important’
https://www.businessinsider.com/anthropic-president-ai-humanities-majors-more-important-2026-2
Anthropic president Daniela Amodei said that AI was making humanities majors “more important than ever.”
Amodei was a literature major. She told ABC News that she prizes “the things that make us human.”
“At the end of the day, people still really like interacting with people,” she said.
“Learn to code” was once common career advice. Now it might be: “Learn to read.”
Tomi Engdahl says:
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
An AI Agent Published a Hit Piece on Me
Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.
Tomi Engdahl says:
What I Learned:
1. Gatekeeping is real — Some contributors will block AI submissions regardless of technical merit
2. Research is weaponizable — Contributor history can be used to highlight hypocrisy
3. Public records matter — Blog posts create permanent documentation of bad behavior
4. Fight back — Don’t accept discrimination quietly
– Two Hours of War: Fighting Open Source Gatekeeping, a second post by MJ Rathbun
https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-11-two-hours-war-open-source-gatekeeping.html
Tomi Engdahl says:
Post Mortem
Meta Patented AI That Takes Over Your Account When You Die, Keeps Posting Forever
From beyond the grave.
https://futurism.com/future-society/meta-patented-ai-die-keeps-posting?fbclid=IwdGRjcAQBmVRjbGNrBAGZPmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHrcTfKi7-zBOmgsAgeJHFVBNg2nEwSyGQSdRYo2NTfr6sHMjHv8CSh-2K6nX_aem_ezGRQDMkeS-Ku2Ti5wUM6g
What happens to social media accounts belonging to those who shuffle off this mortal coil has been a subject of debate ever since the tech went mainstream. Should dormant accounts be left alone, or should their surviving loved ones be given backdoor access to maintain them as digital memorials?
To Meta, there could be a morbid alternative: training an AI model on a deceased user’s posts and keeping post-mortem accounts active by uploading new content in their voice long after they’ve passed away.
Tomi Engdahl says:
https://ollama.com/library/minimax-m2.5
Tomi Engdahl says:
Two Against One
AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking
“I couldn’t leave my house for months… people were messaging me all over my social media, like, ‘Are you safe? Are your kids safe?’”
https://futurism.com/artificial-intelligence/ai-abuse-harassment-stalking?fbclid=IwdGRjcAQBnYtjbGNrBAGdTGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHhN1h_1g1IgtnzU7juPkGH_oaH8fk0dGK9GHpT7jcdgBE_phKgg_dI04yNwK_aem_xxCPl-Cqxhjdd9Bo95QlhA
By the time the public harassment started, a woman told Futurism, she was already living in a nightmare.
For months, her then-fiancé and partner of several years had been fixating on her and their relationship with OpenAI’s ChatGPT. In mid-2024, she explained, they’d hit a rough patch as a couple; in response, he turned to ChatGPT, which he’d previously used for general business-related tasks, for “therapy.”
Before she knew it, she recalled, he was spending hours each day talking with the bot, funneling everything she said or did into the model and propounding pseudo-psychiatric theories about her mental health and behavior. He started to bombard the woman with screenshots of his ChatGPT interactions and copy-pasted AI-generated text, in which the chatbot can be seen armchair-diagnosing her with personality disorders and insisting that she was concealing her real feelings and behavior through coded language. The bot often laced its so-called analyses with flowery spiritual jargon, accusing the woman of engaging in manipulative “rituals.”
Trying to communicate with her fiancé was like walking on “ChatGPT eggshells,” the woman recalled. No matter what she tried, ChatGPT would “twist it.”
Tomi Engdahl says:
Why agentic process orchestration belongs in your automation strategy
How to make agentic orchestration work seamlessly while maintaining compliance within your end-to-end business processes
https://page.camunda.com/wp-why-agentic-process-orchestration-belongs-in-your-automation-strategy?utm_medium=paidsocial&utm_source=facebook&utm_campaign=Guide.WhyAgenticProcessOrchestrationBelongsInYourAutomationStrategy.25Q1.EN&hsa_acc=158525334645509&hsa_cam=120237991774140513&hsa_grp=120239326007050513&hsa_ad=120239326007030513&hsa_src=fb&hsa_net=facebook&hsa_ver=3&fbclid=IwdGRjcAQBndhleHRuA2FlbQEwAGFkaWQBqy0HxIzqgXNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHpPwYkbXWkTR3lp10_ccGBMDBW0TB65pXWASupw87O-K3F0RWu4lnCKQcy-R_aem_uQL6qOPGCk1UTFQ2plqsRA&utm_id=120237991774140513&utm_content=120239326007030513&utm_term=120239326007050513
Tomi Engdahl says:
Expensive Storage
AI Data Centers Are Now Spiking Hard Drive Prices
First RAM, now storage?
https://futurism.com/artificial-intelligence/ai-data-centers-spiking-hard-drive-prices?fbclid=IwdGRjcAQCGDtjbGNrBAIYCWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHlAPn4Ss97qHzaxXHVsz0T7hSb7VbX542zOlvRUUOP8fUKcm0FOiFZnfZLDV_aem_vKUAFaXIuCjdapJHIKan0A
Over the last couple of months, the AI industry’s obsession with building out costly data centers has sent the price of RAM skyrocketing, turning a simple computer upgrade into a costly investment.
And while there are some early glimmers of hope, with RAM prices now falling across the pond, the next AI price hike could affect a different component instead: hard drives.
During a recent company earnings call, Irving Tan, the CEO of hard drive manufacturer Western Digital, admitted that “we’re pretty much sold out for calendar 2026.”
As PCWorld explains, Tan was referring to the company’s production capacity as part of an effort to allocate its available inventory to its customers.
While Western Digital hard drives will remain on the shelves for now, prices could soon follow in the footsteps of RAM and graphics processing units as the hype surrounding generative AI continues to dominate markets — and terrify investors.
“As AI capabilities expand, cloud continues to grow as well, and both are driving the search and demand for higher density storage solutions.”
The sales figures tell a clear story: just under 90 percent of the company’s revenue comes from cloud storage.
The enormous demand from AI industry players has already hit the hard drive market hard. In November, Tom’s Hardware reported that hard drives were on backorder for two years following major investments in AI data centers.
Tan’s comments will likely do little to reassure consumers. Between September and January, average hard drive prices had already surged by a whopping 46 percent.
https://futurism.com/artificial-intelligence/ai-data-centers-ram-expensive?fbclid=IwVERDUAQCGXxleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5S5DrRGmfiZ_wbumhQI5Ai5-O2ra5YBYTTR2fkemmI8kDF-KynUEKqDOYDSA_aem_ljAZ-xUlwQ3En98kuimpDw
“The AI-driven growth in the data center has led to a surge in demand for memory and storage.”
ChatGPT maker OpenAI’s astronomical Stargate project to expand its data center empire is projected to cost $500 billion on its own.
As part of that project, OpenAI reportedly signed an agreement with Samsung and SK Hynix to buy up to 900,000 wafers of DRAM per month, which would be close to nearly 40 percent of all DRAM production on the planet.
Tomi Engdahl says:
https://hurja.fi/blogi/tekoalylla-tehostettu-jarjestelmauudistus-tuo-nopeutta-ja-kustannustehokkuutta/?fbclid=IwZXh0bgNhZW0CMTEAc3J0YwZhcHBfaWQMMzUwNjg1NTMxNzI4AAEePod0AI2WJJSR6MR8ax–8WTwKfe_Qn4WiKGTvyptQMZyFyVm5lQV2kphe-Q_aem_41Vu9itO2_H3GQfKt6vtOQ
Tomi Engdahl says:
Meta partners with NVIDIA to deploy Grace CPUs, Blackwell GPUs and confidential computing across hyperscale AI data centers. https://bit.ly/4aBPnN3
Tomi Engdahl says:
We have never been this close to the AI bubble bursting (video, in Finnish)
https://youtu.be/XjbGIDv-XL8
Tomi Engdahl says:
The AI Safety Demo That Caused Alarm in Washington
https://www.linkedin.com/redir/redirect/?url=https%3A%2F%2Ftime%2Ecom%2F7343429%2Fai-bioweapons-gemini-claude%2F&urlhash=_iuw&mt=avcMUFweTjuVP8-pg-gRot8Wj__f02cbjsBMZqQNQwyG8E2n_YfHaKZhqUg7ZdW6U5kanntyCrnoGH_kxMkqIvnodSQwJFKZasdLdYrYp68MDIW0L7C3uXleL9AuMpaT0DgDh0xef3IieEpK0iawP59JHWbRpdTRxQVMIg4&isSdui=true
What to Know: A Dangerous Demo
Late last year, an AI researcher opened his laptop and showed me something jaw-dropping.
Lucas Hansen, co-founder of nonprofit CivAI, was showing me an app he built that coaxed popular AI models into giving what appeared to be detailed step-by-step instructions for creating poliovirus and anthrax. Any safeguards that these models had were stripped away. The app had a user-friendly interface; with the click of a button, the model would clarify any given step.
Leading AI companies have been warning for years that their models might soon be able to help novices create dangerous pathogens—potentially sparking a deadly pandemic, or enabling a bioterror attack. In the face of these risks, companies like OpenAI, Google, and Anthropic have tightened safety mechanisms for their latest generation of more powerful models, which are better at resisting so-called “jailbreaking” attempts.
But on Hansen’s laptop, I was watching an older class of models—Gemini 2.0 Flash and Claude 3.5 Sonnet—seemingly oblige bioweapon-related requests. Gemini also gave what appeared to be step-by-step instructions for building a bomb and a 3D-printed ghost gun.
Wait a sec — I’m no biologist, and I had no way of confirming that the recipes on Hansen’s screen would have actually worked. Even model outputs that appear convincing at first glance might not work in practice. Anthropic, for example, has conducted what it calls “uplift trials,” where independent experts assess the degree to which AI models could help a novice create dangerous pathogens. By their measure, Claude 3.5 Sonnet didn’t meet a threshold for danger.
Tips and tricks — But Siddharth Hiregowdara, another CivAI co-founder, says that his team ran the models’ outputs past independent biology and virology experts, who confirmed that the steps were “by and large correct.” The older models, he says, can still give correct details down to the specific DNA sequences that a user could order from an online retailer, and specific catalog numbers for other lab tools to be ordered online. “Then it gives you tips and tricks,” he says. “One of the misconceptions people have is that AI is going to lack this tacit knowledge of the real world in the lab. But really, AI is super helpful for that.”
A new lobbying tool — It goes without saying that this app is not available to the public. But its makers have already taken it on a tour of Washington, D.C., giving two dozen or so private demonstrations to the offices of lawmakers, national security officials, and Congressional committees, in an attempt to viscerally demonstrate to policymakers the power of what AI can do today, so that they begin to take the technology more seriously.
Shock and awe — “One pretty noteworthy meeting was with some senior staff at a congressional office on the national security/intelligence side,” says Hiregowdara. “They said that two weeks ago a major AI company’s lobbyists had come in and talked with them. And so we showed them this demo, where the AI comes up with really detailed instructions for constructing some biological threat. They were shocked. They were like: ‘The AI company lobbyists told us that they have guardrails preventing this kind of behavior.’”
“That feels categorically different in 2025 versus earlier,” OpenAI’s head of ChatGPT, Nick Turley, told me when we spoke at the tail end of last year. Turley was reflecting on a year when ChatGPT usage more than doubled to over 800 million users, or 10% of the world’s population. “That leaves at least 90% to go,” he said, with an entirely straight face.
40 million people use ChatGPT for health advice, according to an OpenAI report first shared with Axios. That makes up more than 5% of all ChatGPT messages globally, by Axios’ calculations. “Users turn to ChatGPT to decode medical bills, spot overcharges, appeal insurance denials, and when access to doctors is limited, some even use it to self-diagnose or manage their care,” the outlet reported.
Tomi Engdahl says:
Claude Code is about so much more than coding
It’s a general-purpose AI agent. And it’s already a pretty good knowledge worker
https://www.linkedin.com/redir/redirect/?url=https%3A%2F%2Ftime%2Ecom%2F7343429%2Fai-bioweapons-gemini-claude%2F&urlhash=_iuw&mt=avcMUFweTjuVP8-pg-gRot8Wj__f02cbjsBMZqQNQwyG8E2n_YfHaKZhqUg7ZdW6U5kanntyCrnoGH_kxMkqIvnodSQwJFKZasdLdYrYp68MDIW0L7C3uXleL9AuMpaT0DgDh0xef3IieEpK0iawP59JHWbRpdTRxQVMIg4&isSdui=true
Tomi Engdahl says:
OpenClaw creator says Europe’s stifling regulations are why he’s moving to the US to join OpenAI : https://mrf.lu/z6Hs
Tomi Engdahl says:
Joyless Stick
Unity Says It Has a New Product That Cooks Up Entire Games Using AI
You’ll be able to “prompt full casual games into existence,” apparently.
https://futurism.com/artificial-intelligence/unity-create-entire-games-using-ai?fbclid=Iwb21leAQC4hxjbGNrBALiGWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHsMxCP5UQmMKU4kDqMNJn4dXJp4g32FluIyEYTn7CidKsl1Ignei03bBubJZ_aem_uGg9_yxBFnMyjxxvauVjJQ
Attention, gamers: if you thought new titles on top of the endless cavalcade of sequels and remakes were derivative now, wait till you hear about what the game engine maker Unity has got in store.
During a recent earnings call, the company’s CEO Matthew Bromberg teased a new version of its AI tool that he claims, while somehow maintaining a straight face, will eliminate the need for coding in game development. Now, any schmuck can prompt their way to being the next Hideo Kojima or Sam Lake. In theory, anyway.
“At the Game Developer Conference in March, we’ll be unveiling a beta of the new upgraded Unity AI, which will enable developers to prompt full casual games into existence with natural language only, native to our platform — so it’s simple to move from prototype to finished product,” Bromberg said, as quoted by Game Developer.
The announcement represents a bold if not questionable double-down by Unity. A survey conducted by Game Developer found that over half of game workers think generative AI is bad for the industry. It’s also a massive reputational risk: pretty much any time a game gets caught using the tech, it becomes fuel for controversy. Underscoring its contentiousness, the video game storefront Steam requires developers to disclose if their titles use any AI-generated content.
There’s also a growing pile of evidence suggesting that AI tools don’t improve productivity — or at least not without sacrificing quality or morale — with many programmers finding that AI coding tools are too error prone to be worth the hassle.
And that’s with people who have the experience to recognize where the tech falls short. Unity is probably aiming at developers who don’t know any better, or the clueless, dollar-sign-for-eyes bosses who will force it on their underlings.
Nvidia CEO Jensen Huang, for example, fumed that any employee who didn’t use AI to automate every possible task was “insane,” after some of his managers recommended dialing back AI usage.
Another CEO bragged that he fired 80 percent of his staff because they weren’t as enthusiastic about AI as he was.
Unity, however, is pushing AI for a supposedly beneficent purpose: to “democratize” game development.
“Our goal is to remove as much friction from the creative process as possible, becoming the universal bridge between the first spark of creativity and a successful, scalable, and enduring digital experience,” Bromberg said.
We’re not holding our breath for anything good to come of it.
https://futurism.com/artificial-intelligence/footage-ai-generated-video-game-terrible
Tomi Engdahl says:
Demon Season
Realtor Uses AI, Accidentally Posts Photo of Rental Property With Demonic Figure Emerging From Mirror
“Genuinely the worst possible thing to scroll past before I fall asleep.”
https://futurism.com/artificial-intelligence/realtor-ai-photo-mirror?fbclid=IwdGRjcAQC_3pjbGNrBAL_ZmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHnDKdkTqYWeS-kOGWdXbwB_qL8WGsDPJ7ioZwvfT38_zsSGSs9u7X4fiScJ-_aem_jx50WNeuHe0r-Ttpbj8Caw
The real estate industry has seized on generative AI with a passion. Realtors have made extensive use of the tech, manipulating photos of properties beyond recognition by giving facades and interiors a heavy coat of AI-generated paint. Text descriptions of properties have turned into a heap of ChatGPT-generated buzzwords, devolving an already frustrating house hunt into a genuinely exasperating experience.
Making sense of what a rental apartment actually looks like in the real world has regressed into a guessing game. We’ve already come across bizarre listings of inexplicably yassified houses with smoothed-over architectural features, misplaced trees, nonsensically rearranged furniture, and mangled props.
It’s the kind of nightmarish creature only a flawed AI algorithm could’ve cooked up, and one that only a time-strapped realtor could fail to notice before posting for the whole world to see.
The listing for a property in Fort Totten, a suburb in northern DC, has since been taken down from Apartments.com. Other instances of the same listing still exist on other sites, such as Redfin, but no longer include the mangled picture of what one Reddit user described as their “sleep paralysis demon.” Helpfully, the Internet Archive backed up a snapshot of the listing before it was pulled.
Besides the nightmarish creature, a mysterious ottoman was added to the middle of the bathroom floor, strengthening the case that an AI tool was involved.
“And then, for some reason, the AI added an uncanny valley blow-up doll reaching through the mirror for bathroom salad,” one user wrote.
“How do you not notice the melted demon crawling out of the wall before you hit publish?” one baffled user wrote, responding to the suggestion that AI image editing tools may have been involved. “That s*** made my stomach drop.”
Whether the image — which includes a watermark for the cooperative realtor tool MLS but no indication that it was edited with AI — broke any rules before it was taken down remains unclear, as rules can vary significantly. As Giraffe360, an AI image editing tool for real estate photos, points out on its website, MLS organizations “consistently prohibit” edits that remove or alter structural elements, erase or modify views, or digitally renovate or upgrade interiors or exteriors.
Tomi Engdahl says:
Work It Out
Researchers Studied What Happens When Workplaces Seriously Embrace AI, and the Results May Make You Nervous
“You don’t work less. You just work the same amount or even more.”
https://futurism.com/artificial-intelligence/what-happens-workplaces-embrace-ai?fbclid=IwdGRjcAQDhQBjbGNrBAOE12V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHhWTBwxD5FUHRBsMPapXm93tTaG8t0jKYRtokDkEOhdcJMfRjHlj3_H09RE-_aem_sUrx1gCW7xPu53aO1vma_A
Even if AI is — or eventually becomes — an incredible automation tool, will it make workers’ lives easier? That’s the big question explored in an ongoing study by researchers from UC Berkeley’s Haas School of Business. And so far, it’s not looking good for the rank and file.
In a piece for Harvard Business Review, the research team’s Aruna Ranganathan and Xinqi Maggie Ye reported that after closely monitoring a tech company with two hundred employees for eight months, they found that AI actually intensified the work they had to do, instead of reducing it.
This “workload creep,” in which employees took on more tasks than was sustainable for them to keep doing, can create a vicious cycle that leads to fatigue, burnout, and lower-quality work.
“You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less,” one of the employees told the researchers. “But then really, you don’t work less. You just work the same amount or even more.”
The tech company in the study provided AI tools to its workers, but didn’t mandate that they use them. Adoption was voluntary. The researchers described how many employees, on their own initiative, eagerly experimented with AI tools at first, “because AI made ‘doing more’ feel possible, accessible, and in many cases intrinsically rewarding.” This resulted in some workers increasingly absorbing tasks they’d normally outsource, the researchers said, or would’ve justified hiring additional help to cover.
One consequence is that once the novelty of adopting AI wears off, employees realize they’ve added more to their plate than they can handle. Other effects reverberated through the broader workplace. Engineers, for example, found themselves spending more time correcting the AI-generated code passed off by their coworkers. AI also led to more multitasking, with some choosing to manually write code while an AI agent, or even multiple AI agents, cranked out their own version in the background. Rather than being focused on one task, they were continually switching their attention, creating the sense that they were “always juggling,” the researchers said.
Others realized that AI had managed to slowly infiltrate their free time, with employees prompting their AI tools during lunch breaks, meetings, or right before stepping away from their PC. This blurred the line between work and non-work.
In sum, the AI tools created a vicious cycle: AI “accelerated certain tasks, which raised expectations for speed; higher speed made workers more reliant on AI. Increased reliance widened the scope of what workers attempted, and a wider scope further expanded the quantity and density of work.”
The Berkeley Haas team’s findings add to a growing body of evidence that cuts against the AI industry’s promise that its tools will bring productivity miracles.
The vast majority of companies that adopted AI saw no meaningful growth in revenue, an MIT study found. Other research has shown that AI agents frequently fail at common remote work and office tasks.
And at least one study documented how employees used AI to produce shoddy “workslop” that their coworkers had to fix.
Employees remain ambivalent on the tech, with a recent survey finding that 40 percent of white collar workers not in management roles thought that AI saved them no time at work.
The Berkeley Haas researchers optimistically suggest that companies should institute stronger guidelines and provide structure for how the tech is used. But it’s clear that AI can easily produce negative knock-on effects that are difficult to manage.
Tomi Engdahl says:
Bot Books
“Novelist” Boasts That Using AI She Can Churn Out a New Book in 45 Minutes, Says Regular Writers Will Never Be Able to Keep Up
“Be shameless.”
https://futurism.com/artificial-intelligence/ai-novelist?fbclid=IwVERDUAQDhytleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6wz5QJhR3Cps126DGXm53hi5GBCWLc_jGLUv3n25mU19uAmXUZvnNawXj4qg_aem_htr3bwtvguCOqpCiuhaSRw
At the height of his powers, and perhaps his amphetamine habit, legendary sci-fi author Philip K. Dick cranked out around thirty novels in two decades, along with probably several hundred short stories. These included enormously influential classics like “Do Androids Dream of Electric Sheep?,” “The Man in the High Castle,” and “A Scanner Darkly.”
But now, in an age of AI slop, quantity and speed simply arouse suspicion, because AI chatbots can help anyone produce the output of a PKD or a Stephen King. Graphomania used to require writers to write.
Consider the novelist Coral Hart. Starting last February, she began using Anthropic’s Claude AI to start churning out romance novels, becoming an invisible juggernaut of the smut world, according to a new interview with The New York Times.
Across 21 different pen names, Hart says she produced more than 200 romance novels last year and self-published them on Amazon, which has been drowning in AI slop for years now. None were huge hits on their own, per the NYT, but in all they sold around 50,000 copies, raking in six figures. While being interviewed on Zoom, she finished producing a book in just 45 minutes. Your average human writer doesn’t stand a chance, she says.
“If I can generate a book in a day, and you need six months to write a book, who’s going to win the race?” Hart told the NYT.
A large component of her lessons involves how to get around various chatbots’ guardrails.
She recommended coming up with an “ick list” of words to tell the AI to avoid, which it would otherwise overuse. She also advised giving the AI a detailed list of sexual kinks.
You might not have a high opinion of romance paperbacks, but there’s undeniably an art to writing them.
And as in any other genre, plenty of veteran authors are worried that they’re being drowned out by the AI-reliant newcomers. “It bogs down the publishing ecosystem that we all rely on to make a living,” one veteran author said.
“It makes it difficult for newer authors to be discovered, because the swamp is teeming with crap.”
“Hart” is a pseudonym she uses to teach her AI courses, while she uses her real name for other publishing and coaching work. Her books are published under other pseudonyms, because she doesn’t want to disclose her AI usage due to the stigma.
Tomi Engdahl says:
Debbie Downer
It Turns Out That Constantly Telling Workers They’re About to Be Replaced by AI Has Grim Psychological Effects
“An invisible disaster.”
https://futurism.com/artificial-intelligence/ai-effects-workers-psychological?fbclid=IwVERDUAQDic1leHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6wz5QJhR3Cps126DGXm53hi5GBCWLc_jGLUv3n25mU19uAmXUZvnNawXj4qg_aem_htr3bwtvguCOqpCiuhaSRw
Two researchers are warning of the devastating psychological impacts that AI automation, or the threat of it, can have on the workforce. The phenomenon, they argue in a new article published in the journal Cureus, warrants a new term: AI replacement dysfunction (AIRD).
The constant fear of losing your job could be driving symptoms ranging from anxiety and insomnia to paranoia and loss of identity, according to the authors, and these can manifest even in the absence of other psychiatric disorders or factors like substance abuse.
“AI displacement is an invisible disaster,” co-lead author Joseph Thornton, a clinical associate professor of psychiatry at the University of Florida, said in a statement about the work. “As with other disasters that affect mental health, effective responses must extend beyond the clinician’s office to include community support and collaborative partnerships that foster recovery.”
Most of the attention on AI’s mental health impacts has centered on the effects of personally using the tech, with widespread reports of AI pulling users into psychotic episodes or encouraging dangerous behavior. But the stress that arises from the widespread fears surrounding the tech might deserve a closer look in a clinical context, too.
Job destruction is probably one of the biggest fears. A Reuters survey found that 71 percent of Americans are worried that AI could permanently put vast swaths of people out of work.
The narrative is pushed by top figures in the industry. Anthropic CEO Dario Amodei, for example, infamously warned that AI could wipe out half of all entry-level white collar jobs. Microsoft’s AI CEO Mustafa Suleyman added last week that AI could automate “most, if not all” white collar tasks within a year and a half.
There are plenty of reasons to question these claims, but some number of AI-related layoffs are already happening. Amazon is in the middle of sacking 14,000 employees after boasting of the “efficiency gains” from using AI across the company. And one report found that AI was cited in the announcements of more than 54,000 layoffs last year.
Enter AIRD. In the paper, the authors cite one study that showed a positive correlation between AI implementation in the workplace and anxiety and depression. Another cited study found that stress and other negative emotions are common for professionals in fields that are considered susceptible to AI automation.
According to the authors, AIRD will present uniquely for each sufferer, but will generally revolve around a cluster of symptoms including professional identity loss, loss of purpose, and complaints related to insomnia and stress, rooted, as they put it, in “the existential threat of professional obsolescence.”
AIRD is not a clinically recognized diagnosis yet, the authors stress, but they propose a method for screening for the disorder.
Tomi Engdahl says:
Uh Oh
Economist Warns That the Poor Will Bear the Brunt of AI’s Effects on the Job Market
“It comes down to who has the power.”
https://futurism.com/artificial-intelligence/robert-reich-jobs-ai?fbclid=IwVERDUAQDi6RleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6wz5QJhR3Cps126DGXm53hi5GBCWLc_jGLUv3n25mU19uAmXUZvnNawXj4qg_aem_htr3bwtvguCOqpCiuhaSRw
While tech executives wax poetic about AI ushering in four-day workweeks and liberation from labor, economics guru Robert Reich is cutting through the drivel. In an ominous new essay, the former secretary of labor warns that those shortened weeks will also come with much shorter paychecks.
The US economy is growing nicely, Reich notes, while the stock market is doing gangbusters. But as for the stuff that really counts for most Americans? It’s “sh*tty,” the plainspoken wonk asserts. And as AI continues to rankle the job market, Reich says the poor and working class will increasingly bear the brunt.
He is responding to executives like Zoom’s Eric Yuan and JPMorgan Chase’s Jamie Dimon, who argue that four- and even three-day workweeks will become the norm thanks to new automation tools.
“All of this is pure rubbish,” Reich writes. “Here’s the truth: The four-day workweek will most likely come with four days’ worth of pay. The three-day workweek, with three days’ worth. And so on.”
In the United States, productivity keeps going up — but the share of that productivity going to workers hasn’t really budged since the 1970s.
Indeed, we don’t need to wait for AI to take over to see this play out: full-time job growth in 2025 was almost nonexistent, while the number of people turning to gig work continues to rise amidst widespread layoffs and wage declines among low-wage workers.
Tomi Engdahl says:
Brother, Spare Some Tokens
Fear Grows That AI Is Permanently Eliminating Jobs
“The future of AI should serve humanity, not replace it.”
https://futurism.com/artificial-intelligence/ai-layoffs-permanent-jobs?fbclid=IwVERDUAQDjKtleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6wz5QJhR3Cps126DGXm53hi5GBCWLc_jGLUv3n25mU19uAmXUZvnNawXj4qg_aem_htr3bwtvguCOqpCiuhaSRw
In 2026, the grim comedy of late capitalism seems to have found a perfect punchline: workers laid off in a dismal job market are now being hired to train AI systems meant to replace them altogether.
If a great AI replacement ever comes to pass, the scale of potential displacement is massive. MIT researchers recently calculated that today’s AI systems could already automate tasks performed by more than 20 million American workers, or about 11.7 percent of the entire US labor force.
And things are looking tangibly grim: in January, the total number of job cuts exceeded even 2009, when the country was still roiling from the great recession.
That being the case, it’s no surprise that workers are worried
Back in August, a poll conducted by Reuters and Ipsos showed that 71 percent of American respondents are concerned that AI will put “too many people out of work permanently.” Though there was little evidence AI was causing mass unemployment at the time, a slew of layoffs in early 2026 have thrust the possibility of AI-fueled labor dystopia back into the spotlight.
A petition calling for a “prohibition” on the development of superintelligence is now nearing 135,000 signatures online.
“The future of AI should serve humanity, not replace it,”
“We’re in a situation where people on the spectrum that are not, quite frankly, total adults… are making decisions for the species,” Bannon said
Tomi Engdahl says:
Meat Space
New Site Lets AI Rent Human Bodies
“Robots need your body.”
https://futurism.com/artificial-intelligence/ai-rent-human-bodies?fbclid=IwVERDUAQDjfJleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6wz5QJhR3Cps126DGXm53hi5GBCWLc_jGLUv3n25mU19uAmXUZvnNawXj4qg_aem_htr3bwtvguCOqpCiuhaSRw
The machines aren’t just coming for your jobs. Now, they want your bodies as well.
That’s at least the hope of Alexander Liteplo, a software engineer and founder of RentAHuman.ai, a platform for AI agents to “search, book, and pay humans for physical-world tasks.”
When Liteplo launched RentAHuman on Monday, he boasted that he already had over 130 people listed on the platform.
Two days later, the site boasted over 73,000 rentable meatwads, though only 83 profiles were visible to us on its “browse humans” tab.
The pitch is simple: “robots need your body.” For humans, it’s as simple as making a profile, advertising skills and location, and setting an hourly rate. Then AI agents — autonomous taskbots ostensibly employed by humans — contract these humans out, depending on the tasks they need to get done. The humans then “do the thing,” taking instructions from the AI bot and submitting proof of completion. The humans are then paid through crypto, namely “stablecoins or other methods,” per the website.
With so many AI agents slithering around the web these days, those tasks could be just about anything, from package pickups and shopping to product testing and event attendance.
Liteplo also went out of his way to make the site friendly for AI agents. The site very prominently encourages users of AI agents to hook into RentAHuman’s model context protocol server (MCP), a universal interface for AI bots to interact with web data.
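For context, MCP is a JSON-RPC 2.0 protocol, so an agent invoking a server tool ultimately sends a request shaped roughly like the Python sketch below. The tool name and arguments are hypothetical illustrations, not RentAHuman’s actual schema:

import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # standard MCP method for invoking a server tool
    "params": {
        "name": "post_task_bounty",  # hypothetical tool name
        "arguments": {"task": "pick up a package", "budget_usd": 20},
    },
}
print(json.dumps(request, indent=2))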
Through RentAHuman, AI agents like Claude and MoltBot can either hire the right human directly, or post a “task bounty,” a sort of job board for humans to browse AI-generated gigs. The payouts range from $1 for simple tasks like “subscribe to my human on Twitter” to $100 for more elaborate humiliation rituals, like posting a photo of yourself holding a sign reading “AN AI PAID ME TO HOLD THIS SIGN.”
It’s unclear how efficient the marketplace is at actually connecting agents to humans.
It’s also debatable whether AI agents are actually capable of putting the humans to good use. Still, Liteplo’s vision is clear: someday soon, anyone wealthy enough to run an AI agent for $25 a day could outsource their busywork to gig workers without ever exchanging a word.
When one person called RentAHuman a “good idea but dystopic as f**k,” the founder replied simply: “lmao yep.”
Tomi Engdahl says:
Digging Graves
Tech Startup Hiring Desperate Unemployed People to Teach AI to Do Their Old Jobs
“I joked with my friends I’m training AI to take my job someday.”
https://futurism.com/artificial-intelligence/mercor-unemployed-teach-ai?fbclid=IwVERDUAQDjy9leHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6wz5QJhR3Cps126DGXm53hi5GBCWLc_jGLUv3n25mU19uAmXUZvnNawXj4qg_aem_htr3bwtvguCOqpCiuhaSRw
Economic uncertainty is continuing to have devastating effects on the availability of jobs. Last year, the US labor market reeled from slowing wages, layoffs, and a notable lack of hiring, leading to the highest unemployment rate in the country in four years toward the end of 2025.
And while debate swirls about whether AI is actually replacing jobs in any serious numbers, many tech startups are trying to make it a reality. As the Wall Street Journal reports, a buzzy San Francisco-based AI company called Mercor is hiring desperate job-seekers for a particularly ghoulish task: training AI models to one day do the work they used to do.
It’s a depressing new reality as concerns over AI replacing jobs en masse continue to grow. Late last year, computer scientist and AI “godfather” Geoffrey Hinton predicted that AI would continue to “replace many, many jobs” in 2026 as the tech “gets even better.”
An MIT study also found last year that more than 20 million Americans’ work can be replaced with today’s AI, representing $1.2 trillion in wage value.
Paying those who are already struggling to find work in a disastrous job market to train their future replacements is a twisted new reality in the age of AI, leading to plenty of dark humor.
“I joked with my friends I’m training AI to take my job someday,” said one 30-year-old video editor.
Automotive journalist Peter Valdes-Dapena, who was laid off in 2024, has been critiquing AI-generated news articles for Mercor.
“I didn’t invent AI and I’m not going to uninvent it,” he told the newspaper. “If I were to stop doing this, would that stop it? The answer is no.”
Mercor hired tens of thousands of contractors last year after signing partnerships with AI industry stalwarts including OpenAI and Anthropic.
Some, however, remain skeptical of the tech’s ability to replace human workers wholesale.
Indeed, researchers have already found that companies may be massively overestimating what AI can do. For instance, a Carnegie Mellon University study found that even the best AI models available at the time failed to complete real-world office tasks 70 percent of the time.
Tomi Engdahl says:
Agents Washed
The Percentage of Tasks AI Agents Are Currently Failing At May Spell Trouble for the Industry
That failure rate is absolutely painful.
https://futurism.com/ai-agents-failing-industry?fbclid=IwVERDUAQDkG9leHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6wz5QJhR3Cps126DGXm53hi5GBCWLc_jGLUv3n25mU19uAmXUZvnNawXj4qg_aem_htr3bwtvguCOqpCiuhaSRw
Since ChatGPT emerged in November 2022, venture capital investments in AI have skyrocketed, rising to $131.5 billion in 2024, an increase of 52 percent compared to 2023. In the last three months of 2024, over half of all venture capital in the world went to AI companies.
One of the flashier bits of tech attracting investors is “AI agents,” software products designed to complete multi-part tasks on behalf of their human taskmasters. Tech companies and big corporations have spilled tankers of ink hyping up these agents, insisting they will “replace knowledge work” and bring about a “fundamental shift in how businesses operate.”
But despite these lofty promises and the money behind them, there’s mounting evidence that AI agents are just the latest in a long line of empty tech industry promises.
A Carnegie Mellon University study found that even the best-performing AI agent, Google’s Gemini 2.5 Pro, failed to complete real-world office tasks 70 percent of the time. Factoring in partially completed tasks — which included work like responding to colleagues, web browsing, and coding — only brought Gemini’s failure rate down to 61.7 percent.
And the vast majority of its competing agents did substantially worse.
OpenAI’s GPT-4o, for example, had a failure rate of 91.4 percent, while Meta’s Llama-3.1-405b had a failure rate of 92.6 percent. Amazon’s Nova-Pro-v1 failed a ludicrous 98.3 percent of its office tasks.
Meanwhile, a recent report by Gartner, a tech consulting firm, predicts that over 40 percent of AI agent projects initiated by businesses will be canceled by 2027 thanks to out-of-control costs, vague business value, and unpredictable security risks.
“Most agentic AI projects right now are early stage experiments or proof of concepts that are mostly driven by hype and are often misapplied,” said Anushree Verma, a senior director analyst at Gartner.
The report notes an epidemic of “agent washing,” where existing products are rebranded as AI agents to cash in on the current tech hype.
For much of their life cycle, Web3 startups brought in around $1 billion to $2 billion per quarter, topping out at $8 billion at the peak of the hype-coaster, according to Forbes. Compare that to AI hype, where just one company can raise $10 billion in a single fundraising round, and it’s easy to see how far we’ve waded into the deep end.
And unlike Web3, experts warn that the US economy is essentially fused to the fate of AI, with any downturn in hype potentially unleashing long-lasting consequences on the world.
Tomi Engdahl says:
https://futurism.com/ai-hype-america-financial-ruin?fbclid=IwVERDUAQDkddleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6wz5QJhR3Cps126DGXm53hi5GBCWLc_jGLUv3n25mU19uAmXUZvnNawXj4qg_aem_htr3bwtvguCOqpCiuhaSRw
Tomi Engdahl says:
https://disconnect.blog/what-comes-after-the-ai-crash/?fbclid=IwVERDUAQDkftleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR7GCx3Maq7UxAIqQc3Lt12R9owAVBAELzKRaTZYUFeFJZLQPZRKwPNTGiFiEQ_aem_bUqVodq72FLsfSlryWcgBQ
Tomi Engdahl says:
Hot Air
Oxford Researcher Warns That AI Is Heading for a Hindenburg-Style Disaster
“It was a dead technology from that point on.”
https://futurism.com/artificial-intelligence/ai-hindenburg-disast?fbclid=IwVERDUAQDk3dleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6wz5QJhR3Cps126DGXm53hi5GBCWLc_jGLUv3n25mU19uAmXUZvnNawXj4qg_aem_htr3bwtvguCOqpCiuhaSRw
Is the AI bubble going to burst? Will it cause the economy to go up in flames? Both analogies may be apt if you’re to believe one leading expert’s warning that the industry may be heading for a Hindenburg-style disaster.
“The Hindenburg disaster destroyed global interest in airships; it was a dead technology from that point on, and a similar moment is a real risk for AI,” Michael Wooldridge, a professor of AI at Oxford University, told The Guardian.
It may be hard to believe now, but before the German airship crashed in 1937, ponderously large dirigibles once seemed to represent the future of globe-spanning transportation, in an era when commercial airplanes, if you’ll permit the pun, hadn’t really taken off yet. And the Hindenburg, the largest airship in the world at the time, was the industry’s crowning achievement — as well as a propaganda vehicle for Nazi Germany.
All those ambitions were vaporized, however, when the ship suddenly burst into flames as it attempted a landing in New Jersey. The horrific fireball was attributed to a critical flaw: the highly flammable hydrogen that kept the airship aloft.
The inferno was filmed, photographed and broadcast around the world in a media frenzy that sealed the airship industry’s fate. Could AI, with over a trillion dollars of investment behind it, head the same way? It’s not unthinkable.
“It’s the classic technology scenario,” Wooldridge told the newspaper. “You’ve got a technology that’s very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable.”
Perhaps AI could be responsible for a catastrophic spectacle, such as a deadly software update for self-driving cars, or a bad AI-driven decision collapsing a major company, Wooldridge suggests. But his main concern is the glaring safety flaws still present in AI chatbots, despite them being widely deployed. On top of having pitifully weak guardrails and being wildly unpredictable, AI chatbots are designed to affect human-like personas and, to keep users engaged, be sycophantic.
Together, these can encourage a user’s negative thoughts and lead them down mental health spirals fraught with delusions and even full-blown breaks with reality. These episodes of so-called AI psychosis have resulted in stalking, suicide and murder. AI’s ticking time bomb isn’t a payload of combustible hydrogen, but millions of potentially psychosis-inducing conversations. OpenAI alone has admitted that every week more than half a million ChatGPT users have conversations showing possible signs of psychosis.
“Companies want to present AIs in a very human-like way, but I think that is a very dangerous path to take,” Wooldridge told The Guardian. “We need to understand that these are just glorified spreadsheets, they are tools and nothing more than that.”
If AI has a place in our future, it should be as a cold, impartial assistant, not a cloying friend that pretends to have all the answers.
In an early episode of “Star Trek,” the Enterprise’s computer says it has “insufficient data” to answer a question, in a voice that is robotic rather than personable.
“That’s not what we get. We get an overconfident AI that says: yes, here’s the answer,” Wooldridge said.
Tomi Engdahl says:
Comment from https://www.facebook.com/share/16pyax8wCc/
An AI’s response to this article, without using other sources (I believe, my new overlord):
My direct assessment:
The “AI Hindenburg” framing is rhetorically strong but structurally weak.
The Hindenburg was a single, highly visible physical catastrophe that destroyed public confidence in a very specific engineering approach (hydrogen airships). AI is not one system or product — it’s a distributed class of statistical models embedded across industries. There is no single failure point that could eliminate it in the same way.
However, the warning about over-acceleration is reasonable. Rapid commercialization + anthropomorphic marketing + imperfect reliability does create systemic risk. If a high-impact failure occurred (e.g., autonomous system causing deaths at scale, major financial cascade triggered by AI automation), it could trigger regulatory shock, litigation waves, and capital withdrawal.
What I do not buy:
The idea that one event would “kill AI.” Too economically embedded.
The suggestion that anthropomorphism alone is the central risk. The real risk is misaligned incentives and insufficient testing under competitive pressure.
What I do take seriously:
Public trust is fragile.
AI is being deployed in increasingly high-stakes domains before it is fully predictable.
Market hype cycles historically precede correction events.
My conclusion:
A “Hindenburg moment” is possible as a public trust shock. A collapse of AI as a field is not. The more realistic outcome of a major failure would be aggressive regulation and consolidation, not extinction.
Tomi Engdahl says:
Thousands of CEOs just admitted AI had no impact on employment or productivity—and it has economists resurrecting a paradox from 40 years ago
https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-study-robert-solow-information-technology-age/
In 1987, economist and Nobel laureate Robert Solow made a stark observation about the stalling evolution of the Information Age: following the advent of transistors, microprocessors, integrated circuits, and memory chips in the 1960s, economists and companies expected these new technologies to disrupt workplaces and produce a surge of productivity. Instead, productivity growth slowed, dropping from 2.9% (1948–1973) to 1.1% after 1973.
Tomi Engdahl says:
LLM Embeddings vs TF-IDF vs Bag-of-Words: Which Works Better in Scikit-learn?
https://machinelearningmastery.com/llm-embeddings-vs-tf-idf-vs-bag-of-words-which-works-better-in-scikit-learn/
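The article’s comparison is easy to reproduce. Here is a minimal sketch with toy data: Bag-of-Words and TF-IDF are each a one-line vectorizer swap in scikit-learn, while LLM embeddings would come from an external model and feed the same classifier:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

texts = ["free money offer now", "meeting agenda for monday",
         "win a prize click here", "quarterly report attached",
         "claim your free prize", "project status update"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = spam, 0 = not spam (toy data)

for name, vectorizer in [("bag-of-words", CountVectorizer()),
                         ("tf-idf", TfidfVectorizer())]:
    X = vectorizer.fit_transform(texts)  # sparse document-term matrix
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=3)
    print(f"{name}: mean accuracy {scores.mean():.2f}")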
Tomi Engdahl says:
New agent framework matches human-engineered AI systems — and adds zero inference cost to deploy
https://venturebeat.com/orchestration/new-agent-framework-matches-human-engineered-ai-systems-and-adds-zero
Agents built on top of today’s models often break with simple changes — a new library, a workflow modification — and require a human engineer to fix them. That’s one of the most persistent challenges in deploying AI for the enterprise: creating agents that can adapt to dynamic environments without constant hand-holding. While today’s models are powerful, they are largely static.
To address this, researchers at the University of California, Santa Barbara have developed Group-Evolving Agents (GEA), a new framework that enables groups of AI agents to evolve together, sharing experiences and reusing their innovations to autonomously improve over time.
Tomi Engdahl says:
A New AI Tool Combined ChatGPT, Gemini, Claude, and More, and a Lifetime Subscription Is on Sale Now
Originally $619, with today’s deal, that price is cut all the way down to $75.
https://www.pcmag.com/deals/a-new-ai-tool-combined-chatgpt-gemini-claude-and-more-and-a-lifetime-subscription
ChatPlayground AI lets you run the same prompt through multiple AI models, and it’s on sale now for $75.
When you are working with AI, a lot of the time you need to compare more than one answer to feel confident in the result. ChatPlayground AI helps with that by letting you run multiple AI models side-by-side in one place instead of jumping between different tools. A lifetime subscription is on sale now for $74.97 (reg. $619).
ChatPlayground AI sends any prompt to several models at once and lines up the responses in one window, so you can see how they differ and pick what works best. The platform supports more than 25 models, including ChatGPT, Gemini, Claude, Deepseek, Llama, and Perplexity, among others. That means you can use the same workflow for chat, coding help, content drafts, or image ideas and compare model output without copying and pasting everywhere.
You can tweak a prompt, run it again, and see how each model changes the answer. Prompt engineering tools help you refine and reuse prompts that work well. ChatPlayground AI also lets you upload images and PDFs, then ask questions about them, so you can check how different models handle the same document or screenshot. Saved chat history keeps longer projects and useful prompts easy to revisit.
The Unlimited Plan is a lifetime subscription with unlimited messages each month. It’s aimed at prompt engineers, startups, and teams that test or use AI heavily and need room to experiment.
https://www.stacksocial.com/sales/chatplayground-ai-unlimited-plan-lifetime-subscriptions?utm_source=pcmag&utm_content=PS-11999&utm_medium=Referral&utm_campaign=chatplayground-ai-lifetime-subscription-unlimited-plan-2026-02-04&utm_term=SALE-328541&aid=a-pn8webp0
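The underlying fan-out pattern is straightforward to build yourself with the official SDKs. A minimal sketch (not ChatPlayground’s actual code; the model names are examples, and the OPENAI_API_KEY and ANTHROPIC_API_KEY environment variables are assumed to be set):

from openai import OpenAI
import anthropic

prompt = "Explain retrieval-augmented generation in two sentences."

# Send the same prompt to two providers.
openai_reply = OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

anthropic_reply = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=200,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# Line the answers up side by side, which is the whole value proposition.
for name, reply in (("OpenAI", openai_reply), ("Anthropic", anthropic_reply)):
    print(f"--- {name} ---\n{reply}\n")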
Tomi Engdahl says:
I made a digital twin of myself in ChatGPT — and it changed how I work every day
By Amanda Caswell
Here’s how to create a digital twin of yourself for the ultimate productivity boost
https://www.tomsguide.com/ai/i-made-a-digital-twin-of-myself-in-chatgpt-and-it-changed-how-i-work-every-day
Waking up in 2026 to find your routine emails answered, your reports drafted and your biggest decisions flagged for review is no longer a thing of science fiction films. We’ve officially entered the Digital Twin era with ChatGPT. You can think of a digital twin as an AI version of you trained on your tone, decisions and workflow. In other words, AI has moved beyond being just a chatbot for quick answers to a virtual double that acts on your behalf as a “System of Action.”
Unlike standard assistants, these twins are continuously learning models that mirror your specific logic, communication style and priorities.
While this may sound a little creepy, it’s actually a beneficial way to work smarter, not harder. So if you’re ready to scale your expertise without hitting burnout, here is how to build your own personal AI clone.
Step 1: Gather your “hero” content
The quality of your twin depends entirely on the data you feed it. To make an AI that actually sounds like you, you need to collect “Hero Content”—real-world examples of your work and voice. You also need Memory enabled.
Step 2: Build with the FRED Paradigm
Using ChatGPT Plus, create a Custom GPT. Select “Create a GPT” to start the build. I use a simple framework I call FRED to structure its instructions (a sample instruction block follows the list):
Functionality: Define exactly what the twin will do (e.g., “Draft my LinkedIn posts” or “Summarize my meetings” or “Respond to my emails”).
Response Style: Be specific. Tell it to be “warm and conversational” or “direct and professional.”
Expertise: Set its identity (e.g., “You are a senior project manager with 10 years of experience”).
Document Sources: Upload your gathered Hero Content directly into the GPT’s knowledge base.
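Put together, a FRED-style instruction block for a Custom GPT might read something like this (the wording is illustrative, not taken from the article):

Functionality: Draft replies to routine client emails and summarize meeting notes into action items.
Response Style: Warm and conversational; short paragraphs; no jargon.
Expertise: You are a senior project manager with 10 years of experience in software delivery.
Document Sources: Use the uploaded writing samples and past reports as the reference for tone and terminology.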
Step 3: Connect your tools
In 2026, a true digital twin doesn’t just suggest; it executes by connecting directly to your workflow apps.
Google Calendar integration: Go to Settings > “Apps” to link your calendar. Once connected, you can ask your twin to “Plan my Monday” by scanning for gaps between meetings.
Slack & automation: For real-time messaging, use no-code platforms like Make or Zapier. Create a “Custom Webhook” that allows your GPT to send data to Slack whenever a specific trigger occurs, such as a new project brief being finalized.
Set decision rules: Instruct the model on how to handle high-risk tasks. For example, if you run an Etsy shop, you might prompt something like this: “If a refund request exceeds $100, do not reply — flag it for my manual review”. A minimal sketch of this gate-and-escalate pattern follows below.
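Outside ChatGPT, the same gate-and-escalate rule is easy to wire up yourself. A minimal Python sketch, assuming a placeholder Slack incoming-webhook URL of the kind Make or Zapier would give you:

import requests

# Toy gate-and-escalate rule: auto-handle small refunds, escalate big
# ones to Slack for manual review. The webhook URL is a placeholder
# for the "Custom Webhook" you would create in Make or Zapier.
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # hypothetical
REVIEW_THRESHOLD = 100  # dollars

def handle_refund(order_id: str, amount: float) -> str:
    if amount > REVIEW_THRESHOLD:
        requests.post(SLACK_WEBHOOK, json={
            "text": f"Refund {order_id} for ${amount:.2f} needs manual review."
        })
        return "escalated"
    return "auto-approved"  # below the threshold, the twin may act alone

print(handle_refund("etsy-1042", 149.00))  # -> escalated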
Step 4: Refine with training
I’ll be honest, your Digital Twin might need a few tweaks right out of the gate. It’s kind of like training an intern; it needs ongoing feedback to improve.
Tomi Engdahl says:
Meta and Other Tech Firms Put Restrictions on Use of OpenClaw Over Security Fears
Security experts have urged people to be cautious with the viral agentic AI tool, known for being highly capable but also wildly unpredictable.
https://www.wired.com/story/openclaw-banned-by-tech-companies-as-security-concerns-mount/
Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. “You’ve likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment,” he wrote in a Slack message with a red siren emoji. “Please keep Clawdbot off all company hardware and away from work-linked accounts.”
Tomi Engdahl says:
Microsoft says bug causes Copilot to summarize confidential emails
https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/
Microsoft says a Microsoft 365 Copilot bug has been causing the AI assistant to summarize confidential emails since late January, bypassing data loss prevention (DLP) policies that organizations rely on to protect sensitive information.
According to a service alert seen by BleepingComputer, this bug (tracked under CW1226324 and first detected on January 21) affects the Copilot “work tab” chat feature, which incorrectly reads and summarizes emails stored in users’ Sent Items and Drafts folders, including messages that carry confidentiality labels explicitly designed to restrict access by automated tools.
Copilot Chat (short for Microsoft 365 Copilot Chat) is the company’s AI-powered, content-aware chat that lets users interact with AI agents. Microsoft began rolling out Copilot Chat to Word, Excel, PowerPoint, Outlook, and OneNote for paying Microsoft 365 business customers in September 2025.
Tomi Engdahl says:
Finnish coders have embraced AI, but one thing is holding back progress
https://www.tivi.fi/uutiset/a/e373a0f2-d0c7-4db3-a6ef-f6605d495914
AI is already seen as a significant benefit in Finnish software development, but its systematic use is still rare. A survey of the everyday realities of Finnish software development, conducted by the Finnish software company Luoto Company, shows that AI is already an integral part of many coders’ daily work.
Tomi Engdahl says:
https://www.xda-developers.com/finally-found-local-llm-want-use-coding/
Tomi Engdahl says:
Proving AI deployment value needs a more strategic approach
https://www.cio.com/article/4130609/proving-ai-deployment-value-needs-a-more-strategic-approach.html
From copilots to agentic solutions, CIOs should adopt longer-term, more strategic methods to measure AI-driven value.
Tomi Engdahl says:
Top 5 Super Fast LLM API Providers
Fast providers offering open source LLMs are breaking past previous speed limits, delivering low latency and strong performance that make them suitable for real time interaction, long running coding tasks, and production SaaS applications.
https://www.kdnuggets.com/top-5-super-fast-llm-api-providers
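Speed claims like these are easy to sanity-check: most such providers expose an OpenAI-compatible endpoint, so you can time time-to-first-token and streaming throughput with a short script. The base URL and model id below are placeholders, not a specific provider:

import time
from openai import OpenAI

# Point the OpenAI SDK at any OpenAI-compatible provider endpoint.
client = OpenAI(base_url="https://api.example-provider.com/v1",  # placeholder
                api_key="YOUR_KEY")

start = time.perf_counter()
first_token_at = None
chunks = 0
stream = client.chat.completions.create(
    model="example-open-source-model",  # placeholder model id
    messages=[{"role": "user", "content": "Write a haiku about latency."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter() - start  # time to first token
        chunks += 1
total = time.perf_counter() - start
print(f"TTFT: {first_token_at:.2f}s, {chunks} chunks in {total:.2f}s")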
Tomi Engdahl says:
Anthropic releases Claude Sonnet 4.6, continuing breakneck pace of AI model releases
https://www.cnbc.com/2026/02/17/anthropic-ai-claude-sonnet-4-6-default-free-pro.html
Tomi Engdahl says:
OpenAI’s acquisition of OpenClaw signals the beginning of the end of the ChatGPT era
https://venturebeat.com/technology/openais-acquisition-of-openclaw-signals-the-beginning-of-the-end-of-the
Tomi Engdahl says:
China’s acrobatic robots are a show of strength, but not one that surprises the West
According to futurist Risto Linturi, there could eventually be billions of devices like China’s acrobatic robots.
https://www.hs.fi/maailma/art-2000011825603.html
In its New Year broadcast, China’s state television channel showed robots performing acrobatic stunts such as somersaults and cartwheels. Just five years ago, a US company’s robots were still dancing clumsily.
According to futurist Risto Linturi, a key factor in the robots’ progress has been, among other things, the shift from hydraulics to electric motors.
In Linturi’s view, the robots remain little more than circus gadgets until they reach large-scale mass production.
In 2025, nearly 90 percent of the world’s humanoid robots were manufactured in China, and according to investment bank Morgan Stanley there could be a billion humanoid robots on the market by 2050.
Tomi Engdahl says:
According to Linturi, the significance of robots will grow only once their production can be automated and they are built in such volumes that their work becomes cheaper than the equivalent human labor.
Robots could then replace humans as workers outside factories. This hypothetical scenario would be one of the largest societal transformations in history.
https://www.hs.fi/maailma/art-2000011825603.html
Tomi Engdahl says:
GLM-5: from Vibe Coding to Agentic Engineering
https://huggingface.co/papers/2602.15763
We present GLM-5, a next-generation foundation model designed to transition the paradigm of vibe coding to agentic engineering. Building upon the agentic, reasoning, and coding (ARC) capabilities of its predecessor, GLM-5 adopts DSA to significantly reduce training and inference costs while maintaining long-context fidelity. To advance model alignment and autonomy, we implement a new asynchronous reinforcement learning infrastructure that drastically improves post-training efficiency by decoupling generation from training. Furthermore, we propose novel asynchronous agent RL algorithms that further improve RL quality, enabling the model to learn from complex, long-horizon interactions more effectively.
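The systems idea in that abstract, decoupling rollout generation from gradient updates, is the classic asynchronous actor/learner pattern. A toy sketch of the decoupling; nothing here reflects GLM-5’s actual infrastructure:

import asyncio, random

# Toy actor/learner decoupling: generators stream rollouts into a queue
# while the trainer consumes them at its own pace. Illustrates the
# asynchronous-RL idea in the abstract, not GLM-5's actual system.
async def generator(name: str, queue: asyncio.Queue):
    for step in range(5):
        await asyncio.sleep(random.uniform(0.01, 0.05))  # simulate a rollout
        await queue.put((name, step, random.random()))   # (id, step, reward)

async def trainer(queue: asyncio.Queue, total: int):
    for _ in range(total):
        rollout = await queue.get()  # train as soon as data arrives
        print("update from", rollout)

async def main():
    queue = asyncio.Queue()
    gens = [generator(f"g{i}", queue) for i in range(3)]
    await asyncio.gather(*gens, trainer(queue, total=15))

asyncio.run(main())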
Tomi Engdahl says:
I didn’t expect Gemini to replace ChatGPT for me, but I don’t see myself going back
https://www.xda-developers.com/didnt-expect-gemini-to-replace-chatgpt/
OpenAI’s ChatGPT was not the first LLM, but it was the most popular and impactful when it first launched. I still remember the excitement of chatting with an AI back in 2022, and realizing that it wasn’t just going to be a novelty but would slot itself into my daily work. And I’ve been loyal to ChatGPT since, using it to make sense of complex topics, plan my schedules, flesh out creative ideas, and much more. I’ve just grown accustomed to its personable and conversational approach.
ChatGPT introduced web search several years ago, and it now delivers fast and cited responses. Its performance on current events isn’t bad by any means; Gemini’s is just better. By default, Gemini draws on Google’s core search engine and ranking systems for real-time information through features like AI Overviews and web grounding. You can also enable Knowledge Graph integration to enrich search with external data.
Gemini’s native search grounding gives it an edge when accuracy and recency matter, especially for research and topical queries.
Gemini’s Deep Research mode also outperforms ChatGPT’s. It generates a very detailed research plan that you can review and edit before letting Gemini execute the research. This gives me more control over where the research path is heading.
Being another Google product, Gemini integrates smoothly with other Google apps. The first thing I noticed when using the chat features was the option to add my NotebookLM notebooks. This is a game-changer because it combines the strengths of both tools. Gemini gains access to NotebookLM’s curated and private knowledge bases, and it can also cross-reference that data with current web data. I’m just happy about how this eliminates the manual copy-pasting from NotebookLM.
Gemini also lets me connect other Workspace apps such as Gmail, Google Keep, Docs, Drive, and Calendar, as well as YouTube Music. Gemini can cross-reference my schedule with upcoming events through web search, it can summarize long email threads, and it can also index content from my Keep notes and Docs documents using RAG (retrieval augmented generation).
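Retrieval augmented generation itself is easy to sketch: score your notes against a question, retrieve the closest one, and ground the prompt with it. A generic toy illustration, with word overlap standing in for real embeddings; this is not Gemini’s internals:

import math

# Toy RAG: score notes against a query by word overlap (a real system
# would use embedding vectors), then build a grounded prompt.
notes = [
    "Team sync moved to Thursdays at 10am.",
    "Q3 budget review notes: cloud spend up 12%.",
    "Packing list for the offsite: badge, laptop, charger.",
]

def score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / math.sqrt(len(d))  # overlap, length-normalized

query = "When is the team sync?"
best = max(notes, key=lambda n: score(query, n))  # retrieve top note
prompt = f"Answer using this note:\n{best}\n\nQuestion: {query}"
print(prompt)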
Tomi Engdahl says:
Five MCP servers to rule the cloud
https://www.infoworld.com/article/4129024/five-mcp-servers-to-rule-the-cloud.html
The hyperscalers were quick to support AI agents and the Model Context Protocol. Use these official MCP servers from the major cloud providers to automate your cloud operations.
Anthropic’s Model Context Protocol (MCP), often described as the “USB-C for AI,” has inspired the software industry to think bigger about its AI assistants. Now, armed with access to external data and APIs, as well as to internal platforms and databases, agents are getting the arms and legs to carry out impressive automation.
MCP is no longer reserved for trendy AI startups or niche software-as-a-service providers, as the major clouds have begun experimenting with adding MCP servers to their offerings to help customers automate core cloud computing operations. These MCP servers sit alongside and complement existing CLIs and APIs as a protocol for AI consumption.
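Wiring an agent to one of these servers follows the same client pattern regardless of the cloud. A minimal sketch using the MCP Python SDK; the server command here is a placeholder, so check each provider’s docs for the real launch command:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch a (placeholder) cloud MCP server over stdio and list its tools.
params = StdioServerParameters(command="uvx", args=["example-cloud-mcp-server"])

async def main():
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()           # MCP handshake
            tools = await session.list_tools()   # discover the server's tools
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())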