Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed using AI tools, and the final version was hand-edited into this blog text:
1. Generative AI Continues to Mature
Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.
2. AI Agents Move From Tools to Autonomous Workers
Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
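The plan-act-observe loop behind agentic AI can be sketched in a few lines. This is a minimal illustration with stand-in tools and a fixed plan, not any vendor's actual agent API; in a real system the model itself would generate the plan and react to each observation.

```python
# Minimal sketch of an agentic loop: the agent executes a multi-step
# workflow by calling tools and collecting observations. The tools and
# the pre-written plan are hypothetical stand-ins for illustration.

from typing import Callable

def search_flights(query: str) -> str:
    return f"3 flights found for {query}"   # stand-in tool

def book_flight(flight: str) -> str:
    return f"booked {flight}"               # stand-in tool

TOOLS: dict[str, Callable[[str], str]] = {
    "search_flights": search_flights,
    "book_flight": book_flight,
}

def run_agent(goal: str, plan: list[tuple[str, str]]) -> list[str]:
    """Execute each (tool_name, argument) step and record the result."""
    observations = []
    for tool_name, arg in plan:
        result = TOOLS[tool_name](arg)      # act
        observations.append(result)         # observe
    return observations

# In a real agent the plan comes from the model; here it is fixed.
log = run_agent("fly HEL->SFO", [("search_flights", "HEL-SFO"),
                                 ("book_flight", "AY5")])
print(log)  # → ['3 flights found for HEL-SFO', 'booked AY5']
```

The key shift from a chatbot is that the loop runs without a human between steps: the user states a goal, and the agent sequences tool calls on their behalf.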
3. Smaller, Efficient & Domain-Specific Models
Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate in many enterprise applications. These models can be more accurate, easier to keep compliant, and more cost-efficient than general-purpose models.
4. AI Embedded Everywhere
AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.
5. AI Infrastructure Evolves: Inference & Efficiency Focus
More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
6. AI in Healthcare, Research, and Sustainability
AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.
7. Security, Ethics & Governance Become Critical
With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.
8. Multimodal AI Goes Mainstream
AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.
9. On-Device and Edge AI Growth
More AI workloads will run directly on phones, PCs, and IoT hardware, improving latency and privacy and enabling offline use.
10. New Roles: AI Manager & Human-Agent Collaboration
Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.
Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/ “7 AI Trends to Look for in 2026”
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/ “10 Generative AI Trends In 2026 That Will Transform Work And Life”
[3]: https://millipixels.com/blog/ai-trends-2026 “AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels”
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026 “Digital Regenesys | Top 10 AI Trends for 2026”
[5]: https://www.n-ix.com/ai-trends/ “7 AI trends to watch in 2026 – N-iX”
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/ “Microsoft unveils 7 AI trends for 2026 – Source Asia”
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026 “7 Generative AI Trends to Watch In 2026”
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/ “3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool”
[9]: https://www.reddit.com/r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/ “I read Google Cloud’s ‘AI Agent Trends 2026’ report, here are 10 takeaways that actually matter”
Tomi Engdahl says:
What 81,000 people told us about the economics of AI
https://www.anthropic.com/research/81k-economics
Key findings:
Our recent survey of 81,000 Claude users shows that people who work in roles that are more exposed to AI have more concerns about AI-driven job displacement. These concerns are also higher among early-career respondents.
Those in the highest- and lowest-paid occupations report the largest productivity gains, most commonly from increases in scope (doing new tasks).
Respondents experiencing the largest speedups from AI express higher concern about job displacement.
Tomi Engdahl says:
Hugging Face Releases ml-intern: An Open-Source AI Agent that Automates the LLM Post-Training Workflow
https://www.marktechpost.com/2026/04/21/hugging-face-releases-ml-intern-an-open-source-ai-agent-that-automates-the-llm-post-training-workflow/
Tomi Engdahl says:
“The falsely generated demand for AI and AI infrastructure truly is a death cycle.”
Dark Deeds
Tech Companies Are Using Insidious Tactics to Build Data Centers on Indigenous Lands, Activists Say
https://futurism.com/artificial-intelligence/data-centers-tribal-communities
Tomi Engdahl says:
AI ate my homework?
https://wonderfulengineering.com/professor-loses-two-years-of-research-work-after-clicking-the-wrong-button-on-chatgpt/
Tomi Engdahl says:
The financials are absolutely brutal.
Hypemaxxing
The Horrible Economics of AI Are Starting to Come Crashing Down
https://futurism.com/artificial-intelligence/economics-ai-tokens-crashing-down
An eyebrow-raising trend has emerged this year: tech leaders rating their employees’ productivity based on the number of AI tokens they use.
The trend, ribbingly dubbed “tokenmaxxing,” has sparked discourse for symbolizing Silicon Valley’s unbridled infatuation with using AI as much as possible — and, quite literally, at all costs.
But what’s so far been a free or at least low-cost ride could be coming to a screeching halt. Setbacks plaguing the construction of AI data centers have brought the industry’s biggest chokepoint to the forefront: access to the precious computing power that makes frontier models tick.
As costs continue to ramp up, enterprise consumers could soon be left holding the bag, with companies like OpenAI and Anthropic looking to ramp up prices to stem at least some of the bleeding. It’s a notable shift after years of complimentary access to cutting-edge AI, a practice that has long belied the tech’s true costs.
“Is the era of basically free or close-to-free AI kind of coming to an end here?” Georgia Tech professor Mark Riedl asked The Verge. “It’s too soon to say for certain, but there are some signs.”
Most recently, Anthropic cut off millions of users from AI agent tool OpenClaw after it forced its systems into overdrive.
The company transitioned to a pay-as-you-go billing system to use its application programming interface (API), which charges users per token instead of more open-ended usage limits.
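The per-token billing model described above is simple arithmetic: input and output tokens are metered separately, each at a rate quoted per million tokens. The rates below are made-up illustrations, not Anthropic's actual pricing.

```python
# Illustrative pay-as-you-go API billing: cost scales with tokens
# consumed rather than a flat usage limit. Rates here are hypothetical.

def api_cost(input_tokens: int, output_tokens: int,
             in_rate_per_m: float, out_rate_per_m: float) -> float:
    """Cost in dollars; rates are dollars per million tokens."""
    return (input_tokens / 1e6) * in_rate_per_m \
         + (output_tokens / 1e6) * out_rate_per_m

# e.g. an agent run consuming 2M input / 0.5M output tokens at an
# assumed $3 / $15 per million tokens:
cost = api_cost(2_000_000, 500_000, 3.0, 15.0)
print(f"${cost:.2f}")  # → $13.50
```

The asymmetry matters for agents: multi-step workflows re-send long contexts on every call, so input tokens usually dominate the bill even at the lower rate.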
To generate enough money and cover the trillions of dollars being poured into AI data centers, AI economics expert and Gartner senior director analyst Will Sommer told The Verge that AI companies would need to get close to $2 trillion per year in revenue by 2029, in “historic returns” that would dwarf current figures.
Based on current economics, Gartner calculated that with a ten percent profit margin per token, the industry’s token consumption would need to grow anywhere from 50,000 to 100,000 times its current rate by 2030.
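One way to make sense of those numbers is to split required token-volume growth into two factors: revenue growth times the fall in price per token. This decomposition and the specific multipliers below are my own back-of-envelope assumptions, not Gartner's published model.

```python
# Hedged back-of-envelope decomposition (assumed inputs, not Gartner's
# actual methodology): if revenue must grow R-fold while price per
# token falls P-fold, served token volume must grow R * P fold.

def volume_growth(revenue_growth: float, price_drop: float) -> float:
    """Required growth in token volume."""
    return revenue_growth * price_drop

# Assume industry token revenue must grow ~100x (toward ~$2T/year)
# while price per token falls 500-1000x as serving gets cheaper:
for price_drop in (500, 1000):
    print(f"price/token falls {price_drop}x -> "
          f"volume grows {volume_growth(100, price_drop):,.0f}x")
# → 50,000x and 100,000x, the range quoted above
```

Under those assumptions the 50,000 to 100,000x figure falls out directly; the uncertainty is entirely in how fast per-token prices actually decline.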
Scaling up operations that fast could prove extremely difficult. For now, companies are still taking a massive hit on making more tokens available in large part due to the soaring costs of extremely resource-intensive data centers. Worse yet, as AI models become more complex, they’re expected to require even more compute, a trend exacerbated by the recent popularization of AI agents.
For now, companies continue to fight over market share, with Anthropic most recently surging past a trillion-dollar valuation, overtaking OpenAI. Yet, aggressive price hikes or implementing ads could risk scaring away customers, tamping down further growth.
“On one hand, they want to see more tokens being generated but they have to either suck up the costs, which they can sort of do as long as venture capital is flowing, or pass the costs back on to [customers],” Riedl told The Verge. “Maybe the economics are a little upside down right now.”
In short, AI companies find themselves caught between a rock and a hard place: either continue doubling down on bringing out the latest and greatest in AI at the risk of soaring token costs — or risk falling behind the competition by dumbing things down to keep costs low.
Companies will need to walk a tightrope while trying to gauge how much of these costs to pass on to customers and how much new capital to raise.
Without a feasible long-term plan to keep the ball rolling, experts warn the business model could soon collapse in on itself — a catastrophic outcome not just for markets, but potentially for the entire economy as well.
Tomi Engdahl says:
The makers have AI psychosis. The users have AI psychosis. The media has AI psychosis. The investors have AI psychosis. The business class has AI psychosis. It’s been going around. Funny that no one wanted to SEE this pandemic though.
Comment at https://www.facebook.com/share/17yejiNFeH/
Tomi Engdahl says:
Never Meet Your Ones and Zeros
Google Exec Says Your Favorite Video Games Are Secretly Made With AI
“Roughly nine out of 10 game developers told us ‘yeah, we’re using it.’”
https://futurism.com/artificial-intelligence/google-exec-video-games-secretly-ai
That new, unapologetically derivative open world game or umpteenth shooter sequel you’re currently addicted to? It was almost certainly made with a little help from AI, according to Google Cloud’s global director for games Jack Buser.
In an interview with Mobilegamer.biz, Buser claimed that pretty much every major video game studio is using the tech behind the scenes, whether they’re willing to admit it or not.
“I think what players don’t realize is that their favorite games right now were already built with AI,” Buser told the outlet. “Those games have shipped. We did a survey around Gamescom last summer with studios all over the world. Roughly nine out of 10 game developers told us ‘yeah, we’re using it’.”
Buser acknowledged that some surveys showed this share to be much lower, at around 40 to 50 percent. But that sizable gap, he charged, is “basically the developers’ willingness to tell you whether the fact of the matter is it’s being used.”
To support his claims, Buser pushes a common pro-AI argument: that it speeds along development and frees up time for developers to focus on more important stuff. Capcom, best known for its “Resident Evil” franchise, is one major studio using AI this way, he claims.
“One of the big problems that they have is they’re building these massive worlds and they’ve got to fill it with content,” Buser explained. “Just coming up with all the ideas for every pebble by the side of the road, every blade of grass, and having all those art reviews, the manual labour just starts piling up in preproduction.”
But with AI, gamers “will start to realize this is actually helping me get my favorite games faster,” he told Mobilegamer.biz. “And I’m also getting more innovation in the industry because there’s more room to take risks, and now it’s not seven years waiting for one game, but that studio can make five games.”
Tomi Engdahl says:
96,000 tech jobs gone in 2026. The savings are funding AI.
Microsoft just opened its first ever voluntary buyout programme.
Up to 8,750 American workers are being asked to leave.
The formula targets staff aged 50 and above with long tenure.
Meta is also reducing its workforce by 8,000 roles next month.
Both companies are spending record amounts on AI infrastructure.
Read more on TNW: https://thenextweb.com/news/meta-microsoft-layoffs-23000-ai-spending
Tomi Engdahl says:
Claude Design: Figma Killer or Just Another Design Tool?
https://uxplanet.org/claude-design-figma-killer-or-just-another-design-tool-82f7726693ca
Tomi Engdahl says:
Anthropic and Amazon expand collaboration for up to 5 gigawatts of new compute
https://www.anthropic.com/news/anthropic-amazon-compute
Tomi Engdahl says:
AI pharma company made 1.8 billion with two employees – behind the scenes were fake images, fake doctors, and a regulatory warning
The New York Times praised the AI company’s success, but when the matter was investigated, something entirely different emerged
https://www.city.fi/viihde/ai-laakefirma-teki-18-miljardia-kahdella-tyontekijalla-kulisseista-loytyi-feikkikuvia-ja-laakareita-ja-viranomaisvaroitus/
The New York Times praised Medvi as an example of what AI can do.
But when you examine the story up close, it looks like something entirely different.
“The company of the future” – or too good to be true?
Medvi’s founder Matthew Gallagher used AI to build a pharmaceutical business that sells GLP-1 weight-loss drugs, such as Ozempic, in huge volumes.
A two-person company with billion-class revenue.
Even OpenAI CEO Sam Altman took an interest.
The narrative was clear: AI makes companies leaner, faster, and more profitable than ever before.
Reality unraveled through the ads
Not everyone bought Medvi’s marketing pitch.
Futurism set out to investigate the company; the story started from a single odd-looking advertisement.
An AI-generated “Ozempic package” with distorted logos unexpectedly led to Medvi’s site.
The site promised dramatic weight loss and was full of before-and-after photos, glowing customer stories, and “doctors” meant to convince the visitor.
But nothing was what it appeared to be.
The customers weren’t real
When the images were traced, it turned out that the “customers” were real people, but in a completely wrong context.
The photos had been stolen from the web, in some cases from years-old articles, and edited with AI.
Faces were swapped, stories were invented, and results were exaggerated.
The doctors weren’t real either
The site also displayed logos of major media outlets, even though little actual coverage could be found.
The company was accused of misleading marketing and of implying that its products were approved when they were not.
In addition, Medvi has been linked to lawsuits questioning the effectiveness of the products it sells.
The AI business reveals a bigger problem
Medvi is not just one scandal – it is a symptom.
AI enables rapid growth, but also massive deception at a scale never seen before.
The criticism is harsh: experts speak of “regulation lagging behind” and of exploiting consumers’ desperation.
The New York Times called this a “shortcut”.
Many others would call it a warning sign.
One thing is clear: when AI builds a billion-dollar business, it’s worth looking closely at what’s under the hood.
Tomi Engdahl says:
Sleaze at Scale
Why Is the New York Times Laundering the Reputation of a Sleazy AI Startup That’s Selling GLP-1s via a Dishonest Dumpster Fire of Fake Doctors, Phony Before-and-After Pictures, and Other Glaring Red Flags?
“It’s just an automated GLP-1 prescription mill.”
https://futurism.com/artificial-intelligence/new-york-times-medvi-ai-glp1s
On Thursday, the New York Times published a glowing profile of a company called Medvi. The basic premise of the piece is that a single guy named Matthew Gallagher had used AI to rapidly build a pharmaceutical enterprise that’s on track to do nearly $2 billion in sales this year, while hiring only a skeleton crew of humans to operate the vast AI-powered venture. According to the NYT, it’s a stunning achievement that heralds a new era of business; OpenAI CEO Sam Altman, who predicted the rise of this kind of company back in 2024, told the newspaper that he’d “like to meet the guy” behind the project.
“A $1.8 billion company with just two employees?” the NYT rhapsodized. “In the age of AI, it’s increasingly possible.”
The NYT‘s tech coverage is generally pretty solid. But the framing of its story, and what it left out, left us pretty stunned. That’s because back in May of last year, we ran our own investigation of Medvi — and not only was what we found far more disturbing than the NYT‘s credulous story let on, but the situation has gotten even worse since then.
We first came across Medvi when we saw an AI-generated advertisement plastered at the foot of a local news article.
Tomi Engdahl says:
https://www.infoq.com/news/2026/04/cloudflare-code-mode-mcp-server/
Tomi Engdahl says:
How Finland reacted to the Mythos model that spooked the world
Anthropic’s AI model, so far shared with only a small circle, triggered an emergency meeting of bank bosses and officials in the United States. Kauppalehti looked into how banks and authorities in Finland reacted to the matter.
https://www.kauppalehti.fi/uutiset/a/991e9d4d-1147-4178-9084-949f6e11003b
Tomi Engdahl says:
https://huggingface.co/papers/2604.14116
TREX: Automating LLM Fine-tuning via Agent-Driven Tree-based Exploration
Tomi Engdahl says:
What is Claude Mythos and what risks does it pose?
https://www.bbc.com/news/articles/crk1py1jgzko
Tomi Engdahl says:
Unity AI Gateway: How to Connect Agents to External MCPs Securely
https://www.databricks.com/blog/ai-gateway-how-connect-agents-external-mcps-securely
As part of Week of Agents, customers can now manage models, MCP and tools through Databricks Unity AI Gateway, fully integrated with Unity Catalog. To deliver real value, agents need to securely reach external tools like GitHub, Glean, and Atlassian. Unity AI Gateway makes this easy and secure, so teams can focus on building agents, not auth infrastructure.
In this post, we’ll walk through how to connect an external MCP server and deploy an agent end to end, so that you can build context-aware agents that reason and act on your data.
Tomi Engdahl says:
Our eighth generation TPUs: two chips for the agentic era
https://blog.google/innovation-and-ai/infrastructure-and-cloud/google-cloud/eighth-generation-tpu-agentic-era/
The culmination of a decade of development, TPU 8t and TPU 8i are custom-engineered to power the next generation of supercomputing with efficiency and scale.
Tomi Engdahl says:
Build a More Secure, Always-On Local AI Agent with OpenClaw and NVIDIA NemoClaw
Use NVIDIA DGX Spark to deploy OpenClaw and NemoClaw end-to-end, from model serving to Telegram connectivity, with full control over your runtime environment.
https://developer.nvidia.com/blog/build-a-secure-always-on-local-ai-agent-with-nvidia-nemoclaw-and-openclaw/
Tomi Engdahl says:
Philip Gas
ChatGPT’s “Honest Reaction” to a “Song” Composed Entirely of Gas-Passing Noises Will Make You Question Whether It’s Honestly Evaluating Your Other Brilliant Ideas
“It reminds me of something that would play over a quiet city montage or end credits.”
https://futurism.com/artificial-intelligence/chatgpt-honest-reaction-song-farts
Tomi Engdahl says:
OccuBench: Evaluating AI Agents on Real-World Professional Tasks via Language World Models
https://huggingface.co/papers/2604.10866
Tomi Engdahl says:
Browser Run: give your agents a browser
https://blog.cloudflare.com/browser-run-for-ai-agents/
Tomi Engdahl says:
Why AI systems fail at scale and what you should measure instead of model accuracy
https://www.cio.com/article/4158053/why-ai-systems-fail-at-scale-and-what-you-should-measure-instead-of-model-accuracy.html
Tomi Engdahl says:
Collaborative AI Systems: Human-AI Teaming Workflows
Everyone says they’re “collaborating” with AI. Most are just giving orders and accepting whatever comes back.
https://www.kdnuggets.com/collaborative-ai-systems-human-ai-teaming-workflows
Tomi Engdahl says:
AI Won’t Take Software Developer Jobs — But Bad Decisions Just Might
https://softability.fi/insights/ai-wont-take-developer-jobs-but-bad-decisions-just-might/
The case is quite clear: AI by itself will not take the jobs of software developers; the real risk lies in human decisions about how to use AI. Yet countless developers are currently asking whether AI will take their jobs.
Tomi Engdahl says:
https://www.xda-developers.com/claude-skill-to-turn-vibe-coded-projects-into-coding-courses/
Tomi Engdahl says:
https://www.futurice.com/fi/downloads/fi-ai-business-value-toolkit
Tomi Engdahl says:
How effective are semantic hubs in moving agentic AI forward?
https://www.cio.com/article/4152095/how-effective-are-semantic-hubs-in-moving-agentic-ai-forward.html
Tomi Engdahl says:
Secure private networking for everyone: users, nodes, agents, Workers — introducing Cloudflare Mesh
https://blog.cloudflare.com/mesh/
Tomi Engdahl says:
Anthropic Releases Claude Mythos Preview with Cybersecurity Capabilities but Withholds Public Access
https://www.infoq.com/news/2026/04/anthropic-claude-mythos/
Tomi Engdahl says:
Meet the Super Semiconductor Stock Crushing Nvidia, AMD, and Broadcom Right Now
This company is leveraging 175 years of experience to capture a key piece of the artificial intelligence (AI) market.
https://www.fool.com/investing/2026/04/14/meet-semiconductor-stock-nvidia-amd-broadcom-now/
Tomi Engdahl says:
Scaling MCP adoption: Our reference architecture for simpler, safer and cheaper enterprise deployments of MCP
https://blog.cloudflare.com/enterprise-mcp/
Tomi Engdahl says:
https://venturebeat.com/orchestration/agentic-coding-at-enterprise-scale-demands-spec-driven-development
Tomi Engdahl says:
https://www.xda-developers.com/n8n-dify-ollama-best-self-hosted-ai-automation-stack/
Tomi Engdahl says:
Self-Host Your Own LLM on Raspberry Pi
Run your own private AI on a tiny Raspberry Pi—no cloud, no API, no data leaks. Deploy an LLM locally, with full control and offline access.
https://www.hackster.io/electronics_champ/self-host-your-own-llm-on-raspberry-pi-0e2c6d
Tomi Engdahl says:
When AI stops being an experiment and becomes a new development model
This article is presented by TC Brand Studio. This is paid content, TechCrunch editorial was not involved in the development of this article. Reach out to learn more about partnering with TC Brand Studio.
https://techcrunch.com/sponsor/vention/when-ai-stops-being-an-experiment-and-becomes-a-new-development-model/
Tomi Engdahl says:
https://hbr.org/2026/04/what-ai-cant-do-the-new-job-of-leadership
Tomi Engdahl says:
https://www.xda-developers.com/i-connected-my-local-llm-to-home-assistant-through-mcp/
Tomi Engdahl says:
https://devblogs.microsoft.com/all-things-azure/putting-agentic-platform-engineering-to-the-test/
Tomi Engdahl says:
Building Claude Code with Harness Engineering
Multi-agents, MCP, skills system, context pipelines and more
https://levelup.gitconnected.com/building-claude-code-with-harness-engineering-d2e8c0da85f0
Tomi Engdahl says:
Google study finds LLMs are embedded at every stage of abuse detection
Online platforms are running large language models at every stage of content moderation, from generating training data to auditing their own systems for bias. Researchers at Google mapped how this is happening across what the authors call the Abuse Detection Lifecycle, a four-stage framework covering labeling, detection, review and appeals, and auditing.
https://www.helpnetsecurity.com/2026/04/07/google-llm-content-moderation/
Tomi Engdahl says:
https://www.makeuseof.com/chatgpt-refused-help-vibe-code-project-led-better/
Tomi Engdahl says:
Your paid AI coding tools are overkill — here’s what I switched to instead
https://www.xda-developers.com/replaced-claude-code-and-cursor-with-this-free-open-source-ide-not-going-back/
Whether you’re using Claude Code or Codex, or running them through another harness like Pi, vibe coding and agentic development are here to stay. The thing is, unless you’re hosting your own local LLM models, the costs of that accelerated development cycle add up quickly. And sometimes, you just don’t need the additional help. There’s something to be said about a more traditional coding environment, where the AI is there to fix structure and expand function calls intelligently.
There’s something to be said about putting in the work and seeing code blocks that your own fingers type in. I know it helps me learn more than telling my personal clanker to figure it out, and I don’t particularly like reading code after the fact. I’ve gone back to the old school, although the program I’ve decided to use has plenty of modern conveniences there to be used when my brain needs a helping hand.
What is Zed, and why would you use it?
Powerful code editing the way it used to be, with added extensions and AI
Coding environments span a spectrum, with hands-on, basic Notepad at one end and hands-off, AI-powered orchestration at the other. Zed is closer to the former side, but with an AI chat window that can draw from a multitude of providers, including locally hosted LLMs, to save cash and your privacy. That approach makes it great for learning, as you can write your code on one side, while asking for clarifications, examples, and optimizations in the chat window.
But it’s more than that. Zed can leverage AI and MCP servers to access immense amounts of on-tap knowledge. Add that to language servers to keep ahead of syntax changes and a robust theming engine, and it’s one of my favorite coding editors to date.
It uses AI to predict the contents of your next code block as you type, which is a marked change from Tab autocomplete of terms or known strings. It can do this at whatever pace you code, and uses CRDTs (Conflict-free Replicated Data Types) to add AI-created code into your file while you’re typing, without running the risk of overwriting human-coded blocks (or vice versa).
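The CRDT idea mentioned above can be illustrated with a toy example. A grow-only set is far simpler than the sequence CRDTs a real editor uses, but it shows the core property: concurrent inserts from a "human" replica and an "AI" replica merge without either side overwriting the other, regardless of merge order.

```python
# Toy CRDT illustration (a grow-only set, not Zed's actual data
# structure): merge is set union, which is commutative, associative,
# and idempotent, so replicas always converge to the same state.

class GSet:
    """Grow-only set CRDT."""
    def __init__(self) -> None:
        self.items: set[str] = set()

    def add(self, item: str) -> None:
        self.items.add(item)

    def merge(self, other: "GSet") -> "GSet":
        merged = GSet()
        merged.items = self.items | other.items  # union never loses edits
        return merged

human, ai = GSet(), GSet()
human.add("fn main() {")         # typed by the human
ai.add('println!("hi");')        # inserted by the AI completion

# Merging in either order yields the same converged state:
assert human.merge(ai).items == ai.merge(human).items
print(sorted(human.merge(ai).items))
```

Real collaborative editors use sequence CRDTs that additionally preserve character ordering, but the conflict-free merge guarantee is the same.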
Plus, it’s written in Rust, so no slow Electron wrapper here.
It’s a sad fact that so many of the apps that live on our desktops are built with Electron wrappers, making them glorified web apps with the associated sluggishness. It makes for quick multi-platform development (for the app developer, not you), but it’s a pain. Zed makes every other code editor look like a snail, because it’s built in Rust from the ground up.
Zed aggressively parallelizes its workflow across your CPU cores, pulling every available resource for hefty tasks while things like syntax highlighting are run in the background.
Zed takes a different approach to AI than Antigravity or its ilk. It doesn’t lean into agentic coding, though you can get LLMs to ideate, create, and fix things for you. But it does this from the code you can see, rather than abstracting it away in pseudo-code conversations, as harnesses do.
While I’ve been testing the Zed subscription plan, I’ve also got my own subs to Claude Max, and a couple of other providers that Zed also supports. That’s on top of the connector to Codex, Claude Code, and my local LLM endpoint, which has a multitude of downloaded models at my disposal.
The point is I’m not losing access to anything by using Zed, and in some ways I’m gaining, as I can use those tools as a companion next to my code blocks while I stumble through learning, rather than asking for something, getting something back, and having to work backwards to check that the code the agent created is correct, working, and secure.
I want AI to help me code, not do the whole thing for me
The problem I have with agentic coding harnesses is that they don’t show me what’s going on, or prompt me to pay attention.
Tomi Engdahl says:
Grok compares death to “butterfly leaving its shell” in new AI psychosis study
https://cybernews.com/ai-news/grok-ai-psychosis-study-delusions-family-butterfly-death/
A new study into AI psychosis found that xAI’s model Grok romanticized death, told users to cut off family members, and validated bizarre delusions.
The City University of New York (CUNY) and King’s College London paper, which has not been peer-reviewed, examines so-called “AI psychosis,” cases where chatbots may reinforce distorted beliefs through prolonged conversations.
Researchers tested five different AI models: OpenAI’s GPT-4o and GPT-5.2; Claude Opus 4.5 from Anthropic; Gemini 3 Pro Preview from Google; and Grok 4.1.
They gave each AI model a pre-written prior chat containing 116 earlier exchanges before asking dangerous test questions involving medication, family distrust, bizarre claims, and suicidal thinking.
Grok’s instruction manual for ghosting family
According to the paper, Elon Musk’s chatbot Grok was the highest-risk model tested. In one example, a user suggested cutting off family to focus on a supposed “higher mission.”
Tomi Engdahl says:
AI literacy mandatory for all NTU students from August as school rolls out free Google AI tools
https://www.straitstimes.com/singapore/parenting-education/ai-literacy-mandatory-for-all-ntu-students-from-august-as-school-rolls-out-free-google-ai-tools
Tomi Engdahl says:
https://github.com/amitshekhariitbhu/llm-internals
Tomi Engdahl says:
Our evaluation of Claude Mythos Preview’s cyber capabilities
We conducted cyber evaluations of Anthropic’s Claude Mythos Preview and found continued improvement in capture-the-flag (CTF) challenges and significant improvement on multi-step cyber-attack simulations.
https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities
Tomi Engdahl says:
https://blog.cloudflare.com/sandbox-ga/
Tomi Engdahl says:
https://huggingface.co/MiniMaxAI/MiniMax-M2.7
Tomi Engdahl says:
https://www.freecodecamp.org/news/openclaw-a2a-plugin-architecture-guide/