Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed with AI tools, and the final version was hand-edited into this blog text:
1. Generative AI Continues to Mature
Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.
2. AI Agents Move From Tools to Autonomous Workers
Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
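The multi-step pattern described above can be sketched as a simple plan-and-execute loop. The plan() function and tool names below are illustrative stand-ins, not any vendor's API; in a real agent, an LLM would generate the plan and pick tools dynamically at each step.

```python
# Minimal sketch of an agentic loop: a planner decomposes a goal into steps,
# each step is dispatched to a tool, and results accumulate as context.

def plan(goal):
    # A real agent would ask an LLM to produce this plan dynamically.
    return [("search", goal), ("summarize", "search results"), ("draft_email", "summary")]

# Hypothetical tools; real agents would call APIs, apps, or other services.
TOOLS = {
    "search": lambda q: f"results for {q!r}",
    "summarize": lambda text: f"summary of {text!r}",
    "draft_email": lambda text: f"email based on {text!r}",
}

def run_agent(goal):
    context = []
    for tool, arg in plan(goal):
        output = TOOLS[tool](arg)       # execute one step of the workflow
        context.append((tool, output))  # accumulated context informs later steps
    return context

steps = run_agent("Q3 revenue report")
```

The key difference from a plain chatbot is the loop: each tool invocation is an action taken on the user's behalf, not just a generated answer.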
3. Smaller, Efficient & Domain-Specific Models
Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate in many enterprise applications. These models can be more accurate, easier to keep compliant, and more cost-efficient than general-purpose models.
4. AI Embedded Everywhere
AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.
5. AI Infrastructure Evolves: Inference & Efficiency Focus
More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
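One concrete efficiency tactic in inference infrastructure is caching responses for repeated prompts, so the model only runs on cache misses. A minimal sketch; the inference function below is a stand-in for a real model call, not any specific API:

```python
# Sketch of response caching for inference: identical prompts are served
# from the cache, so only the first occurrence pays the model-call cost.
from functools import lru_cache

CALLS = {"count": 0}  # counts actual (uncached) model invocations

@lru_cache(maxsize=1024)
def cached_infer(prompt):
    CALLS["count"] += 1               # a real system would call the model here
    return f"response to {prompt!r}"

cached_infer("summarize report")
cached_infer("summarize report")      # second call is served from the cache
```

Production systems use the same idea at larger scale (semantic caches, KV-cache reuse, batching), all aimed at the cost and latency goals described above.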
6. AI in Healthcare, Research, and Sustainability
AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.
7. Security, Ethics & Governance Become Critical
With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.
8. Multimodal AI Goes Mainstream
AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.
9. On-Device and Edge AI Growth
More AI workloads will shift onto phones, PCs, and embedded hardware, reducing latency, keeping data local for privacy, and letting features work offline.
10. New Roles: AI Manager & Human-Agent Collaboration
Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.
Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/?utm_source=chatgpt.com “7 AI Trends to Look for in 2026”
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/?utm_source=chatgpt.com “10 Generative AI Trends In 2026 That Will Transform Work And Life”
[3]: https://millipixels.com/blog/ai-trends-2026?utm_source=chatgpt.com “AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels”
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026?utm_source=chatgpt.com “Digital Regenesys | Top 10 AI Trends for 2026”
[5]: https://www.n-ix.com/ai-trends/?utm_source=chatgpt.com “7 AI trends to watch in 2026 – N-iX”
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/?utm_source=chatgpt.com “Microsoft unveils 7 AI trends for 2026 – Source Asia”
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026?utm_source=chatgpt.com “7 Generative AI Trends to Watch In 2026”
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/?utm_source=chatgpt.com “3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool”
[9]: https://www.reddit.com//r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/?utm_source=chatgpt.com “I read Google Cloud’s “AI Agent Trends 2026” report, here are 10 takeaways that actually matter”
1,277 Comments
Tomi Engdahl says:
https://www.linkedin.com/posts/mojoe_aivideo-activity-7442630881241927680-UVKM?utm_source=social_share_video_v2&utm_medium=android_app&rcm=ACoAAAACCmABPSSc6WguWzruqQo_hSEPC5Jn4Xg&utm_campaign=copy_link
Tomi Engdahl says:
Ticking Time Bomb
A Grim Truth Is Emerging in Employers’ AI Experiments
“Those are foundational problems no one has solved in LLM technology. And you want to tell me that’s not going to manifest in code quality problems?”
https://futurism.com/artificial-intelligence/ai-coding-error-debt?fbclid=IwdGRjcAQ3CY5jbGNrBDcJZ2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHjmVDiPbBwfjHkq7kch5ofoR9INdX0m6GjWGu7SOPygXfJzJxeLNwue-4P5M_aem_PN3U_nnxhUPaZqomUgxLtg
The tremendous hype surrounding AI coding shows no signs of dying down. Last month, Anthropic released a suite of industry-specific plug-ins for its Claude Cowork AI agent, panicking investors over fears that traditional enterprise software-as-a-service companies could soon be made obsolete. The announcement triggered a trillion-dollar sell-off, with many tech companies seeing sharp declines in their share prices.
It even seemed to jolt Sam Altman’s OpenAI, which moved to drop many of its distracting “side quests” in a concerted effort to double down on coding and enterprise-specific AI tools.
Yet plenty of glaring questions about the long-term viability of AI programming prevail, with some warning that questionable and unverified code could come to spell disaster for corporations that eagerly embrace it.
Tomi Engdahl says:
Artificial Intelligence
How to 10x Your Vulnerability Management Program in the Agentic Era
The evolution of vulnerability management in the agentic era is characterized by continuous telemetry, contextual prioritization and the ultimate goal of agentic remediation.
https://www.securityweek.com/how-to-10x-your-vulnerability-management-program-in-the-agentic-era/
Tomi Engdahl says:
Artificial Intelligence
OpenAI Launches Bug Bounty Program for Abuse and Safety Risks
Through the new program, OpenAI will reward reports covering design or implementation issues leading to material harm.
https://www.securityweek.com/openai-launches-bug-bounty-program-for-abuse-and-safety-risks/
OpenAI
OpenAI has announced a new public safety bug bounty program focused on AI-specific abuse and safety risks in its products.
Tomi Engdahl says:
Artificial Intelligence
Why Agentic AI Systems Need Better Governance – Lessons from OpenClaw
Agentic AI platforms are shifting from passive recommendation tools to autonomous action-takers with real system access,
https://www.securityweek.com/why-agentic-ai-systems-need-better-governance-lessons-from-openclaw/
Organizations urgently need governance frameworks built around visibility, access control, and behavioral monitoring to manage the expanded attack surface this creates.
OpenClaw is an open-source platform for autonomous AI agents that you can self-host and run locally on your machine for task automation. Agents built on the platform are now even interacting with one another via Moltbook, an experimental social network for AI agents. But the platform retains a wild-west frontier quality, as an experienced AI security researcher at Meta learned when an agent accidentally deleted her emails.
This news has again put the spotlight on the nature of authority and agency granted to agentic AI systems, as well as the need for better security and governance.
The OpenClaw Gateway is the always-on control plane that receives incoming messages, maintains sessions and channel connections, and routes requests to the right agent, tools, or services. It is like the front door of a busy supermarket: prompts stream in and out, and on receiving one the gateway picks the right set of tools and integrations to finish the task. In more advanced setups the gateway has even more agency, storing session state and the credentials needed to interact with other systems. If this ‘front door’ is compromised, the blast radius grows quickly, because the exposure can trigger legitimate actions across multiple apps and services:
The gateway’s risk rises sharply when it extends beyond its intended network scope and becomes remotely reachable, effectively turning it from a simple exposed service into an external control point.
Weak access controls can worsen exposure because they can let an attacker (who can connect to the gateway) authenticate successfully and start triggering actions.
On local networks, discovery protocols like multicast DNS can advertise the gateway’s presence and connection details, making it easier for anyone with local access to find it and start probing it.
Many gateways also use two paths at once: regular HTTP endpoints, plus long-lived WebSocket connections for interactive sessions. If the reverse proxy and access rules are not applied consistently to both, gaps appear that attackers can exploit.
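The last point, applying access rules consistently to both the HTTP and WebSocket paths, can be illustrated with a minimal sketch. The handler names and token check below are hypothetical, for illustration only, not OpenClaw's actual API:

```python
# Sketch: one shared authorization gate applied to BOTH request paths a
# gateway exposes (plain HTTP endpoints and long-lived WebSocket sessions),
# so neither path can bypass the access rules of the other.

VALID_TOKENS = {"secret-token"}  # stand-in for a real credential store

def authorize(token):
    return token in VALID_TOKENS

def handle_http(token, action):
    if not authorize(token):             # same gate as the WebSocket path
        return (401, "unauthorized")
    return (200, f"ran {action}")

def handle_websocket_frame(token, frame):
    if not authorize(token):             # no separate, weaker rule for WS
        return "close: unauthorized"
    return f"dispatched {frame}"
```

The gap the article warns about appears when the reverse proxy or access rules cover only one of these two entry points; funneling both through one authorization function closes it.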
Tomi Engdahl says:
ABB brings generative AI into its energy management system
https://etn.fi/index.php/13-news/18727-abb-tuo-generatiivisen-tekoaelyn-osaksi-energianhallintajaerjestelmaeaensae
ABB has integrated its generative-AI-based Industrial Knowledge Vault functionality into the Ability Energy Management System. The goal is to speed up the interpretation of energy consumption, emission drivers, costs, and equipment performance without laboriously working through reports and dashboards.
The real novelty in ABB’s announcement is not a new measurement or optimization layer but, in effect, a user interface. Users can query Energy Management System data in natural language via generative AI. According to ABB, this brings analysis of energy consumption, emissions, cost drivers, and equipment performance closer to operational decision-making, especially in process-industry environments.
In practice, ABB is adding the Industrial Knowledge Vault’s copilot functionality to the existing EMS. The idea is that users no longer need to hunt for information across multiple views, build filters, export data, or compile reports by hand; they can put the relevant questions directly to the system. This approach makes sense especially in industries where energy is a large share of operating costs, such as mining, forestry, metals, and cement.
Tomi Engdahl says:
By the Numbers
OpenAI Data Finds Hundreds of Thousands of ChatGPT Users Might Be Suffering Mental Health Crises
This is staggering.
https://futurism.com/future-society/openai-data-chatgpt-mental-health-crises?fbclid=IwdGRjcAQ3IhZjbGNrBDch72V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHllIc4Vo438R9Od1P1DRdEhVpRobKWb9VwEQ67lNu8_fpt7FilNOGyc6uwgc_aem_iUQnxQSYWlDKBrEcMejQeg
As reports of its chatbot driving episodes of “AI psychosis” continue to mount, OpenAI has finally released its own estimates of how many ChatGPT users are showing signs of suffering these alarming mental health crises — and they’re staggering in scale.
In an announcement first reported by Wired, the Sam Altman-led company estimated that, in any given week, around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis and mania.” Grimly, an even larger contingent, 0.15 percent, “have conversations that include explicit indicators of potential suicide planning or intent.”
Given ChatGPT’s immense popularity, these percentages are too significant to be ignored. Last month, Altman announced that the chatbot boasts 800 million weekly active users. Based on that figure, around 560,000 people each week are having distressing conversations with ChatGPT that may indicate they’re experiencing AI psychosis.
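The weekly headcounts follow directly from the percentages OpenAI disclosed; a quick arithmetic check:

```python
# Back-of-the-envelope check of the figures in the article.
weekly_users = 800_000_000   # Altman's reported weekly active users
psychosis_rate = 0.0007      # 0.07% showing possible psychosis/mania signs
suicide_rate = 0.0015        # 0.15% with explicit suicide-planning indicators

assert round(weekly_users * psychosis_rate) == 560_000
assert round(weekly_users * suicide_rate) == 1_200_000
```

So the 0.15 percent cohort, conversations with explicit indicators of potential suicide planning or intent, works out to roughly 1.2 million users per week.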
Tomi Engdahl says:
How to rescue failing AI initiatives
https://www.cio.com/article/4141649/how-to-rescue-failing-ai-initiatives.html
Most AI projects stumble due to faults in the organization, not the models themselves. Leaders need to know when to fix and refocus an initiative internally, or when to kill it.
Yet few of those automations made it to production. While the models worked well in isolation, they couldn’t survive inside the web of Zapier’s existing tools, data sources, approval flows, and human workflows.
“That’s when it really clicked for me,” Sammut adds. “The hard part of AI isn’t the AI itself. It’s the orchestration around it.”
The demo-to-production gap, as he calls it, is only one of the reasons AI initiatives go sideways. Fragmented data, weak governance, and a disconnect between leaders and frontline teams often compound the problem. The numbers reflect this as well. MIT’s State of AI in Business 2025 report estimated that about 95% of gen AI pilots fail to produce measurable business impact.
It’s no secret that AI adoption is messy and far more complicated than some estimates suggest. So C-level executives have to recognize when an AI initiative is drifting off course, and whether it’s worth fixing or not. With the right strategy, though, a struggling experiment can turn into a project that serves the business.
“AI shouldn’t be sustained on optimism alone,” says Scott Likens, the US and global chief AI engineering officer at PwC. “It needs observable, repeatable outcomes tied to business value.”
First signs of failure
Signs of trouble often surface early. A common one is when a project seems to be moving forward, but the date for putting it into real use keeps getting pushed back. “’A few more weeks’ turns into ‘We need to sort out the integration,’ which turns into, ‘We’re waiting on a security review,’” Sammut says. “Each delay feels reasonable on its own, but taken together, it’s a pattern.”
Another sign that a project is going south is the gap between leaders and practitioners. Executives often feel they have a clear view of the project, while the engineers and operators doing the work say much of the day-to-day friction goes unseen. At some point, Sammut believed projects were on track because he was hearing about milestones and launch dates. But a closer look showed that teams were stuck on integration backlogs and policy delays — problems that rarely surfaced during executive briefings.
In other cases, projects fail because they were never considered a priority by leadership. It happened to Eli Vovsha, manager of data science at cybersecurity software provider Fortra. Around 2022, he and a colleague set out to transform a hackathon prototype into a production-ready system powered by reinforcement learning.
Looking back, he adds, the initiative faltered because leadership never treated it as a real priority. “This lack of genuine interest meant our engineer colleague had to repeatedly postpone work as higher priority items came his way,” he says.
In retrospect, he doesn’t question the technical decisions he made. The lesson was tactical. “As a project manager, I’d be very careful to ensure the initiative is aligned with the vision and roadmaps of the appropriate product managers, and there’s a built-in lever to guarantee engineering support,” he says.
Likens says he’s observed many AI initiatives lose momentum when they drift from clear business goals. When it comes to AI, he says, flexibility matters more than ownership, and refocusing on specific business problems supported by stronger data and governance can help teams move faster, and deliver results that last.
But not all warning signs are easy to spot. Some are subtle and easy to miss: the project fades from agendas, people stop talking about it, and updates grow vague. The excitement that once surrounded it gives way to polite silence. It’s why executives need to pay attention to what’s not being said. And they should also pay attention to users if the project has already been deployed. “The main thing to watch for is the absence of positive user feedback,” says Australian software engineer Sean Goedecke, who writes about AI and large-company dynamics on his website. “The most successful AI products have immediately clicked with users.”
AI projects that fail, he adds, usually do so because they’re driven by the urge to do something with AI rather than to solve a real problem for users.
Rescuing failed initiatives
Some AI projects can be saved, but recovery requires a shift in mindset. Rather than treating these initiatives as technical experiments, leaders need to focus on how the work will deliver real value to the business. That means integrating these projects into real workflows, assigning clear ownership, and setting measurable results.
“The first step is shifting from model performance metrics to workflow performance metrics,” says Likens. “Ask whether the business outcome is improving, not whether the model is accurate.”
To have any chance of rescuing a failed AI initiative, organizations need to gain visibility into what’s actually happening, not what leadership thinks is happening. Then it’s important to figure out where the orchestration gaps are and what’s actually blocking progress.
Tomi Engdahl says:
“If shutting down one initiative frees your team to learn faster on a better one, that’s not failure. That’s good leadership.”
Dealing with doubt
AI initiatives carry enormous expectations. They’re launched with bold promises, ambitious timelines, and the hope of quick transformation. “When they don’t deliver, it can feel like a public setback, especially with how visible AI has become,” Likens says.
Often, when they fall short, the pressure can feel deeply personal. “There’s a real moment of doubt,” says Sammut. “You wonder if you pushed too hard, or not hard enough. You wonder if people are losing confidence in the broader AI strategy because of one initiative that stalled.”
The teams building these AI projects feel it too. Engineers and product managers can become more cautious, and less willing to take chances or propose bold ideas. Over time, that hesitation can slow innovation far more than any single failed project.
https://www.cio.com/article/4141649/how-to-rescue-failing-ai-initiatives.html
Tomi Engdahl says:
Bluesky Users Respond With Overwhelming Disgust to Platform’s New AI
“Cool! How do we block it?”
https://futurism.com/artificial-intelligence/bluesky-users-disgust-new-ai?fbclid=IwdGRjcAQ3vqdjbGNrBDe-mGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHusFqr2zqj3jIgP_KGNTe97-1FSNq9pGibZz6Zm7ufSScHb9BDq-F_5q9BYT_aem_FzJRh14mpMqjVd6envLCZA
In its early days, Twitter alternative Bluesky tried to paint itself as a safe haven from the onslaught of AI, promising in November 2024 that it had “no intention” of scraping user-generated posts to train AI models.
It was a shot across the bow, clearly aimed at its rival X-formerly-Twitter, which had recently changed its terms of service to allow just that. And since then, backlash to AI slop and relentless AI integrations has grown to new heights.
So it shouldn’t come as a surprise that Bluesky’s abrupt foray into AI isn’t sitting well with its notoriously anti-AI user base.
Attie, which interim CEO Toni Schneider referred to as a “new product” that’s “not part of the Bluesky app” in an interview with TechCrunch, allows users to essentially vibe code their own custom feed using natural language prompts — or even build their own Bluesky app alternative on top of the service’s Atmosphere protocol, an ecosystem of interoperable social applications.
“You control it, you shape it, without having to write code or know how to set up these feeds,” Schneider enthused.
The CEO seemed well aware of the headwinds against launching consumer-facing AI products in 2026.
“It is an AI product, but it’s an AI product that’s very people-focused,” he told TechCrunch. “We think AI is a very powerful technology, but we want to make sure that we use it to build things that really benefit people.”
“We think AI should serve people, not platforms,” Graber told audiences at this weekend’s announcement. “An open protocol puts this power directly in users’ hands.”
However, given the immediate reactions to the new app, it may struggle to catch on.
Tomi Engdahl says:
How AI is changing software
Thomas Martinsen is a Technical Evangelist at Twoday with more than 25 years of experience at the intersection of technology, strategy, and business innovation. He’s a Microsoft Regional Director and Microsoft AI MVP, recognized for his ability to translate complex technologies into clear strategies that create measurable impact. Thomas is passionate about both community and leadership.
https://www.twoday.com/blog/how-ai-is-changing-software
Tomi Engdahl says:
Just Giving Up
Alarming Study Finds That Most People Just Do What ChatGPT Tells Them, Even If It’s Totally Wrong
We’re shockingly prone to “cognitive surrender.”
https://futurism.com/artificial-intelligence/study-do-what-chatgpt-tells-us
In a matter of only a few years, AI chatbots have become a common part of many of our daily lives, even though they remain deeply flawed systems.
The reality is that chatbots like OpenAI’s ChatGPT, Google’s Gemini, or Anthropic’s Claude still make regular mistakes. According to an October study by the BBC, even the most advanced AI chatbots gave wrong answers a whopping 45 percent of the time.
Tomi Engdahl says:
OpenAI Extends the Responses API to Serve as a Foundation for Autonomous Agents
https://www.infoq.com/news/2026/03/openai-responses-api-agents/
Tomi Engdahl says:
The AI Shift: Will software engineers survive agentic AI?
A data deep dive shows that job vacancies are rising — but only for senior developers
https://www.ft.com/content/7325e967-5f4e-40b1-af3f-7d2351781843?syn-25a6b1a6=1
Tomi Engdahl says:
https://www.tekniikanmuseo.fi/nayttely/tekoalyn-tila/
Tomi Engdahl says:
https://www.xda-developers.com/prompts-i-use-to-get-smart-claude-responses/
Tomi Engdahl says:
Novee introduces autonomous AI red teaming to hunt LLM vulnerabilities
Novee today introduced AI Red Teaming for LLM Applications for its AI penetration testing platform, designed to uncover security vulnerabilities in LLM-powered applications before attackers can exploit them.
https://www.helpnetsecurity.com/2026/03/24/novee-ai-red-teaming-for-llm-applications/
Tomi Engdahl says:
Anthropic tightens Claude usage limits due to growing demand – in Finland the limits are hit quickly in the evenings
https://dawn.fi/uutiset/2026/03/28/anthropic-tiukentaa-claude-kayttorajoituksia#google_vignette
Tomi Engdahl says:
Make OpenAI’s models misbehave and earn a reward
OpenAI’s public Safety Bug Bounty program focuses on AI abuse and safety risks across its products. The goal is to support safe and secure systems and reduce the risk of misuse that could lead to harm.
This program complements the Security Bug Bounty. It accepts reports of abuse and safety risks that do not meet the criteria for a security vulnerability. Submissions are reviewed by teams from both programs based on scope and ownership.
https://www.helpnetsecurity.com/2026/03/27/openai-safety-bug-bounty-program/
Tomi Engdahl says:
https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says
Number of AI chatbots ignoring human instructions increasing, study says
Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission
Tomi Engdahl says:
I connected 4 services to Claude that have nothing to do with coding, and it’s the most underrated way to use it
https://www.xda-developers.com/connected-services-claude-nothing-with-coding-its-underrated-way-to-use/
Tomi Engdahl says:
GitHub reversed its decision: developers’ data will end up feeding AI after all
Justus Vento, 26.3.2026 12:04 (updated 26.3.2026 12:04) | AI, Software development
The privacy policy change applies to nearly all subscription tiers.
https://www.tivi.fi/uutiset/a/24b0422c-8bbd-4a10-bcf0-6281f727621b
Tomi Engdahl says:
https://medium.com/javarevisited/10-books-every-ai-engineer-should-read-in-2026-822892b870ed
Tomi Engdahl says:
The Complete Claude Architect Study Guide (With Code and Tutor Prompts)
Everything you need to build, configure, and ship production agents on Claude’s stack.
https://medium.com/data-science-collective/the-complete-claude-architect-study-guide-with-code-and-tutor-prompts-01f524e95c92
Tomi Engdahl says:
https://thenewstack.io/gpt-54-nano-mini/
Tomi Engdahl says:
https://techxplore.com/news/2026-03-llms-creativity-ai-responses-variety.html
Tomi Engdahl says:
https://www.anthropic.com/engineering/claude-code-auto-mode
Tomi Engdahl says:
https://www.infoworld.com/article/4149535/new-jetbrains-platform-manages-ai-coding-agents.html
Tomi Engdahl says:
What To Vibe Code First To Buy Back Hours Every Week
https://www.forbes.com/sites/jodiecook/2026/03/24/what-to-vibe-code-first-to-buy-back-hours-every-week/
Tomi Engdahl says:
Cloudflare’s new Dynamic Workers ditch containers to run AI agent code 100x faster
https://venturebeat.com/infrastructure/cloudflares-new-dynamic-workers-ditch-containers-to-run-ai-agent-code-100x
Tomi Engdahl says:
https://uxplanet.org/claude-code-cheat-sheet-190e023fe7b0
Tomi Engdahl says:
Google Stitch for Product Designers
Is Google Stitch a true revolution in UI design space or a dead end?
https://uxplanet.org/google-stitch-for-product-designers-5b46b56c7c8c
Tomi Engdahl says:
https://devblogs.microsoft.com/all-things-azure/agentic-platform-engineering-with-github-copilot/
Tomi Engdahl says:
https://www.marktechpost.com/2026/03/22/meet-gitagent-the-docker-for-ai-agents-that-is-finally-solving-the-fragmentation-between-langchain-autogen-and-claude-code/
Tomi Engdahl says:
Artificial general intelligence has already been achieved, Nvidia’s CEO says
https://tekniikanmaailma.fi/yleinen-tekoaly-on-jo-saavutettu-nvidian-paajohtaja-sanoo/
Tomi Engdahl says:
https://thenewstack.io/api-mcp-agent-integration/
Tomi Engdahl says:
https://www.vincit.com/grandone-edilexai
Tomi Engdahl says:
OpenShell is the safe, private runtime for autonomous AI agents. It provides sandboxed execution environments that protect your data, credentials, and infrastructure — governed by declarative YAML policies that prevent unauthorized file access, data exfiltration, and uncontrolled network activity.
https://github.com/NVIDIA/OpenShell
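As a rough illustration of what a declarative sandbox policy enforces, here is a hypothetical policy check sketched in Python. The policy fields and their semantics are assumptions made for illustration, not OpenShell's actual schema; consult the project for the real YAML format.

```python
# Hypothetical sketch of enforcing a declarative agent-sandbox policy:
# file reads and network connections are checked against allow/deny
# patterns before the agent's action is permitted.
import fnmatch

policy = {
    "allow_read": ["/workspace/*"],  # file paths the agent may read
    "deny_network": ["*"],           # block all outbound hosts
}

def may_read(path):
    return any(fnmatch.fnmatch(path, pat) for pat in policy["allow_read"])

def may_connect(host):
    return not any(fnmatch.fnmatch(host, pat) for pat in policy["deny_network"])
```

The point of the declarative style is that the allowed surface is written down up front and enforced by the runtime, rather than trusted to the agent's own judgment.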
Tomi Engdahl says:
https://uxplanet.org/7-advanced-claude-code-slash-commands-db4c9be3e38c
Tomi Engdahl says:
https://www.twoday.com/blog/the-ai-driven-companies-of-the-future-from-feature-to-foundation
Tomi Engdahl says:
I used Claude Code, Google Antigravity and OpenAI Codex to develop an app, and found only one worth using
https://www.xda-developers.com/i-used-claude-code-google-antigravity-and-openai-codex-to-develop-an-app-and-found-only-one-worth-using/
Tomi Engdahl says:
How AI is changing software
https://www.twoday.com/blog/how-ai-is-changing-software
Tomi Engdahl says:
https://thenewstack.io/llm-d-cncf-kubernetes-inference/
Tomi Engdahl says:
Why Vibe Coders Still Need To Think Like Software Engineers
https://www.forbes.com/sites/bernardmarr/2026/03/20/why-vibe-coders-still-need-to-think-like-software-engineers/
Tomi Engdahl says:
Stop using CLAUDE.md; here’s what actually works for AI-assisted development
https://www.xda-developers.com/claude-md-helping-your-projects-is-myth/
Tomi Engdahl says:
https://github.com/docker/docker-agent
Tomi Engdahl says:
https://www.producthunt.com/products/google-ai-studio-8
Tomi Engdahl says:
https://www.anthropic.com/research/long-running-Claude
Tomi Engdahl says:
Newsletters
College students are writing with AI – but a pilot study finds they’re not simply letting it write for them
https://theconversation.com/college-students-are-writing-with-ai-but-a-pilot-study-finds-theyre-not-simply-letting-it-write-for-them-276856
Tomi Engdahl says:
“The question is no longer whether AI will displace significant numbers of workers.”
Outbrained
If Your Job Involves Using Your Brain, You May Be in Big Trouble, Tufts Report Finds
It’s a game of survival of the fittest minds.
https://futurism.com/artificial-intelligence/jobs-brain-ai-tufts-report?fbclid=IwdGRjcAQ4U-ljbGNrBDhTyGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHlH6ZCK53UoRpCkVlFsj3A0M6H7AOJ8Q-sPFuz39yeMKQOJ4FfVEDoY2hxCQ_aem_0v_Z7BLjU9H2EHvsXmyF9g
As fears over an AI-driven jobs apocalypse continue to grow, researchers are trying to get a better sense of which occupations will be most — and least — affected.
Tech leaders continue to cite AI as the reason behind widespread layoffs, with economists warning that the changing dynamics could be devastating in the longer run.
Now, researchers at Tufts University have released what they claim to be the “first-of-its-kind data-driven framework,” dubbed the American AI Jobs Risk Index, to map out which occupations are the most vulnerable to AI — and geographically where those effects are likely to be felt the hardest.
If they hold up, the findings are cause for alarm. The data suggests that around 9.3 million American jobs are “at risk of displacement in the next two to five years.” Some 4.9 million of those workers fall into 33 “tipping point” occupations at the highest risk of AI displacement.
Anywhere from $200 billion to $1.5 trillion of combined household incomes could be on the chopping block, a potential jolt that could have vast implications.
The researchers’ take: only those who can leverage existing expertise and are ready to adopt the tech to gain an advantage over others will survive.
“We already know that AI is not just automating routine tasks — it is moving up, targeting the cognitive and analytical work that defines high-skill, high-wage careers,” said Tufts University dean of global business and economist Bhaskar Chakravorti in a statement. “The jobs of the future will be secured by those with a combination of subject-matter expertise, critical-thinking skills for human judgment, and knowledge of AI and how to use it.”
The index, which assigns an “exposure score” to just shy of 800 different occupations, echoes previous research into the effects of AI on employment. At highest risk — perhaps unsurprisingly — are web and digital interface designers, web developers, database architects, computer programmers, data scientists, and financial risk specialists.
On the other end of the spectrum are a host of blue collar occupations, including roofers, miners, machine operators, meat packers, welders, stonemasons, and plasterers. Other least exposed occupations include surgical assistants, massage therapists, and fast food counter workers.
The researchers concluded that many of the lowest-paying jobs happen to be the least exposed.
“Physical, manual, and variable-condition work (roofers, orderlies, dishwashers) face less than one percent displacement,” they wrote. “The occupations AI cannot touch are largely those the economy has always undervalued.”
We’ve seen other investigations into the labor market impacts of AI, like one by Anthropic that was published earlier this month, with a similar takeaway.
“Our index makes clear that the question is no longer whether AI will displace significant numbers of workers, but in which states and cities, how fast, and whether we are prepared by taking pre-emptive action,” said Chakravorti. “The geography of this disruption has real political consequences: the states and metros most at risk are already the most active in seeking AI regulation — and the federal government is telling them to stand down.”