Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed with AI tools, and the final version was hand-edited into this blog text:
1. Generative AI Continues to Mature
Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.
2. AI Agents Move From Tools to Autonomous Workers
Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
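The agentic shift described above boils down to a loop: a model proposes the next action, a runtime executes it as a tool call, and the result feeds back in until the goal is met. Below is a minimal sketch of that loop; the tool names, the planner logic, and the whole scenario are hypothetical stand-ins for an actual LLM-driven system, not any real agent framework's API.

```python
# Minimal agentic loop: a planner (stubbed here in place of an LLM call)
# picks a tool, the runtime executes it, and the result is appended to
# history until the planner signals completion.

def search_flights(query: str) -> str:          # hypothetical tool
    return f"3 flights found for '{query}'"

def book_flight(flight_id: str) -> str:         # hypothetical tool
    return f"booked {flight_id}"

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

def stub_planner(goal: str, history: list) -> dict:
    """Stands in for a model call: returns the next action as structured data."""
    if not history:
        return {"tool": "search_flights", "arg": goal}
    if len(history) == 1:
        return {"tool": "book_flight", "arg": "FL-001"}
    return {"tool": None, "arg": None}           # done

def run_agent(goal: str) -> list:
    history = []
    while True:
        step = stub_planner(goal, history)
        if step["tool"] is None:
            return history
        result = TOOLS[step["tool"]](step["arg"])
        history.append((step["tool"], result))

print(run_agent("Helsinki to Tokyo in May"))
```

The point of the sketch is the control flow, not the tools: the model never touches the apps directly, it only emits structured actions that the runtime validates and executes, which is also where enterprise guardrails would sit.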
3. Smaller, Efficient & Domain-Specific Models
Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate in many enterprise applications. These models can be more accurate, easier to keep compliant, and more cost-efficient than general-purpose models.
4. AI Embedded Everywhere
AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.
5. AI Infrastructure Evolves: Inference & Efficiency Focus
More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
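Much of the inference cost/latency trade-off mentioned above comes down to batching: larger batches use the accelerator more efficiently but make each request wait longer. The toy model below illustrates that tension; all timing numbers are invented assumptions for illustration, not measurements of any real system.

```python
# Toy model of the inference batching trade-off: larger batches raise
# throughput but increase per-request latency. Numbers are assumptions.

def batch_stats(batch_size: int,
                fixed_overhead_ms: float = 20.0,
                per_request_ms: float = 5.0):
    """Latency of one batched forward pass and the resulting throughput."""
    latency_ms = fixed_overhead_ms + per_request_ms * batch_size
    throughput_rps = batch_size / (latency_ms / 1000.0)
    return latency_ms, throughput_rps

for bs in (1, 8, 32):
    lat, thr = batch_stats(bs)
    print(f"batch={bs:3d}  latency={lat:6.1f} ms  throughput={thr:7.1f} req/s")
```

Even in this crude model, throughput roughly quadruples from batch 1 to batch 32 while latency grows sevenfold, which is why production inference stacks tune batch size per workload rather than maximizing either metric alone.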
6. AI in Healthcare, Research, and Sustainability
AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.
7. Security, Ethics & Governance Become Critical
With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.
8. Multimodal AI Goes Mainstream
AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.
9. On-Device and Edge AI Growth
Extending the embedding trend above, more AI inference will run directly on phones, PCs, and embedded hardware rather than in the cloud, cutting latency and keeping sensitive data local.
10. New Roles: AI Manager & Human-Agent Collaboration
Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.
Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/ “7 AI Trends to Look for in 2026”
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/ “10 Generative AI Trends In 2026 That Will Transform Work And Life”
[3]: https://millipixels.com/blog/ai-trends-2026 “AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels”
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026 “Digital Regenesys | Top 10 AI Trends for 2026”
[5]: https://www.n-ix.com/ai-trends/ “7 AI trends to watch in 2026 – N-iX”
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/ “Microsoft unveils 7 AI trends for 2026 – Source Asia”
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026 “7 Generative AI Trends to Watch In 2026”
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/ “3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool”
[9]: https://www.reddit.com/r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/ “I read Google Cloud’s “AI Agent Trends 2026” report, here are 10 takeaways that actually matter”
1,502 Comments
Tomi Engdahl says:
https://devblogs.microsoft.com/microsoft365dev/mcp-apps-now-available-in-copilot-chat/
Tomi Engdahl says:
ChatGPT can do Math and write code but it cannot set a timer, Sam Altman says it will take time
A video is going viral showing ChatGPT failing to track time and instead making up a result. As the video gained traction, OpenAI CEO Sam Altman acknowledged the flaw, stating that this is a ‘known limitation’ which may take up to a year to fix.
https://www.indiatoday.in/technology/news/story/chatgpt-can-do-math-and-write-code-but-it-cannot-set-a-timer-sam-altman-says-it-will-take-time-2893221-2026-04-08#google_vignette
Tomi Engdahl says:
Decision-Making by Consensus Doesn’t Work in the AI Era
https://hbr.org/2026/04/decision-making-by-consensus-doesnt-work-in-the-ai-era
AI is bringing about an organizational reckoning. While most leaders probably agree that their organizations will need to adapt, too few are willing to admit that this will require them to abandon one of the most pervasive management principles of the past half-century: decision-making by consensus. The companies that survive the next decade will not be those with the best algorithms or the most data. They will be those that have the courage to abandon how decisions get made.
Tomi Engdahl says:
Goodbye, Llama? Meta launches new proprietary AI model Muse Spark — first since Superintelligence Labs’ formation
https://venturebeat.com/technology/goodbye-llama-meta-launches-new-proprietary-ai-model-muse-spark-first-since
Tomi Engdahl says:
https://www.twoday.com/blog/how-ai-is-changing-software
Tomi Engdahl says:
What Quantum AI Actually Means
https://thequantuminsider.com/2026/03/30/what-quantum-ai-actually-means/
Insider Brief
Quantum AI refers to the intersection of quantum computing and artificial intelligence, encompassing both the use of quantum computers to accelerate AI workloads and the application of AI techniques to improve quantum hardware and algorithms.
The relationship is symbiotic rather than competitive: AI already plays a critical role in calibrating quantum systems, mitigating errors, and optimizing quantum circuits, while quantum computing offers potential speedups for specific AI bottlenecks like optimization and sampling.
Major technology companies including IBM, Google, Microsoft, and Amazon are exploring quantum AI applications, alongside specialized firms like Quantinuum, IonQ, and Zapata AI, though most practical applications remain years away from deployment.
Despite widespread misconceptions, quantum computing will not replace classical AI systems but may serve as a specialized co-processor for narrow tasks where quantum algorithms offer exponential advantages over classical approaches.
Tomi Engdahl says:
Researchers train living rat neurons to perform real-time AI computations — experiments could pave the way for new brain-machine interfaces
https://www.tomshardware.com/tech-industry/researchers-train-living-rat-neurons-to-perform-real-time-ml-computations
Tomi Engdahl says:
Changing the rules of the game again: Gemini and Google Translate join forces
Google’s new Gemini-powered tech enables real-time conversation translation directly in headphones, preserving tone and emphasis. Now rolling out to iPhone after launching on Android.
https://www.jpost.com/consumerism/article-891976
Tomi Engdahl says:
https://thenewstack.io/cursor-3-demotes-ide/
Tomi Engdahl says:
Microsoft says Copilot is for entertainment purposes only, not serious use — firm pushing AI hard to consumers and businesses tells users not to rely on it for important advice
News | By Jowi Morales, last updated April 3, 2026
These might be boilerplate disclaimers, but they kind of contradict the company’s ads and marketing.
https://www.tomshardware.com/tech-industry/artificial-intelligence/microsoft-says-copilot-is-for-entertainment-purposes-only-not-serious-use-firm-pushing-ai-hard-to-consumers-tells-users-not-to-rely-on-it-for-important-advice
Tomi Engdahl says:
This new chip survives 1300°F (700°C) and could change AI forever
A heat-proof memory device that thrives at 700°C could transform everything from space exploration to AI computing.
https://www.sciencedaily.com/releases/2026/04/260406192904.htm
Tomi Engdahl says:
AI agents and agentic AI vs. traditional automation
https://searchengineland.com/guide/ai-agents-and-agentic-ai-vs-traditional-automation
Tomi Engdahl says:
CORAL: Towards Autonomous Multi-Agent Evolution for Open-Ended Discovery
https://huggingface.co/papers/2604.01658
Tomi Engdahl says:
AI breakthrough cuts energy use by 100x while boosting accuracy
A smarter, logic-driven AI could slash energy use by 100x—and outperform today’s most powerful systems.
https://www.sciencedaily.com/releases/2026/04/260405003952.htm
AI is consuming staggering amounts of energy—already over 10% of U.S. electricity—and the demand is only accelerating. Now, researchers have unveiled a radically more efficient approach that could slash AI energy use by up to 100× while actually improving accuracy. By combining neural networks with human-like symbolic reasoning, their system helps robots think more logically instead of relying on brute-force trial and error.
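The "neural networks plus symbolic reasoning" approach in the excerpt can be sketched very roughly: a perception layer emits noisy labels with confidences, and a small symbolic layer applies hard logical rules on top instead of brute-force search. The perception outputs, rules, and thresholds below are all illustrative assumptions, not the researchers' actual system.

```python
# Neurosymbolic sketch: thresholded (stubbed) neural outputs become facts,
# then classic forward chaining derives conclusions via explicit rules.

perception = [  # stand-in for neural network outputs: (object, confidence)
    ("cup", 0.92), ("table", 0.88), ("cat", 0.30),
]

facts = {obj for obj, conf in perception if conf >= 0.5}  # threshold the net

RULES = [
    # (premises, conclusion): if all premises are known facts, infer conclusion
    ({"cup", "table"}, "cup_on_table_possible"),
    ({"cat"}, "avoid_knocking_cup"),
]

def forward_chain(facts: set, rules) -> set:
    """Apply rules repeatedly until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, RULES))
```

The claimed efficiency gain comes from exactly this division of labor: the expensive network runs once for perception, and the cheap symbolic layer handles the logical steps that a pure neural system would otherwise learn by trial and error.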
Tomi Engdahl says:
https://thenewstack.io/mcp-maintainers-enterprise-roadmap/
Tomi Engdahl says:
Claude Code Architecture Study
https://github.com/sanbuphy/learn-coding-agent
Tomi Engdahl says:
https://towardsdatascience.com/how-to-run-claude-code-agents-in-parallel/
Tomi Engdahl says:
Chat, Who’s Gonna Win the Election?
Foolish Pollsters Are Now Just Asking AI What Voters Would Say in Response to Questions and Publishing It at Face Value
“Pure fictions are on the brink of being treated as scientific and political knowledge.”
https://futurism.com/artificial-intelligence/ai-polls-silicon-sampling
Tomi Engdahl says:
https://www.reuters.com/world/us/suspect-arrested-after-molotov-cocktail-attack-openai-ceo-sam-altmans-home-2026-04-10/
Tomi Engdahl says:
OpenAI Says It’s Already Made $100 Million by Stuffing ChatGPT With Ads
A handsome chunk of change.
https://futurism.com/artificial-intelligence/openai-money-chatgpt-ads
Huzzah! A mass psychosis inducing machine is making money out the wazoo by bombarding users with highly-targeted corporate messages.
According to a new Axios scoop, OpenAI has already generated $100 million in annual recurring revenue from stuffing advertisements into ChatGPT in just two months, suggesting that its big bet on leveraging its users’ deeply personal conversations to offer hyper-effective commercials is paying off.
Tomi Engdahl says:
OpenAI halts UK data centre project over energy costs and red tape
The plans are being shelved until the ‘right conditions’ allow
https://www.independent.co.uk/news/business/openai-stargate-newcastle-data-centre-b2955087.html
Tomi Engdahl says:
Cubicle Coup
There’s a Mass Rebellion Against AI in the Workplace
“AI didn’t deliver.”
https://futurism.com/future-society/ai-enterprise-workers-survey
With consumers roundly panning generative AI slop wherever it’s found, tech companies have set their sights on a more captive audience as a source of revenue: the world’s boardrooms and cubicle farms. But the office, it turns out, is also failing to provide fertile ground for its overtures.
A new survey by the SAP-owned software company WalkMe of 3,750 executives and employees found a major discontent growing in large companies across the globe. According to the findings, 54 percent of workers reported avoiding their company’s in-house AI tools in order to complete tasks themselves. A full third of workers reported never using AI at all.
The survey also identified a massive rift between workers and their bosses when it comes to AI. While 61 percent of executives surveyed said they trust the tech for complex, “business-critical” decisions, only nine percent of workers said the same.
A further 88 percent of corporate bigwigs expressed confidence that the AI tools they forced on their workers were adequate — while only 21 percent of workers agreed.
That cognitive disconnect isn’t just a sentiment issue. The WalkMe survey also found that, though 81 percent of executives think their AI deployments have “significantly improved productivity,” their workers are actually wasting eight hours per week cleaning up after AI’s messes, which is the equivalent of 51 work days a year.
Those findings mark a drastic uptick from last year’s WalkMe survey, which showed workers were losing 36 days a year dealing with AI friction.
“AI didn’t deliver,” Johns Hopkins economist Steve Hanke told Fortune about the findings. “Welcome to the real world. Forget the AI bubble. You know, it didn’t deliver. You look at all the surveys and yeah, everybody’s using it a little bit, but you dig into it and it hasn’t done much.”
“Productivity, by the way, it was weak,” Hanke continued. “If AI delivered, productivity would be way up. You listen to these Silicon Valley guys and they say we’re gonna have GDP going to 5 percent [or] 6 percent. Productivity is gonna go up to six. It’s just not happening.”
The results are just more fuel for the AI skeptics’ already-towering bonfire. In August of last year, an often-cited MIT study found that 95 percent of AI deployments in the workplace had failed to generate the expected return on investment.
Tomi Engdahl says:
Clock In
AI Expert Says It’s Time to Stop Freaking Out About AI Taking Our Jobs
That’s a relief.
https://futurism.com/artificial-intelligence/ai-jobs-automation-expert
Tomi Engdahl says:
The quest to build Meta’s 5GW Hyperion data center (the world’s largest ever) is pushing engineers to rethink compute, cooling, and network technology. https://buff.ly/VdJFO5m
Tomi Engdahl says:
Zoom In
Gen Z Sabotaging AI at Work So It Won’t Take Their Job
They’re fed up.
https://futurism.com/artificial-intelligence/zoomers-ai-sabotage
Is artificial intelligence taking our jobs in a sweeping overhaul of the productive economy as we know it? It’s a burning question with no definitive answers, but some workers aren’t waiting around to find out — and fighting back.
A new report by the AI company Writer and the research firm Workplace Intelligence found that a massive portion of workers across the US, UK, and Europe are intentionally trying to sabotage their bosses’ AI initiatives.
The firms surveyed 1,200 “knowledge workers” — a fluffy term for wage-earning office workers — and 1,200 business executives. It found that a whopping 29 percent of workers admitted to sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools, or intentionally using low-quality AI output in their work without fixing it.
That nearly a third of all workers are actively trying to wreck their company’s AI systems speaks volumes. But the concentration among Gen Z workers is particularly stunning: 44 percent of all zoomers admitted to sabotaging their in-house AI deployments.
C-suite executives, meanwhile, are facing the heavy burden of squeezing blood from AI’s proverbial stone. 72 percent of all surveyed execs said their company’s AI strategy is causing them stress or anxiety, 32 percent of whom characterize their stress as “high” or “crippling.”
There’s also a major rift between how much each group uses AI. Though only 28 percent of wage-earning employees said they used AI for over two hours a day, more than half of all executives — 64 percent — said the same. Some of them live with a chatbot window open: nearly one in five executives admitted logging four or five hours a day with an AI model, while one in 25 use AI in excess of six hours a day.
When it comes to sabotage, the corporate report suggests that “organizations can address some of these concerns by investing in higher-quality AI platforms and partners.”
Whether or not “honesty” is enough to bring the skeptics on board remains to be seen. CEOs won’t stop squealing about the windfall from AI automation, so it’s unclear why any worker, Gen Z or otherwise, should sit back and take it. Getting automated out of a job is a catastrophic financial event which delays homeownership, lowers lifetime earnings, and even impacts a person’s chances of getting married.
With close to zero control over how their firms are run, workers arguably have very rational reasons to resist, especially if the AI revolution is as close as tech-happy executives claim.
Tomi Engdahl says:
Brain Drain
Gen Z Is Using AI to Have Difficult Relationship Conversations, and the Results Are Massively Cringe
“Oftentimes they’re using it as a way to overcompensate for the fact that they don’t really know how to truly interact with others.”
https://futurism.com/artificial-intelligence/ai-chatbot-social-offloading
Researchers, teachers, and mental health professionals alike have spent the past few years reeling as teens and young adults exported their brains to AI chatbots — so it should come as no surprise they’re now using the tech as a crutch to sidestep hard conversations they don’t want to have.
New reporting by CNN details the troubling rise of young people using AI models like ChatGPT to step in for them during life’s delicate moments.
One Yale University student identified as Patrick, for example, used ChatGPT to reject a girl he had met through some mutual friends. “Hey Emily! I hope your half-marathon went well — I’m sure you crushed it,” Patrick began.
The ensuing text, six paragraphs long and chock full of ChatGPTisms, may be the perfect distillation of 21st century cringe.
In other words, social offloading is exactly what Patrick did when he asked ChatGPT to reason through the rejection for him.
Tomi Engdahl says:
Executive Dysfunction
Study Finds That Execs Are Already Outsourcing Their Thinking to AI
How ironic.
https://futurism.com/artificial-intelligence/ai-executive-thinking-survey
The headlines warning about AI melting our brains usually point to students or workers, which — fair enough. But there’s a much more ironic victim hiding in the corner office: the very business executives who unleashed AI on us in the first place.
A recent study conducted by market research agency 3Gem and flagged by The Register found that business leaders in the United Kingdom seem to be outsourcing a huge amount of their cognitive and emotional labor to their AI chatbots.
Tomi Engdahl says:
Bragging Rights
OpenAI’s Latest Thing It’s Bragging About Is Actually Kind of Sad
This compute measuring contest is just pitiful.
https://futurism.com/artificial-intelligence/openai-bragging-compute-sad
Despite promising to allocate hundreds of billions of dollars for the build out of enormous data centers, the AI industry has struggled to keep up with its lofty ambitions.
According to recent reporting by Bloomberg and Ed Zitron, roughly half the data centers slated to open in the United States are either being delayed or canceled outright. Massive electric component shortages and soaring costs have slowed the infrastructure boom to a trickle, frustrating tech leaders.
In the resulting morass, every AI player has been making big claims to try to keep the hype train going. But OpenAI in particular has turned braggadocio into an art form, and its latest boast is ringing a bit sad: in a memo obtained by Bloomberg, the company boasted that it’s planning to have 30 gigawatts worth of compute — enough to power over 22 million US households — by 2030, while its rival Anthropic is only planning for seven to eight gigawatts by the end of 2027.
To put those numbers into perspective, OpenAI had just 1.9 gigawatts of computing capacity in 2025, while Anthropic had 1.4 gigawatts.
Tomi Engdahl says:
Unpacking AI security in 2026 from experimentation to the agentic era
Cut through the noise and understand the real risks, responsibilities, and responses shaping enterprise AI today.
https://www.theregister.com/2026/04/10/unpacking_ai_security_2026/
The stakes for 2026
For IT and security leaders, the margin for error has disappeared. A single vulnerability in an AI pipeline can now lead to automated data exfiltration, systemic reputational damage, and heavy regulatory penalties. Understanding how to build a secure by design AI infrastructure has become a business imperative.
Key takeaways:
The rise of agentic risks: How to secure autonomous agents that move more traffic and carry more risk than simple chatbots.
Securing the AI supply chain: Why protecting your data is not enough when third party plugins and datasets are compromised.
The regulatory pivot: Navigating the 2026 shift from non-binding safety frameworks to hard global AI compliance laws.
Predictive defense: Using advanced visibility to detect anomalies and deploy countermeasures before a threat is unleashed.
Putting trust into practice: Practical steps to ensure your AI governance is functional rather than just theoretical.
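The "predictive defense" takeaway above is, in its simplest form, baselining and anomaly detection over agent traffic. The sketch below flags an out-of-profile request rate with a plain z-score; the traffic numbers and the 3-sigma threshold are illustrative assumptions, far simpler than the "advanced visibility" tooling the article has in mind.

```python
# Predictive-defense sketch: learn a baseline of agent requests/minute,
# then flag observations that deviate by more than a z-score threshold.

import statistics

baseline_rpm = [42, 40, 45, 43, 41, 44, 42, 43]   # normal agent traffic
mean = statistics.mean(baseline_rpm)
stdev = statistics.stdev(baseline_rpm)

def is_anomalous(observed_rpm: float, z_threshold: float = 3.0) -> bool:
    """True when the observation sits more than z_threshold sigmas from baseline."""
    return abs(observed_rpm - mean) / stdev > z_threshold

print(is_anomalous(44))    # within the normal band
print(is_anomalous(400))   # burst consistent with automated exfiltration
```

Real deployments would baseline per agent and per destination and act on the flag (throttle, revoke credentials, alert) before data leaves, which is the "countermeasures before a threat is unleashed" part of the takeaway.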
Tomi Engdahl says:
Making AI crawlers squirm
AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
Attackers explain how an anti-spam defense became an AI weapon.
https://arstechnica.com/tech-policy/2025/01/ai-haters-build-tarpits-to-trap-and-trick-ai-scrapers-that-ignore-robots-txt/
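The tarpit trick in the article works by serving crawlers an endless, procedurally generated maze of links, slowly. Here is a toy sketch of the page-generation half, assuming a Nepenthes-style design; it is not the actual tool's code, and the link scheme is invented for illustration.

```python
# Tarpit sketch: every path deterministically yields a page of onward links
# back into the maze, so a crawler that ignores robots.txt never runs out
# of URLs to fetch.

import hashlib

def maze_page(path: str, n_links: int = 5) -> str:
    """Derive n deterministic onward links from the requested path."""
    links = []
    for i in range(n_links):
        h = hashlib.sha256(f"{path}/{i}".encode()).hexdigest()[:12]
        links.append(f'<a href="/maze/{h}">{h}</a>')
    return "<html><body>" + "\n".join(links) + "</body></html>"

# A real tarpit would wire this into a request handler and deliberately
# trickle the response out slowly (e.g. sleeping between writes) so each
# trapped crawler connection wastes as much time as possible.
print(maze_page("/maze/start"))
```

Determinism matters: the same path always yields the same links, so the maze needs no storage, yet it is effectively infinite from the crawler's point of view.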
Tomi Engdahl says:
Lightpanda Browser
The headless browser built from scratch for AI agents and automation.
Not a Chromium fork. Not a WebKit patch. A new browser, written in Zig.
https://github.com/lightpanda-io/browser
Tomi Engdahl says:
Agents don’t know what good looks like. And that’s exactly the problem.
A reaction to the Neal Ford and Sam Newman fireside chat on agentic AI and software architecture
https://www.oreilly.com/radar/agents-dont-know-what-good-looks-like-and-thats-exactly-the-problem/
Tomi Engdahl says:
https://www.databricks.com/blog/memory-scaling-ai-agents
Tomi Engdahl says:
https://thenewstack.io/anthropic-takes-claude-cowork-out-of-preview-and-straight-into-the-enterprise/
Tomi Engdahl says:
GitHub Copilot CLI adds Rubber Duck review agent
News | Apr 7, 2026
Rubber Duck uses a second model from a different AI family to evaluate the primary agent’s plans, question assumptions, and raise concerns.
https://www.infoworld.com/article/4155289/github-copilot-cli-adds-rubber-duck-review-agent.html
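The cross-model review idea behind Rubber Duck can be sketched as a second model critiquing the primary agent's plan before anything executes. Both model calls below are stubs with invented names and heuristics; this is an illustration of the pattern, not GitHub Copilot CLI's actual implementation or API.

```python
# Cross-model review sketch: a (stubbed) reviewer from a "different family"
# inspects the primary agent's plan and raises concerns before execution.

def primary_agent_plan(task: str) -> list:
    """Stub for the primary agent: returns a step-by-step plan."""
    return [f"edit config for {task}", "delete old test suite", "deploy"]

def reviewer_model(plan: list) -> list:
    """Stub reviewer: flags steps matching simple risk heuristics."""
    risky_words = ("delete", "deploy", "drop")
    return [step for step in plan
            if any(w in step for w in risky_words)]

plan = primary_agent_plan("rate limiting")
concerns = reviewer_model(plan)
if concerns:
    print("Reviewer raised concerns:", concerns)
```

Using a model from a different family for the reviewer is the interesting design choice: two models trained on different data are less likely to share the same blind spots, so one can question assumptions the other silently makes.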
Tomi Engdahl says:
27 questions to ask before choosing an LLM
Feature | Apr 6, 2026
From cost and performance specs to advanced capabilities and quirks, answers to these questions will help you determine the right model for your use case.
https://www.infoworld.com/article/4152738/27-questions-to-ask-before-choosing-an-llm.html
Tomi Engdahl says:
From GPT-2 to Claude Mythos: The return of AI models deemed ‘too dangerous to release’
https://the-decoder.com/from-gpt-2-to-claude-mythos-the-return-of-ai-models-deemed-too-dangerous-to-release/
Seven years ago, OpenAI declared its language model GPT-2 “too dangerous to release.” The industry rolled its eyes. Now Anthropic is repeating the move with Claude Mythos Preview – but this time there’s real evidence on the table: thousands of vulnerabilities in operating systems and browsers, found by an AI that barely any human could review.
In February 2019, OpenAI catapulted itself into public consciousness when it unveiled a language model that could generate fake news so convincingly that the company decided not to release it. Parts of the AI research community considered it a wise precaution; others dismissed it as a PR stunt. OpenAI withheld the full 1.5-billion-parameter model, citing remarkable progress in text generation and concerns about potential misuse.
Tomi Engdahl says:
The Roadmap to Mastering Agentic AI Design Patterns
https://machinelearningmastery.com/the-roadmap-to-mastering-agentic-ai-design-patterns/
Introduction
Most agentic AI systems are built pattern by pattern, decision by decision, without any governing framework for how the agent should reason, act, recover from errors, or hand off work to other agents. Without structure, agent behavior is hard to predict, harder to debug, and nearly impossible to improve systematically. The problem compounds in multi-step workflows, where a bad decision early in a run affects every step that follows.
Agentic design patterns are reusable approaches for recurring problems in agentic system design. They help establish how an agent reasons before acting, how it evaluates its own outputs, how it selects and calls tools, how multiple agents divide responsibility, and when a human needs to be in the loop. Choosing the right pattern for a given task is what makes agent behavior predictable, debuggable, and composable as requirements grow.
This article offers a practical roadmap to understanding agentic AI design patterns. It explains why pattern selection is an architectural decision and then works through the core agentic design patterns used in production today. For each, it covers when the pattern fits, what trade-offs it carries, and how patterns layer together in real systems.
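One of the core patterns the roadmap covers, self-evaluation of outputs, is often called "reflection": draft, critique against explicit criteria, revise, repeat under a budget. The sketch below stubs both the drafting and the critique steps; the task, criteria, and helper names are invented for illustration and do not come from the article.

```python
# Reflection-pattern sketch: generate -> critique -> revise until the
# critique passes or the iteration budget runs out. All steps are stubs.

def draft(task: str, feedback):
    """Stub generator: produces text, incorporating feedback if given."""
    text = f"Summary of {task}."
    if feedback:
        text += " Includes sources."          # pretend the revision fixed it
    return text

def critique(text: str):
    """Stub evaluator: return a complaint string, or None if it passes."""
    if "sources" not in text.lower():
        return "missing sources"
    return None

def reflect_loop(task: str, max_iters: int = 3):
    feedback = None
    for i in range(1, max_iters + 1):
        out = draft(task, feedback)
        feedback = critique(out)
        if feedback is None:
            return out, i                     # passed after i iterations
    return out, max_iters                     # budget exhausted, best effort

result, iters = reflect_loop("Q3 incident report")
print(result, "| iterations:", iters)
```

The iteration budget is what keeps the pattern predictable and debuggable, matching the roadmap's point: an agent that critiques itself forever is just as hard to operate as one that never checks its work.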
Tomi Engdahl says:
Dark Clouds
Why Does It Suddenly Feel Like OpenAI Is Melting Down Into Disaster?
It’s been a bruising year so far.
https://futurism.com/artificial-intelligence/openai-melting-down-disaster
OpenAI is gearing up for a potential IPO later this year at a staggering valuation of up to $1 trillion — a meteoric rise from a mere $29 billion in January 2023, months after launching ChatGPT.
Just under three and a half years after its watershed moment, OpenAI seems almost unrecognizable. This year, in particular, has been a rude awakening for the Sam Altman-led company, with a string of bad news and controversies raising some hard-to-ignore questions about its long-term viability and ability to keep up with increasingly steep competition.
Kicking off a bruising year was OpenAI diving in to snap up a lucrative Department of Defense contract in late February after Anthropic walked away from the table. The latter company’s CEO, Dario Amodei, made it clear that its AI models shouldn’t be used for mass surveillance of Americans and autonomous weapon systems — a principled stand that the Pentagon refused to agree to.
It was a PR disaster on OpenAI’s part. Altman later admitted that the move “looked opportunistic and sloppy,” but the damage had already been done. The eyebrow-raising deal triggered a mass exodus with uninstall rates of ChatGPT spiking overnight and making Anthropic look incredible by comparison — at the exact moment that its models have been pulling decisively ahead among programmers.
Less than a month later, OpenAI announced it was killing its text-to-video AI app, Sora, an “unholy abomination” that was riddled with copyright infringing material and mindless AI slop. As the Wall Street Journal reported at the time, the company was desperately looking to free up computing resources to power its next-generation models — another implicit admission that Anthropic is starting to eat its lunch.
Meanwhile, OpenAI executives are racing to contain a financial bloodbath. While the company claims it will reach $100 billion in just advertising revenue by 2030, its current financial predicament should have anybody take that figure with a massive grain of salt. Spending is still vastly outpacing the company’s relatively meager revenue, despite OpenAI revising its $1.4 trillion infrastructure commitments through 2030 to $600 billion in February — less than half of what it had originally planned to spend.
Tomi Engdahl says:
Mutual AI Destruction
OpenAI Staffers Horrified When Senior Leadership Hatched “Insane” Plan to Pit World Governments Against Each Other
“It worked for nuclear weapons, why not AI?”
https://futurism.com/artificial-intelligence/openai-staffers-horrified-insane-plan
OpenAI leaders horrified staffers after proposing an “insane” plan to enrich the company by pitting world governments against each other.
This anecdote of near comic-book-villainry comes from The New Yorker’s sweeping new investigation into CEO Sam Altman, which documents his alarming pattern of lying and manipulating to build his AI empire, a behavior that some insiders likened to that of an actual “sociopath.”
Tomi Engdahl says:
https://etn.fi/index.php/13-news/18775-lahdessa-aloitti-tekoaelyn-tutkimuskeskus
Tomi Engdahl says:
https://etn.fi/index.php/13-news/18776-jaettimaeiset-tekoaelypiirit-pakottavat-verifioinnin-uusiksi
Traditional simulation and emulation methods can no longer keep up with AI chips. As chip designs grow into systems of billions of gates and are exercised with real AI software even before manufacturing, verification has to be rebuilt in a new way.
Siemens and NVIDIA say they have achieved a breakthrough in AI chip verification. The companies can now run enormous numbers of design cycles in a few days, before the first chip is even fabricated.
Tomi Engdahl says:
Code Overload
The Effects of AI-Generated Code Tearing Through Corporations Is Actually Kind of Funny
Womp womp!
https://futurism.com/artificial-intelligence/ai-code-tearing-through-corporations
Corporations are rapidly embracing AI to churn out mountains of code.
Outwardly, this is presented as a revolution in productivity. But a behind-the-scenes look in The New York Times paints a slightly different, and somewhat comic, picture. Beleaguered programmers are being saddled with more code than they know what to do with, while their employers struggle to find the best way to get them to check all the AI’s hastily written work.
One financial services company, for example, saw its coding output increase tenfold after embracing the popular AI tool Cursor — creating an epic backlog of one million lines of code that needs to be reviewed, according to Joni Klippert, CEO of the security startup StackHawk, which works with the financial firm.
And the code glut isn’t something that can be ignored. Left unchecked, bad code — regardless of whether it’s AI-generated or human-written — can gum up software and cause security flaws. Amazon and Meta both recently experienced disruptions after AI tools took unauthorized actions, and those are just the ones we’ve heard about.
“The sheer amount of code being delivered, and the increase in vulnerabilities, is something they can’t keep up with,” Klippert told the NYT. The accelerated output created a “lot of stress” in other departments, like sales and marketing support, she added.
We’re now at an interesting inflection point of AI’s impact in the workplace. It’s been used to justify whittling down workforces across the globe, with one report finding that AI was cited in the announcements of more than 54,000 layoffs last year. This year included major names in tech: Jack Dorsey’s fintech firm Block and software giant Atlassian laid off thousands of employees while touting pivots to AI.
Yet, at the same time that jobs are being eliminated, AI is also creating more work that would be best done by another human. Someone has to test the AI code, and traditionally it’d be the guy who wrote it — but nowadays they’re too busy prompting an AI agent. Who’s supposed to pick up the slack is unclear.
“There are not enough application security engineers on the planet to satisfy what just American companies need,” Joe Sullivan, an adviser to Costanoa Ventures, told the NYT.
Moreover, AI may actually be making programmers’ jobs harder. Software engineers have admitted that being expected to produce more code while having to constantly supervise their AI tools is accelerating them towards burnout — a phenomenon that’s been documented in emerging research into the topic. One ongoing study dubbed this mental health toll AI “brain fry.”
Companies are still grappling with how to address the code glut. “The blessing and the curse is that now everyone inside your company becomes a coder,”
“It’s just going to break something, and they’re not going to know why it broke,” he told the NYT.
Another solution is throwing more AI at the problem. Anthropic and OpenAI have released AI agents designed to review code. And in December, Cursor, the provider of the much hyped AI coding tool, bought the startup Graphite, which builds an AI code reviewing platform.
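The review-backlog problem described above can be sketched as a simple triage heuristic: surface AI-generated and oversized changes first, and estimate how long the backlog will take to clear. Everything here, including the `Change` record and the daily-review budget, is an illustrative assumption, not any vendor's actual tooling.

```python
# Hypothetical sketch: triage a code-review backlog by ordering changes
# for human attention and estimating how many reviewer-days it represents.
# The threshold below is an assumed rule of thumb, not a measured figure.

from dataclasses import dataclass

# Assumption: careful review covers on the order of a couple of thousand
# lines per day; beyond that, defect-detection rates drop sharply.
REVIEWABLE_LINES_PER_DAY = 2000

@dataclass
class Change:
    id: str
    lines_changed: int
    ai_generated: bool  # e.g. flagged via commit metadata

def triage(changes: list[Change]) -> tuple[list[Change], int]:
    """Order changes for review (AI-generated and large first) and
    report how many reviewer-days the whole backlog represents."""
    ordered = sorted(
        changes,
        key=lambda c: (not c.ai_generated, -c.lines_changed),
    )
    total = sum(c.lines_changed for c in changes)
    days = -(-total // REVIEWABLE_LINES_PER_DAY)  # ceiling division
    return ordered, days
```

At this assumed budget of 2,000 reviewable lines per day, the million-line backlog mentioned above would represent roughly 500 reviewer-days for a single engineer, which is the scale of the problem the security firms are describing.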
Tomi Engdahl says:
Trickle Down Slop-onomics
Wall Street Journal Editor-in-Chief Instructs Staff to Welcome AI Sloplords
Open wide!
https://futurism.com/artificial-intelligence/wall-street-journal-sloplords?fbclid=IwVERDUARJkzFleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR71wBCHSs88-O0oMxVV8FKq4u7uK3s2hWodpsbDxYU81Id-A5QLXcdNjyBSzA_aem_hTOCbE8ggZck31ZlpS56Rw
Emma Tucker, the editor-in-chief of the esteemed Wall Street Journal, just heaped praise on the sloplords drowning journalism in AI-generated dreck.
In an email obtained by Semafor, Tucker congratulated a Fortune editor for being a likeminded individual embracing AI, and was so impressed by the magazine’s AI efforts that she forced her underlings to read about them.
Last month, the WSJ reported on how Fortune editor Nick Lichtenberg used AI to crank out 600 stories in just six months at the magazine, more than his colleagues write in an entire year. AI-assisted articles made up nearly 20 percent of Fortune’s web traffic in the second half of 2025.
As Lichtenberg happily admits, he was literally just copy-pasting press releases into a chatbot and asking it to spit out an article. And this, apparently, is the kind of gumshoe work that warrants being lavished with effusive plaudits from the editor of one of the US’s so-called newspapers of record.
Per the Semafor scoop, Tucker emailed Fortune’s Alyson Shontell saying she “absolutely loved” the piece about “your reporter Lichtenberg,” before lamenting the broad resistance to AI in journalism. Tucker, naturally, viewed herself and Shontell as the pioneers bucking this trend.
“I love your totally clear-eyed, unsentimental approach to AI in newsrooms,” Tucker enthused. “It makes you pretty unique among our cohort.”
“I just did an All Hands meeting with our APAC staff (I’m in Tokyo) and told them they all had to read it,” she added.
Then Tucker dropped a hot take.
“Anyone who doesn’t get what you are doing at fortune [sic], or thinks it is ‘wrong’, should get out of journalism fast!”
AI’s role in the newsroom remains a hot-button issue in the industry, with recent controversies over the New York Times publishing content that was AI-generated or assisted — in both alleged and confirmed cases — spilling over into the public eye.
Despite how divisive the tech remains, newsroom leaders are championing its use whether their underlings like it or not. A senior manager at the Associated Press, for example, recently told staffers that “resistance” to AI was “futile.” Meanwhile, major newspapers have launched high-profile experiments with the tech, with mixed results.
Tomi Engdahl says:
Sleaze at Scale
Why Is the New York Times Laundering the Reputation of a Sleazy AI Startup That’s Selling GLP-1s via a Dishonest Dumpster Fire of Fake Doctors, Phony Before-and-After Pictures, and Other Glaring Red Flags?
“It’s just an automated GLP-1 prescription mill.”
https://futurism.com/artificial-intelligence/new-york-times-medvi-ai-glp1s?fbclid=IwVERDUARJk-xleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR71wBCHSs88-O0oMxVV8FKq4u7uK3s2hWodpsbDxYU81Id-A5QLXcdNjyBSzA_aem_hTOCbE8ggZck31ZlpS56Rw
Tomi Engdahl says:
https://www.facebook.com/share/p/18DvTfnnf7/
AI isn’t killing music… it’s exposing how it’s always worked and finally giving power back to the creators.
For years the industry’s been a hidden machine. You hear a “star” but behind them is a whole ghost team: writers, producers, engineers. Most people couldn’t even name who actually made the song they love. Now suddenly AI shows up and everyone’s shouting “fake”? Nah… what’s fake is pretending the system was pure in the first place.
Here’s what AI really does:
It removes the gatekeepers.
You don’t need:
– a £5k studio
– a top producer charging crazy money
– a vocalist if you can’t sing
– a label to say you’re “allowed” in
If you’ve got ideas, melody in your head, lyrics in your soul… you can bring it to life yourself. That’s not cheating. That’s evolution.
And let’s be real… not everyone using AI is making fire. Same way not everyone with decks is a DJ. Skill didn’t disappear, it shifted.
Now skill is:
– your ear for music
– your vision
– how you direct the AI
– how you build a vibe people actually feel
Two people can use the same tool and one makes magic, the other makes noise. That tells you everything.
To the haters saying “it’s not real music”…
Was autotune real?
Were synths real?
Were DAWs real?
Every generation cries when the next tool arrives. Then a few years later… it becomes standard.
And producers / singers getting salty… this is where it gets uncomfortable:
You’re not obsolete. You’re just not the only route anymore.
Songwriters don’t need to sell their work and stay invisible.
Singers don’t need to wait to be picked.
Producers don’t control access to sound anymore.
Everyone can stay in their own lane and build something from scratch.
If anything, the best producers and singers will still win… because they’ll use AI to go even further, faster.
At the end of the day the listener decides.
If a track hits, connects, moves people… no one on a dancefloor cares how it was made. Energy doesn’t lie.
AI didn’t kill music.
It cut out the middle and handed the keys to anyone with vision.
Adapt or get left behind.
Tomi Engdahl says:
“I built an algorithm to endow humans with perfect and infinite memory. Welcome to a new future where you can remember everything.”
AI startup offers humans ‘perfect and infinite memory’, Harvard professor says
Engramme says it uses ‘large memory models’ to store memories indefinitely
https://www.independent.co.uk/tech/ai-memory-startup-engramme-artificial-intelligence-b2956487.html
A neuroscience professor claims to have developed an AI algorithm that endows humans with “perfect and infinite memory”.
Gabriel Kreiman, who researches artificial intelligence and neuroscience at Harvard Medical School, launched a startup last month in the hope of commercialising technology that he says will transform people’s cognitive capabilities.
He describes his work as a “fight against oblivion”, allowing memories to be stored indefinitely.
The idea is to use something called “large memory models” – a play on the large language model (LLM) coinage used for AI tools like ChatGPT – in order to retrieve data from a person’s digital life.
In a manifesto on the startup’s website, the founders claim the technology will reshape every profession – from medicine and law, to the arts and engineering.
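The startup's actual "large memory model" internals are not public, so the following is only an illustrative sketch of the general retrieval idea the article describes: index snippets of a person's digital life, then surface the memories closest to a query. It uses a toy bag-of-words cosine similarity in place of any learned embedding; the class and method names are invented for this example.

```python
# Illustrative sketch only: index personal "memories" as text snippets
# and recall the best matches for a query via bag-of-words cosine
# similarity. Real systems would use learned embeddings instead.

import math
from collections import Counter

def _vector(text: str) -> Counter:
    # Toy representation: lowercase word counts.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Stores snippets indefinitely and retrieves the closest ones."""

    def __init__(self) -> None:
        self._memories: list[tuple[str, Counter]] = []

    def remember(self, text: str) -> None:
        self._memories.append((text, _vector(text)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        qv = _vector(query)
        ranked = sorted(
            self._memories,
            key=lambda m: _cosine(qv, m[1]),
            reverse=True,
        )
        return [text for text, _ in ranked[:k]]
```

The point of the sketch is only the shape of the system: storage is append-only (the "fight against oblivion"), and retrieval, not deletion, decides what resurfaces.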
Tomi Engdahl says:
Lauttasaari-lehti published a street interview made with AI
Media | “This is of course against all our principles,” says Lauttasaari-lehti editor-in-chief Petri Suhonen.
https://www.hs.fi/kulttuuri/art-2000011941912.html
Last week Lauttasaari-lehti published an AI-made street interview in which both the pictures and the answers were fabricated.
The matter came as a complete surprise to the paper’s editor-in-chief, Petri Suhonen.
Suhonen apologizes to readers and says that using AI is against all of the paper’s principles. “Our freelance relationship with the journalist ends immediately,” he states.
Tomi Engdahl says:
”The columnist has admitted producing the latest column with AI-generated images. This is of course against all our principles.”
”Our freelance relationship with the journalist ends immediately.”
https://www.hs.fi/kulttuuri/art-2000011941912.html
Tomi Engdahl says:
Suspect in arson at Sam Altman’s home aimed to kill, warned of AI extinction https://share.google/qqwWs5jukZ4r5hVdH