In the tech world there is a constant flow of change, and keeping up means choosing the tools and technologies that are most worth investing your time in.
In 2026, the best programming language or technology stack to learn really depends on your personal goals, interests, and the applications you plan to build.
The use of AI keeps increasing, and AI as a “pair programmer” is becoming the default. Code completion, refactoring, and boilerplate generation are everyday uses, and devs spend more time reviewing and steering code than typing it. Prompts like “explain this error” and “why is this slow?” are genuinely useful.
In prompt-driven development, programmers describe the intent in natural language and let AI generate first drafts of functions, APIs, or configs, then iterate by refining prompts rather than rewriting code. Trend: knowing how to ask is becoming as important as knowing the syntax.
There is strong growth in auto-generated unit and integration tests and in edge-case discovery. Trend: “test-first” is easier when AI writes the boring parts.
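To make the edge-case-discovery point concrete, here is a toy example written for this note (not output from any tool mentioned here): a small slugify helper plus the kind of edge-case assertions an assistant typically drafts first.

```python
import re


def slugify(text: str) -> str:
    """Turn arbitrary text into a URL-friendly slug."""
    text = re.sub(r"[^a-z0-9]+", "-", text.lower())
    return text.strip("-")


# The kind of edge cases an assistant typically proposes:
assert slugify("Hello, World!") == "hello-world"
assert slugify("") == ""                        # empty input
assert slugify("!!!") == ""                     # punctuation only
assert slugify("  spaced   out  ") == "spaced-out"
assert slugify("Déjà vu") == "d-j-vu"           # non-ASCII collapses to '-'
```

The empty-string and punctuation-only cases are exactly the ones humans tend to skip when writing tests by hand.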
AI is moving up the stack. Trend: AI as a junior architect or reviewer, not the final decider.
AI is also coming to security and code-quality scanning, with rapid adoption in static analysis, vulnerability detection, and checks for secret leakage and dependency risk. AI can give secure-by-default code suggestions. Trend: AI shifts security earlier in the SDLC (“shift left”).
Instead of one-off prompts, there are AI agents that plan → code → test → fix → retry. In the best cases, multi-step autonomous tasks (e.g., “add feature X and update the docs”) can be completed end to end. Trend: still supervised, but moving toward semi-autonomous dev loops.
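The plan → code → test → fix → retry loop can be sketched in a few lines. The “model” below is a deliberately dumb stub standing in for a real LLM call, so this illustrates the control flow only, not any particular agent framework.

```python
def run_agent(task, model, run_tests, max_retries=3):
    """Draft code, test it, and feed failures back until it passes."""
    code = model(task, feedback=None)          # plan + first draft
    for _ in range(max_retries):
        ok, error = run_tests(code)            # test
        if ok:
            return code
        code = model(task, feedback=error)     # fix + retry
    raise RuntimeError(f"could not complete task: {task}")


def toy_model(task, feedback=None):
    # Stand-in for an LLM call: the first draft is buggy,
    # and any feedback produces the corrected version.
    if feedback is None:
        return "def add(a, b): return a - b"   # buggy draft
    return "def add(a, b): return a + b"       # "fixed" after feedback


def toy_tests(code):
    namespace = {}
    exec(code, namespace)
    try:
        assert namespace["add"](2, 3) == 5
        return True, None
    except AssertionError:
        return False, "add(2, 3) returned the wrong value"


final_code = run_agent("implement add(a, b)", toy_model, toy_tests)
```

A real agent would replace `toy_model` with an API call and `toy_tests` with a sandboxed test runner, but the supervise-and-retry shape is the same.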
AI is heavily used for explaining large, unfamiliar codebases and for translating between languages and frameworks, which helps onboard new engineers faster.
What’s changing: less manual boilerplate work; more focus on problem definition, review, and decision-making; and a stronger emphasis on fundamentals, architecture, and domain knowledge. Trend: devs become editors, designers, and orchestrators.
AI usage policies and audit trails are becoming necessary. Trend: “Use AI, but safely.”
Likely directions:
Deeper IDE + CI/CD integration
AI maintaining legacy systems
Natural-language → production-ready features
AI copilots customized to your codebase
526 Comments
Tomi Engdahl says:
I ran NetAlertX on a Raspberry Pi, and now I get notified the second a new device joins my network
https://www.xda-developers.com/ran-netalertx-raspberry-pi-notified-new-device-joins-network/
Tomi Engdahl says:
How to Setup Claude Code with Ollama in VSCode on Windows 11 | Zero-Cost AI Coding Assistant (2026)
https://www.youtube.com/watch?v=bQK9dBNlCsY
Tomi Engdahl says:
Intel 486 Support Likely To Be Removed In Linux 7.1
https://hackaday.com/2026/04/07/intel-486-support-likely-to-be-removed-in-linux-7-1/
Although everyone’s favorite Linux overlord [Linus Torvalds] has been musing on dropping Intel 486 support for a while now, it would seem that that time has finally come. In a Linux patch submitted by [Ingo Molnar], the first concrete step is taken by removing support for i486 from the build system. With this patch now accepted into the ‘tip’ branch, no i486-compatible image can be built any more once it works its way into the release branches, starting with kernel 7.1.
No mainstream Linux distribution currently supports the 486 CPU, so the impact should be minimal, and there has been plenty of warning. We covered the topic back in 2022 when [Linus] first floated the idea, as well as in 2025 when more mutterings from the side of [Linus] were heard, but no exact date was offered until now.
Tomi Engdahl says:
https://hackaday.com/2026/04/07/tinygo-boldly-goes-where-no-go-ever-did-go-before/
When you’re programming microcontrollers, you’re likely to think in C if you’re old-school, Rust if you’re trendy, or Python if you want it done quick and have resources to spare. What about Go? The programming language, not the game. That’s an option, too, with TinyGo now supporting over 100 different dev boards, along with WebAssembly.
https://tinygo.org/
Tomi Engdahl says:
Visual Studio Code 1.115 introduces VS Code Agents app
news | Apr 8, 2026
Preview of new companion app allows developers to run multiple agent sessions in parallel across multiple repos and iterate on human and agent reviews.
https://www.infoworld.com/article/4156169/visual-studio-code-1-115-introduces-vs-code-agents-app-2.html
Tomi Engdahl says:
I used Claude Code, Antigravity, and Perplexity Computer to build a portfolio — there was a clear winner
https://www.xda-developers.com/used-claude-code-antigravity-and-perplexity-computer-to-build-a-portfolio/
Web development has changed massively in the last few years. There was a time when building a website meant dealing with raw HTML and CSS and obsessing over every tiny pixel by styling it yourself. Then tools like Wix and Squarespace came along where you could build a decent-looking website just by dragging and dropping elements.
Now, we have tools that let you simply describe what you want, and they go ahead and build the entire thing for you exactly how you describe it. All you need to do is the ideating and prompting, and it handles the rest. I wanted to see how far that’s really come, so I took Claude Code, Google’s Antigravity, and Perplexity Computer, and gave them the exact same job: to create a portfolio website for me. I used the same exact prompts and instructions, and here’s how it went…
I asked all three tools to build me the same portfolio
Same prompt and instructions
Now, I didn’t really want a generic portfolio. If I did, I’d have just used a template on a tool like Wix! Instead, I wanted an interactive portfolio with fluid animations, a section with all my published work so far, and an AI chatbot integrated that a reader could use to ask questions about all my work.
So, to achieve this, I used the same process across all three tools.
And then, I let each tool do its own thing. Keep in mind that I’m judging the outputs on the very first version each tool produces — no edits, no follow-up prompts, no tweaking from my end. Just the raw first result.
Perplexity Computer
Nailed everything on the first try
While Claude Code and Antigravity are both built primarily for coding and development-related tasks, Perplexity Computer is in a bit of a different lane. It’s positioned more as an OpenClaw alternative, and the impressive bit about it is that it has access to multiple AI models.
The very first task I made it do was this one — building me a portfolio website. Now, right off the bat, I was impressed. As I mentioned above, the first thing I asked these tools was to find everything they can on me. Perplexity took the longest to wrap up its research, but it also returned the most in-depth information, which is exactly what I was looking for. It went as far as digging into my Instagram account, my Twitter, and even found some stuff that I wasn’t aware of, like the fact that Authory had featured my account on their website! It was a bit creepy how much AI can find out about you from a single name, but honestly, for this specific use case, that’s exactly what I needed.
Once I had sent off my idea to it, it asked a couple of follow-up questions, including the visual mood I wanted, the primary audience, a headshot of mine, and whether I had any websites or portfolios that I love the vibe of. It then went off and began building! It delegated the task of collecting my articles to Gemini 3 Flash, while Claude Opus 4.6 handled the coding. I hadn’t specified a design vibe and gave the tools the freedom to decide. Perplexity Computer went with warm cream and coral tones, had a headshot of me right on the front page, and a typewriter-style tagline that cycled through different phrases about me: tech journalist, CS student, NotebookLM evangelist, and professional yapper (which it got off my Instagram)!
Now, Perplexity’s output was the only one that included all my articles published (as I had asked for) and the only one with an AI chatbot that actually worked properly! I barely had any complaints with the output, and this is definitely the portfolio I’m considering actually deploying. It was functional, aesthetically pleasing enough, and most importantly, it actually nailed every single thing I asked for on the first try.
Claude Code built something impressive, but it wasn’t quite there yet
I expected better
Claude Code has become one of my favorite AI tools, and my expectations from it for this were high. I began with the same prompt of asking it to conduct in-depth research, and it found decent enough information. It wasn’t as detailed as I’d hoped (since Claude is typically great at finding information), but for the sake of this experiment, I didn’t push it further and moved on to the building prompt. It asked me 10 questions regarding the portfolio, including the design layout, the vibe, the AI model I wanted it to use, whether I was going to deploy it, and more. Then, it began building.
Out of the three, Claude Code took the longest to build it, and it got stuck at fetching my articles.
Antigravity was the weakest of the three
It was just… disappointing
Finally, it was time to put Google’s agentic IDE, Antigravity, through the same test. When it came to finding all the information it could about me, the tool searched the web and whipped up an answer within seconds. The information was surface-level and as with Claude, it could have been better, but it was enough to work with. Once I shared my idea with the tool, it came up with a high-level plan and questions about the design vibe and how I’d want the AI chatbot to work. I answered the questions, and then gave it the green light to begin building. Antigravity used Gemini 3.1 Pro to build the whole thing, and it took longer than Perplexity Computer but finished before Claude Code.
Now, remember how I mentioned Claude Code’s gradient design made it feel vibe-coded right off the bat? Antigravity’s portfolio had the exact same look. Same dark layout, same gradient vibe — if you put the two side by side, you’d struggle to tell which tool built which. Here’s what cracked me up, though. Despite taking longer than Perplexity to build this, the portfolio included only seven articles. Seven, out of over four hundred published articles! The AI chatbot was also a complete disappointment. One of the seven was an article about Perplexity, so I assumed the AI chatbot could at least answer a question about it.
I didn’t expect this
What I find really ironic is that Perplexity Computer used both Gemini and Anthropic’s models to build its portfolio (the very same models that power Claude Code and Antigravity), and still came out on top. It outperformed both tools using their own tech, despite the same prompts and instructions! I wouldn’t have expected that.
Tomi Engdahl says:
Get started with Python’s new frozendict type
feature | Apr 8, 2026
Python 3.15 introduces an immutable or ‘frozen’ dictionary that is useful in places ordinary dicts can’t be used.
https://www.infoworld.com/article/4152654/get-started-with-pythons-new-frozendict-type.html
Only very rarely does Python add a new standard data type. Python 3.15, when it’s released later this year, will come with one—an immutable dictionary, frozendict.
Dictionaries in Python correspond to hashmaps in Java. They are a way to associate keys with values. The Python dict, as it’s called, is tremendously powerful and versatile. In fact, the dict structure is used by the CPython interpreter to handle many things internally.
But a dict has a big limitation: it’s not hashable. A hashable type in Python has a hash value that never changes during its lifetime. Strings, numerical values (integers and floats), and tuples are all hashable because they are immutable. Container types, like lists, sets, and, yes, dicts, are mutable, so they can’t guarantee they hold the same values over time.
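The hashability rule is easy to check at the REPL; this snippet just restates what the paragraph says:

```python
# Tuples of immutable values are hashable, so they work as dict keys.
coords = {("x", 1): "origin"}
assert ("x", 1) in coords

# A dict is mutable, so hash() refuses it.
err = ""
try:
    hash({"a": 1})
except TypeError as exc:
    err = str(exc)   # "unhashable type: 'dict'"
```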
Python has long included a frozenset type—a version of a set that doesn’t change over its lifetime and is hashable. Because sets are basically dictionaries with keys and no values, why not also have a frozendict type? Well, after much debate, we finally got just that. If you download Python 3.15 alpha 7 or later, you’ll be able to try it out.
The basics of a frozendict
In many respects, a frozendict behaves exactly like a regular dictionary. The main difference is that you can’t use the conventional dictionary constructor (the {} syntax) to make one. You must use the frozendict() constructor.
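Since frozendict itself ships only with the 3.15 alphas, readers on current Python can approximate the behavior the article describes in pure Python. The FrozenDict class below is an illustration of the idea (immutable, therefore hashable, therefore usable as a key), not CPython’s implementation:

```python
from collections.abc import Mapping


class FrozenDict(Mapping):
    """Minimal pure-Python stand-in for the described frozendict type."""

    def __init__(self, *args, **kwargs):
        self._data = dict(*args, **kwargs)   # copied once, never mutated

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def __hash__(self):
        # Safe to hash because the contents can never change.
        return hash(frozenset(self._data.items()))


config = FrozenDict(host="db.local", port=5432)
pools = {config: "connection-pool-A"}   # usable as a dict key
```

Because `Mapping` supplies no mutation methods (and provides `__eq__` by value), equal FrozenDicts hash alike and look up the same entry.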
Tomi Engdahl says:
5 Useful Python Scripts to Automate Boring Excel Tasks
https://www.kdnuggets.com/5-useful-python-scripts-to-automate-boring-excel-tasks
Merging spreadsheets, cleaning exports, and splitting reports are necessary-but-boring tasks. These Python scripts handle the repetitive parts so you can focus on the actual work.
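The article’s scripts aren’t reproduced here, but the flavor of the “merge spreadsheets” task looks like this. For brevity the sketch works on in-memory CSV streams with only the standard library; the real scripts presumably use openpyxl or pandas for .xlsx files.

```python
import csv
import io


def merge_csv_streams(streams):
    """Concatenate CSV streams that share a header, keeping the header once."""
    merged, header = [], None
    for stream in streams:
        rows = list(csv.reader(stream))
        if not rows:
            continue
        if header is None:
            header = rows[0]
            merged.append(header)
        merged.extend(rows[1:])   # assumes every file has the same header
    return merged


jan = io.StringIO("name,amount\nalice,10\nbob,5\n")
feb = io.StringIO("name,amount\nalice,7\n")
merged = merge_csv_streams([jan, feb])
```

Swap the `StringIO` objects for `open(path, newline="")` handles and this becomes the classic “combine this month’s exports” one-liner.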
Tomi Engdahl says:
“Negative” views of Broadcom driving thousands of VMware migrations, rival says
Western Union exec says there were “challenges” working with Broadcom.
https://arstechnica.com/information-technology/2026/04/nutanix-claims-it-has-poached-30000-vmware-customers/
Tomi Engdahl says:
The next stages of AI conformance in the cloud-native, open-source world
Learn how standardization of AI workloads on Kubernetes has become an urgent industry priority and how llm-d and a CNCF conformance program make that happen.
https://thenewstack.io/the-next-stages-of-ai-conformance-in-the-cloud-native-open-source-world/
Tomi Engdahl says:
Asqav: Open-source SDK for AI agent governance
AI agents are executing consequential tasks autonomously, often across multiple systems and with little record of what they did or why. Asqav, a Python SDK released under the MIT license, addresses that gap by attaching a cryptographic signature to each agent action and linking entries into a hash chain.
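The article doesn’t show Asqav’s actual API, but the underlying technique (sign each action, link entries by hash) can be sketched with the standard library. Here HMAC-SHA256 stands in for whatever signature scheme Asqav really uses, so treat this as the concept only:

```python
import hashlib
import hmac
import json

GENESIS = "0" * 64   # sentinel "previous hash" for the first entry


def append_action(log, action, key):
    """Sign an agent action and chain it to the previous entry."""
    prev = log[-1]["entry_hash"] if log else GENESIS
    payload = json.dumps({"action": action, "prev": prev}, sort_keys=True)
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    entry_hash = hashlib.sha256((payload + sig).encode()).hexdigest()
    log.append({"action": action, "prev": prev,
                "sig": sig, "entry_hash": entry_hash})


def verify_chain(log, key):
    """Recompute signatures and links; any tampering breaks the chain."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps({"action": entry["action"],
                              "prev": entry["prev"]}, sort_keys=True)
        good_sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
        if entry["prev"] != prev or not hmac.compare_digest(entry["sig"], good_sig):
            return False
        if entry["entry_hash"] != hashlib.sha256(
                (payload + entry["sig"]).encode()).hexdigest():
            return False
        prev = entry["entry_hash"]
    return True


key = b"demo-key"
audit_log = []
append_action(audit_log, "read:crm/contacts", key)
append_action(audit_log, "send:email", key)
```

Editing any earlier entry changes its recomputed signature and hash, so verification fails, which is exactly the audit-trail property described above.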
https://www.helpnetsecurity.com/2026/04/09/asqav-ai-agent-audit-trail/
Tomi Engdahl says:
Lukan AI Agent, IDE and workstation.
The open-source AI workstation for coding, ops, and life
https://www.producthunt.com/products/lukan-ai-agent-ide-and-workstation
Tomi Engdahl says:
https://devblogs.microsoft.com/microsoft365dev/mcp-apps-now-available-in-copilot-chat/
Tomi Engdahl says:
https://github.com/AgriciDaniel/claude-obsidian
Tomi Engdahl says:
How AI is changing software
Thomas Martinsen is a Technical Evangelist at Twoday with more than 25 years of experience at the intersection of technology, strategy, and business innovation. He’s a Microsoft Regional Director and Microsoft AI MVP, recognized for his ability to translate complex technologies into clear strategies that create measurable impact. Thomas is passionate about both community and leadership.
https://www.twoday.com/blog/how-ai-is-changing-software
Artificial Intelligence is rewriting the rules of software. What began as a wave of intelligent assistants that help users write, code, or summarize is now turning into something much bigger: intelligent systems that can reason, collaborate, and act.
This change goes far beyond adding a new feature or automating a task. It is transforming how software is designed, how it operates, and how it creates value.
From automation to intelligence
For decades, software automation was based on rules. If something happened, the system reacted exactly as programmed. It was predictable but limited. AI has broken that pattern. Instead of rigid scripts, we now build systems that understand intent, interpret context, and make decisions.
The first step in this evolution came with intelligent assistants: the copilots that help us write emails, generate code, or analyze data. The next step is Agentic AI – systems of autonomous agents that can reason, collaborate, and act on behalf of users or other systems.
Each agent focuses on a specific task. One observes, another decides, a third executes. Together they behave like distributed intelligence, capable of monitoring, coordinating, and adapting in ways that traditional software never could. The result is not a single chatbot or model but an ecosystem of specialized intelligences working together to solve complex problems.
Agent orchestration
Modern AI systems rarely rely on just one agent, model, or service. They combine multiple agents, models, and tools to complete a task. A language model might interpret a request, a reasoning agent decides what to do, and another component executes the action.
Platforms such as Microsoft Foundry and the Microsoft Agent Framework make this possible. They give developers tools to define agents, manage their permissions and connections, and ensure they collaborate securely. In practice, this creates the structure and control needed for agents to operate in harmony within an organization’s digital ecosystem.
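The observe/decide/execute split can be sketched without any framework at all. The toy agents below are placeholders for model-backed ones, not Microsoft Agent Framework or Microsoft Foundry code; they only show the shape of the pattern.

```python
def orchestrate(event, observe, decide, execute):
    """One agent observes, another decides, a third executes."""
    observation = observe(event)
    action = decide(observation)
    return execute(action)


# Toy stand-ins for model-backed agents:
def observe(event):
    metric, value = event.split("=")
    return {"metric": metric, "value": float(value)}


def decide(observation):
    return "page-oncall" if observation["value"] > 90 else "log-only"


def execute(action):
    return f"executed: {action}"
```

In a real platform each step would be a separately permissioned agent or model call, with the orchestrator enforcing who may talk to whom.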
Agentic integration platform
At Twoday, these ideas have become tangible in our AI-driven integration platform. Built on Azure Integration Services, it connects the systems companies rely on daily – CRM, ERP, HR, and many others – now with intelligence built into its core.
Inside the integrations, AI agents monitor data flows, validate data quality, and detect anomalies. If something looks unusual or breaks a rule, the system can involve a human automatically, creating a true human-in-the-loop experience. Instead of waiting for nightly syncs or manual error handling, the platform acts in real time and escalates only when necessary.
The platform also includes a chat-based interface that lets users interact directly with data. Instead of digging through dashboards or reports, they can simply ask, “Find all information about the property at Sundkaj 125 in Nordhavn,” or “Which suppliers haven’t updated their data?” The agents interpret the question, retrieve the information, and respond instantly in natural language.
Even the process of building integrations has changed. Developers use AI tools like GitHub Copilot to generate much of the repetitive code, documentation, and testing, freeing time to focus on architecture and governance. Intelligence is now embedded throughout the full lifecycle – from design to deployment and operation.
A new kind of software development
This shift fundamentally changes how software is built. Developers are no longer just writing logic; they are orchestrating intelligence. Software is designed as an ecosystem of agents that collaborate rather than a single monolithic program executing fixed instructions.
Learning and adaptation are built in. Cloud platforms like Microsoft Foundry handle versioning, access control, and monitoring of models and agents, ensuring that the system keeps improving. Software begins to behave more like an organization – a collection of specialized roles working within a shared structure.
What it means for leaders
For business and technology leaders, this shift brings both opportunity and responsibility. System architecture must evolve toward modular and event-driven designs that can host multiple AI components safely. Governance becomes more important, ensuring transparency, traceability, and alignment as decision-making is distributed across agents.
Teams will need new skills. Developers must understand both traditional engineering and emerging disciplines like prompting, evaluation, and model integration. As these capabilities mature, intelligent software will reduce friction between people and systems, improve response times, and enable new services built on real-time insight.
AI-native software
Our intelligent integration platform is one example of what’s coming. Over the next few years, more enterprise systems will move in the same direction: applications built as networks of agents, each responsible for reasoning, learning, or execution.
Software will become self-observing, adaptive, and collaborative across data sources and teams. The role of humans shifts from operating systems to supervising and guiding them.
This is the path toward AI-native software – solutions not just using AI, but designed around it from the ground up.
It’s still early, yet the direction is clear. AI isn’t just changing what software can do – it’s changing what software is.
Tomi Engdahl says:
https://devblogs.microsoft.com/azure-sql/introducing-sql-mcp-server/
Tomi Engdahl says:
The World Needs More Software Engineers
A Conversation with Box CEO Aaron Levie
https://www.oreilly.com/radar/the-world-needs-more-software-engineers/
Tomi Engdahl says:
https://thenewstack.io/build-mcp-server-tutorial/
Tomi Engdahl says:
RightNow AI Releases AutoKernel: An Open-Source Framework that Applies an Autonomous Agent Loop to GPU Kernel Optimization for Arbitrary PyTorch Models
https://www.marktechpost.com/2026/04/06/rightnow-ai-releases-autokernel-an-open-source-framework-that-applies-an-autonomous-agent-loop-to-gpu-kernel-optimization-for-arbitrary-pytorch-models/
Tomi Engdahl says:
https://shatteredsilicon.net/the-aws-lambda-kiss-of-death/
Tomi Engdahl says:
Local-first browser data gets real
analysis | Apr 3, 2026
Wasm, PGlite, OPFS, and other new tech bring robust data storage to the browser, Electrobun brings Bun to desktop apps, Signals bring sanity to state management, and more in this month’s JavaScript Report
https://www.infoworld.com/article/4154031/local-first-browser-data-gets-real.html
Tomi Engdahl says:
https://thenewstack.io/ai-coding-tools-reckoning/
Tomi Engdahl says:
Meet ‘AutoAgent’: The Open-Source Library That Lets an AI Engineer and Optimize Its Own Agent Harness Overnight
A meta-agent ran overnight, modified its own harness, and climbed to #1 on SpreadsheetBench and the top GPT-5 score on TerminalBench. No human tuned the agent. That’s the point.
https://www.marktechpost.com/2026/04/05/meet-autoagent-the-open-source-library-that-lets-an-ai-engineer-and-optimize-its-own-agent-harness-overnight/
Tomi Engdahl says:
I connected Claude to Figma and it’s the workflow I didn’t know I was missing
https://www.xda-developers.com/connected-claude-to-figma-improved-design-workflow/
Tomi Engdahl says:
https://thenewstack.io/cursor-3-demotes-ide/
Tomi Engdahl says:
New Linux Kernel Rules Put The Onus On Humans For AI Tool Usage
https://hackaday.com/2026/04/14/new-linux-kernel-rules-put-the-onus-on-humans-for-ai-tool-usage/
It’s fair to say that the topic of so-called ‘AI coding assistants’ is somewhat controversial. With arguments against them ranging from code quality to copyright issues, there are many valid reasons to be at least hesitant about accepting their output in a project, especially one as massive as the Linux kernel. With a recent update to the Linux kernel documentation, the use of these tools has now been formalized.
The upshot for such Large Language Model (LLM) tools is that any commit containing generated code has to be signed off by a human developer, and this human ultimately bears responsibility for the code quality as well as for any issues the code may cause, including legal ones. The use of AI tools also has to be declared with the Assisted-by: tag in contributions so that their use can be tracked.
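In practice, a kernel patch produced with assistant help would then carry both tags in its commit message, along these lines (the subject line, tool name, and author are placeholders, not from the kernel documentation):

```
subsystem: short description of the change

Longer explanation of what the patch does and why.

Assisted-by: <AI tool name and version>
Signed-off-by: Ada Developer <ada@example.com>
```

The Signed-off-by: line is the long-standing Developer Certificate of Origin sign-off; the new rule adds the Assisted-by: trailer alongside it.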
When it comes to other open source projects the approach varies, with NetBSD having banished anything tainted by ‘AI’, cURL shuttering its bug bounty program due to AI code slop, and Mesa’s developers demanding that you understand generated code which you submit, following a tragic slop-cident.
Meanwhile there are also rising concerns that these LLM-based tools may be killing open source through ‘vibe-coding’,