Coding trends 2026

The tech world changes constantly, and keeping up means choosing the tools and technologies that are most worth investing your time in.

In 2026, the best programming language or technology stack to learn depends on your personal goals, your interests, and the kinds of applications you want to build.

The use of AI keeps increasing, and AI as a “pair programmer” is becoming the default. Code completion, refactoring, and boilerplate generation are routine. Devs spend more time reviewing and steering code than typing it, and prompts like “explain this error” and “why is this slow?” are part of the daily workflow.

In prompt-driven development, programmers describe their intent in natural language and let AI generate first drafts of functions, APIs, or configs, then iterate by refining prompts rather than rewriting code. Trend: knowing how to ask is becoming as important as knowing the syntax.
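As a sketch of what that iteration loop looks like, here is a toy example in which a stubbed `generate` function stands in for a real LLM call (all names and drafts below are invented for the illustration): the fix for a buggy first draft is a refined prompt, not hand-edited code.

```python
def generate(prompt: str) -> str:
    """Stand-in for an LLM call; it just returns a canned draft per prompt."""
    if "handle empty input" in prompt:
        return (
            "def average(xs):\n"
            "    return sum(xs) / len(xs) if xs else 0.0\n"
        )
    return (
        "def average(xs):\n"
        "    return sum(xs) / len(xs)\n"
    )

# First draft from a vague prompt: would crash on an empty list
# (not executed here).
draft1 = generate("Write a Python function that averages a list of numbers.")

# Instead of editing the code, refine the prompt with the missing constraint.
draft2 = generate(
    "Write a Python function that averages a list of numbers "
    "and handle empty input by returning 0.0."
)

namespace = {}
exec(draft2, namespace)
print(namespace["average"]([]))  # 0.0
```

The point of the sketch is the workflow, not the stub: the prompt carries the spec, so the refined prompt can be reused the next time a similar function is needed.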

There is strong growth in auto-generated unit and integration tests and in edge-case discovery. Trend: “test-first” is easier when AI writes the boring parts.
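To make the “AI writes the boring parts” point concrete, here is a hedged sketch of the kind of edge-case table an assistant tends to generate; the `slugify` function and every test case below are invented for the example.

```python
import re

def slugify(title: str) -> str:
    """Turn a title into a URL slug: lowercase, hyphen-separated, a-z0-9 only."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The kind of edge-case table an AI assistant tends to enumerate
# more exhaustively than a human would bother to:
cases = {
    "Hello World": "hello-world",
    "  leading and trailing  ": "leading-and-trailing",
    "already-a-slug": "already-a-slug",
    "Symbols!@#$%": "symbols",
    "": "",                      # empty input
    "---": "",                   # separators only
    "CamelCaseTitle": "camelcasetitle",
}

for title, expected in cases.items():
    assert slugify(title) == expected, (title, slugify(title))
print("all edge cases pass")
```

In practice these tables come back from a single prompt like “write edge-case tests for this function,” and the human’s job shifts to judging which cases encode the intended behavior.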

AI is moving up the stack. Trend: AI as a junior architect or reviewer, not the final decider.

AI is also coming to security and code-quality scanning, with rapid adoption in static analysis, vulnerability detection, and secret-leakage and dependency-risk checks. AI can offer secure-by-default code suggestions. Trend: AI shifts security earlier in the SDLC (“shift left”).
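A small illustration of a secure-by-default suggestion, using Python’s standard `sqlite3` module (the table, data, and payload are made up for the example): a scanner flags string-built SQL and suggests a parameterized query instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic SQL-injection payload

# Insecure pattern a scanner would flag: query built by string formatting.
#   conn.execute(f"SELECT role FROM users WHERE name = '{user_input}'")
# ...would match every row in the table.

# Secure-by-default suggestion: a parameterized query.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload is treated as a literal name, not as SQL
```

The shift-left part is that this rewrite is suggested at edit or review time, before the insecure pattern ever reaches a branch that a penetration test or incident would have to find.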

Instead of one-off prompts, AI agents now plan → code → test → fix → retry. In the best cases, multi-step autonomous tasks (e.g., “add feature X and update the docs”) can be completed end to end. Trend: still supervised, but moving toward semi-autonomous dev loops.

AI is heavily used for explaining large, unfamiliar codebases and for translating between languages and frameworks, which helps onboard new engineers faster.

What’s changing: less manual boilerplate work, more focus on problem definition, review, and decision-making, and a stronger emphasis on fundamentals, architecture, and domain knowledge. Trend: devs become editors, designers, and orchestrators.

AI usage policies and audit trails are becoming necessary. Trend: “use AI, but safely.”

Likely directions:
Deeper IDE + CI/CD integration
AI maintaining legacy systems
Natural-language → production-ready features
AI copilots customized to your codebase

440 Comments

  1. Tomi Engdahl says:

    The revenge of SQL: How a 50-year-old language reinvents itself (Mar 5, 2026)

    https://www.infoworld.com/article/4140734/the-revenge-of-sql-how-a-50-year-old-language-reinvents-itself.html

    From the browser to the back end, the ‘boring’ choice is exciting again. We look at three trends converging to bring SQL back to center stage.

    Reply
  2. Tomi Engdahl says:

    Forget Pivot Tables and interactive dashboards, I use Copilot in Excel for advanced data analysis
    https://www.xda-developers.com/forget-pivot-tables-and-interactive-dashboards-use-copilot-in-excel-for-data-analysis/

    Reply
  3. Tomi Engdahl says:

    I’m using claude --worktree for everything now
    https://www.youtube.com/watch?v=yv8VZpov8bk

    Reply
  4. Tomi Engdahl says:

    Agentic Commerce Optimization: A Technical Guide To Prepare For Google’s UCP
    UCP expands commerce beyond checkout into discovery, loyalty, and post-purchase support, redefining how brands compete for AI-mediated selection.
    https://www.searchenginejournal.com/agentic-commerce-optimization-a-technical-guide-to-prepare-for-googles-ucp/566969/

    Reply
  5. Tomi Engdahl says:

    A Beloved Music Streaming App Is Back, Thanks to Claude Code
    Updated Feb 24, 2026
    When Tomahawk shut down in 2016, it was powered by a team of six. A decade later, developer J Herskowitz has vibe-coded it back to life as Parachord with an assist from Anthropic’s AI.
    https://uk.pcmag.com/ai/163330/a-beloved-music-streaming-app-is-back-thanks-to-claude-code

    Reply
  6. Tomi Engdahl says:

    Multi-agent workflows often fail. Here’s how to engineer ones that don’t.
    Most multi-agent workflow failures come down to missing structure, not model capability. Learn the three engineering patterns that make agent systems reliable.
    https://github.blog/ai-and-ml/generative-ai/multi-agent-workflows-often-fail-heres-how-to-engineer-ones-that-dont/

    Reply
  7. Tomi Engdahl says:

    David Gewirtz / ZDNET:
    Anthropic debuts Code Review for Claude Code, which uses agents to check pull requests for bugs, and says a typical code review costs $15 to $25 in token usage

    This new Claude Code Review tool uses AI agents to check your pull requests for bugs – here’s how
    Each pull request can cost up to $25. Here’s why companies might still pay to prevent catastrophic bugs.
    https://www.zdnet.com/article/claude-code-review-ai-agents-pull-request-bug-detection/

    ZDNET’s key takeaways

    Anthropic launches AI agents to review developer pull requests.
    Internal tests tripled meaningful code review feedback.
    Automated reviews may catch critical bugs humans miss.

    Anthropic today announced a new Code Review beta feature built into Claude Code for Teams and Enterprise plan users. It’s a new software tool that uses agents working in teams to analyze completed blocks of new code for bugs and other potentially problematic issues.
    What’s a pull request?

    To understand this new Anthropic offering, you need to understand the concept of a pull request. And that leads me to a story about a man named Linus.

    Long ago, Linux creator Linus Torvalds had a problem. He was managing lots of contributions to the open source Linux operating system. All the changes were getting out of control. Source code control systems (a method for managing source code changes) had been around for quite a while before then, but they had a major problem. Those old SCCSs were not meant to manage distributed development by coders all across the world.

    Today, almost every large project uses GitHub or one of its competitors. GitHub (as differentiated from Git) is the centralized cloud service that holds code repositories managed by Git. A few years back, GitHub was purchased by Microsoft, fostering all sorts of doom-and-gloom conspiracy theories. But Microsoft has proven to be a good steward of this precious resource, and GitHub keeps chugging along, managing the world’s code.

    All that brings us back to pull requests, known as PRs in coder-speak. A pull request is initiated when a programmer wants to check in some new or changed code to a code repository. Rather than just merging it into the main track, a PR tells repo supervisors that there’s something new, ready to be reviewed.

    Quick note: to coders, PR is an acronym for pull request. For marketers, PR means public relations. When you read about tech, you’ll see both acronyms, so pay attention to the context to distinguish between the two.

    Code review at Anthropic

    In my article, 7 AI coding techniques I use to ship real, reliable products – fast, my bonus technique was using AI for code review. As a lone developer, I don’t use a formalized code review process like the one Anthropic is introducing.

    I just tell a new session of the AI to look at my code and let me know what’s not right. Sometimes I use the same AI (i.e., Claude Code to look at Claude’s code), and other times I use a different one (like when I use OpenAI’s Codex to review Claude Code-generated code). It’s far from a comprehensive review, but almost every time I ask for a review, one AI or the other finds something that needs fixing.

    The new Claude Code Review capability is modeled on the process used by Anthropic itself; the company has essentially productized its own internal methodology. According to Anthropic, customers “tell us developers are stretched thin, and many PRs get skims rather than deep reads.”

    Before running Code Review, Anthropic coders got back “substantive” review comments about 16% of the time. With Code Review, coders are getting back substantive comments 54% of the time. While that seems to mean more work for coders, what it really means is that nearly three times the number of coding oopsies have been caught before they cause damage.

    According to Anthropic, the size of the internal PR impacts the level of review findings. Large pull requests with more than 1,000 changed lines show findings 84% of the time. Small pull requests of under 50 lines produce findings 31% of the time. Anthropic engineers “largely agree with what it surfaces: less than 1% of findings are marked incorrect.”

    Examples of issues surfaced during testing

    I’m always fascinated by what others experience while doing their jobs. Anthropic provided some examples of problems Code Review identified during its early testing.

    In one case, a single line change appeared to be routine and would normally have been quickly approved. But Code Review flagged it as critical: it turned out this tiny change would have broken authentication for the service. Because Code Review caught it, it was fixed before the merge. The original coder said they wouldn’t have caught the error on their own.

    Another example occurred when filesystem encryption code was being reorganized in an open source product. According to the report, “Code Review surfaced a pre-existing bug in adjacent code: a type mismatch that was silently wiping the encryption key cache on every sync.”

    This is what we call a silent killer in coding. It could have resulted in data loss, performance degradation, and security risks. Anthropic described it as “A latent issue in code the PR happened to touch, the kind of thing a human reviewer scanning the changeset wouldn’t immediately go looking for.”

    If that hadn’t been caught and fixed, it would have made for a very bad day for someone (or a whole bunch of someones).

    How the multi-agent review system works

    Code Review runs fairly quickly, turning around fairly complex reviews in about 20 minutes. When a pull request is opened, Code Review kicks off a bunch of agents that analyze code in parallel.

    Various agents detect potential bugs, verify findings to filter false positives, and rank issues by severity. The results are consolidated so that all the results from all the agents appear as a single summary comment on the pull request, alongside inline comments for specific problems.

    In a demo, Anthropic showed that the summary comment can also include a fix directive. So if Code Review finds a bug, it can be fed to Claude Code to fix. The company says that reviews scale with complexity: larger pull requests receive deeper analysis and more agents.

    Anthropic really seems to like spawning multiple agents. In the past, I’ve had some fairly serious difficulty wrangling them after they’re launched. In fact, the first technique I shared in my 7 coding techniques article was to specifically tell Claude Code to avoid launching agents in parallel.

    Reply
  8. Tomi Engdahl says:

    I tried a Claude Code rival that’s local, open source, and completely free – how it went
    I was curious if Block’s Goose agent, paired with Ollama and the Qwen3-coder model, could really replace Claude Code. Here’s how it worked.
    https://www.zdnet.com/article/claude-code-alternative-free-local-open-source-goose/

    ZDNET’s key takeaways

    Free AI tools Goose and Qwen3-coder may replace a pricey Claude Code plan.
    Setup is straightforward but requires a powerful local machine.
    Early tests show promise, though issues remain with accuracy and retries.

    Jack Dorsey is the founder of Twitter (now X), Square (now Block), and Bluesky (still blue). Back in July, he posted a fairly cryptic statement on X, saying “goose + qwen3-coder = wow”.

    Reply
  9. Tomi Engdahl says:

    The Silent Skill That Separates Great Developers From Average Ones (Hint: It’s Not Coding)
    https://medium.com/@thedevnotebook/the-silent-skill-that-separates-great-developers-from-average-ones-hint-its-not-coding-c48d1a32984f

    The second developer did something unusual.

    He didn’t touch the keyboard for almost ten minutes.

    He stared at the screen.
    Scrolled slowly through the code.
    Read a function.
    Then another.
    Opened a log file.
    Ran the same request again.

    Still no typing.

    At that moment it almost looked like he wasn’t working at all.

    But then he quietly said something that changed everything:

    “This isn’t where the bug is.”

    He opened a completely different file.

    Two minutes later, he fixed it.

    The Skill Nobody Talks About
    Most developers believe the biggest skill in software engineering is writing code.

    It isn’t.

    The real difference between average developers and great ones is debugging.

    And strangely, nobody teaches it.

    Bootcamps teach frameworks.
    Courses teach syntax.
    Tutorials teach how to build projects.

    But when something breaks — which happens constantly in real systems — developers are suddenly on their own.

    Reply
  10. Tomi Engdahl says:

    After self-hosting everything for a year, I learned that tech skills matter LESS than I thought
    https://www.xda-developers.com/self-hosting-is-not-everything-about-technology/

    I was wrong. After twelve months of managing my own data, I’ve learned a humbling lesson: tech skills are just the entry fee. The real challenge of self-hosting isn’t the code or the complex commands. It’s the lifestyle. In this year of self-hosting, I’ve realized that your self-hosting success depends much more on your daily habits than your technical brilliance.

    Consistency matters more than expertise
    Patience is the most important skill

    Technical knowledge helped me spin up containers quickly. But consistency is what keeps those containers useful months later. The real work is the unglamorous stuff, like checking backups every week, following a clear naming system for files, and actually installing updates instead of ignoring them.

    Most self-hosted setups don’t fail because someone lacks technical skills. They fail because small maintenance tasks slowly get neglected. Updates get postponed, backups stop running, and logs go unchecked. Over time, these small gaps turn into bigger problems.

    What actually kept my setup stable was a simple habit: regular check-ins. I made it a routine to update containers, confirm backups were working, and quickly review my services. None of this required advanced knowledge, just consistency.

    Reply
  11. Tomi Engdahl says:

    Just Vibes
    Entirely Vibe-Coded Operating System Is a Bug-Filled Disaster
    “You found an early build of Windows 12.”
    https://futurism.com/artificial-intelligence/entirely-vibe-coded-operating-system-bug-filled-disaster

    Reply
  12. Tomi Engdahl says:

    Copy That
    There’s a Grim New Expression: “AI;DR”
    “Why should I bother to read something someone else couldn’t be bothered to write?”
    https://futurism.com/artificial-intelligence/aidr-meaning

    Reply
  13. Tomi Engdahl says:

    GitHub Data Shows AI Tools Creating “Convenience Loops” That Reshape Developer Language Choices
    https://www.infoq.com/news/2026/03/ai-reshapes-language-choice/

    Reply
  14. Tomi Engdahl says:

    CLAUDE.md Best Practices
    10 Sections to Include in your CLAUDE.md
    Nick Babich
    https://uxplanet.org/claude-md-best-practices-1ef4f861ce7c

    Reply
  15. Tomi Engdahl says:

    Cloudflare Releases Experimental Next.js Alternative Built With AI Assistance
    https://www.infoq.com/news/2026/03/cloudflare-vinext-experimental/

    Reply
