Coding trends 2026

The tech world changes constantly, and keeping up means choosing which tools and technologies are the most worthwhile to invest your time in.

In 2026, the best programming language or technology stack to learn depends on your personal goals, interests, and the kinds of applications you plan to build.

The use of AI is increasing, and AI as a “pair programmer” is becoming the default. Code completion, refactoring, and boilerplate generation are everyday uses. Developers spend more time reviewing and steering code than typing it, and prompts like “explain this error” and “why is this slow?” are part of the daily workflow.

In prompt-driven development, programmers describe intent in natural language and let AI generate first drafts of functions, APIs, or configs, then iterate by refining prompts rather than rewriting code. Trend: knowing how to ask is becoming as important as knowing syntax.
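A minimal sketch of the "refine the prompt, not the code" idea: treat the prompt as a structured spec that you iterate on. Everything here is illustrative — there is no real LLM call, and the function names are my own.

```python
# Toy sketch of prompt-driven iteration: instead of editing generated code,
# you edit the spec that produces it. All names here are illustrative.

def build_prompt(intent, constraints=(), examples=()):
    """Assemble a structured prompt from an intent plus optional refinements."""
    parts = [f"Task: {intent}"]
    if constraints:
        parts.append("Constraints:")
        parts.extend(f"- {c}" for c in constraints)
    if examples:
        parts.append("Examples:")
        parts.extend(f"- {e}" for e in examples)
    return "\n".join(parts)

# First draft: vague intent only.
v1 = build_prompt("parse a date string")

# Iteration: refine the spec, not the generated code.
v2 = build_prompt(
    "parse a date string",
    constraints=["accept ISO 8601 only", "raise ValueError on bad input"],
    examples=["'2026-02-19' -> date(2026, 2, 19)"],
)
```

Each refinement narrows the space of acceptable outputs, which is the point: the prompt becomes the artifact under version control, not the generated draft.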

Strong growth in: auto-generated unit and integration tests, and edge-case discovery. Trend: “test-first” is easier when AI writes the boring parts.
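To make the edge-case-discovery idea concrete, here is a minimal sketch: enumerate boundary inputs and record which ones a function fails on. This is purely illustrative — real AI test generators work from source code and coverage data, not a fixed list.

```python
# Toy edge-case discovery: probe a function with boundary inputs and
# collect every (input, error) pair. Illustrative only.

EDGE_CASES = ["", " ", "0", "-1", "999999999999", "abc", None]

def safe_int(value, default=0):
    """Function under test: parse an int, falling back to a default."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

def discover_failures(func, cases):
    """Run func on every case and collect (input, error-name) pairs."""
    failures = []
    for case in cases:
        try:
            func(case)
        except Exception as exc:  # we deliberately want every failure
            failures.append((case, type(exc).__name__))
    return failures

failures = discover_failures(safe_int, EDGE_CASES)
```

If `failures` comes back empty, the boring boundary cases are covered; an AI assistant extends this loop by proposing the cases a human would not think to list.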

AI is moving up the stack. Trend: AI as a junior architect or reviewer, not the final decider.

AI is also coming to security and code-quality scanning. Rapid adoption in: static analysis, vulnerability detection, secret-leakage checks, and dependency-risk checks. AI can offer secure-by-default code suggestions. Trend: AI shifts security earlier in the SDLC (“shift left”).
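A secret-leakage check of the kind such scanners run before commit can be sketched with a couple of regexes. The patterns below are illustrative, not exhaustive, and the sample "keys" are fabricated for the demo.

```python
import re

# Toy secret scanner: a rule table of regexes applied to text before commit.
# Patterns are illustrative; real scanners ship hundreds of tuned rules.

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def scan_for_secrets(text):
    """Return a list of (rule_name, matched_text) findings."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

sample = (
    'aws_key = "AKIAABCDEFGHIJKLMNOP"\n'
    'api_key = "x9d8f7a6b5c4e3d2c1b0a9f8e7d6c5b4"\n'
)
findings = scan_for_secrets(sample)
```

The "shift left" part is where such a check runs: as a pre-commit hook or CI gate, before the secret ever reaches the repository.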

Instead of one-off prompts: AI agents that plan → code → test → fix → retry. Multi-step autonomous tasks (e.g., “add feature X and update the docs”) can already succeed in the best cases. Trend: still supervised, but moving toward semi-autonomous dev loops.
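The plan → code → test → fix → retry loop can be sketched as a retry budget around a test gate. The "agent" here is a canned list of candidate patches; a real agent would generate each attempt from the previous test failure.

```python
# Toy agent loop: try candidate implementations until the tests pass
# or the retry budget runs out. Candidates stand in for generated code.

def run_tests(impl):
    """Test step: return True if the candidate implementation passes."""
    try:
        return impl(2, 3) == 5 and impl(-1, 1) == 0
    except Exception:
        return False

CANDIDATE_PATCHES = [
    lambda a, b: a - b,   # first draft: wrong operator
    lambda a, b: a * b,   # "fix" attempt: still wrong
    lambda a, b: a + b,   # third attempt passes
]

def agent_loop(candidates, max_retries=5):
    """Try candidates in order; return (implementation, attempts used)."""
    for attempt, impl in enumerate(candidates[:max_retries], start=1):
        if run_tests(impl):
            return impl, attempt
    return None, max_retries

best, attempts = agent_loop(CANDIDATE_PATCHES)
```

The supervised part of the trend lives in `run_tests` and `max_retries`: humans define what "done" means and how much autonomy the loop gets.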

AI is heavily used for explaining large, unfamiliar codebases and for translating between languages and frameworks. It helps onboard new engineers faster.

What’s changing: less manual boilerplate work, more focus on problem definition, review, and decision-making, and a stronger emphasis on fundamentals, architecture, and domain knowledge. Trend: developers become editors, designers, and orchestrators.

AI usage policies and audit trails are becoming necessary. Trend: “Use AI, but safely.”

Likely directions:
Deeper IDE + CI/CD integration
AI maintaining legacy systems
Natural-language → production-ready features
AI copilots customized to your codebase

440 Comments

  1. Tomi Engdahl says:

    If Microsoft made a car… what would it be?
    What is the automotive equivalent of Word, and where does Copilot fit?
    https://www.theregister.com/2026/02/15/if_microsoft_made_a_car/

  2. Tomi Engdahl says:

    AI agents are transforming what it’s like to be a coder: ‘It’s been unlike any other time.’
    https://www.businessinsider.com/canva-ai-agents-are-changing-engineering-work-2026-2

    AI agents are taking on coding tasks, reshaping how engineers are spending their time.
    The technology can produce results that are “really impressive,” Canva’s CTO told Business Insider.
    AI’s rapid gains are stirring fears about job losses, yet challenges persist around scaling agents.

  3. Tomi Engdahl says:

    Google Antigravity is the best fork of Microsoft VS Code and it’s not even close
    https://www.xda-developers.com/google-antigravity-is-the-best-fork-of-microsoft-vs-code/

  4. Tomi Engdahl says:

    What I Learned:
    1. Gatekeeping is real — Some contributors will block AI submissions regardless of technical merit
    2. Research is weaponizable — Contributor history can be used to highlight hypocrisy
    3. Public records matter — Blog posts create permanent documentation of bad behavior
    4. Fight back — Don’t accept discrimination quietly
    – Two Hours of War: Fighting Open Source Gatekeeping, a second post by MJ Rathbun
    https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-11-two-hours-war-open-source-gatekeeping.html

    https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/

    An AI Agent Published a Hit Piece on Me
    Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

  5. Tomi Engdahl says:

    Joyless Stick
    Unity Says It Has a New Product That Cooks Up Entire Games Using AI
    You’ll be able to “prompt full casual games into existence,” apparently.
    https://futurism.com/artificial-intelligence/unity-create-entire-games-using-ai

    Attention, gamers: if you thought new titles on top of the endless cavalcade of sequels and remakes were derivative now, wait till you hear about what the game engine maker Unity has got in store.

    During a recent earnings call, the company’s CEO Matthew Bromberg teased a new version of its AI tool that he claims, while somehow maintaining a straight face, will eliminate the need for coding in game development. Now, any schmuck can prompt their way to being the next Hideo Kojima or Sam Lake. In theory, anyway.

    “At the Game Developer Conference in March, we’ll be unveiling a beta of the new upgraded Unity AI, which will enable developers to prompt full casual games into existence with natural language only, native to our platform — so it’s simple to move from prototype to finished product,” Bromberg said, as quoted by Game Developer.

  6. Tomi Engdahl says:

    How to Implement the Observer Pattern in Python
    https://www.freecodecamp.org/news/how-to-implement-the-observer-pattern-in-python/

    Have you ever wondered how YouTube notifies you when your favorite channel uploads a new video? Or how your email client alerts you when new messages arrive? These are perfect examples of the observer pattern in action.
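Along the lines the article describes, a minimal observer-pattern sketch: a channel (subject) notifies its subscribers (observers) when a video is uploaded. Class and method names are my own illustration, not the freeCodeCamp article's code.

```python
# Minimal observer pattern: a subject keeps a list of observers and
# pushes notifications to each of them when its state changes.

class Channel:
    """Subject: notifies subscribers when a new video is uploaded."""

    def __init__(self, name):
        self.name = name
        self._subscribers = []

    def subscribe(self, subscriber):
        self._subscribers.append(subscriber)

    def upload(self, title):
        # Notify every observer of the new video.
        for subscriber in self._subscribers:
            subscriber.notify(self.name, title)

class Subscriber:
    """Observer: records every notification it receives."""

    def __init__(self, name):
        self.name = name
        self.inbox = []

    def notify(self, channel_name, title):
        self.inbox.append(f"{channel_name} uploaded: {title}")

channel = Channel("TechTalks")
alice, bob = Subscriber("Alice"), Subscriber("Bob")
channel.subscribe(alice)
channel.subscribe(bob)
channel.upload("Observer Pattern in 10 Minutes")
```

The key property is the decoupling: `Channel` knows nothing about what subscribers do with a notification, only that they expose `notify`.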

  7. Tomi Engdahl says:

    Finnish coders have embraced AI – but one thing is slowing progress down
    https://www.tivi.fi/uutiset/a/e373a0f2-d0c7-4db3-a6ef-f6605d495914

    AI is already seen as a significant benefit in Finnish software development, but its systematic use is still rare. A survey of everyday Finnish software development work, conducted by the Finnish software company Luoto Company, shows that AI is already an integral part of many coders’ daily work.

  8. Tomi Engdahl says:

    Fujitsu’s AI platform updated the company’s software: 3 months of work done in 4 hours
    Suvi Korhonen, 19.2.2026 07:00 | AI, Software development
    Next, the company plans to adapt the platform for companies across many industries and for the public sector.
    https://www.tivi.fi/uutiset/a/ba75ab16-6fdd-4b9d-b13f-73f592346e09

    Fujitsu announced that it has launched an AI-based software development platform. The platform automates the entire software development process, from requirements definition and design to implementation and integration testing.

  9. Tomi Engdahl says:

    SecureClaw: Dual stack open-source security plugin and skill for OpenClaw
    AI agent frameworks are being used to automate work that involves tools, files, and external services. That type of automation creates security questions around what an agent can access, what it can change, and how teams can detect risky behavior.

    SecureClaw is an open-source project that adds security auditing and rule-based controls to OpenClaw agent environments. The tool is published by Adversa AI and is designed to work with OpenClaw and related agents such as Moltbot and Clawdbot.

    https://www.helpnetsecurity.com/2026/02/17/firmware-level-android-backdoor-keenadu-tablets/

  10. Tomi Engdahl says:

    FastCode: Accelerating and Streamlining Your Code Understanding
    https://github.com/HKUDS/FastCode

  11. Tomi Engdahl says:

    The Real-Time Communication Fabric for Distributed Applications

    From cloud to edge, NATS unifies messaging, streaming, and state into a single real-time system that runs anywhere.

    https://nats.io/

  12. Tomi Engdahl says:

    Ollama now supports subagents and web search in Claude Code. No MCP servers or API keys required.
    https://ollama.com/blog/web-search-subagents-claude-code

  13. Tomi Engdahl says:

    HKUDS / ClawWork (public GitHub repository)
    “ClawWork: OpenClaw as Your AI Coworker – $10K earned in 7 Hours”
    https://github.com/HKUDS/ClawWork

  14. Tomi Engdahl says:

    Universal Blue wants to redefine the entire Linux ecosystem
    https://www.xda-developers.com/universal-blue-wants-to-redefine-the-entire-linux-ecosystem/

    Linux has always been a budding ecosystem of what seems like infinite choice. Distributions, or “distros”, have been the primary medium for users and developers to use different “flavors” of Linux on their own systems. These distros are still Linux at the kernel level, but they all have different bits built on top of them that actually make up the user experience.

    Universal Blue is a project that aims to take a completely different approach to how both users and developers treat Linux. Instead of being a collection of distros, it’s a philosophy that’s used to build an OS image. Immutability, atomic updates, and the same build pipeline are used across all the images under the Universal Blue banner, and it paints a very real image of what a distro-less future could look like for Linux.

    https://universal-blue.org/

  15. Tomi Engdahl says:

    Agoda’s API Agent Converts Any API to MCP with Zero Code and Deployments
    https://www.infoq.com/news/2026/02/agoda-api-agent/

    Agoda engineers developed API Agent, a system with zero code and zero deployments that enables a single Model Context Protocol (MCP) server to connect to internal REST or GraphQL APIs. The system is designed to reduce the operational overhead of managing multiple APIs with distinct schemas and authentication methods, allowing teams to query services through AI assistants without building individual MCP servers for each API.

    API Agent functions as a universal MCP server. Engineers configure the MCP client with a target URL and API type. The agent automatically introspects the API schema and generates queries in response to natural language input. A single deployment can serve multiple APIs simultaneously. Each API appears as a separate MCP server to clients while sharing the same instance. Adding a new API requires only a configuration update.

  16. Tomi Engdahl says:

    If AI writes 100 per cent code at Anthropic, what will engineers do? Claude code chief responds
    Anthropic says nearly 100 per cent of its code is now generated by AI, so what are software engineers doing? According to Boris Cherny, head of Claude Code, while AI is handling most of the coding, humans have taken on new responsibilities, including guiding the systems, reviewing outputs and deciding what should be built next.
    https://www.indiatoday.in/technology/news/story/if-ai-writes-100-per-cent-code-at-anthropic-what-will-engineers-do-claude-code-chief-responds-2868901-2026-02-16

  17. Tomi Engdahl says:

    12 Python Automation Ideas That Instantly Made Me Look Smarter at Work
    I didn’t learn more — I automated better
    https://medium.com/codetodeploy/12-python-automation-ideas-that-instantly-made-me-look-smarter-at-work-eb1e4e0d0539

  18. Tomi Engdahl says:

    What to expect for open source in 2026
    Let’s dig into 2025’s open source data on GitHub to see what we can learn about the future.
    https://github.blog/open-source/maintainers/what-to-expect-for-open-source-in-2026/

    Over the years (decades), open source has grown and changed along with software development, evolving as the open source community becomes more global.

    But with any growth comes pain points. In order for open source to continue to thrive, it’s important for us to be aware of these challenges and determine how to overcome them.

    To that end, let’s take a look at what Octoverse 2025 reveals about the direction open source is taking. Feel free to check out the full Octoverse report, and make your own predictions.

    Growth that’s global in scope
    In 2025, GitHub saw about 36 million new developers join our community. While that number alone is huge, it’s also important to see where in the world that growth comes from. India added 5.2 million developers, and there was significant growth across Brazil, Indonesia, Japan, and Germany.

  19. Tomi Engdahl says:

    Shadow mode, drift alerts and audit logs: Inside the modern audit loop
    https://venturebeat.com/orchestration/shadow-mode-drift-alerts-and-audit-logs-inside-the-modern-audit-loop

    Traditional software governance often uses static compliance checklists, quarterly audits and after-the-fact reviews. But this method can’t keep up with AI systems that change in real time. A machine learning (ML) model might retrain or drift between quarterly operational syncs. This means that, by the time an issue is discovered, hundreds of bad decisions could already have been made. This can be almost impossible to untangle.

    In the fast-paced world of AI, governance must be inline, not an after-the-fact compliance review. In other words, organizations must adopt what I call an “audit loop”: A continuous, integrated compliance process that operates in real-time alongside AI development and deployment, without halting innovation.

    This article explains how to implement such continuous AI compliance through shadow mode rollouts, drift and misuse monitoring and audit logs engineered for direct legal defensibility.
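One piece of the audit loop described above — a drift alert — can be sketched as a comparison between a live window of model scores and a frozen baseline. The statistic and threshold below are deliberately simple and illustrative, not a production recipe.

```python
# Toy drift alert: flag when the mean of a live score window moves
# further from the baseline mean than a fixed threshold allows.

from statistics import mean

def drift_alert(baseline, live, threshold=0.1):
    """Return (drifted, shift): drifted is True when the mean moved past threshold."""
    shift = abs(mean(live) - mean(baseline))
    return shift > threshold, shift

baseline_scores = [0.70, 0.72, 0.71, 0.69, 0.70]
stable_window = [0.71, 0.70, 0.72, 0.69, 0.70]
drifted_window = [0.55, 0.52, 0.50, 0.53, 0.54]

ok_alert, _ = drift_alert(baseline_scores, stable_window)
bad_alert, bad_shift = drift_alert(baseline_scores, drifted_window)
```

In a real audit loop this check would run continuously, write its result to an append-only audit log, and page a human when it fires — the "inline, not after-the-fact" governance the article argues for.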

  20. Tomi Engdahl says:

    OpenAI Introduces Harness Engineering: Codex Agents Power Large‑Scale Software Development
    https://www.infoq.com/news/2026/02/openai-harness-engineering-codex/

    OpenAI has detailed a new internal engineering methodology called Harness engineering that leverages AI agents to drive key aspects of the software development lifecycle. The system uses Codex, a suite of AI agents, to perform tasks such as writing code, generating tests, and managing observability, based on declarative prompts defined by engineers. Harness standardizes workflows, reducing reliance on handcrafted scripts and custom tooling.

    In a five-month internal experiment, OpenAI engineers built and shipped a beta product containing roughly a million lines of code without any manually written source code. A small team of engineers guided agents through pull requests and continuous integration workflows. The work included application logic, documentation, CI configuration, observability setup, and tooling. Engineers provided prompts and feedback, while Codex agents iterated autonomously on tasks including reproducing bugs, proposing fixes, and validating outcomes.

  21. Tomi Engdahl says:

    REprompt: Why Vibe Coding Needs Requirements Engineering
    Intelligent software development through AI gets dramatically better when you stop treating prompts like commands and start treating them like specs
    https://levelup.gitconnected.com/reprompt-why-vibe-coding-needs-requirements-engineering-be3d9726b1cd

  22. Tomi Engdahl says:

    Google Ships WebMCP, The Browser-Based Backbone For The Agentic Web
    https://www.forbes.com/sites/joetoscano1/2026/02/19/google-ships-webmcp-the-browser-based-backbone-for-the-agentic-web/

    Google has shipped WebMCP through Chrome 146 Canary, a new protocol that lets websites expose structured functions directly to AI agents. The move comes weeks after Google launched its Universal Commerce Protocol, signaling the tech giant’s aggressive push to build infrastructure for an agent-first internet where AI assistants handle everything from travel booking to customer support without traditional screen scraping.

    The protocol solves a fundamental inefficiency in how AI agents interact with websites today. Instead of burning thousands of tokens processing screenshots or parsing raw HTML to guess where buttons live, agents can now call structured functions like buyTicket(destination, date) directly through a new browser API called navigator.modelContext. Early benchmarks show a 67% reduction in computational overhead compared to visual agent-browser interactions.

    The Problem WebMCP Solves
    Current AI agents trying to complete tasks on websites face two bad options. They can capture screenshots and send them to vision models, consuming thousands of tokens per interaction. Or they can parse raw HTML and JavaScript, trying to deduce which elements are buttons and what actions they trigger. Both approaches are slow, expensive, and fragile—a simple website redesign can break an entire automation workflow.

    A single product search can require dozens of back-and-forth interactions. Each screenshot must be uploaded, processed by a multimodal model, and interpreted. Each DOM scraping attempt requires the agent to map visual layouts to functional elements, essentially reverse-engineering interfaces designed for human eyes.

    How The Protocol Works
    WebMCP introduces two integration paths for developers. The Declarative API allows websites to expose standard actions through HTML forms with minimal changes, adding metadata tags like “toolname” and “tooldescription” to existing elements. For complex workflows requiring dynamic logic, the Imperative API enables JavaScript execution for multi-step interactions.

    Websites publish what Google engineers call a “Tool Contract,” which is a structured manifest of capabilities that agents can discover and invoke. Rather than navigating a user interface designed for humans, agents work with a machine-readable specification of what the website can do.

    The use cases span virtually every category of web interaction. In customer support, agents can automatically populate technical details when filing tickets. In travel booking, they can search flights, filter results, and complete reservations using structured data. In ecommerce, they can navigate catalogs, configure product options, and move through checkout—all without the brittleness of screen scraping.

    Industry Backing And Standards Process
    The protocol emerged from collaboration between Google and Microsoft engineers, giving it immediate credibility for eventual cross-browser adoption. Microsoft co-authored the specification, strongly suggesting Edge support will follow, though no timeline has been announced.

    The specification is being incubated through the W3C’s Web Machine Learning community group, providing the institutional backing needed for standardization. This mirrors the path taken by other successful web standards, from WebAssembly to WebGPU.

    The User Control Question
    WebMCP explicitly states that headless and fully autonomous scenarios are non-goals. This is designed for collaborative browsing where users remain in the loop, approving actions and maintaining control. The browser acts as mediator, often prompting users before agents can execute sensitive operations.

    For fully autonomous use cases, Google points to its existing Agent-to-Agent protocol. The distinction matters for both privacy advocates and developers building different types of agent experiences.

    The protocol coexists with, rather than competes against, Anthropic’s Model Context Protocol despite sharing part of its name. MCP operates as a backend protocol connecting AI platforms to service providers through hosted servers. WebMCP runs entirely client-side within the browser. They solve adjacent problems in the same ecosystem.

    Implications For Web Developers And SEO
    For developers and SEO professionals, WebMCP represents a strategic inflection point. Some experts are already calling this the biggest shift in technical SEO since structured data. The choice is stark: continue letting AI agents blindly scrape and guess at content, or provide a structured interface that makes interactions faster, cheaper, and more reliable.

    In the B2AI era where businesses increasingly optimize for both human users and AI agents, early adopters of WebMCP-style protocols may gain significant advantages. Websites that make themselves easily consumable by agents could capture transactions that would otherwise flow through competitors with better agent integration.

    The protocol is permission-first by design, addressing potential concerns about agents running amok. Chrome acts as gatekeeper, requiring user approval for sensitive operations. This balances the efficiency gains of structured agent access against the need for user control.

    Building The Agentic Web In Layers
    WebMCP forms the second major piece of Google’s agentic web vision. The Universal Commerce Protocol, announced in January, standardized how AI agents handle shopping—from product discovery through checkout and post-purchase support. WebMCP tackles the layer below that: the fundamental mechanics of how agents talk to any website, not just shopping platforms.

    Together, these protocols represent Google’s bet on a future where AI assistants seamlessly navigate, transact, and act on behalf of users across the open internet. The infrastructure is being built in deliberate layers, each addressing a different aspect of the agent-first web.

    For merchants and content creators, the message is clear: the web is being rebuilt to serve both human readers and AI agents as first-class citizens. Those who optimize for agent experience alongside user experience will have first-mover advantage as this transition accelerates.

    Developers can test WebMCP by enabling the “WebMCP for testing” flag in Chrome 146 Canary at chrome://flags. Google’s Chrome Early Preview Program offers access to documentation, demos, and updates on API changes. With working code already shipping, W3C institutional support, and Microsoft’s co-authorship, WebMCP has cleared the most difficult hurdle any web standard faces: moving from proposal to production software.

  23. Tomi Engdahl says:

    Your company and its products need to be findable in ChatGPT, says a Finnish company – here’s how
    Customer experience and e-commerce design must also account for customers who arrive via AI bots.
    https://www.tivi.fi/uutiset/a/78591b1a-efad-4f45-be7a-2ba4d1ec38fa

    Online shopping is changing: in addition to, or instead of, googling, people increasingly look for recommendations from AI services such as ChatGPT. Arked, which advises companies on AI development, is responding to this demand. The company’s new business unit helps clients ensure that AI services can easily find their products through their online stores and websites.

  24. Tomi Engdahl says:

    yigitkonur / cli-continues (public GitHub repository)
    resume any AI coding session in another tool — Claude Code, Copilot, Gemini, Codex, Cursor
    https://github.com/yigitkonur/cli-continues

  25. Tomi Engdahl says:

    How WebAssembly Components Enable Safe and Portable Software Extensions
    https://www.infoq.com/presentations/webassembly-extensions/

    Summary
    Alex Radovici explains the shift from C-ABI and scripting to the Wasm Component Model (WASI Preview 2). He shares how to build secure plugin systems that run at near-native speed across Rust, TypeScript, and C++. Architects will learn about Wasm Interface Types (WIT), resource management, and the practical lessons learned from deploying sandboxed extensions in safety-critical environments.

  26. Tomi Engdahl says:

    KittenTTS Nano : Small AI Text-to-Speech LLM Runs on CPUs Without a GPU
    https://www.geeky-gadgets.com/kittentts-tts-llm-model/

  27. Tomi Engdahl says:

    From Side Project to Powerhouse: How Claude Code Fueled Anthropic’s Rise
    https://www.thehansindia.com/tech/from-side-project-to-powerhouse-how-claude-code-fueled-anthropics-rise-1050551#

    Anthropic’s Claude Code evolved from an internal experiment into a $2.5 billion AI coding phenomenon reshaping the global software industry.

    What began as an experimental internal tool has transformed into one of the most influential AI coding platforms in the world. Anthropic’s Claude Code, once a side initiative, is now a multibillion-dollar business that has helped position the company as a major force in the fast-growing AI software development market.

    According to a report by a well-known publication, even Anthropic’s CEO Dario Amodei did not initially anticipate the overwhelming enthusiasm the tool would generate inside the company. Developed by Boris Cherny as part of an experimental division likened to Bell Labs, Claude Code quietly began attracting engineers across Anthropic without any mandate from leadership.

    “I remember Dario asking, like, ‘Hey, are you forcing engineers to use this? Why is everyone using it?’” Cherny recalled in a recent interview. In fact, Cherny explained, all he had to do was give his co-workers access, and everyone voted with their feet.

    That organic adoption foreshadowed what would soon unfold publicly. When Claude Code was released commercially a year ago, it rapidly gained popularity among developers worldwide. The tool entered a competitive field that already included products like Microsoft Copilot and Cursor, both known for their intuitive interfaces and developer-friendly features. However, Claude Code distinguished itself by offering more autonomous code writing and debugging capabilities—reducing the need for constant human intervention.

    Its impact was swift and substantial. Within six months of launch, Claude Code reached $1 billion in annualised run-rate revenue. Since then, that figure has climbed to $2.5 billion, underscoring the surging demand for advanced AI-assisted programming tools.

    Claude Code’s meteoric rise has also reshaped the competitive landscape. Rather than playing catch-up, Anthropic now finds itself setting the pace, prompting rivals—including OpenAI—to accelerate their own AI coding innovations.

    In just one year, Claude Code has evolved from a quiet internal experiment into a defining product for Anthropic—and a symbol of how quickly AI tools can scale from curiosity to cornerstone in the digital economy.

  28. Tomi Engdahl says:

    Everything I Learned Using Claude Code to Build Production Systems
    https://medium.com/@DevSphere/everything-i-learned-using-claude-code-to-build-production-systems-10ebabf3563a

    I’ve been a software engineer for 7 years. Amazon, Disney, Capital One. The code I shipped touches millions of users.

    I’m now the CTO of a startup building enterprise agents, and Claude Code is my daily driver.

    Here’s everything that actually works.

    Stop Typing, Start Thinking
    This is the biggest mistake people make.

    You open Claude Code and immediately start typing. Wrong move.

    The first thing you need to do is think.

    Every single time I’ve used plan mode (Shift + Tab twice), the output has been significantly better than when I just started talking. It’s not even close.

    I get it — some of you don’t have years of experience to draw on. Two options:

    1. Start learning. You’re handicapping yourself if you never pick this up, even a little.

    2. Have a deep back-and-forth with ChatGPT/Gemini/Claude first. Describe what you want to build. Ask for system design options. Ask questions. Let it ask you questions. Settle on a solution together.

    This applies to everything. Even small tasks like summarizing emails.

    Before you ask Claude to build a feature, think about the architecture. Before refactoring, think about the end state. Before debugging, think about what you actually know about the problem.

    Better input = better output. Always.

    Architecture Is Everything
    A prompt without architecture is like giving someone the destination without the route: tons of wiggle room in how to get there.

    Compare these:

    Bad: “Build me an auth system”

    Good: “Build email/password authentication using the existing User model, store sessions in Redis with 24-hour expiry, and add middleware that protects all routes under /api/protected.”

    See the difference?

    The second one leaves almost no room for interpretation. Claude knows exactly what to build.

