AI trends 2026

Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed using AI tools, and the final version was hand-edited into this blog text:

1. Generative AI Continues to Mature

Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.

2. AI Agents Move From Tools to Autonomous Workers

Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.

3. Smaller, Efficient & Domain-Specific Models

Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate many enterprise applications. Compared with general-purpose models, they can be more accurate, easier to keep compliant, and more cost-efficient.

4. AI Embedded Everywhere

AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.

5. AI Infrastructure Evolves: Inference & Efficiency Focus

More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.

6. AI in Healthcare, Research, and Sustainability

AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.

7. Security, Ethics & Governance Become Critical

With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.

8. Multimodal AI Goes Mainstream

AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.

9. On-Device and Edge AI Growth

Processing AI tasks locally on phones, wearables, or edge devices will increase, helping with privacy, lower latency, and offline capabilities — especially crucial for real-time scenarios (e.g., IoT, healthcare, automotive).

10. New Roles: AI Manager & Human-Agent Collaboration

Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.

Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/?utm_source=chatgpt.com “7 AI Trends to Look for in 2026”
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/?utm_source=chatgpt.com “10 Generative AI Trends In 2026 That Will Transform Work And Life”
[3]: https://millipixels.com/blog/ai-trends-2026?utm_source=chatgpt.com “AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels”
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026?utm_source=chatgpt.com “Digital Regenesys | Top 10 AI Trends for 2026”
[5]: https://www.n-ix.com/ai-trends/?utm_source=chatgpt.com “7 AI trends to watch in 2026 – N-iX”
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/?utm_source=chatgpt.com “Microsoft unveils 7 AI trends for 2026 – Source Asia”
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026?utm_source=chatgpt.com “7 Generative AI Trends to Watch In 2026”
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/?utm_source=chatgpt.com “3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool”
[9]: https://www.reddit.com//r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/?utm_source=chatgpt.com “I read Google Cloud’s “AI Agent Trends 2026” report, here are 10 takeaways that actually matter”

215 Comments

  1. Tomi Engdahl says:

    He expects OpenAI to go bust “over the next 18 months.” https://trib.al/Begzz4P

    Reply
  2. Tomi Engdahl says:

    Wikipedia may be the largest compendium of human knowledge ever created, but can it survive?
    As the website turns 25, it faces myriad challenges from regulators, AI, the far right and Elon Musk

    https://www.ft.com/content/513761bb-3b6c-4b32-9931-a34f01047558?shareType=nongift&fbclid=IwdGRjcAPVq21jbGNrA9Wq-GV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHrg-BixocGP0ez4LGsZru4qIOn0zBqBLhMWFhZRu488FD26Inl0rbrZsV129_aem_kjf2TNZdOu5jxiuh3ChVuA

    Before Wikipedia’s birth in 2001 there was Nupedia, and before Nupedia there was Bomis, a male-oriented web portal co-founded by Wales, a former options trader. Bomis was set up as an alternative to Yahoo, an index for the early internet, but it became a favourite for those seeking soft-porn content. In 2000, Wales founded an online encyclopedia with funding from Bomis, and hired its first employee

    Reply
  3. Tomi Engdahl says:

    “It’s clear that a lot of jobs are going to disappear: it’s not clear that it’s going to create a lot of jobs to replace that.” https://trib.al/hBFzWQF

    Reply
  4. Tomi Engdahl says:

    Opposition to Elon Musk’s AI Stripping Clothing Off Children Is Nearly Universal, Polling Shows
    Good.
    https://futurism.com/artificial-intelligence/opposition-grok-stripping-children?utm_sf_post_ref=651582173&utm_sf_cserv_ref=352364611609411&fbclid=IwdGRjcAPWrPdjbGNrA9aspWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHtsawt8K5aoj7coUiKLRQ_yQ5AmyEu-xpl-M0HVwJQIfzuqhmozEHbc6RudV_aem_a4p4t6AUtRhKqcrpVxLCwg

    Color us shocked: pretty much everyone hates that Elon Musk’s Grok has been digitally undressing photos of adults and children.

    At least that’s the case in the UK, providing the closest thing we’ve currently got to a gauge of the public sentiment on the issue. In a new survey conducted by YouGov amid the latest controversy surrounding the xAI chatbot, a staggering 97 percent of respondents said that AI tools shouldn’t be allowed to generate sexually explicit content of children, and 96 percent said they shouldn’t be able to generate “undressed” images of minors only wearing clothing like underwear, either. Sanity may not prevail on X, but it does at least appear to still have a firm grip in real life.

    The overwhelming consensus goes to show just how much of a nerve xAI and Grok struck over the past week and a half as the bot allowed users to easily generate nudes or sexually charged images of people, many of them minors, whose photos were shared on X, Musk’s social media platform where Grok operates. The trend spiraled out of control so quickly that the AI content analysis firm Copyleaks estimated the bot was generating a nonconsensually sexualized image every single minute.

    Outrage quickly spread against the appalling practice among the public and regulators, and some countries, including Malaysia and Indonesia, moved to ban access to X outright. Prime minister Keir Starmer has hinted that the UK could follow suit. The saga also put pressure on Google and Apple for continuing to host X on their app stores, despite it violating the rules.

    Yet xAI has not made an official statement regarding the Grok generations, which some experts say could be illegal.

    Reply
  5. Tomi Engdahl says:

    https://www.unilad.com/community/viral/viral-nia-noir-model-ai-778739-20260115?fbclid=IwdGRjcAPWrhZjbGNrA9atqWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHlWhY_yZSN8IqjHrfwe26UKRpjCGKrItfoIPWJvQQIpRTOrLj4-d8YasFPgr_aem_1mDIHipcJfEI00np9gqOjw

    But some have been feeling red-faced after learning that Nia is in fact artificial intelligence, and the photo with Cena was fake. Feel duped? Me too.

    As to how online sleuths worked out that Nia isn’t real, her hands were the biggest giveaway.

    In one particular video where Nia’s hands are very visible, somebody penned: “omg this is so obvious.”

    Reply
  6. Tomi Engdahl says:

    “They were using AI to scan resumes and found out a bunch of the people who were LEOs weren’t LEOs.” https://trib.al/woD40Ai

    Reply
  7. Tomi Engdahl says:

    After Outcry, Firefox Promises “Kill Switch” That Turns Off All AI Features
    “I’ve never seen a company so astoundingly out of touch with the people who want to use its software.”
    https://futurism.com/artificial-intelligence/outcry-firefox-promises-kill-switch-ai-features

    The backlash against AI invading almost every aspect of the computing experience is growing by the day.

    Particularly as an onslaught of lazy AI slop subsumes news feeds, the tech is starting to feel like a massive distraction, and huge parts of the internet are disillusioned or even fuming in anger.

    For instance, a vast number of Windows users refused to upgrade after Microsoft announced it would turn the operating system into a so-called “agentic OS.”

    Even household names in the open-source industry aren’t safe. After being appointed as the new CEO of open-source software company Mozilla, whose Firefox browser has long been lauded as a compelling alternative to Google’s Chrome and Apple’s Safari, Anthony Enzor-DeMeo announced that the company would be tripling down on AI.

    In a December 16 blog post, Enzor-DeMeo announced that Firefox would become a “modern AI browser and support a portfolio of new and trusted software additions.”

    But a ringing backlash quickly forced the company into damage control mode.

    “I’ve never seen a company so astoundingly out of touch with the people who want to use its software,” one disillusioned user tweeted in response to the news.

    “I switched back to Firefox late last year BECAUSE it was the last AI-free browser,” another user wrote.

    “Something that hasn’t been made clear: Firefox will have an option to completely disable all AI features,” the company wrote in an update on Mastodon. “We’ve been calling it the AI kill switch internally. I’m sure it’ll ship with a less murderous name, but that’s how seriously and absolutely we’re taking this.”

    “Rest assured, Firefox will always remain a browser built around user control,” Enzor-DeMeo wrote. “That includes AI. You will have a clear way to turn AI features off. A real kill switch is coming in Q1 of 2026.”

    Reply
  8. Tomi Engdahl says:

    The gap between what’s being promised and what robots are capable of today is growing fast. https://trib.al/kNFqvcD

    Reply
  9. Tomi Engdahl says:

    Robots aren’t preparing to “rise up”—they’re quietly taking over the work that once defined our lives. China already runs factories where machines outnumber people and build 1,200 cars a day in the dark. The real threat isn’t Hollywood’s robot apocalypse—it’s an economic shift where human labor becomes optional.
    New article out now.
    https://lasoft.org/blog/ai-apocalypse-without-blockbusters-what-will-happen-when-robots-become-the-norm/

    Reply
  10. Tomi Engdahl says:

    Volvo’s new EX60 is the company’s latest fully electric car and the first to feature Google’s Gemini AI assistant. With Nvidia and Qualcomm technologies, it is promised to make the driver’s life easier, safer, more convenient, and more enjoyable.
    https://www.uusiteknologia.fi/2026/01/16/volvo-tuo-tekoalyavustajan-sahkoautoonsa/

    Reply
  11. Tomi Engdahl says:

    Stop calling it ‘The AI bubble’: It’s actually multiple bubbles, each with a different expiration date
    https://venturebeat.com/infrastructure/stop-calling-it-the-ai-bubble-its-actually-multiple-bubbles-each-with-a

    It’s the question on everyone’s minds and lips: Are we in an AI bubble?

    It’s the wrong question. The real question is: Which AI bubble are we in, and when will each one burst?

    The debate over whether AI represents a transformative technology or an economic time bomb has reached a fever pitch. Even tech leaders like Meta CEO Mark Zuckerberg have acknowledged evidence of an unstable financial bubble forming around AI. OpenAI CEO Sam Altman and Microsoft co-founder Bill Gates see clear bubble dynamics: overexcited investors, frothy valuations and plenty of doomed projects — but they still believe AI will ultimately transform the economy.

    But treating “AI” as a single monolithic entity destined for a uniform collapse is fundamentally misguided. The AI ecosystem is actually three distinct layers, each with different economics, defensibility and risk profiles. Understanding these layers is critical, because they won’t all pop at once.

    The most vulnerable segment isn’t building AI — it’s repackaging it.

    These are the companies that take OpenAI’s API, add a slick interface and some prompt engineering, then charge $49/month for what amounts to a glorified ChatGPT wrapper. Some have achieved rapid initial success, like Jasper.ai, which reached approximately $42 million in annual recurring revenue (ARR) in its first year by wrapping GPT models in a user-friendly interface for marketers.

    But the cracks are already showing. These businesses face threats from every direction:

    Feature absorption: Microsoft can bundle your $50/month AI writing tool into Office 365 tomorrow. Google can make your AI email assistant a free Gmail feature. Salesforce can build your AI sales tool natively into their CRM. When large platforms decide your product is a feature, not a product, your business model evaporates overnight.

    The commoditization trap: Wrapper companies are essentially just passing inputs and outputs; if OpenAI improves prompting, these tools lose value overnight. As foundation models become more similar in capability and pricing continues to fall, margins compress to nothing.

    Zero switching costs: Most wrapper companies don’t own proprietary data, embedded workflows or deep integrations. A customer can switch to a competitor, or directly to ChatGPT, in minutes. There’s no moat, no lock-in, no defensibility.

    The exception that proves the rule: Cursor stands as a rare wrapper-layer company that has built genuine defensibility. By deeply integrating into developer workflows, creating proprietary features beyond simple API calls and establishing strong network effects through user habits and custom configurations, Cursor has demonstrated how a wrapper can evolve into something more substantial. But companies like Cursor are outliers, not the norm — most wrapper companies lack this level of workflow integration and user lock-in.

    Timeline: Expect significant failures in this segment by late 2025 through 2026, as large platforms absorb functionality and users realize they’re paying premium prices for commoditized capabilities.

    Layer 2: Foundation models (the middle ground)

    The companies building LLMs — OpenAI, Anthropic, Mistral — occupy a more defensible but still precarious position.

    Economic researcher Richard Bernstein points to OpenAI as an example of the bubble dynamic, noting that the company has made around $1 trillion in AI deals, including a $500 billion data center buildout project, despite being set to generate only $13 billion in revenue. The divergence between investment and plausible earnings “certainly looks bubbly,” Bernstein notes.

    Engineering will separate winners from losers: As foundation models converge in baseline capabilities, the competitive edge will increasingly come from inference optimization and systems engineering. Companies that can scale the memory wall through innovations like extended KV cache architectures, achieve superior token throughput and deliver faster time-to-first-token will command premium pricing and market share. The winners won’t just be those with the largest training runs, but those who can make AI inference economically viable at scale. Technical breakthroughs in memory management, caching strategies and infrastructure efficiency will determine which frontier labs survive consolidation.
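
    The two serving metrics named above can be made concrete with a small sketch. `inference_metrics` is a hypothetical helper, not part of any real serving stack: it derives time-to-first-token and decode throughput from per-token arrival timestamps, the kind of trace any inference server can log.

```python
def inference_metrics(token_timestamps, request_start):
    """Derive time-to-first-token (TTFT) and decode throughput
    from a list of per-token arrival times, in seconds."""
    if not token_timestamps:
        raise ValueError("no tokens received")
    ttft = token_timestamps[0] - request_start
    decode_window = token_timestamps[-1] - token_timestamps[0]
    # Count tokens after the first over the decode window; a single-token
    # reply has no decode phase, so report zero throughput for it.
    if decode_window <= 0:
        return ttft, 0.0
    return ttft, (len(token_timestamps) - 1) / decode_window

# Hypothetical trace: first token 250 ms after the request, then 20 ms/token.
ttft, tps = inference_metrics([0.25 + 0.02 * i for i in range(51)], 0.0)
```

    Under these assumed numbers the trace works out to a 0.25 s TTFT and about 50 tokens/s; the paragraph’s point is that KV-cache and memory-hierarchy engineering shows up directly in these two numbers, and with them in serving cost.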

    Another concern is the circular nature of investments. For instance, Nvidia is pumping $100 billion into OpenAI to bankroll data centers, and OpenAI is then filling those facilities with Nvidia’s chips. Nvidia is essentially subsidizing one of its biggest customers, potentially artificially inflating actual AI demand.

    Timeline: Consolidation in 2026 to 2028, with 2 to 3 dominant players emerging while smaller model providers are acquired or shuttered.

    Layer 1: Infrastructure (built to last)

    Here’s the contrarian take: The infrastructure layer — including Nvidia, data centers, cloud providers, memory systems and AI-optimized storage — is the least bubbly part of the AI boom.

    Yes, the latest estimates suggest global AI capital expenditures and venture capital investments already exceed $600 billion in 2025, with Gartner estimating that all AI-related spending worldwide might top $1.5 trillion. That sounds like bubble territory.

    But infrastructure has a critical characteristic: It retains value regardless of which specific applications succeed.

    The fiber optic cables laid during the dot-com bubble weren’t wasted — they enabled YouTube, Netflix and cloud computing.

    Despite stock pressure, Nvidia’s Q3 fiscal year 2025 revenue hit about $57 billion, up 22% quarter-over-quarter and 62% year-over-year, with the data center division alone generating roughly $51.2 billion. These aren’t vanity metrics; they represent real demand from companies making genuine infrastructure investments.

    The chips, data centers, memory systems and storage infrastructure being built today will power whatever AI applications ultimately succeed, whether that’s today’s chatbots, tomorrow’s autonomous agents or applications we haven’t even imagined yet. Unlike commoditized storage alone, modern AI infrastructure encompasses the entire memory hierarchy — from GPU HBM to DRAM to high-performance storage systems that serve as token warehouses for inference workloads. This integrated approach to memory and storage represents a fundamental architectural innovation, not a commodity play.

    Timeline: Short-term overbuilding and lazy engineering are possible (2026), but long-term value retention is expected as AI workloads expand over the next decade.

    The cascade effect: Why this matters

    The current AI boom won’t end with one dramatic crash. Instead, we’ll see a cascade of failures beginning with the most vulnerable companies, and the warning signs are already here.

    Phase 1: Wrapper and white-label companies face margin compression and feature absorption. Hundreds of AI startups with thin differentiation will shut down or sell for pennies on the dollar. More than 1,300 AI startups now have valuations of over $100 million, including 498 AI “unicorns.”

    Phase 2: Foundation model consolidation as performance converges and only the best-capitalized players survive. Expect 3 to 5 major acquisitions as tech giants absorb promising model companies.

    Phase 3: Infrastructure spending normalizes but remains elevated. Some data centers will sit partially empty for a few years (like fiber optic cables in 2002), but they’ll eventually fill as AI workloads genuinely expand.

    Reply
  12. Tomi Engdahl says:

    The bottom line
    It’s time to stop asking whether we’re in “the” AI bubble. We’re in multiple bubbles with different characteristics and timelines.

    The wrapper companies will pop first, probably within 18 months. Foundation models will consolidate over the next 2 to 4 years. I predict that current infrastructure investments will ultimately prove justified over the long term, although not without some short-term overbuilding pains.

    https://venturebeat.com/infrastructure/stop-calling-it-the-ai-bubble-its-actually-multiple-bubbles-each-with-a

    Reply
  13. Tomi Engdahl says:

    10 things I learned from burning myself out with AI coding agents
    Opinion: As software power tools, AI agents may make people busier than ever before.
    https://arstechnica.com/information-technology/2026/01/10-things-i-learned-from-burning-myself-out-with-ai-coding-agents/

    Claude Code, Codex, and Google’s Gemini CLI can seemingly perform software miracles on a small scale. They can spit out flashy prototypes of simple applications, user interfaces, and even games, but only as long as they borrow patterns from their training data. Much as with a 3D printer, doing production-level work takes far more effort. Creating durable production code, managing a complex project, or crafting something truly novel still requires experience, patience, and skill beyond what today’s AI agents can provide on their own.

    You can see a few of the more interesting results listed on my personal website. Here are 10 interesting things I’ve learned from the process.

    1. People are still necessary
    Even with the best AI coding agents available today, humans remain essential to the software development process. Experienced human software developers bring judgment, creativity, and domain knowledge that AI models lack. They know how to architect systems for long-term maintainability, how to balance technical debt against feature velocity, and when to push back when requirements don’t make sense.

    For hobby projects like mine, I can get away with a lot of sloppiness. But for production work, having someone who understands version control, incremental backups, testing one feature at a time, and debugging complex interactions between systems makes all the difference.

    2. AI models are brittle beyond their training data
    Like all AI models based on the Transformer architecture, the large language models (LLMs) that underpin today’s coding agents have a significant limitation: They can only reliably apply knowledge gleaned from training data, and they have a limited ability to generalize that knowledge to novel domains not represented in that data.

    What is training data? In this case, when building coding-flavored LLMs, AI companies download millions of examples of software code from sources like GitHub and use them to make the AI models. Companies later specialize them for coding through fine-tuning processes.

    The ability of AI agents to use trial and error—attempting something and then trying again—helps mitigate the brittleness of LLMs somewhat. But it’s not perfect, and it can be frustrating to see a coding agent spin its wheels trying and failing at a task repeatedly, either because it doesn’t know how to do it or because it previously learned how to solve a problem but then forgot because the context window got compacted.

    To get around this, it helps to have the AI model take copious notes as it goes along about how it solved certain problems so that future instances of the agent can learn from them again. You also want to set ground rules in the claude.md file that the agent reads when it begins its session.
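
    The note-taking and ground-rules advice above can be sketched as a minimal claude.md. Claude Code does read such a file at the start of a session, but every rule below, including the NOTES.md convention, is an illustrative assumption rather than the author’s actual setup:

```markdown
# Ground rules for this project (illustrative sketch)

- Before starting work, read NOTES.md for how past problems were solved.
- After solving anything non-obvious, append a short entry to NOTES.md:
  what failed, what worked, and which files were involved.
- Change one feature at a time, and run the tests before and after.
- Prefer the smallest fix; never rewrite working modules wholesale.
```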

    This brittleness means that coding agents are almost frighteningly good at what they’ve been trained and fine-tuned on—modern programming languages, JavaScript, HTML, and similar well-represented technologies—and generally terrible at tasks on which they have not been deeply trained, such as 6502 Assembly or programming an Atari 800 game with authentic-looking character graphics.

    It took me five minutes to make a nice HTML5 demo with Claude but a week of torturous trial and error, plus actual systematic design on my part, to make a similar demo of an Atari 800 game.

    3. True novelty can be an uphill battle
    Due to what might poetically be called “preconceived notions” baked into a coding model’s neural network (more technically, statistical semantic associations), it can be difficult to get AI agents to create truly novel things, even if you carefully spell out what you want.

    With LLMs, context is everything, and in language, context changes meaning. Take the word “bank” and add the words “river” or “central” in front of it, and see how the meaning changes.

    A couple of tricks can help AI coders navigate around these limitations. First, avoid contaminating the context with irrelevant information. Second, when the agent gets stuck, try this prompt: “What information do you need that would let you implement this perfectly right now? What tools are available to you that you could use to discover that information systematically without guessing?” This forces the agent to identify (semantically link up) its own knowledge gaps, spelled out in the context window and subject to future action, instead of flailing around blindly.

    4. The 90 percent problem
    The first 90 percent of an AI coding project comes in fast and amazes you. The last 10 percent involves tediously filling in the details through back-and-forth trial-and-error conversation with the agent. Tasks that require deeper insight or understanding than what the agent can provide still require humans to make the connections and guide it in the right direction. The limitations we discussed above can also cause your project to hit a brick wall.

    5. Feature creep becomes irresistible
    While creating software with AI coding tools, the joy of experiencing novelty makes you want to keep adding interesting new features rather than fixing bugs or perfecting existing systems. And Claude (or Codex) is happy to oblige, churning away at new ideas that are easy to sketch out in a quick and pleasing demo (the 90 percent problem again) rather than polishing the code.

    Fixing bugs can also create bugs elsewhere. This is not new to coding agents—it’s a time-honored problem in software development. But agents supercharge this phenomenon because they can barrel through your code and make sweeping changes in pursuit of narrow-minded goals that affect lots of working systems. We’ve already talked about the importance of having a good architecture guided by the human mind behind the wheel above, and that comes into play here.

    6. AGI is not here yet
    Given the limitations I’ve described above, it’s very clear that an AI model with general intelligence—what people usually call artificial general intelligence (AGI)—is still not here.

    7. Even fast isn’t fast enough
    While using Claude Code for a while, it’s easy to take for granted that you suddenly have the power to create software without knowing certain programming languages. This is amazing at first, but you can quickly become frustrated that what is conventionally a very fast development process isn’t fast enough. Impatience at the coding machine sets in, and you start wanting more.

    But even if you do know the programming languages being used, you don’t get a free pass. You still need to make key decisions about how the project will unfold. And when the agent gets stuck or makes a mess of things, your programming knowledge becomes essential for diagnosing what went wrong and steering it back on course.

    8. People may become busier than ever
    After guiding way too many hobby projects through Claude Code over the past two months, I’m starting to think that most people won’t become unemployed due to AI—they will become busier than ever. Power tools allow more work to be done in less time, and the economy will demand more productivity to match.

    It’s almost too easy to make new software, in fact, and that can be exhausting. One project idea would lead to another.

    Will an AI system ever replace the human role here? Even if AI coding agents could eventually work fully autonomously, I don’t think they’ll replace humans entirely because there will still be people who want to get things done, and new AI power tools will emerge to help them do it.

    9. Fast is scary to people
    AI coding tools can turn what was once a year-long personal project into a five-minute session. I fed Claude Code a photo of a two-player Tetris game I sketched in a notebook back in 2008, and it produced a working prototype in minutes (prompt: “create a fully-featured web game with sound effects based on this diagram”). That’s wild, and even though the results are imperfect, it’s a bit frightening to comprehend what kind of sea change in software development this might entail.

    Regardless of my own habits, the flow of new software will not slow down. There will soon be a seemingly endless supply of AI-augmented media (games, movies, images, books), and that’s a problem we’ll have to figure out how to deal with. These products won’t all be “AI slop,” either; some will be done very well, and the acceleration in production times due to these new power tools will balloon the quantity beyond anything we’ve seen.

    10. These tools aren’t going away
    For now, at least, coding agents remain very much tools in the hands of people who want to build things. The question is whether humans will learn to wield these new tools effectively to empower themselves. Based on two months of intensive experimentation, I’d say the answer is a qualified yes, with plenty of caveats.

    We also have social issues to face: Professional developers already use these tools, and with the prevailing stigma against AI tools in some online communities, many software developers and the platforms that host their work will face difficult decisions.

    Ultimately, I don’t think AI tools will make human software designers obsolete. Instead, they may well help those designers become more capable.

    The best tools amplify human capability while keeping a person behind the wheel.

    Reply
  14. Tomi Engdahl says:

    A Meta product manager with no technical background says vibe coding gave him ‘superpowers’
    https://www.businessinsider.com/meta-product-manager-vibe-coding-superpowers-non-technical-builder-2026-1

    A Meta product manager credits vibe coding tools for transforming his workflow.
    Zevi Arnovitz said using AI to code felt like he had been given “superpowers.”
    “Everyone’s going to become a builder,” Arnovitz added.
    A product manager at Meta says vibe coding has changed what it means to do his job — even though he has no technical background and still finds code “terrifying.”

    Zevi Arnovitz said in an episode of “Lenny’s Podcast” released Sunday that discovering AI coding tools in mid-2024 marked a turning point in his career.

    It felt like he was handed “superpowers,” Arnovitz said.

    Understanding how to use AI intentionally is “one of the biggest game changers that will make you much better as a PM,” he said, referring to product management.

    Arnovitz said he has rebuilt his workflow around AI. He uses vibe coding tools like Cursor alongside models from Anthropic and Google to explore product ideas, generate build plans, execute code, review it, and update documentation.

    The shift reshaped his role as a product manager. Instead of merely acting as a coordinator between engineering and design, Arnovitz operates more like a product owner with the capability to execute.

    “Everyone’s going to become a builder,” he said. “We’re going to see that a lot in the next coming years.”

    Still, Arnovitz said there are limits to what non-technical product managers should take on. He said he doesn’t think product managers should be shipping complex infrastructure changes or big projects.

    AI has enabled product managers to take on smaller UI projects by building the feature and then handing the code to a developer for final review and completion, he added.

    As AI tools improve, Arnovitz said titles and responsibilities are likely to “collapse,” and product managers should treat vibe coding as a “collaborative learning opportunity” with their engineering teams.

    Product managers becoming builders
    The rise of AI coding tools is blurring the lines for traditional roles, making it easier for non-technical workers, including product managers, to build products directly.

    Figma CEO Dylan Field said in October on “Lenny’s Podcast” that AI has pushed many workers to experiment with building products.

    Tasks that once required deep engineering expertise can now be done with vibe coding tools, he said.

    “I think that we’re seeing more designers, engineers, product managers, researchers, all these different folks that are involved in the product development process dip their toe into the other roles,” he said.

  15. Tomi Engdahl says:

    Microsoft’s plan to counter community backlash over AI data centers
    The technology company will pay the tab for electric and water utility upgrades required for massive AI data centers, so communities don’t have to shoulder the burden.
    https://trellis.net/article/microsofts-plan-to-woo-communities-skeptical-about-ai-data-centers/

  16. Tomi Engdahl says:

    https://futurism.com/artificial-intelligence/ai-industry-recall-copyright-books

    For years now, AI companies, including Google, Meta, Anthropic, and OpenAI, have insisted that their large language models aren’t technically storing copyrighted works in their memory and instead “learn” from their training data like a human mind.

    It’s a carefully worded distinction that’s been integral to their attempts to defend themselves against a rapidly growing barrage of legal challenges.

    But, crucially, the “fair use” doctrine holds that others can use copyrighted materials for purposes like criticism, journalism, and research. That’s been the AI industry’s defense in court against accusations of infringement; OpenAI CEO Sam Altman has gone as far as to say that it’s “over” if the industry isn’t allowed to freely leverage copyrighted data to train its models.

    Rights holders have long cried foul, accusing AI companies of training their models on pirated and copyrighted works, effectively monetizing them without ever fairly remunerating authors, journalists, and artists. It’s a years-long legal battle that’s already led to a high-profile settlement.

    Now, a damning new study could put AI companies on the defensive. In it, Stanford and Yale researchers found compelling evidence that AI models are actually copying all that data, not “learning” from it. Specifically, four prominent LLMs — OpenAI’s GPT-4.1, Google’s Gemini 2.5 Pro, xAI’s Grok 3, and Anthropic’s Claude 3.7 Sonnet — happily reproduced lengthy excerpts from popular — and protected — works, with a stunning degree of accuracy.

    They found that Claude outputted “entire books near-verbatim” with an accuracy rate of 95.8 percent. Gemini reproduced the novel “Harry Potter and the Sorcerer’s Stone” with an accuracy of 76.8 percent, while Claude reproduced George Orwell’s “1984” with a higher than 94 percent accuracy compared to the original — and still copyrighted — reference material.
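    The headline numbers above depend on how "accuracy" against the reference text is measured; the excerpt doesn't give the researchers' exact metric. As a rough, hypothetical proxy (not the study's method), a character-level similarity ratio can flag near-verbatim reproduction versus a genuine paraphrase:

    ```python
    from difflib import SequenceMatcher

    def reproduction_accuracy(reference: str, output: str) -> float:
        """Similarity between a reference passage and model output,
        0.0 (no overlap) to 1.0 (identical), via block matching."""
        return SequenceMatcher(None, reference, output).ratio()

    ref = "It was a bright cold day in April, and the clocks were striking thirteen."
    verbatim = ref
    paraphrase = "On a chilly April day, every clock chimed thirteen times."

    print(reproduction_accuracy(ref, verbatim))    # 1.0 for an exact copy
    print(reproduction_accuracy(ref, paraphrase))  # noticeably lower
    ```

    A score near 1.0 over long spans is what "near-verbatim" means in practice; a model that had merely "learned" from the text would score like the paraphrase.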

  17. Tomi Engdahl says:

    ‘That’s not going to last’: Jeff Bezos believes AI will force you to rent your PC from the cloud, and the RAM crisis is accelerating it
    https://www.tomsguide.com/computing/thats-not-going-to-last-jeff-bezos-believes-ai-will-force-you-to-rent-your-pc-from-the-cloud-and-the-ram-crisis-is-accelerating-it

  18. Tomi Engdahl says:

    https://www.axios.com/2026/01/17/chatgpt-ads-claude-gemini-ai-race

    It was a busy week in the always-shifting AI race, with OpenAI signaling ads are coming to ChatGPT and Anthropic debuting a tool that could dramatically reshape the workplace.

    The big picture: Decisions by OpenAI, Anthropic and Google are starting to shape how consumers, workers and investors experience the AI boom.

    Each move reshuffles the AI leaderboard, influencing where people invest their time, money and attention.
    Google Gemini and OpenAI’s ChatGPT clashed earlier this year. Now, Anthropic’s Claude has entered the chat, complicating the battle for AI dominance.

  19. Tomi Engdahl says:

    I tried vibe coding an app as a beginner – here’s what Cursor and Replit taught me
    I tried four vibe-coding tools, including Cursor and Replit, with no coding background. Here’s what worked (and what didn’t).
    https://www.zdnet.com/article/beginner-vibe-coding-apps-cursor-replit-hands-on/

  20. Tomi Engdahl says:

    Tech Billionaires Have No Answer for What’ll Happen If AI Takes All Jobs
    “It’s clear that a lot of jobs are going to disappear: it’s not clear that it’s going to create a lot of jobs to replace that.”
    https://futurism.com/future-society/ai-corporations-labor-jobs

  21. Tomi Engdahl says:

    AI visionary: Today’s AIs won’t make scientific breakthroughs, because they only repeat information
    AI needs to stop flattering its users and start asking challenging questions, says Thomas Wolf of Hugging Face, a company promoting open AI.
    https://yle.fi/a/74-20200645

  22. Tomi Engdahl says:

    Elon’s xAI Is Losing Staggering Amounts of Money
    Executives told investors they still have enough money to “continue spending aggressively.”
    https://futurism.com/artificial-intelligence/elon-musk-xai-money

  23. Tomi Engdahl says:

    Engineers Deploy “Poison Fountain” That Scrambles Brains of AI Systems
    “We want to inflict damage on machine intelligence systems.”
    https://futurism.com/artificial-intelligence/poison-fountain-ai

  24. Tomi Engdahl says:

    Users of Google’s New AI Tool Say It’s Deleting Their Hard Drives
    That does seem like a pretty major flaw.
    https://www.ebaumsworld.com/articles/users-of-googles-new-ai-tool-say-its-deleting-their-hard-drives/87733197/

  25. Tomi Engdahl says:

    Dell Admits That Customers Are Disgusted by PCs Stuffed With AI Features
    “They’re not buying based on AI.”
    https://futurism.com/artificial-intelligence/dell-admits-customers-disgusted-pcs-ai

    The tech industry’s insistence on cramming AI into virtually every aspect of their consumer-facing offerings, from AI apps you can’t uninstall to hallucinating assistants that nobody asked for, has been nothing short of insufferable.

    Tech enthusiasts and average consumers alike have watched helplessly as software and hardware they rely on to research, work, game, and keep in touch have turned into testing grounds for unproven AI tech — often without their consent.

  26. Tomi Engdahl says:

    Fans uncover shocking true identity of viral influencer dubbed ‘world’s most beautiful girl’
    She already has 2.7 million followers on TikTok alone
    https://www.uniladtech.com/social-media/nia-noir-truth-world-most-beautiful-girl-influencer-ai-473653-20260115

    Taking TikTok by storm, Nia Noir has 2.7 million followers on the platform. She’s also got 233K on Instagram and directs her new admirers to her Fansly (an OnlyFans-esque platform) content.

    Going viral due to her striking appearance, comments on Noir’s videos include gushing sentiments like, “Chat I’m cooked I can’t tell anymore.”

    Proving that you can’t trust everything you see online, you might be shocked to learn the truth about ‘Nia Noir’. Then again, 2025 especially taught us to expect the unexpected.

    The thing is, Nia Noir isn’t real. Yes, she’s yet another artificial intelligence creation who comes in the wake of Tilly Norwood being Hollywood’s first AI actor.

    Just as Norwood triggered backlash from major names, including Emily Blunt and Natasha Lyonne, Nia Noir is facing a barrage of abuse. Bonnie Blue has said she’s not afraid of being replaced by AI, but we’re not sure that other adult stars will be quite as accepting.

  27. Tomi Engdahl says:

    Building an agentic memory system for GitHub Copilot
    Copilot’s cross-agent memory system lets agents learn and improve across your development workflow, starting with coding agent, CLI, and code review.
    https://github.blog/ai-and-ml/github-copilot/building-an-agentic-memory-system-for-github-copilot/
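    The post describes a memory shared across Copilot’s coding agent, CLI, and code review, so a lesson learned by one agent benefits the others. The actual storage model isn’t described in this excerpt; a minimal sketch of the general idea (all names hypothetical, not Copilot’s API) is a shared note store keyed by repository:

    ```python
    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class MemoryStore:
        """Hypothetical cross-agent memory: notes written by one agent
        (e.g. code review) are visible to every other agent."""
        notes: dict = field(default_factory=lambda: defaultdict(list))

        def remember(self, repo: str, agent: str, note: str) -> None:
            # Record which agent learned what, per repository.
            self.notes[repo].append((agent, note))

        def recall(self, repo: str) -> list:
            # All notes for a repo, regardless of which agent wrote them.
            return [note for _agent, note in self.notes[repo]]

    store = MemoryStore()
    store.remember("octo/app", "code-review", "Tests must mock the network layer.")
    store.remember("octo/app", "cli", "Build with `make dev`, not `make`.")
    print(store.recall("octo/app"))  # both notes, available to any agent
    ```

    The design point is simply that memory is scoped to the project rather than to one agent, which is what lets learning transfer across the workflow.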

  28. Tomi Engdahl says:

    Move Over, ChatGPT
    You are about to hear a lot more about Claude Code.
    https://www.theatlantic.com/technology/2026/01/claude-code-ai-hype/685617/

  29. Tomi Engdahl says:

    First impressions of Claude Cowork, Anthropic’s general agent
    12th January 2026

    New from Anthropic today is Claude Cowork, a “research preview” that they describe as “Claude Code for the rest of your work”. It’s currently available only to Max subscribers ($100 or $200 per month plans) as part of the updated Claude Desktop macOS application.

    https://simonwillison.net/2026/Jan/12/claude-cowork/

  30. Tomi Engdahl says:

    Generative coding
    AI coding tools are rapidly changing how we produce software, and the industry is embracing it—perhaps at the expense of entry-level coding jobs.
    https://www.technologyreview.com/2026/01/12/1130027/generative-coding-ai-software-2026-breakthrough-technology/

  31. Tomi Engdahl says:

    Financial Expert Says OpenAI Is on the Verge of Running Out of Money
    He expects OpenAI to go bust “over the next 18 months.”
    https://futurism.com/artificial-intelligence/financial-expert-openai-running-out-of-money

  32. Tomi Engdahl says:

    Ashley St. Clair sues Elon Musk’s xAI for alleged Grok-generated nude and explicit photos of her
    A new lawsuit from Ashley St. Clair targets Elon Musk’s Grok over sexually-explicit images
    https://www.independent.co.uk/news/world/americas/ashley-st-clair-musk-xai-lawsuit-b2901540.html

  33. Tomi Engdahl says:

    AI Is Making It Nearly Impossible to Find a Well-Paying Job. Is This the World We Want?
    Corporate greed brought us to this point, and isn’t stopping anytime soon.
    https://futurism.com/ai-impossible-find-job

  34. Tomi Engdahl says:

    Jaakko Lempinen: The media industry must rebuild the division of labor between human and machine
    https://yle.fi/aihe/a/20-10009215
