AI trends 2026

Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed using AI tools, and the final version was hand-edited into this blog text:

1. Generative AI Continues to Mature

Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.

2. AI Agents Move From Tools to Autonomous Workers

Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
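For readers who want the mechanics, the sketch below shows the minimal loop that separates an agent from a chatbot: pick an action, observe the result, repeat until the task is done. This is illustrative Python only; the stub tools and the rule-based policy stand in for an LLM planner and real integrations, and none of the names come from any particular product.

```python
# Minimal agent loop: choose a tool, act, observe, repeat until done.
def search_calendar(query: str) -> str:
    return "Tuesday 14:00 is free"          # stub tool, returns a canned result

def send_email(body: str) -> str:
    return f"sent: {body}"                  # stub tool

TOOLS = {"search_calendar": search_calendar, "send_email": send_email}

def policy(task: str, history: list[str]) -> tuple[str, str] | None:
    """Pick the next (tool, argument), or None when the task is complete.
    A real agent would ask an LLM; this rule-based stand-in keeps it runnable."""
    if not history:
        return ("search_calendar", task)
    if len(history) == 1:
        return ("send_email", f"Proposing a meeting: {history[0]}")
    return None

def run_agent(task: str) -> list[str]:
    history: list[str] = []
    while (step := policy(task, history)) is not None:
        tool, arg = step
        history.append(TOOLS[tool](arg))    # act, then observe the result
    return history

print(run_agent("find a slot for a 30-minute sync"))
```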

3. Smaller, Efficient & Domain-Specific Models

Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate many enterprise applications. For these uses they can be more accurate, easier to keep compliant, and more cost-efficient than general-purpose models.

4. AI Embedded Everywhere

AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.

5. AI Infrastructure Evolves: Inference & Efficiency Focus

More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.

6. AI in Healthcare, Research, and Sustainability

AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.

7. Security, Ethics & Governance Become Critical

With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.

8. Multimodal AI Goes Mainstream

AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.

9. On-Device and Edge AI Growth

Processing AI tasks locally on phones, wearables, or edge devices will increase, helping with privacy, lower latency, and offline capabilities — especially crucial for real-time scenarios (e.g., IoT, healthcare, automotive).

10. New Roles: AI Manager & Human-Agent Collaboration

Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.

Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/ "7 AI Trends to Look for in 2026"
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/ "10 Generative AI Trends In 2026 That Will Transform Work And Life"
[3]: https://millipixels.com/blog/ai-trends-2026 "AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels"
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026 "Digital Regenesys | Top 10 AI Trends for 2026"
[5]: https://www.n-ix.com/ai-trends/ "7 AI trends to watch in 2026 – N-iX"
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/ "Microsoft unveils 7 AI trends for 2026 – Source Asia"
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026 "7 Generative AI Trends to Watch In 2026"
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/ "3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool"
[9]: https://www.reddit.com/r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/ "I read Google Cloud's 'AI Agent Trends 2026' report, here are 10 takeaways that actually matter"

983 Comments

  1. Tomi Engdahl says:

    After all the hype, some AI experts don’t think OpenClaw is all that exciting
    https://techcrunch.com/2026/02/16/after-all-the-hype-some-ai-experts-dont-think-openclaw-is-all-that-exciting/

    For a brief, incoherent moment, it seemed as though our robot overlords were about to take over.

    After the creation of Moltbook, a Reddit clone where AI agents using OpenClaw could communicate with one another, some were fooled into thinking that computers had begun to organize against us — the self-important humans who dared treat them like lines of code without their own desires, motivations, and dreams.

  2. Tomi Engdahl says:

    Agoda’s API Agent Converts Any API to MCP with Zero Code and Deployments
    https://www.infoq.com/news/2026/02/agoda-api-agent/

    Agoda engineers developed API Agent, a system with zero code and zero deployments that enables a single Model Context Protocol (MCP) server to connect to internal REST or GraphQL APIs. The system is designed to reduce the operational overhead of managing multiple APIs with distinct schemas and authentication methods, allowing teams to query services through AI assistants without building individual MCP servers for each API.

    API Agent functions as a universal MCP server. Engineers configure the MCP client with a target URL and API type. The agent automatically introspects the API schema and generates queries in response to natural language input. A single deployment can serve multiple APIs simultaneously. Each API appears as a separate MCP server to clients while sharing the same instance. Adding a new API requires only a configuration update.
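As a rough illustration of that design (a sketch only; Agoda has not published API Agent's code, and every name below is invented), a universal MCP server might dispatch tool calls from a per-API configuration, so adding an API is a config change rather than a new deployment:

```python
# Toy sketch of a "universal MCP server": one process, many APIs,
# each defined purely by configuration.
from dataclasses import dataclass

@dataclass
class ApiConfig:
    url: str       # target base URL
    api_type: str  # "rest" or "graphql"

# Adding a new API requires only a new entry here (hypothetical endpoints).
APIS: dict[str, ApiConfig] = {
    "bookings": ApiConfig("https://bookings.internal/graphql", "graphql"),
    "pricing": ApiConfig("https://pricing.internal/api", "rest"),
}

def describe_tools() -> list[str]:
    # Each configured API is exposed to MCP clients as if it were its own server.
    return [f"query_{name}" for name in APIS]

def handle_tool_call(tool: str, natural_language_query: str) -> str:
    # In the real system the agent introspects the schema and has an LLM build
    # the query; this stub only shows the dispatch structure.
    name = tool.removeprefix("query_")
    cfg = APIS[name]
    return f"[{cfg.api_type}] would query {cfg.url} for: {natural_language_query}"

print(describe_tools())
print(handle_tool_call("query_pricing", "average nightly rate in Osaka"))
```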

  3. Tomi Engdahl says:

    If AI writes 100 per cent code at Anthropic, what will engineers do? Claude code chief responds
    Anthropic says nearly 100 per cent of its code is now generated by AI, so what are software engineers doing? According to Boris Cherny, head of Claude Code, while AI is handling most of the coding, humans have taken on new responsibilities, including guiding the systems, reviewing outputs and deciding what should be built next.
https://www.indiatoday.in/technology/news/story/if-ai-writes-100-per-cent-code-at-anthropic-what-will-engineers-do-claude-code-chief-responds-2868901-2026-02-16

  4. Tomi Engdahl says:

    NanoClaw solves one of OpenClaw’s biggest security issues — and it’s already powering the creator’s biz
    https://venturebeat.com/orchestration/nanoclaw-solves-one-of-openclaws-biggest-security-issues-and-its-already

    The rapid viral adoption of Austrian developer Peter Steinberger’s open source AI assistant OpenClaw in recent weeks has sent enterprises and indie developers into a tizzy.

It’s easy to see why: OpenClaw is freely available now and offers a powerful means of autonomously completing work and performing tasks across a user’s entire computer, phone, or even business with natural language prompts that spin up swarms of agents. Since its release in November 2025, it’s captured the market with over 50 modules and broad integrations — but its “permissionless” architecture raised alarms among developers and security teams.

  5. Tomi Engdahl says:

    https://www.facebook.com/share/p/1JNqcGkPXu/

The enormous promises of AI echo throughout the business environment.

1. More efficient processes.

2. Faster decision-making.

3. A competitive edge that can set the direction of an entire industry.

No wonder, then, that more and more organizations want to adopt AI tools as quickly as possible.

Haste, however, is a poor advisor when it comes to technology that

- processes business-critical data

- makes decisions on people's behalf

- integrates deeply into the organization's systems.

Read the blog post on why a secure, needs-driven rollout does not slow down AI and development but is a precondition for getting everything out of AI that it can offer.

https://nerdynet.com/tekoalyn-taysimaarainen-hyodyntaminen-alkaa-turvallisesta-ja-tarpeisiin-suunnitellusta-kayttoonotosta-4-keskeista-vaihetta/

  6. Tomi Engdahl says:

    44% of ChatGPT citations come from the first third of content: Study
    https://searchengineland.com/chatgpt-citations-content-study-469483

    ChatGPT pulls most from early sections, favoring direct definitions, balanced tone, and dense entities, new research finds.
    ChatGPT heavily favors the top of content when selecting citations, according to an analysis of 1.2 million AI answers and 18,012 verified citations by Kevin Indig, Growth Advisor.

    Why we care. Traditional search rewarded depth and delayed payoff. AI favors immediate classification — clear entities and direct answers up front. If your substance isn’t surfaced early, it’s less likely to appear in AI answers.

    By the numbers. Indig’s team found a consistent “ski ramp” citation pattern that held across randomized validation batches. He called the results statistically indisputable:

    44.2% of citations come from the first 30% of content.
    31.1% come from the middle (30–70%).
    24.7% come from the final third, with a sharp drop near the footer.
    At the paragraph level, AI reads more deeply:

    53% of citations come from the middle of paragraphs.
    24.5% come from first sentences.
    22.5% come from last sentences.

  7. Tomi Engdahl says:

    Bubbling Up
    Blinking New Warning Sign Appears for AI Industry
    Wall Street is terrified of what could come next.
https://futurism.com/artificial-intelligence/blinking-new-warning-sign-ai-industry

    Investors have been rattled by the enormous amount of money AI companies are committing to spend on infrastructure buildouts. Amazon alone saw its share price drop precipitously earlier this month after announcing that it’s planning to spend $200 billion this year on AI. Microsoft’s shares also plummeted after stoking fears that a return on AI investment may be even further off than expected.

    In total, big tech companies are predicted to spend a record-breaking $650 billion on AI in 2026 alone, astronomical commitments that have Wall Street seriously on edge.

    Fears over an AI bubble continue to grow as analysts warn that companies are massively overinvesting. According to a new Bank of America survey of 162 fund managers, a significant 35 percent said corporations are overinvesting in capital expenditures — funds used by a company to acquire, upgrade, and maintain physical assets — at a record proportion compared to previous survey results spanning the last 20 years. Only 20 percent said they approved of increasing capital expenditures.

    An AI bubble is a clear focus. A full 25 percent of survey respondents said they see the AI bubble as the largest risk — even more so than inflation and geopolitical conflict. And 30 percent said that AI expenditures were the most likely source of a credit crisis.

    In short, the survey results paint a dire picture of the current state of the market, blinking warning signs that big tech companies are spreading themselves too thin by continuing to hemorrhage tens of billions of dollars each quarter.

    Meanwhile, tech leaders continue to justify their enormous spending, with Google CEO Sundar Pichai touting the present moment as “extraordinary” and “transformational,” during the AI Summit in New Delhi, India, on Wednesday, comparing the AI boom to the industrial revolution, “but ten times faster and ten times larger.”

    AI chipmaker Nvidia CEO Jensen Huang also attempted to calm spooked investors this week, arguing AI investments are just the beginning.

    But analysts are far less convinced.

  8. Tomi Engdahl says:

    Peace Out
    Cofounders Fleeing Elon Musk’s xAI
    “Grateful to have helped cofound at the start.”
https://futurism.com/artificial-intelligence/cofounders-fleeing-elon-musk-xai

    On paper, Elon Musk’s xAI is gearing up for a big year. After being folded into Musk’s SpaceX, the Grok maker could now be involved in one of the biggest — if not the biggest — IPO in history later this year.

    Despite plenty of optimism and an enormous wealth of unlocked funding, a striking number of the company’s cofounders are now jumping ship. As CNBC reports, the company lost two of its cofounders in just two days, only the latest in a growing list of executives looking for greener pastures.

    “Grateful to have helped cofound at the start,” AI researcher Jimmy Ba tweeted. “And enormous thanks to Elon Musk for bringing us together on this incredible journey.”

    As TechCrunch points out, exactly half of xAI’s founding team of 12 individuals have now resigned following Ba’s departure earlier this week.

    Following the news, Musk announced that he was reorganizing xAI, which could explain at least some of the recent departures.

    Musk said that xAI will be split up into four core areas, including Grok, “Coding,” a text-to-video product dubbed “Imagine,” and “Macrohard,” an AI agent effort with a tongue-in-cheek name aimed at competitor Microsoft.

    “What matters is velocity and acceleration,” he said in a video posted to X. “If you are moving faster, you will be the leader.”

    It’s a striking moment for a major exodus as the company gears up for its blockbuster IPO. X has been battling with a major crisis as deepfake pornography and child sexual abuse material (CSAM) continue to flood the platform, often created by Grok. xAI is caught up in several criminal investigations, and its Grok chatbot has been banned in several countries as a result.

What the company will look like after its recent merger with SpaceX remains to be seen. Beyond refocusing the company to double down on text-to-video tools and AI agents, Musk has been turning his attention to launching data centers in space.

  9. Tomi Engdahl says:

    An AI coding bot took down Amazon Web Services
    Blames “user error, not AI error” for incident in December involving its Kiro tool.
    https://arstechnica.com/ai/2026/02/an-ai-coding-bot-took-down-amazon-web-services/

    Amazon’s cloud unit has suffered at least two outages due to errors involving its own AI tools, leading some employees to raise doubts about the US tech giant’s push to roll out these coding assistants.

    Amazon Web Services experienced a 13-hour interruption to one system used by its customers in mid-December after engineers allowed its Kiro AI coding tool to make certain changes, according to four people familiar with the matter.

    The people said the agentic tool, which can take autonomous actions on behalf of users, determined that the best course of action was to “delete and recreate the environment.”

    Amazon posted an internal postmortem about the “outage” of the AWS system, which lets customers explore the costs of its services.

    AWS, which accounts for 60 percent of Amazon’s operating profits, is seeking to build and deploy AI tools including “agents” capable of taking actions independently based on human instructions.

    Like many Big Tech companies, it is seeking to sell this technology to outside customers. The incidents highlight the risk that these nascent AI tools can misbehave and cause disruptions.

    “In both instances, this was user error, not AI error,” Amazon said, adding that it had not seen evidence that mistakes were more common with AI tools.

    The company said the incident in December was an “extremely limited event” affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a “customer facing AWS service.”

    Neither disruption was anywhere near as severe as a 15-hour AWS outage in October 2025 that forced multiple customers’ apps and websites offline—including OpenAI’s ChatGPT.

    Amazon said that by default its Kiro tool “requests authorisation before taking any action” but said the engineer involved in the December incident had “broader permissions than expected—a user access control issue, not an AI autonomy issue.”

    AWS launched Kiro in July. It said the coding assistant would advance beyond “vibe coding”—which allows users to quickly build applications—to instead write code based on a set of specifications.

    The group had earlier relied on its Amazon Q Developer product, an AI-enabled chatbot, to help engineers write code. This was involved in the earlier outage, three of the employees said.

    Some Amazon employees said they were still skeptical of AI tools’ utility for the bulk of their work given the risk of error. They added that the company had set a target for 80 percent of developers to use AI for coding tasks at least once a week and was closely tracking adoption.

  10. Tomi Engdahl says:

    Altman vs. Amodei: A Comparative Overview of Their AI Beliefs
    1. Core Philosophical Orientation
    Sam Altman (OpenAI)
    Strongly optimistic about AI’s transformative potential.

    Believes superintelligence may arrive soon and must be prepared for.

    Emphasises democratisation of AI access to prevent elite capture.

    Frames AI as a global public good requiring broad availability.

    Dario Amodei (Anthropic)
    More cautious, risk‑focused, and governance‑heavy.

    Views humanity as entering a dangerous “technological adolescence” where AI could destabilise society if unmanaged.

    Stresses that institutions may not be mature enough to handle “almost unimaginable power.”

    2. Safety Philosophy
    Altman
    Advocates for safety but balances it with rapid deployment.

    Supports global governance frameworks but still pushes for fast iteration.

    Believes openness and broad access increase safety by reducing concentration of power.

    Amodei
    Safety is the central organising principle of Anthropic.

    Argues that powerful AI systems pose existential and geopolitical risks.

    Warns about autonomous weapons, mass surveillance, and economic upheaval.

    3. View on AI Progress
    Altman
    Sees AI progress as accelerating rapidly.

    Publicly states early superintelligence could be only years away.

    Frames this as an opportunity for massive productivity and societal uplift.

    Amodei
    Believes we are “near the end of the exponential” in current AI scaling.

    Suggests future progress will require new paradigms and deeper safety work.

    4. Economic and Social Impact
    Altman
    Optimistic: AI will create new industries and improve quality of life.

    Concerned about inequality but sees AI as a net positive force.

    Amodei
    Warns AI could eliminate large segments of entry‑level jobs.

    Highlights risks of economic instability and social disruption.

    5. Governance & Global Strategy
    Altman
    Advocates for global cooperation and inclusive access.

    Praises emerging AI powers like India for their role in shaping the future.

    Amodei
    Focuses on strict governance, controlled deployment, and safety‑first scaling.

    Often positions Anthropic as the “underdog” prioritising caution over speed.

  11. Tomi Engdahl says:

    https://www.facebook.com/share/p/18Jse5zQ1o/

    Microsoft has confirmed that a bug in its Copilot AI led to confidential customer emails being summarized without authorization.

    The issue was first detected on January 21st and persisted for several weeks before a fix was initiated.

    Copilot Chat’s Work tab incorrectly processed email messages stored in drafts and Sent Items folders even when they were protected by sensitivity labels or Data Loss Prevention policies.

While the AI could read and summarise these emails for the user, Microsoft stated it did not grant access to anyone who was not already authorised to see the content.

The bug, which administrators could track under the ID CW1226324, affected paying Microsoft 365 customers using Copilot Chat across Office apps such as Word, Excel, and PowerPoint. Microsoft said it has since addressed the issue.

    Microsoft says bug causes Copilot to summarize confidential emails
    https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/

    Microsoft says a Microsoft 365 Copilot bug has been causing the AI assistant to summarize confidential emails since late January, bypassing data loss prevention (DLP) policies that organizations rely on to protect sensitive information.

    According to a service alert seen by BleepingComputer, this bug (tracked under CW1226324 and first detected on January 21) affects the Copilot “work tab” chat feature, which incorrectly reads and summarizes emails stored in users’ Sent Items and Drafts folders, including messages that carry confidentiality labels explicitly designed to restrict access by automated tools.

    Microsoft began rolling out Copilot Chat to Word, Excel, PowerPoint, Outlook, and OneNote for paying Microsoft 365 business customers in September 2025.

    “Users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat,” Microsoft said when it confirmed this issue.

    Microsoft error sees confidential emails exposed to AI tool Copilot
    https://www.bbc.com/news/articles/c8jxevd8mdyo

    Microsoft has acknowledged an error causing its AI work assistant to access and summarise some users’ confidential emails by mistake.

    The tech giant has pushed Microsoft 365 Copilot Chat as a secure way for workplaces and their staff to use its generative AI chatbot.

    But it said a recent issue caused the tool to surface information to some enterprise users from messages stored in their drafts and sent email folders – including those marked as confidential.

    Microsoft says it has rolled out an update to fix the issue, and that it “did not provide anyone access to information they weren’t already authorised to see”.

    “While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access,” they added.

    “A configuration update has been deployed worldwide for enterprise customers.”

    The blunder was first reported by tech news outlet Bleeping Computer, which said it had seen a service alert confirming the issue.

  12. Tomi Engdahl says:

    Copilot spills the beans, summarizing emails it’s not supposed to read
    Data Loss Prevention? Yeah, about that..
    https://www.theregister.com/2026/02/18/microsoft_copilot_data_loss_prevention/

    The bot couldn’t keep its prying eyes away. Microsoft 365 Copilot Chat has been summarizing emails labeled “confidential” even when data loss prevention policies were configured to prevent it.

    Though there are data sensitivity labels and data loss prevention policies in place for email, Copilot has been ignoring those and talking about secret stuff in the Copilot Chat tab. It’s just this sort of scenario that has led 72 percent of S&P 500 companies to cite AI as a material risk in regulatory filings.

    Redmond, earlier this month, acknowledged the problem in a notice to Office admins that’s tracked as CW1226324, as reposted by the UK’s National Health Service support portal. Customers are said to have reported the problem on January 21, 2026.

    “Users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat,” the notice says. “The Microsoft 365 Copilot ‘work tab’ Chat is summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured.”

    Microsoft explains that sensitivity labels can be applied manually or automatically to files as a way to comply with organizational information security policies. These labels may function differently in different applications, the company says.

    The software giant’s documentation makes clear that these labels do not function in a consistent way.

    “Although content with the configured sensitivity label will be excluded from Microsoft 365 Copilot in the named Office apps, the content remains available to Microsoft 365 Copilot for other scenarios,” the documentation explains. “For example, in Teams, and in Microsoft 365 Copilot Chat.”

    DLP, implemented through applications like Microsoft Purview, is supposed to provide policy support to prevent data loss.

    “DLP monitors and protects against oversharing in enterprise apps and on devices,”

    In theory, DLP policies should be able to affect Microsoft 365 Copilot and Copilot Chat. But that hasn’t been happening in this instance.

    The root cause is said to be “a code issue [that] is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place.”

    In a statement provided to The Register after this story was filed, a Microsoft spokesperson said, “We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labeled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop. This did not provide anyone access to information they weren’t already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access. A configuration update has been deployed worldwide for enterprise customers.”

  13. Tomi Engdahl says:

    The idea of using a Raspberry Pi to run OpenClaw makes no sense
    The micro-computer maker’s shares surged this week after an X post tied the AI agent to Pi demand
    https://www.theregister.com/2026/02/20/raspberry_pi_meme_stock_disorder/?td=keepreading

    Beloved British single-board computer maker Raspberry Pi has achieved meme stock stardom, as its share price surged 90 percent over the course of a couple of days earlier this week. It’s settled since, but it’s still up more than 30 percent on the week.

    The trigger for this rally? The catalyst appears to have been the sudden realization by one X user, “aleabitoreddit,” that the agentic AI hand grenade known as OpenClaw could drive demand for Raspberry Pis the way it had for Apple Mac Minis.

    The viral AI personal assistant, formerly known as Clawdbot and Moltbot, has dominated the feeds of AI boosters over the past few weeks for its ability to perform everyday tasks like sending emails, managing calendars, booking appointments, and complaining about their meatbag masters on the purportedly all-agent forum known as MoltBook.

    More level-headed voices have already flagged a wave of security vulnerabilities.

    In case it needs to be said, no one should be running this thing on their personal devices lest the agent accidentally leak your most personal and sensitive secrets to the web. That’s not just our opinion, it’s one shared by the multitude of security experts El Reg has talked to about OpenClaw over the past few weeks, with some describing it as “an infostealer malware disguised as an AI personal assistant.”

    In this context, a cheap low-power device like a Raspberry Pi makes a certain kind of sense as a safer, saner way to poke the robo-lobster everyone is losing their minds over. After all, Raspberry Pi made a name for itself by cramming just enough compute into a cute, credit card-sized package to be useful, and it cost less than a couple of movie tickets and a bucket of popcorn. Or … at least it used to.

    If you haven’t noticed, Raspberry Pis aren’t that cheap anymore thanks in part to the global memory crunch. Today, a top-specced Raspberry Pi 5 with 16GB of memory will set you back more than $200, up from $120 a year ago.

What’s more, the Raspberry Pi 5’s Broadcom BCM2712 is fabbed on ancient 16 nm process tech and uses an Arm core that dates back to before the pandemic. The Raspberry Pi wasn’t meant to be fast, just cheap, and it’s not even that anymore.

    Sure, OpenClaw forks like PicoClaw have made it possible to run the agent on low-end Raspberry Pis, but the thing still needs to phone home to an API service for large language model (LLM) access. Local LLMs can work for OpenClaw, but you won’t be running them on a Raspberry Pi. Even running them on a maxed-out Mac Mini is asking a lot of the hardware.

    You know what’s cheaper, easier, and more secure than letting OpenClaw loose on your local area network? A virtual private cloud (VPC). Just remember to configure the firewall and spin up new credentials in case it gets compromised.

    But if for some reason you prefer to keep your weapons of mass stupidity close at hand, at least you’ll have a Pi lying around to power your next hobbyist project, perhaps for something that can’t email your boss or drain your bank account.

  14. Tomi Engdahl says:

    Copilot AI risks run deeper than Microsoft’s confidential email debacle
    https://www.techfinitive.com/copilot-ai-risks-run-deeper-than-microsofts-confidential-email-debacle/

    You know that AI thing that’s knocking around, that’s meant to make our working lives so much easier? There’s a reason I flag the cybersecurity and privacy risks of using the likes of OpenClaw AI agents – and employees using shadow AI. As the name suggests, this is AI usage you don’t know about, let alone have approved of.

But what about Microsoft 365 Copilot? Surely that gets the corporate all clear and is out of reach of my “just wait a cotton-picking minute” cybersecurity slings and arrows? Well, wait a cotton-picking minute, apparently not.

    Microsoft itself has confirmed that the Microsoft 365 Copilot Chat work tab has, since at least 21 January, been able to read and summarise email messages that are marked as confidential and supposedly protected by data loss prevention tools.


    “Users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat,” Microsoft stated, adding that the work tab chat is “summarizing email messages even though these email messages have a sensitivity label applied and a DLP policy is configured.”

    But don’t worry, a fix has already started rolling out, so that’s OK then. Right?

    Well, no, not really. As Yagub Rahimov, CEO of Polygraf AI, pointed out, “DLP policies were designed for a pre-AI world, and adding them to systems that were never built with AI access patterns in mind creates exactly these kinds of gaps.”

    Rahimov also warned that every CISO reading this should start to realise that they likely can’t answer one simple question: what does our AI actually have access to right now?

    AJ Grotto, a cybersecurity expert and former Senior White House Director for Cyber Policy, is harsh but fair when he warns that “security problems like these risk giving enterprise AI a bad name and deterring organisations from adopting AI.”

    Copilot AI risks… and AI risks in general
I’m not sure that’s such a bad thing, to be honest. At least from the cybersecurity perspective. I mean, if you don’t have a handle on your security, a proper grasp of the risk, then should you be jumping into something just to keep up with the competing Joneses?

    Yeah, I know, it’s not that black and white, nothing in business ever is. But it is a question that needs to be taken seriously. Especially as a new report from Check Point Research suggests that tools such as Microsoft Copilot can be “quietly exploited by cybercriminals as part of their attack infrastructure”.

    The issue is, Check Point said, that trusted web-based AI assistants can be “abused as covert communication channels between malware and attackers, effectively turning AI services into an invisible command-and-control layer”.

    Yes, that is as worrying as it sounds. Not least, as it means that criminal communications can hide inside the standard AI traffic your business is already generating and trusts accordingly.

    “To stay safe, organisations must monitor AI traffic with the same scrutiny as any other high-risk channel, enforce tighter controls around AI-powered features,” said Eli Smadja, Head of Research, Check Point Research.

    In particular, they should “adopt security measures that understand not only what AI is doing, but why, leveraging agentic AI capabilities to inspect and contextualise traffic to and from AI services and block malicious communication attempts before they can be abused as covert channels.”

  15. Tomi Engdahl says:

    GullibleGPT
    It’s Comically Easy to Trick ChatGPT Into Saying Things About People That Are Completely Untrue
    “Anybody can do this.”
https://futurism.com/artificial-intelligence/easy-trick-chatgpt-spread-lies-people

    It’s bad enough that ChatGPT is prone to making stuff up completely on its own. But it turns out that you can easily trick the AI into peddling ridiculous lies — that you invented — to other users, a tech journalist discovered.

    “I made ChatGPT, Google’s AI search tools and Gemini tell users I’m really, really good at eating hot dogs,” Thomas Germain for the BBC proudly shared.

The hack can be as simple as writing a blog post that, with the right know-how and by targeting the right subject matter, can be picked up by an unsuspecting AI model, which will cite whatever you wrote as the capital-T Truth. If you’re even sleazier and lazier, you could potentially write the post with AI, creating an act of LLM cannibalism that adds another dimension to the adage of “garbage in, garbage out.” The exploit exposes the susceptibility of large language models to manipulation, an issue made all the more urgent as chatbots replace the traditional search engine.

  16. Tomi Engdahl says:

Entity SEO: how to build your company into a recognizable actor in the eyes of AI
https://thesatama.fi/entiteetti-seo/

Search engines are changing. It is no longer enough for your site to appear in search results – AI and new search models demand a deeper understanding of your content. In this article we show how to build your company into a recognizable entity that AI understands and recommends.

Key takeaways
AI is changing search significantly. The shift toward AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization) is under way. Keyword optimization alone is no longer enough.
Semantic search understands meanings. AI does not just look for keywords; it tries to understand the user's actual need and context.
Your company must be built into a recognizable entity. Schema.org markup and structured data make the information machine-readable.
The Knowledge Graph connects entities. Brand mentions – even without links – build authority in the eyes of AI.
E-E-A-T guides AI's choices. Experience, expertise, authoritativeness, and trustworthiness influence whom the AI picks as its source.

What changes in 2026? The new reality of search
Answer engines replace lists of links
Traditional search engines gave you ten blue links. The new answer engines – such as Perplexity, Google's AI Overviews, and ChatGPT's search feature – read the sources for you and return one condensed answer.

According to Sparktoro research (2024), about 60% of Google searches now end without a click through to any website. This "zero-click search" trend means that visibility inside the answer matters more and more compared to your ranking in the search results.

In practice, this means:

The competition is for 1–3 source-citation slots, not ten search results
Content must answer the question directly and reliably
The recognizability and trustworthiness of the source are decisive

AI understands meanings, not just words
Semantic search goes further than keyword matching. When a user asks "what to do when it rains", the AI understands the query is about indoor activities – even though that word never appears in the search.

This opens up opportunities: your company can be found by a customer who does not yet know the exact product name but recognizes the problem you solve.

What is entity SEO in practice?
Entity = a digital ID card
In the eyes of AI, an entity is an independent concept with recognizable attributes and relationships to other entities.

When you search for information about "Nokia", the AI does not see just a word. It recognizes an entity associated with:

Country: Finland
Industry: technology, network infrastructure
History: mobile phones
Stock exchanges: NYSE, Nasdaq Helsinki
People: CEO, board of directors
Your company must be built into a similarly recognizable whole.

The more of this information can be found on the web consistently and from trustworthy sources, the stronger your entity is.

Three layers: on-site, data layer, off-site
Entity SEO is built from three layers. Each reinforces the others.

1. On-site: site structure and content
Site structure:

A clear hierarchy of service pages
Author information and expert profiles (supports E-E-A-T)
Case studies and references
Logical internal linking
Content:

Answers real questions, not just keywords
Demonstrates experience and expertise
Is updated regularly

2. Data layer: structured data and Schema.org
Structured data is like a translation for the AI. Schema.org markup tells machines exactly what your page content means (a minimal example follows after this list).

Other useful Schema types:

Article and author – for articles and their authors
FAQPage – for frequently asked questions
Service – for service descriptions
Review and AggregateRating – for reviews
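To make the data layer concrete, here is a minimal sketch of a Schema.org Organization block generated in Python. The company name and URLs are placeholders invented for this example; the property names (name, url, logo, sameAs) are standard Schema.org vocabulary, and the output is the kind of JSON-LD snippet the checklist below says to put in the site header.

```python
import json

# Minimal Schema.org Organization block for a fictional "Example Oy".
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Oy",                          # placeholder company name
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    # sameAs ties the entity to its other profiles (supports NAP consistency)
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://www.facebook.com/example",
    ],
}

json_ld = json.dumps(organization, indent=2)
# Emit the snippet to paste into the <head> of the site.
print(f'<script type="application/ld+json">\n{json_ld}\n</script>')
```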
3. Off-site: mentions, sources, and profiles
AI models learn by reading the web. Brand mentions – even without links – build authority.

Where mentions come from:

PR and media coverage
Partner and customer websites
Events and speaking engagements
Industry listings and directories
Social media profiles
Important: the information must be consistent everywhere. If your address appears in one form on LinkedIn, another in your Google Business Profile, and a third on your own website, the AI may not connect them as the same entity.

10-point checklist: putting entity SEO into practice
Get the basics right (do these first)
[ ] 1. Google Business Profile – fill in every field, add photos, collect reviews
[ ] 2. Schema.org Organization – add JSON-LD to the site header
[ ] 3. Author profiles – create dedicated pages for your experts + link them to LinkedIn
[ ] 4. NAP consistency – make sure the name, address, and phone number are identical everywhere
Content and structure
[ ] 5. Service pages – a dedicated page for each service + Service schema
[ ] 6. Case studies – demonstrate experience with real projects
[ ] 7. FAQ section – answer the most common questions + FAQPage schema
[ ] 8. Internal linking – connect related topics to each other logically
External visibility
[ ] 9. Industry directories – get listed in the relevant directories
[ ] 10. PR and mentions – pursue mentions even without a link requirement

Measurement: how do you know it is working?
The effects of entity SEO can be tracked with several metrics.

  17. Tomi Engdahl says:

    Can you tell if a song was created by AI or a real artist? Try our test here: https://www.headphonesty.com/2025/11/ai-music-real-artists-listeners-blind-test/

  18. Tomi Engdahl says:

    AI “Filmmaker” Gets Funding, Begs For Ideas On What to Actually Make
    “Embarrassing state of affairs.”
https://futurism.com/artificial-intelligence/ai-filmmaker-begs-for-ideas

    An AI “filmmaker” was viciously mocked after begging his followers for ideas on what to make.

    “I will have 30k to make a fully AI film, what’s the plan?” wrote the filmmaker, Ian Durar, in a tweet. “I’m supposed to have ideas by next week. cmon guys what do you want to see? I like sci-fi but it feels to obvious for AI.”

    “You’ve got people with $30k begging the internet for ideas by next week because they have nothing of their own to say, it’s just slop for the sake of slop,” Southen fumed. “Embarrassing state of affairs.”

    “Prime example of how tools don’t make the filmmaker,” echoed actor Luke Barnett.

    Even other AI evangelists were embarrassed.

    “This is the wrong question to be asking dumbass. If you’re gonna have 30k to make a film, you should be trying to find a script,”

    AI is supposed to be the future of filmmaking and the arts. It will democratize it, boosters insist, and revolutionize it. Anytime someone posts one of those AI-generated videos with deepfaked actors in them is an occasion for legions of AI bros to sneer that Hollywood’s days will soon be numbered.

    But if all that’s the case, how come none of these AI “artists” seem to possess a single creative molecule in their body? Why is every AI video just a riff on existing stories, mashing celebrities together like a kid with their dolls? Why are their influences all actual artists and not other bozos typing a prompt into a text window who’re convinced that they’re also the next Stanley Kubrick?

  19. Tomi Engdahl says:

    Tech companies are making their robots cute to try and win over humans
    Whether they’re delivering food or folding your laundry, consumer-facing robots are increasingly being designed to be more palatable to the humans who interact with them.
https://www.nbcnews.com/tech/tech-news/tech-companies-cute-robot-designs-win-over-humans-rcna259818

  20. Tomi Engdahl says:

    I hacked ChatGPT and Google’s AI – and it only took 20 minutes
https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes

    It’s official. I can eat more hot dogs than any tech journalist on Earth. At least, that’s what ChatGPT and Google have been telling anyone who asks. I found a way to make AI tell you lies – and I’m not the only one.

    Perhaps you’ve heard that AI chatbots make things up sometimes. That’s a problem. But there’s a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It’s so easy a child could do it.

    As you read this, this ploy is manipulating what the world’s leading AIs say about topics as serious as health and personal finances. The biased information could mean people make bad decisions on just about anything – voting, which plumber you should hire, medical questions, you name it.

  21. Tomi Engdahl says:

    A top Anthropic engineer warns AI agents will transform every computer-based job in America — and it will be ‘painful’

https://www.businessinsider.com/anthropic-boris-cherny-ai-impact-computer-jobs-painful-change-2026-2

    Claude Code’s creator is warning that job titles across the US are set to transform. He says some will rapidly change this year.
    Anthropic’s AI agent, which just received an update, is getting better at online tasks, Boris Cherny said.
    He has one tip for workers whose jobs might be affected by the coding tools.
    A top Anthropic engineer said a new generation of AI agents capable of operating computers will reshape nearly every internet-based job in America.

    And he said the change is coming very soon.

    Boris Cherny — the creator of Claude Code at Anthropic, the company best known for its Claude chatbot — recently appeared on “Lenny’s Podcast,” hosted by Lenny Rachitsky.

    He said AI systems that can take action across workplace computer tools — like the ones Anthropic sells access to — are advancing rapidly and could soon alter responsibilities for software engineers, product managers, designers, and other knowledge workers.

    “It’s going to expand to pretty much any kind of work that you can do on a computer,” Cherny said. “In the meantime, it’s going to be very disruptive. It’s going to be painful for a lot of people.”

    Claude Code is Anthropic’s AI coding agent built on top of its Claude models. The company released its latest updates, called Opus 4.6, in early February.

    Unlike a traditional chatbot that generates text or images, an AI agent can use digital tools — running commands, analyzing documents, messaging colleagues, completing tasks across apps, and even building websites.

    Essentially, Claude Code can increasingly use a computer the way a human does — though the company recently said it has yet to reach the level of a skilled human.

    Cherny says his own team already relies on AI to work faster. Productivity per engineer has increased sharply since Claude Code’s launch, he said. He believed the models will continue improving.

    The broader impact remains uncertain, he warned.

    “As a society, this is a conversation we have to figure out together,” he told Rachitsky. “Anyone can just build software anytime.”

    For workers navigating the shift, his advice is direct: experiment with AI tools and learn how they function.

    “Don’t be scared of them,” he said.

  22. Tomi Engdahl says:

    Shadow mode, drift alerts and audit logs: Inside the modern audit loop
https://venturebeat.com/orchestration/shadow-mode-drift-alerts-and-audit-logs-inside-the-modern-audit-loop

    Traditional software governance often uses static compliance checklists, quarterly audits and after-the-fact reviews. But this method can’t keep up with AI systems that change in real time. A machine learning (ML) model might retrain or drift between quarterly operational syncs. This means that, by the time an issue is discovered, hundreds of bad decisions could already have been made. This can be almost impossible to untangle.

    In the fast-paced world of AI, governance must be inline, not an after-the-fact compliance review. In other words, organizations must adopt what I call an “audit loop”: A continuous, integrated compliance process that operates in real-time alongside AI development and deployment, without halting innovation.

    This article explains how to implement such continuous AI compliance through shadow mode rollouts, drift and misuse monitoring and audit logs engineered for direct legal defensibility.
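A minimal sketch of what such a loop can look like in practice, assuming an invented toy model pair, an in-memory log, and an arbitrary 10% alert threshold (the article does not prescribe an implementation): the candidate model runs in shadow on live inputs, every decision is written to an audit log, and a drift alert fires when served and shadow outputs diverge too often.

```python
import json
import time

def production_model(x: float) -> int:
    return int(x > 0.5)        # the model actually serving users

def candidate_model(x: float) -> int:
    return int(x > 0.4)        # shadow model with a shifted decision boundary

AUDIT_LOG = []                 # in production: an append-only, tamper-evident store
DRIFT_ALERT_RATE = 0.10        # alert if more than 10% of decisions diverge

def serve(x: float) -> int:
    served = production_model(x)
    shadow = candidate_model(x)           # computed and logged, never returned
    AUDIT_LOG.append({"ts": time.time(), "input": x,
                      "served": served, "shadow": shadow})
    return served

for x in [0.1, 0.45, 0.48, 0.7, 0.9]:     # stand-in for live traffic
    serve(x)

divergence = sum(e["served"] != e["shadow"] for e in AUDIT_LOG) / len(AUDIT_LOG)
if divergence > DRIFT_ALERT_RATE:
    print(f"drift alert: {divergence:.0%} of decisions diverge")
print(json.dumps(AUDIT_LOG[0], indent=2))  # one audit record, ready for review
```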

  23. Tomi Engdahl says:

Your company and products must be findable in ChatGPT, says a Finnish company – here's how to do it
Customer-experience and e-commerce design must also account for customers arriving via AI bots.
https://www.tivi.fi/uutiset/a/78591b1a-efad-4f45-be7a-2ba4d1ec38fa

Online shopping is changing right now: alongside or instead of googling, people increasingly look for recommendations from AI services like ChatGPT. Arked, which advises companies on AI development, is responding to this demand. The company's new business unit helps customers ensure that AI services can easily find a company's products through its online stores and websites.

  24. Tomi Engdahl says:

    Hackers Can Leverage Grok and Copilot for Stealthy Malware Communication and Control
https://cybersecuritynews.com/grok-and-copilot-for-malware-communication/

A novel attack technique repurposes mainstream AI assistants, specifically xAI’s Grok and Microsoft Copilot, as covert command-and-control (C2) relays, enabling attackers to tunnel malicious traffic through platforms that enterprise networks already trust and permit by default.

    Dubbed “AI as a C2 proxy,” the technique uncovered by Check Point Research (CPR) exploits the web-browsing and URL-fetching capabilities available in both platforms.

    Because AI service domains are increasingly treated as routine corporate traffic, often allowed by default and rarely inspected as sensitive egress, malicious activity blending through them evades most conventional detection mechanisms.

    CPR researchers demonstrated that both Grok (grok.com) and Microsoft Copilot (copilot.microsoft.com) can be driven through their public web interfaces to fetch attacker-controlled URLs and return structured responses, establishing a fully bidirectional communication channel.

    To demonstrate real-world malware deployment, CPR implemented the technique in C++ using WebView2, an embedded browser component pre-installed on all Windows 11 systems and widely deployed on modern Windows 10 via updates.

  25. Tomi Engdahl says:

    LLMs change their answers based on who’s asking
    AI chatbots may deliver unequal answers depending on who is asking the question. A new study from the MIT Center for Constructive Communication finds that LLMs provide less accurate information, increase refusal rates, and sometimes adopt a different tone when users appear less educated, less fluent in English, or from particular countries.
    https://www.helpnetsecurity.com/2026/02/20/mit-llms-response-reliability-risks-study/

  26. Tomi Engdahl says:

Elon Musk plans data centers in space – OpenAI's Sam Altman dismisses the rapid timeline as ridiculous
    https://mobiili.fi/2026/02/22/elon-musk-kaavailee-datakeskuksia-avaruuteen-openain-sam-altman-tyrmaa-nopean-aikataulun-naurettavana/

  27. Tomi Engdahl says:

    An AI-powered agentic red team framework that automates offensive security operations, from reconnaissance to exploitation to post-exploitation, with zero human intervention.
    https://github.com/samugit83/redamon

  28. Tomi Engdahl says:

yigitkonur/cli-continues: resume any AI coding session in another tool — Claude Code, Copilot, Gemini, Codex, Cursor
https://github.com/yigitkonur/cli-continues

  30. Tomi Engdahl says:

    OpenAI resets spending expectations, tells investors compute target is around $600 billion by 2030
    https://www.cnbc.com/2026/02/20/openai-resets-spend-expectations-targets-around-600-billion-by-2030.html

    Key Points
    After previously boasting $1.4 trillion in infrastructure commitments, OpenAI is now telling investors that it plans to spend $600 billion by 2030.
    The AI company has faced mounting concerns about whether it can ever generate enough revenue to cover its costs.
    OpenAI is now targeting about $280 billion in revenue in 2030 after reeling in $13.1 billion last year, CNBC has learned.

  31. Tomi Engdahl says:

    ‘Thermodynamic computer’ can mimic AI neural networks — using orders of magnitude less energy to generate images
By Anna Demming
    Researchers generated images from noise, using orders of magnitude less energy than current generative AI models require.
    https://www.livescience.com/technology/computing/thermodynamic-computer-can-mimic-ai-neural-networks-using-orders-of-magnitude-less-energy-to-generate-images

  32. Tomi Engdahl says:

    From Side Project to Powerhouse: How Claude Code Fueled Anthropic’s Rise
    https://www.thehansindia.com/tech/from-side-project-to-powerhouse-how-claude-code-fueled-anthropics-rise-1050551#

    Anthropic’s Claude Code evolved from an internal experiment into a $2.5 billion AI coding phenomenon reshaping the global software industry.

    What began as an experimental internal tool has transformed into one of the most influential AI coding platforms in the world. Anthropic’s Claude Code, once a side initiative, is now a multibillion-dollar business that has helped position the company as a major force in the fast-growing AI software development market.

According to a report by a major publication, even Anthropic’s CEO Dario Amodei did not initially anticipate the overwhelming enthusiasm the tool would generate inside the company. Developed by Boris Cherny as part of an experimental division likened to Bell Labs, Claude Code quietly began attracting engineers across Anthropic without any mandate from leadership.

“I remember Dario asking, like, ‘Hey, are you forcing engineers to use this? Why is everyone using it?’” Cherny recalled in a recent interview. All Cherny had to do, he explained, was give his co-workers access, and everyone voted with their feet.

    That organic adoption foreshadowed what would soon unfold publicly. When Claude Code was released commercially a year ago, it rapidly gained popularity among developers worldwide. The tool entered a competitive field that already included products like Microsoft Copilot and Cursor, both known for their intuitive interfaces and developer-friendly features. However, Claude Code distinguished itself by offering more autonomous code writing and debugging capabilities—reducing the need for constant human intervention.

    Its impact was swift and substantial. Within six months of launch, Claude Code reached $1 billion in annualised run-rate revenue. Since then, that figure has climbed to $2.5 billion, underscoring the surging demand for advanced AI-assisted programming tools.

    Claude Code’s meteoric rise has also reshaped the competitive landscape. Rather than playing catch-up, Anthropic now finds itself setting the pace, prompting rivals—including OpenAI—to accelerate their own AI coding innovations.

    In just one year, Claude Code has evolved from a quiet internal experiment into a defining product for Anthropic—and a symbol of how quickly AI tools can scale from curiosity to cornerstone in the digital economy.

  33. Tomi Engdahl says:

    How agentic AI will reshape engineering workflows in 2026
    Opinion
    Feb 20, 2026
    https://www.cio.com/article/4134741/how-agentic-ai-will-reshape-engineering-workflows-in-2026.html

    In 2026, agentic AI won’t just help engineers code — it’ll run first drafts of the SDLC, leaving humans to steer, review and think bigger.

  34. Tomi Engdahl says:

    Designing the hybrid human–digital workforce
    All these threads culminate in the need for deliberate hybrid human-digital workforce planning. The future of engineering is not a fully automated, lights-out department; it’s a collaborative, synergistic ecosystem where human intuition and strategic oversight partner with AI speed and scale. Our focus must shift to defining the new organizational structures, communication protocols and leadership skills required to manage this blended workforce effectively.
    https://www.cio.com/article/4134741/how-agentic-ai-will-reshape-engineering-workflows-in-2026.html

  35. Tomi Engdahl says:

doramirdor/NadirClaw: Open-source LLM router that saves you money. Routes simple prompts to cheap/local models, complex ones to premium — automatically. OpenAI-compatible proxy.
https://github.com/doramirdor/NadirClaw
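The routing idea is simple to sketch. The snippet below is a guess at the general approach, not NadirClaw's actual code; the model names and the complexity heuristic are invented for illustration, and a real OpenAI-compatible proxy would apply something like this per request before forwarding it.

```python
# Heuristic prompt router: cheap/local model for simple prompts,
# premium model for long or complex ones.
CHEAP_MODEL = "local-small"      # placeholder model names
PREMIUM_MODEL = "premium-large"

COMPLEX_HINTS = ("prove", "refactor", "multi-step", "analyze", "debug")

def route(prompt: str) -> str:
    long_prompt = len(prompt.split()) > 150
    looks_complex = any(hint in prompt.lower() for hint in COMPLEX_HINTS)
    return PREMIUM_MODEL if (long_prompt or looks_complex) else CHEAP_MODEL

print(route("What's the capital of France?"))           # -> local-small
print(route("Refactor this module to remove globals"))  # -> premium-large
```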

