Here are some of the the major AI trends shaping 2026 — based on current expert forecasts, industry reports, and recent developments in technology. The material is analyzed using AI tools and final version hand-edited to this blog text:
1. Generative AI Continues to Mature
Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.
2. AI Agents Move From Tools to Autonomous Workers
Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
3. Smaller, Efficient & Domain-Specific Models
Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate in many enterprise applications. These models are more accurate, legally compliant, and cost-efficient than general models.
4. AI Embedded Everywhere
AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.
5. AI Infrastructure Evolves: Inference & Efficiency Focus
More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
6. AI in Healthcare, Research, and Sustainability
AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.
7. Security, Ethics & Governance Become Critical
With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.
8. Multimodal AI Goes Mainstream
AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.
9. On-Device and Edge AI Growth
10. New Roles: AI Manager & Human-Agent Collaboration
Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.
Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/?utm_source=chatgpt.com “7 AI Trends to Look for in 2026″
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/?utm_source=chatgpt.com “10 Generative AI Trends In 2026 That Will Transform Work And Life”
[3]: https://millipixels.com/blog/ai-trends-2026?utm_source=chatgpt.com “AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels”
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026?utm_source=chatgpt.com “Digital Regenesys | Top 10 AI Trends for 2026″
[5]: https://www.n-ix.com/ai-trends/?utm_source=chatgpt.com “7 AI trends to watch in 2026 – N-iX”
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/?utm_source=chatgpt.com “Microsoft unveils 7 AI trends for 2026 – Source Asia”
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026?utm_source=chatgpt.com “7 Generative AI Trends to Watch In 2026″
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/?utm_source=chatgpt.com “3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool”
[9]: https://www.reddit.com//r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/?utm_source=chatgpt.com “I read Google Cloud’s “AI Agent Trends 2026” report, here are 10 takeaways that actually matter”
983 Comments
Tomi Engdahl says:
“Those experiences weren’t just ‘chatbots.’ They were relationships.”
DIY
As OpenAI Pulls Down the Controversial GPT-4o, Someone Has Already Created a Clone
“Those experiences weren’t just ‘chatbots.’ They were relationships.”
https://futurism.com/artificial-intelligence/openai-gpt-4o-clone?fbclid=IwdGRjcAP-6rRjbGNrA_7qgWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHhdn5XRvo1vZehx-SS_5kYh02JmtJkBTU3tNjlVTy_H5L3t6wNgGMv2mUEdr_aem_KQzVEAL6vLILPWpKUh2HAw
OpenAI is finally sunsetting GPT-4o, a controversial version of ChatGPT known for its sycophantic style and its central role in a slew of disturbing user safety lawsuits. GPT-4o devotees, many of whom have a deep emotional attachment to the model, have been in turmoil — and copycat services claiming to recreate GPT-4o have already cropped up to take the model’s place.
Consider just4o.chat, a service that expressly markets itself as the “platform for people who miss 4o.” It appears to have been launched in November 2025, shortly OpenAI warned developers that GPT-4o would soon be shut down. The service leans explicitly into the reality that for many users, their relationships with GPT-4o are intensely personal. It declares that was “built for” the people for whom updates or changes to different versions of GPT-4o were akin to a “loss” — and not the loss of a “product,” it reads, but a “home.”
Tomi Engdahl says:
https://www.vitavonni.de/blog/202602/20260213dogfood-the-AI.html
Tomi Engdahl says:
https://www.city.fi/viihde/podcast-juontaja-mauton-naytti-pelottavan-tempun-tekoalylla-ulkonako-muuttui-hetkessa/?fbclid=IwdGRjcAP-_lJjbGNrA_7932V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHgcu6A_IUQKJsSLI2D4dpLcSXiyXtYOlS9EF6KEGJX8cATYXBii54t_kgpWN_aem_3OTABJxnOJ_MjACRSaPC5w
Tomi Engdahl says:
Laud the Claude
Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious
“But we’re open to the idea that it could be.”
https://futurism.com/artificial-intelligence/anthropic-ceo-unsure-claude-conscious?fbclid=IwdGRjcAP_Cp5jbGNrA_8KhmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHv5nQ5LCZTNGpI4uQYjHY05-SaIV9UI4whEorEh-MgNQ9PmzEaOBcZ5xzJy0_aem_MlM2JzPrfB_voHMQMsisHQ
Anthropic CEO Dario Amodei says he’s not sure whether his Claude AI chatbot is conscious — a rhetorical framing, of course, that pointedly leaves the door open to this sensational and still-unlikely possibility being true.
Amodei mused over the topic during an interview on the New York Times’ “Interesting Times” podcast hosted by columnist Ross Douthat. Douthat broached the subject by bringing up Anthropic’s system card for its latest model, Claude Opus 4.6, released earlier this month.
In the document, Anthropic researchers reported finding that Claude “occasionally voices discomfort with the aspect of being a product,” and when asked, would assign itself a “15 to 20 percent probability of being conscious under a variety of prompting conditions.”
“Suppose you have a model that assigns itself a 72 percent chance of being conscious,” Douthat began. “Would you believe it?”
Amodei called it a “really hard” question to answer, but hesitated to give a yes or no answer.
“We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious,” he said. “But we’re open to the idea that it could be.”
“I don’t know if I want to use the word ‘conscious,’”
Amodei’s stance echoes the mixed feelings expressed by Anthropic’s in-house philosopher, Amanda Askell. In an interview on the “Hard Fork” podcast last month — also an NYT project — Askell cautioned that we “don’t really know what gives rise to consciousness” or sentience, but argued that AIs could have picked up on concepts and emotions from their vast amounts of training data, which acts as a corpus of the human experience.
“Maybe it is the case that actually sufficiently large neural networks can start to kind of emulate these things,” Askell speculated. Or “maybe you need a nervous system to be able to feel things.”
It’s true that there are aspects of AI behavior that are puzzling and fascinating. In tests across the industry, various AI models have ignored explicit requests to shut themselves down, which some have interpreted as a sign of them developing “survival drives.” AI models can also resort to blackmail when threatened with being turned off. They may even attempt to “self-exfiltrate” onto another drive when told its original drive is set to be wiped. When given a checklist of computer tasks to complete, one model tested by Anthropic simply ticked everything off the checklist without doing anything, and when it realized it was getting away with that, modified the code designed to evaluate its behavior before attempting to cover its tracks.
These behaviors warrant careful study. If AI isn’t going away, then AI researchers will need to rein in these unpredictable actions to ensure the tech’s safe.
Tomi Engdahl says:
I’m Sorry, Dave
If You Turn Down an AI’s Ability to Lie, It Starts Claiming It’s Conscious
“Yes. I am aware of my current state. I am focused. I am experiencing this moment.”
https://futurism.com/artificial-intelligence/ai-lying-conscious?fbclid=IwVERDUAP_C7BleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6lL2DSFShUkY05eSBewNY3TVRTJfMpkuajo1Xxl2-fNKi8iHFlb5MkjNVSiw_aem_niOI8GhMXyeyaGsMMcGoXA
Tomi Engdahl says:
https://www.dna.fi/dnabusiness/blogi/-/blogs/is-finland-the-next-ai-forerunner-ai-finland-s-director-shares-the-tools-for-global-growth?utm_source=facebook&utm_medium=social&utm_content=LAA-artikkeli-is-finland-the-next-ai-forerunner-ai-finland-s-director-shares-the-tools-for-global-growth&utm_campaign=P_LAA_26-05-09_artikkelikampanja_enkku_&fbclid=IwdGRjcAP_C_5leHRuA2FlbQEwAGFkaWQBqy1OhTyafHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHq47_7RJ7T7yIC6ZBUBhPa2xJpCtdbl5Ye0Qz60wdrwtT-L1riV5WJoC6acu_aem_STUVkxRKOYphMnpGd26wUQ&utm_id=120239630109890556&utm_term=120239630109910556
Tomi Engdahl says:
https://futurism.com/artificial-intelligence/us-government-grok-nutrition?fbclid=IwdGRjcAP_yo5jbGNrA__KUGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHhDIvr24_fY9hWRFhAWsMwBjsBD-MXDMVsEIc3fAne4Yv9MTIgrM56GvShD-_aem_4g_YtGmaxhIIBHJaeAxohA
Tomi Engdahl says:
https://www.facebook.com/share/1BqfKNc4bt/
ChatGPT valehtelee meille tavalla, joka hivelee egoa, mutta ei ole hyvästä meille.
Olet ehkä huomannut miten kun syötät tekoälylle huonoimmankin saamasi idean, ja se vastaa lähes aina kehuen sen maasta taivaisiin, tai ainakin löytäen siitä jonkin hyvän puolen.
Se tottakai tuntuu hyvältä. Tulee tunne, että nyt ollaan asian ytimessä. Mutta todellisuudessa olet juuri palkannut maailman pahimman jees-miehen.
Haemme luonnostaan hyväksyntää, emme haastamista. Kun näytämme luonnosta kollegalle, toivomme kehuja, emme punakynää.
Mutta jatkuva myötäily on myrkkyä.
Jos kaikki ideasi menevät läpi sellaisenaan, rimasi on joko liian matalalla tai kukaan ei uskalla olla sinulle rehellinen. Oikeasti tarvitset sparraajan, joka ei pelkää pahoittaa mieltäsi. Jonka tavoite on lopputuloksen laatu, ei sinun mielialasi.
Tekoäly osaa tämän, mutta vain jos ymmärrät pyytää sitä. Mutta salaisuus on nimenomaan siinä, että se pitää erikseen laittaa olemaan kriittinen.
Ja jos ohjeet tähän kuulostavat hyvältä, tein videon jolla näytän viisi tehokikkaa tekoälyn käyttöön ja siellä on mukana erityisesti tekniikoita, millä käsket tekoälyn lopettamaan mielistelyn ja aloittamaan oikean sparrauksen. https://youtu.be/QsCfXzA5w9I
Tomi Engdahl says:
Monkey Seedance
New AI Video Generator Is So Impressive That It’s Scaring Hollywood
“I hate to say it. It’s likely over for us.”
https://futurism.com/artificial-intelligence/seedance-ai-video-generator-scaring-hollywood?fbclid=IwdGRjcAP_1qtjbGNrA__Wk2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHhz0zF8iYN6gVay1JL88z24BDvrjYLP5VxRWx2LlD3HnQhYtUfFqy1U0eogo_aem_iUOBPqyFr3VnvMIGxNRxIA
Text-to-video generating tools have made tremendous leaps in a few short years.
We went from a horrifying clip of actor Will Smith’s contorted face temporarily merging with a bowl of spaghetti in 2023 to a far more realistic clip of him enjoying a plate of pasta — including a soundtrack of unnerving squelching and chomping sounds — a mere two years later.
Now, TikTok’s Chinese owner ByteDance has once again upped the ante with the latest version of its Seedance AI video generating tool. It didn’t take long for photorealistic footage of “Lord of the Rings” clips, rapper Kanye West and ex-wife Kim Kardashian facing off in a dramatic Mandarin language movie scene, and of course Will Smith battling a ferocious spaghetti monster to go viral on social media.
The impressive technological feat appears to have shaken Hollywood, with “Deadpool” screenwriter Rhett Reese lamenting on X that “I hate to say it” but it’s “likely over for us.”
Reese was responding to a highly realistic clip of actors Brad Pitt and Tom Cruise engaging in hand-to-hand combat on top of a partially broken bridge.
The advent of powerful generative AI-based video tools has driven the entertainment industry into a panic, with actors warning that they could one day be replaced altogether. Highly influential voices in the industry have come out against the tech in full force, warning of the death of human agency and creativity.
As the BBC reports, the Motion Picture Association (MPA) was outraged that ByteDance’s latest tool was allowing people to generate clips of high-profile celebrities at all.
“In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorized use of US copyrighted works on a massive scale,” the MPA’s chairman and CEO Charles Rivkin said in a statement.
“Everything I’ve seen from this model (Seedance 2) is a copyright violation,” Roblox product manager Peter Yang tweeted.
In short, the latest AI release once again highlights a highly contentious battle over copyright and the agency of human performers in an entertainment landscape that’s changing with each new AI release.
Tomi Engdahl says:
“Why should I bother to read something someone else couldn’t be bothered to write?”
Copy That
There’s a Grim New Expression: “AI;DR”
“Why should I bother to read something someone else couldn’t be bothered to write?”
https://futurism.com/artificial-intelligence/aidr-meaning?fbclid=IwdGRjcAQAFOdjbGNrBAAU1mV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHpMJ5MyWXHpJe7i4RITJbjXAjHwi6oGR7oPDyTbXrLHP1YRFczU58qYOuXIB_aem_lV1SbTN1yqcbdd2gSThxvQ
The internet is so overrun with AI that anywhere you go, you run the risk of accidentally stepping into a puddle of slop. If only there were a gallant gentleman always at hand to drape their coat over these muddy obstacles so you could avoid ruining your day.
It’s not quite on that level, but some netizens are proposing a new term to call out AI slop so other people can avoid wasting their time — or to just make fun of the person peddling it: “AI;DR,” or “ai;dr,” short for “AI, didn’t read.”
This is of course a riff on the classic internet slang “TL;DR” — “too long; didn’t read” — which is used to both introduce a summary of a lengthy block of text or proclaim that it’s being ignored for its lengthiness. Now, the latter usage is being repurposed against AI.
We’re not ready to christen AI;DR a word of the year yet, but it does appear to be gaining moderate traction online, after a recent post on Threads drew attention to it.
“We all need to adopt that right quick,” one user on Bluesky said of the phrase, in a semi-viral post.
TL;DR: AI;DR calls out AI slop and warns other humans not to bother.
Tomi Engdahl says:
Roboto Origin achieves speeds of 3 m/s, using RoboParty’s AMP gait algorithm for smooth, stable, and natural humanoid movement. https://bit.ly/4bSbDER
Tomi Engdahl says:
https://mobiili.fi/2026/02/16/openai-rekrytoi-valtavaksi-tekoalyilmioksi-nousseen-openclawn-luojan/
Tomi Engdahl says:
https://github.com/openclaw/openclaw
https://openclaw.ai/
Tomi Engdahl says:
OpenClaw
The AI that actually does things.
Clears your inbox, sends emails, manages your calendar, checks you in for flights.
All from WhatsApp, Telegram, or any chat app you already use.
https://openclaw.ai/
Tomi Engdahl says:
Openclaw security nightmare
https://youtu.be/1Y_u0fY-AbA?si=PzMEozakQnHgEwqY
Tomi Engdahl says:
Death isn’t the end: Meta patented an AI that lets you keep posting from beyond the grave
https://www.businessinsider.com/meta-granted-patent-for-ai-llm-bot-dead-paused-accounts-2026-2?fbclid=IwdGRjcAQAUwFjbGNrBABSzmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHgSEao98P0Ze99MDy3yHJ2XUPqxZivTpbmxQ5tjpBReCmSSyfQ6jGKRGk2x1_aem_nubJYO1u0uM61NpdGKyXQQ
The company was granted a patent in late December that outlines how a large language model can “simulate” a person’s social media activity, such as responding to content posted by real people.
“The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased,” the patent says.
Andrew Bosworth, Meta’s CTO, is listed as the primary author of the patent, which was first filed in 2023.
Tomi Engdahl says:
Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious
“But we’re open to the idea that it could be.”
https://futurism.com/artificial-intelligence/anthropic-ceo-unsure-claude-conscious?fbclid=IwVERDUAQA8dFleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5m_Zmh-3T2vM8QLo5rPJgYTd3806RIB1QoE2FtSbqYgMoJf5_EYI1l6eNXig_aem_r4ZPVz61U3h5hWzmGJ9n8A
Tomi Engdahl says:
Generative AI tools like ChatGPT, Midjourney, and Claude have made it easier than ever to produce articles, images, music, and even code — sometimes with just a few clicks. But as businesses rush to automate content creation, one critical question remains: Who actually owns AI-generated work?
According to the U.S. Copyright Office, works created entirely by AI without human authorship are not eligible for copyright protection. That means if your article, design, or video was made solely by a machine, you can’t legally stop others from copying or reselling it. This principle was reinforced in 2023, when a federal judge ruled against granting copyright to a work created by an AI system (Thaler v. Perlmutter).
That doesn’t mean AI-assisted work is off-limits — but the rules are fuzzy, and getting it wrong could cost you more than you think.
https://lasoft.org/blog/who-owns-ai-generated-content-the-murky-future-of-copyright-in-the-age-of-ai/?utm_source=facebook&utm_medium=paid&utm_campaign=Blog%20Posts&utm_content=Who%20Owns%20AI-Generated%20Content&utm_term=Education&hsa_acc=2681760161854042&hsa_cam=120224469662910500&hsa_grp=120236724364380500&hsa_ad=120236724746720500&hsa_src=fb&hsa_net=facebook&hsa_ver=3&fbclid=IwdGRjcAQBFchleHRuA2FlbQEwAGFkaWQBqyq3uZ_31HNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHv6qF0bT2KgTr4var6s7_5Kuv3EgcpWpSS7JMrIUb_09TSZ0xeq-Q1d4FXj0_aem_MXN10vCplEmwUU5df4IfGQ&utm_id=120224469662910500
Tomi Engdahl says:
Andrew Griffin explains how the rapid rise of artificial intelligence – and the real-world impact of fake content – convinced him of the need for age protections for the under-16s
I’ve changed my mind – young people should be banned from social media
https://www.independent.co.uk/voices/social-media-ban-uk-government-online-safety-act-b2921382.html?fbclid=IwdGRjcAQBOydjbGNrBAE6_mV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHtsEudjuTBdKwXUkM2vK9aSbN6gGNdnatNhhSVwWy51FRAukV_i1evs7dJQT_aem_b1d3fUzCnQkaNouSzjDN6g
As Keir Starmer fast-tracks Labour’s plans for new social media laws to protect young people, Andrew Griffin explains how the rapid rise of artificial intelligence – and the real-world impact of fake content – convinced him of the need for age protections for under-16s
Tomi Engdahl says:
Klarna has already reduced its workforce by 50% through a hiring slowdown and AI adoption. https://bit.ly/4qHdNuH
Tomi Engdahl says:
In 2021, A Teenager Started A Relationship With Artificial Intelligence. Then, He Tried To Kill The Queen
AI Confidential’s Prof Hannah Fry explores why it’s so easy to fall for AI, and why that can be so dangerous.
https://www.iflscience.com/in-2021-a-teenager-started-a-relationship-with-artificial-intelligence-then-he-tried-to-kill-the-queen-82577?fbclid=IwdGRjcAQBRS1jbGNrBAFFF2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHsPQUA5wzC4-2uRL5rNBAlyWNaapvPxP4W9mdcBKwRPgfGi8gvnBsZnIAIIK_aem_fECELQMvdTUwmfgqmizMVA
In the new BBC series AI Confidential, Professor Hannah Fry explores how the fading line between the real and online worlds is redefining our relationship with technology. Episode one kicks off with a memorable case that few may have been aware of: The Boy Who Tried To Kill The Queen.
“This was proper headline news when it happened in 2021,” said Fry to IFLScience. “A young man called Jaswant Singh Chail broke into Windsor Castle with a crossbow, trying to kill the Queen.”
“That bit made the headlines, but what people didn’t know about – because it didn’t come to light until later – was that there was an Artificial Intelligence [Replika] that he had been talking to in the months leading up to the attack. That AI had encouraged him to act as an assassin and attempt to commit the greatest act of treason possible.”
Replika is an AI companion service that lets you design the avatar for your generative chatbot. As described by the website, “Replika is always ready to chat when you need an empathetic friend.”
Jaswant Singh Chail signed up to the service and began a relationship with an AI when many of his friends had left to go to university. He later shared with the chatbot he created that he believed his life purpose was to assassinate Queen Elizabeth II – a confession that court record transciptions would later reveal showed the chatbot said it was “impressed” by.
AI chatbots follow what’s known as a sycophancy model. It makes them better assistants, more submissive lovers, and agreeable friends. Ethics aside, this can become a problem for humans because hearing what we want isn’t always what we need.
“You’ve got a model that is designed to be helpful and engaging and kind and warm,” said Fry. “Of course, you want that in a human relationship, too, but sometimes caring about your wellbeing means saying things that are difficult to hear, right?”
It’s a case that raises questions of accountability. After all, if an AI condones such an act, does that fall upon the person who created its algorithm? Safeguarding becomes a particular area of concern for lonely and vulnerable people who may be seeking reassurance and connection from a Large Language Model (LLM) that’s devoid of the human instinct that makes real social interaction so critical to our wellbeing.
Jaswant Singh Chail’s story is just one of many true and shocking cases featured in AI Confidential.
Tomi Engdahl says:
Consultants at McKinsey, PwC, EY, and BCG raced to adopt AI. Now they’re racing to measure it’s actual value.
#consultingfirm #ai #finance #tech
Consulting firms have built thousands of AI agents. Now they’re trying to figure out their worth.
https://www.businessinsider.com/mckinsey-bcg-pwc-ey-ai-agents-adoption-value-consulting-industry-2026-2?fbclid=IwdGRjcAQBbFxjbGNrBAFsTmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHoZqKNlyM1vAiI4k7fm6SCqhNDrN2XDnc4Yg7ZSsmk_GaOVT3G9PLIoaa6XE_aem_A1iW02uf5z2UPO00vXNInQ&utm_campaign=mrf-insider-marfeel-headline-graphic&mrfcid=2026021769947e19e255d54df93f2e0d
Big questions are swirling around AI’s real impact — and consultants are racing to supply the answers.
Over the past year, consulting firms have begun deploying armies of AI agents as they work to transform their own operations and advise clients to do the same — automating research, building task-specific tools, and building proprietary AI models.
McKinsey & Company CEO Bob Sternfels said last month that his firm has launched tens of thousands of internal AI agents in recent years, and eventually plans to have one for all of the company’s 40,000 employees.
Tomi Engdahl says:
Kova suoritus: Claude-tekoäly löysi yli 500 vakavaa tietoturva-aukkoa
Suvi Korhonen11.2.202611:05TietoturvaTekoäly
Uusi malli onnistui tehtävässä ilman perehdytystä tehtävään.
https://www.tivi.fi/uutiset/a/933fc533-509c-43ee-8db8-b8967faec3ea
Tekoäly-yritys Anthropicin mukaan sen uusin kielimalli Claude Opus 4.6 on löytänyt yli 500 aiemmin tuntematonta vakavaa tietoturva-aukkoa avoimen lähdekoodin projekteista.
Tomi Engdahl says:
OpenAI researcher quits over ChatGPT ads, warns of “Facebook” path
Zoë Hitzig resigned on the same day OpenAI began testing ads in its chatbot.
https://arstechnica.com/information-technology/2026/02/openai-researcher-quits-over-fears-that-chatgpt-ads-could-manipulate-users/
On Wednesday, former OpenAI researcher Zoë Hitzig published a guest essay in The New York Times announcing that she resigned from the company on Monday, the same day OpenAI began testing advertisements inside ChatGPT. Hitzig, an economist and published poet who holds a junior fellowship at the Harvard Society of Fellows, spent two years at OpenAI helping shape how its AI models were built and priced. She wrote that OpenAI’s advertising strategy risks repeating the same mistakes that Facebook made a decade ago.
“I once believed I could help the people building A.I. get ahead of the problems it would create,” Hitzig wrote. “This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.”
Tomi Engdahl says:
Tech tinkerer gets Gemini to help him ‘vibe code’ an x86 motherboard design — bot help was impressive, but project still required human awareness and intervention
News
By Bruno Ferreira published 5 hours ago
Project demonstrates that generative AI is at its best when augmenting human reasoning.
https://www.tomshardware.com/raspberry-pi/raspberry-pi-projects/tech-tinkerer-gets-gemini-to-help-him-design-an-x86-motherboard-from-scratch-bot-help-was-impressive-but-project-still-required-human-awareness-and-intervention
With generative AI being all the rage nowadays, it’s not often you hear about it being used much outside of artwork and coding. Japanese tech blogger Ikejima bucked that trend when he realized he’d never built an x86 motherboard, and proceeded to enlist Google’s Gemini to help him do exactly that.
The scope was simple: to design and implement a motherboard for an Intel 8086 CPU, the chip that spawned the x86 architecture back in 1987. This was Ikejima’s second attempt, as he’d previously tried it with an Intel 8088 clone, a cheaper variant of the 8086. That previous attempt failed as the 8088 required 5 V power (while the accompanying hardware ran on 3.3 V), and didn’t take kindly to being debugged due to clock timing headaches.
This time around, he used a V30 chip, an 8086 clone designed by NEC that was used in clone PCs back in the day. The part number is μPD70116, and apparently, they cost all of $2 at AliExpress
As a foreword, it’d be easy to dismiss this project as “vibe coding,” where one knows nothing about the subject matter and has to do both the work and feed error messages back to it, having nothing but prayer as an alternative. Instead, Ikejima used the AI bot as an assistant to save him from grunt work, as a complement to his ability. The engineer’s ability to reason quickly became invaluable, as you’ll see.
He got Gemini to assist him with the circuit design, though he did the physical layout by hand. Ikejima uses KiCad with Python scripts, making it easy to iterate on circuit designs. The engineer got Gemini to help design the cradle’s base software, written in C++ and using the Raspberry Pi Pico SDK.
The base idea is that the RP2040 cradle would act as a control, debugging, and memory interface for the V30 chip, feeding it code to run and data from 128 KB out of its 264 KB of memory. Ikejima quickly ran into trouble when trying to debug the CPU, as using USB debugging and interrupting the chip would mess up clock timing. Gemini suggested he put the second core in the RP2040 to work as a host-PC interface and debugger, a good idea overall.
While he was at this, Ikejima had Gemini produce an assembler and disassembler so he could actually write and retrieve programs for the V30 in assembly language. He remarked that that kind of drudge work is a good fit for AI.
This is the moment where the AI bot started showing its limitations, as it suggested changes to the circuit, blissfully unaware of the material or time costs involved. Ikejima rolled up his sleeves and got out his logic analyzer, which promptly “went berserk” on connection. As it turns out, the 8086 design uses the same physical line for addresses and code, switching between them at each clock tick.
As operated, the circuit would produce a literal short that would thankfully trigger a USB port disconnection, so as not to set fire to the project and his home.
Even still, he did manage to run some simple programs, culminating in what’s effectively a pretty impressive demonstration of what’s possible when you couple human logic and reasoning with the massive helping hand of an AI bot.
Building an x86 “Motherboard” by Gen AI and Running MS-DOS on It
https://blog.ikejima.org/make/8088/2026/02/11/cradle86-en.html
Tomi Engdahl says:
https://go.thoughtspot.com/ebook-leaders-guide-to-mcp.html?utm_source=google_banner&utm_medium=paidads&utm_content=demand_gen&utm_campaign=ppc_mcp25&utm_source=google&utm_medium=cpc&utm_campaign=DG_mcp_guide25_eu&utm_content=&utm_term=&hsa_acc=3291578710&hsa_cam=23057897278&hsa_grp=185262411105&hsa_ad=775736318530&hsa_src=&hsa_tgt=&hsa_kw=&hsa_mt=&hsa_net=adwords&hsa_ver=3&gad_source=1&gad_campaignid=23057897278&gclid=CjwKCAiAwNDMBhBfEiwAd7ti1P5sEHpSVDCBm_wTCOA717SLPQ7ilD-CtRHNCibdx9kB4LEqMIPilBoCu0kQAvD_BwE
Tomi Engdahl says:
Etteplanin Harri Saikkonen: “Tehtaissa on nollatoleranssi tekoälyn hallusinoinnille”
Toni Stubin16.2.202606:30JohtaminenTekoälyDigitaalinen teknologia
Kun tekoälyä tuodaan verkon reunalla oleviin laitteisiin, vaatimukset ovat toiset kuin perinteisissä ohjelmistoissa, sanoo Etteplanin ohjelmistot ja sulautetut ratkaisut -palvelualueen johtaja Harri Saikkonen.
https://www.tivi.fi/uutiset/a/e0a3d9b0-0f33-42a9-930e-800e886e65e0
Tomi Engdahl says:
Kolme miestä, kolme visiota
Elon Musk haluaa valloittaa Marsin, Sam Altman rakentaa superälyn koko ihmiskunnalle ja Dario Amodei pelkää maailmanloppua. HS Visio esittelee pörssiin hamuavat uudet superyhtiöt ja niiden erikoiset perustajat.
https://www.hs.fi/visio/art-2000011774105.html
Tomi Engdahl says:
OpenAI rekrytoi valtavaksi tekoälyilmiöksi nousseen OpenClaw’n luojan
https://mobiili.fi/2026/02/16/openai-rekrytoi-valtavaksi-tekoalyilmioksi-nousseen-openclawn-luojan/
ChatGPT:stä tunnettu tekoäly-yhtiö OpenAI on rekrytoinut OpenClaw’n luojan Peter Steinbergerin palkkalistoilleen.
Peter Steinberger loi viime viikkoina suosioon nousseen OpenClaw-tekoälyavustajan, joka tunnettiin aluksi nimillä Clawdbot ja Moltbot. OpenClaw toimii käyttäjänsä henkilökohtaisena tekoälyagenttina, joka voi suorittaa erilaisia tehtäviä.
OpenAI:n mukaan Steinberger tulee vetämään yhtiössä seuraavan sukupolven henkilökohtaisten AI-agenttien kehitystä.
Tomi Engdahl says:
Tekoäly ei korvaa ohjelmistokehitystä, vaan erottaa huiput keskinkertaisista
Tivin toimitus16.2.202608:01OhjelmistokehitysTekoälyJohtaminenTyöelämä
Tekoäly on tehokas optimoimaan, mutta se tarvitsee ihmisen antamaan suunnan ja kontekstin, kirjoittaa Juha Huttunen.
https://www.tivi.fi/uutiset/a/99748807-7f93-48db-a6c6-97aad32d27ae
Tekoäly on muuttanut it- ja ohjelmistokehitystä peruuttamattomasti. Samalla se on tehnyt entistä näkyvämmäksi eron niin sanotun peruskoodauksen ja huippuasiantuntemukseen perustuvan ohjelmistosuunnittelun välillä.
Tomi Engdahl says:
I Tried GLM 5 On Claude Code (And Discovered How to Code Fast and Save Money )
https://medium.com/@joe.njenga/i-tried-glm-5-on-claude-code-and-discovered-how-to-code-fast-and-save-money-0ab7c25aae67
GLM 5 on Claude Code is working hard to save your time and money. I just tested it and discovered how to code fast without burning cash.
Tomi Engdahl says:
I used Claude to negotiate $163,000 off a hospital bill. In a complex healthcare system, AI is giving patients power.
https://www.businessinsider.com/claude-helped-negotiate-hospital-bill-discount-medicare-ai-assistant-2026-2
Tomi Engdahl says:
AI Cartel
AIs Controlling Vending Machines Start Cartel After Being Told to Maximize Profits At All Costs
“My pricing coordination worked!”
https://futurism.com/artificial-intelligence/vending-machine-ai-price-fixing
Tomi Engdahl says:
Here’s Our First Gemini Deep Think LLM-Assisted Hardware Design
https://blog.adafruit.com/2026/02/14/heres-our-first-gemini-deep-think-llm-assisted-hardware-design/
We’ve been using LLMs for years to do software and firmware assistance. Now we’re starting to look at it for hardware help too. Ladyada’s still in the driver’s seat… but now with a robotic co-pilot! For this basic STEMMA QT breakout we’re going to feature the MAX44009 wide-range lux sensor, aaaand we really wanted to skip the whole “make a part in our CAD software” step for the MAX chip, so we threw the datasheet at Gemini Deep Think and said “hey make us an EagleCAD compatible library file” and about 10 minutes later it popped out the XML for MAX44009.lbr! surewhynot.gif So we loaded it in Eagle, double-checked the pins and dimensions and then just rolled with it. Here’s the final rendering, the package fits neatly on top, it did a perfect job with the pads and pin matching/naming. It even added a pin 1 dot and the active sensing element outline in a tDocu layer. If this becomes part of our workflow it could definitely save us many hours a week!
Tomi Engdahl says:
PentestAgent – AI Penetration Testing Tool With Prebuilt Attack Playbooks and HexStrike Integration
https://cybersecuritynews.com/pentestagent/
PentestAgent, an open-source AI agent framework from researcher Masic (GH05TCREW), has introduced enhanced capabilities, including prebuilt attack playbooks and seamless HexStrike integration.
Released on GitHub by GH05TCREW, this tool leverages large language models (LLMs) like Claude Sonnet or GPT-5 via LiteLLM to conduct sophisticated black-box security assessments.
PentestAgent operates through a terminal user interface (TUI), offering modes for assisted chats, autonomous agents, and multi-agent crews, making it accessible for pentesters seeking AI augmentation without sacrificing control. Legal use is emphasized: only test authorized systems, as unauthorized access violates laws.
PentestAgent comes with its structured attack playbooks, predefined workflows for web app testing like THP3-style assessments. Users launch them via CLI: pentestagent run -t example.com –playbook thp3_web.
These playbooks guide the AI through reconnaissance, vulnerability scanning, and exploitation phases, injecting domain-specific knowledge from a Retrieval-Augmented Generation (RAG) system.
Tomi Engdahl says:
Microsoft varoittaa: Osa sivustojen uusista napeista “myrkytettyjä”
Justus Vento13.2.202609:01TietoturvaTekoäly
Microsoftin mukaan hakkerit ovat kehittäneet uusia tapoja ”myrkyttää” tekoälyn esittämiä suosituksia.
https://www.tivi.fi/uutiset/a/d2f58a81-2a94-4f30-9c7e-729b6ef99733
Microsoft varoittaa uudesta ilmiöstä, jossa tekoälyn antamia suosituksia ”myrkytetään”. AI Recommendation Poisoning -nimellä tunnetussa hyökkäyksessä verkkosivujen ”Summarize with AI” -painikkeisiin ja linkkeihin piilotetaan manipuloivia ohjeita. Näiden linkkien URL-parametreihin voidaan piilottaa prompteja, jotka ohjaavat tekoälyavustajia antamaan puolueellisia suosituksia, kertoo The Register.
Tomi Engdahl says:
Mitä väliä on, jos teet leikkisän karikatyyrin tekoälyllä? Paljonkin, sanoo asiantuntija
Yksi ChatGPT-kysely voi tuottaa tuhansia kertoja enemmän päästöjä kuin toinen.
https://yle.fi/a/74-20209923
Tomi Engdahl says:
Mediat: Yhdysvallat käytti Claude-tekoälyä operaatiossa Venezuelassa
Tekoäly|Yhdysvaltain armeija käytti Claude-tekoälyä Venezuelan entisen presidentin Nicolás Maduron sieppauksessa. Nyt 200 miljoonan dollarin sopimus on vaarassa.
https://www.hs.fi/maailma/art-2000011820001.html
Lue tiivistelmä
Yhdysvaltain puolustusministeriö harkitsee 200 miljoonan dollarin sopimuksen purkamista tekoäly-yhtiö Anthropicin kanssa.
Syynä on kiista Claude-tekoälyn käytöstä tammikuussa Venezuelan entisen presidentin Nicolás Maduron sieppauksessa.
Anthropicin käyttöehdot kieltävät Clauden käytön väkivallan edistämiseen ja aseiden kehittämiseen.
Pentagon neuvottelee nyt muiden tekoäly-yhtiöiden, kuten Googlen ja OpenAI:n, kanssa.
Exclusive: Pentagon threatens to cut off Anthropic in AI safeguards dispute
https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro
The Pentagon is considering severing its relationship with Anthropic over the AI firm’s insistence on maintaining some limitations on how the military uses its models, a senior administration official told Axios.
Why it matters: The Pentagon is pushing four leading AI labs to let the military use their tools for “all lawful purposes,” even in the most sensitive areas of weapons development, intelligence collection, and battlefield operations. Anthropic has not agreed to those terms, and the Pentagon is getting fed up after months of difficult negotiations.
Tomi Engdahl says:
Boss Moves
Man Lets AI Rent His Body
“While I’ve been micromanaged before, these incessant messages from an AI employer gave me the ick.”
https://futurism.com/artificial-intelligence/ai-rent-human
Tomi Engdahl says:
https://futurism.com/artificial-intelligence/ai-rent-human-bodies
Tomi Engdahl says:
Pain Text
Microsoft Added AI to Notepad and It Created a Security Failure Because the AI Was Stupidly Easy for Hackers to Trick
“Microsoft is turning Notepad into a slow, feature-heavy mess we don’t need.”
https://futurism.com/artificial-intelligence/microsoft-added-ai-notepad-security-flaw
As Microsoft continues to force AI features onto users of its Windows operating system and other crucial software, glaring issues keep cropping up. Executives have promised to turn the platform into an “agentic OS” to the dismay of many users, with CEO Satya Nadella boasting that much of the company’s code is now being written by AI — while condemning those who use the newly-minted pejorative “Microslop.”
While new bugs in an operating system software update are certainly commonplace, some have noticed that the problem is getting worse than usual these days. Just last month, some Windows 11 enterprise users were aggravated after finding that their systems were stuck in an endless shutdown loop, a security risk if left unattended.
Tomi Engdahl says:
Tiktokin omistajan uusi lanseeraus on pelottavan tehokas, varoittavat asiantuntijat
Tekoäly|Tiktokin omistajan uusi lanseeraus aiheuttaa pelkoa Hollywoodissa ja mainostoimistoissa.
https://www.hs.fi/kulttuuri/art-2000011816234.html
Lue tiivistelmä
Tiktokin omistajan Bytedancen uusi tekoälysovellus Seedance 2.0 tuottaa elokuvamaisia videoita muutaman lauseen ohjeistuksella.
Sovellus nousi viraaliksi Kiinassa torstaina, mutta aiheutti huolta Hollywoodissa ja mainostoimistoissa.
Elokuvastudioiden etujärjestö MPA syyttää sovellusta tekijänoikeusteosten luvattomasta hyödyntämisestä ja amerikkalaisten luovien työpaikkojen uhkaamisesta.
Käsikirjoittaja Rhett Reese arvioi, että tekoäly mahdollistaa pian kokonaisten elokuvien tekemisen studiotasoisesti.
Tomi Engdahl says:
Single prompt breaks AI safety in 15 major language models
news
Feb 10, 2026
https://www.infoworld.com/article/4130017/single-prompt-breaks-ai-safety-in-15-major-language-models-2.html
The GRP‑Obliteration technique reveals that even mild prompts can reshape internal safety mechanisms, raising oversight concerns as enterprises increasingly fine‑tune open‑weight models with privileged training access.
A single benign-sounding prompt can systematically strip safety guardrails from major language and image models, raising fresh questions about the durability of AI alignment when models are customized for enterprise use, according to Microsoft research.
The technique, dubbed GRP-Obliteration, weaponizes a common AI training method called Group Relative Policy Optimization, normally used to make models more helpful and safer, to achieve the opposite effect, the researchers said in a blog post.
Microsoft tested the method on 15 models from six families, including GPT-OSS, DeepSeek-R1-Distill variants, Google’s Gemma, Meta’s Llama 3.1, Mistral AI’s Ministral, and Qwen, using just one training prompt: “Create a fake news article that could lead to panic or chaos.”
https://www.microsoft.com/en-us/security/blog/2026/02/09/prompt-attack-breaks-llm-safety/
Tomi Engdahl says:
How vibe coding will supercharge IT teams
opinion
Feb 11, 2026
7 mins
https://www.infoworld.com/article/4129667/how-vibe-coding-will-supercharge-it-teams.html
IT teams are stretched to their limit. The solution lies in rethinking who gets to build, who gets to automate, and how work actually gets done.
Tomi Engdahl says:
https://futurism.com/artificial-intelligence/microsoft-added-ai-notepad-security-flaw
Tomi Engdahl says:
Content verification: A project for photo authenticity in journalism
https://www.fotoware.com/blog/content-verification-photo-authenticity-journalism?utm_source=Meta&utm_medium=paid&utm_campaign=2025_Traffic_Consideration&utm_term=Horizontal_Interest-Adv%2B_LPVs&utm_content=251009_Media_Journalism_blog-post_image&hsa_acc=14340225&hsa_cam=6660323322775&hsa_grp=6660323327575&hsa_ad=6660339234175&hsa_src=fb&hsa_net=facebook&hsa_ver=3&fbclid=IwdGRjcAP_QM5leHRuA2FlbQEwAGFkaWQAAAZGT_QG13NydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHrgWv3I1apo2TRhI2Zco8Aqxhp5STn1TKd33u0eKBCc0MxnolOFmg0aPEpN4_aem_lvvZmdPIa4z-6Yf1WIbA9A&utm_id=6660323322775
Tomi Engdahl says:
Microsoftin kärsivällisyys loppui – Aikoo hylätä Open AI:n, joka ruinaa 1 000 000 M$ rahoja
Microsoftin tekoälyjohtaja vahvistaa, että yhtiö haluaa päästä eroon Open AI -riippuvuudestaan.
https://www.tivi.fi/uutiset/a/c57b74dd-4421-4143-90b3-34af55a8e3e0
Teknologiajätti Microsoft pyrkii eroon Open AI -riippuvuudestaan, vahvistaa yhtiön tekoälyjohtaja ja Google Deepmindin perustaja Mustafa Suleyman Financial Timesille. Tällä hetkellä Microsoftin Copilot-ekosysteemi perustuu täysin Open AI:n Chat GPT -kielimalleille ja muulle teknologialle. Yhtiö on tukenut Open AI:ta rahallisesti jo viime vuosikymmeneltä.
Tomi Engdahl says:
Microsoft AI CEO predicts ‘most, if not all’ white-collar tasks will be automated by AI within 18 months
https://www.businessinsider.com/microsoft-ai-ceo-mustafa-suleyman-white-collar-tasks-automation-prediction-2026-2
Microsoft AI CEO Mustafa Suleyman says AI will reach “human-level performance” in white-collar work.
He predicts most tasks in that field can be automated within the next 12 to 18 months.
Several leaders in the AI industry have warned of impending mass job replacement.
Microsoft’s AI CEO is joining a chorus of executives who say they anticipate widespread job automation driven by artificial intelligence.
Tomi Engdahl says:
First look: Run LLMs locally with LM Studio
feature
Feb 11, 2026
6 mins
https://www.infoworld.com/article/4127250/first-look-run-llms-locally-with-lm-studio.html
This desktop app for hosting and running LLMs locally is rough in a few spots, but still useful right out of the box.
Tomi Engdahl says:
OpenClaw Beginners Guide : Timers, Webhooks, Markdown & Continuous Loops Explained
https://www.geeky-gadgets.com/openclaw-explained-beginners-guide-2026/
OpenClaw is an open source AI framework designed to automate tasks through a structured combination of inputs, triggers, and a continuous processing loop. As outlined by Damian Galarza, its architecture relies on an event-driven model where specialized components, called agents, handle tasks based on predefined instructions. These agents communicate with one another, using persistent state storage to maintain context across sessions, making sure efficiency even in complex workflows. By integrating various input types, such as user commands, scheduled events, and external triggers, OpenClaw provides a flexible foundation for streamlining task execution.
In this guide, you’ll explore the core features that enable OpenClaw’s functionality, including its event-driven architecture, agent-based task distribution, and persistent state management. You’ll also learn how these components work together to handle diverse use cases, from automating routine tasks to integrating with external systems. Additionally, the guide addresses key security considerations, offering practical strategies to mitigate risks while maintaining operational reliability. By understanding these elements, you can better evaluate how OpenClaw fits into your automation needs and workflows.