Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed using AI tools, and the final version was hand-edited into this blog post:
1. Generative AI Continues to Mature
Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.
2. AI Agents Move From Tools to Autonomous Workers
Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
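As a rough illustration of what "agentic" means in practice, here is a minimal sketch of an agent loop. The model, tool names, and booking task below are all invented stand-ins for a real LLM and real integrations; production systems add error handling, permissions, and human checkpoints:

```python
def toy_model(goal, history):
    """Stand-in for an LLM call: plans one step at a time from what it has seen."""
    if not history:
        return ("search_flights", {"route": "HEL-LHR"})
    if history[-1][0] == "search_flights":
        # Pick the first flight found and book it.
        return ("book_flight", {"flight": history[-1][1][0]})
    return ("done", None)

# Hypothetical tool integrations the agent can invoke.
TOOLS = {
    "search_flights": lambda route: ["AY1331", "BA795"],
    "book_flight": lambda flight: f"confirmation for {flight}",
}

def run_agent(goal, max_steps=5):
    """Loop: ask the model for an action, run the tool, feed the result back."""
    history = []
    for _ in range(max_steps):  # cap steps so a confused agent always halts
        action, args = toy_model(goal, history)
        if action == "done":
            return history
        history.append((action, TOOLS[action](**args)))
    return history
```

The loop, tool registry, and step cap are the recurring skeleton of agentic systems; everything else (planning quality, tool breadth, guardrails) is where real products differ.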
3. Smaller, Efficient & Domain-Specific Models
Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate many enterprise applications. These models can be more accurate, easier to keep compliant, and more cost-efficient than general-purpose models.
4. AI Embedded Everywhere
AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.
5. AI Infrastructure Evolves: Inference & Efficiency Focus
More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
6. AI in Healthcare, Research, and Sustainability
AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.
7. Security, Ethics & Governance Become Critical
With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.
8. Multimodal AI Goes Mainstream
AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.
9. On-Device and Edge AI Growth
Closely tied to trend 4: more AI workloads will run directly on phones, PCs, and embedded hardware, cutting latency and cloud costs while keeping sensitive data local.
10. New Roles: AI Manager & Human-Agent Collaboration
Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.
Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/?utm_source=chatgpt.com “7 AI Trends to Look for in 2026”
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/?utm_source=chatgpt.com “10 Generative AI Trends In 2026 That Will Transform Work And Life”
[3]: https://millipixels.com/blog/ai-trends-2026?utm_source=chatgpt.com “AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels”
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026?utm_source=chatgpt.com “Digital Regenesys | Top 10 AI Trends for 2026”
[5]: https://www.n-ix.com/ai-trends/?utm_source=chatgpt.com “7 AI trends to watch in 2026 – N-iX”
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/?utm_source=chatgpt.com “Microsoft unveils 7 AI trends for 2026 – Source Asia”
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026?utm_source=chatgpt.com “7 Generative AI Trends to Watch In 2026”
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/?utm_source=chatgpt.com “3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool”
[9]: https://www.reddit.com//r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/?utm_source=chatgpt.com “I read Google Cloud’s “AI Agent Trends 2026” report, here are 10 takeaways that actually matter”
214 Comments
Tomi Engdahl says:
Dude, Where’s My Return?
Majority of CEOs Alarmed as AI Delivers No Financial Returns
They’re worried they’re not spending enough on AI.
https://futurism.com/artificial-intelligence/ceos-ai-returns?utm_sf_post_ref=658255447&utm_sf_cserv_ref=352364611609411&fbclid=IwdGRjcAPhQ-xjbGNrA-FDz2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHqT38bZsuaAH5T4guYzKIW9AIYEbdQb0Qt1EZBzFeHBrGvEbvRzb3_J-Yu6z_aem_jcgIyGWKfLlMaaodR_-nUw
Investors continue to fret over an AI bubble “reckoning,” as gains in productivity from the tech remain elusive.
According to a recent survey by professional services network PwC, more than half of the 4,454 CEO respondents said “their companies aren’t yet seeing a financial return from investments in AI.”
Only 30 percent reported increased revenue from AI in the last 12 months. However, a far more significant 56 percent said AI has failed to either boost revenue or lower costs. A mere 12 percent of CEOs reported that it’d accomplished both goals.
The findings once again underline lingering questions about the effectiveness of the tech. That’s despite AI companies pouring tens of billions into data center buildouts and related infrastructure.
Instead of looking for other avenues for growth, though, PwC found that executives are worried about falling behind by not leaning into AI enough.
“A small group of companies are already turning AI into measurable financial returns, whilst many others are still struggling to move beyond pilots,”
For now, the prognosis is still looking somewhat grim. Last year, a frequently-cited MIT report found that a staggering 95 percent of attempts to incorporate generative AI into business so far are failing to lead to “rapid revenue acceleration.”
The effectiveness of the tech itself has also repeatedly been called into question, from frequent hallucinations and an inability to complete real-world office tasks to ongoing concerns over data security.
Tomi Engdahl says:
https://futurism.com/artificial-intelligence/investors-bracing-ai-bubble-reckoning?fbclid=IwVERDUAPhRPVleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR7d3bnfPDkDAcNYYRa0vHHI5Ro5uqF_xs1aftO2mXie2L1oK_GckTQmlWwmLw_aem_DCn07Ma9bh-863kfYAi0AQ
Tomi Engdahl says:
Oh cool it helped you do maths and reporting faster and now someone will fact-check it. Hope you’re all ready for this rollercoaster
Tomi Engdahl says:
The Math on AI Agents Doesn’t Add Up
A research paper suggests AI agents are mathematically doomed to fail. The industry doesn’t agree.
https://www.wired.com/story/ai-agents-math-doesnt-add-up/
The big AI companies promised us that 2025 would be “the year of the AI agents.” It turned out to be the year of talking about AI agents, and kicking the can for that transformational moment to 2026 or maybe later. But what if the answer to the question “When will our lives be fully automated by generative AI robots that perform our tasks for us and basically run the world?” is, like that New Yorker cartoon,
Tomi Engdahl says:
Something is breaking inside the education system — and it’s happening faster than universities can react. In lecture halls from Boston to Berlin, professors face a new kind of student: one who turns in perfectly polished assignments yet cannot defend a single idea in them.
Generative AI has not just entered the classroom—it has started replacing the very process of learning.
https://lasoft.org/blog/renting-out-the-mind-ai-is-accelerating-the-decline-of-academic-skills/
Tomi Engdahl says:
AI bullshit about someone using AI bullshit… bizarre
The irony is palpable. Using AI to criticize the use of AI
Tomi Engdahl says:
What if Lotta Maija, 24, who makes porn, soon shows up in a VR or Lidl uniform? “We have identified the risks”
https://www.iltalehti.fi/digiuutiset/a/48c1f16f-7207-4e44-ad63-2106ae2eaa89
Iltalehti reported last week on Lotta Maija, whose social media profile posed as a Finnair employee and was used to market adult entertainment. The same could happen to other Finnish companies.
AI scams are causing headaches at well-known Finnish companies.
By the “Finnair case,” Partonen refers to the Lotta Maija profile that surfaced last week, posing on social media as a Finnair employee. The AI-assisted profile marketed adult content that Lotta Maija, who presented herself as 24 years old, offered on the Fanvue service.
Finnair’s communications director Päivyt Tallqvist promptly commented that Lotta Maija is not a real employee of the airline. She noted that details of the profile suggested AI had been used “in some way.”
Vilma Luoma-aho, professor of communication management at the University of Jyväskylä, then said that large brands must increasingly prepare for deepfake scams. Deepfakes are AI-generated image manipulations that can look deceptively like genuine video footage.
At Finnish companies, the threat of deepfakes ties into broader information security concerns.
“The technology and methods used to produce them are advancing rapidly.”
S-ryhmä, Lidl, and VR have also recognized the danger of AI scams.
“We have identified the risks brought by the rapid development of AI. We follow the topic closely together with the authorities and take it into account in our own operations,” VR states.
Partonen, Alko’s chief information officer, notes that when it comes to AI scams, Alko is in the same boat as other well-known brands.
“This is a sign of the times. There is not much you can do to stop them, but you can certainly protect yourself against them.”
“To quote a colleague: the stronger the sense of urgency in a message, the more important it is to pause and verify its authenticity.”
Tomi Engdahl says:
As confirmed by an AI agent (and, eventually, a human) the work is non-recoverable.
Scientist Loses Two Years Of Work After Clicking The Wrong Button On ChatGPT, And People Are Less Than Sympathetic
https://www.iflscience.com/scientist-loses-two-years-of-work-after-clicking-the-wrong-button-on-chatgpt-and-people-are-less-than-sympathetic-82331?fbclid=IwdGRjcAPkxxtjbGNrA-TG8mV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHhnDVhaddjTQsRqB_0i7cM94vwbDmlK6mYaPM5hdAK9sjOrhAjU_jSBHECCA_aem_Zw25qGvZ0RxxVskI_gPobA
A scientist has written a piece for the esteemed journal Nature explaining how he lost two years’ worth of research when he pressed the wrong button on ChatGPT.
Marcel Bucher, professor of plant sciences at the University of Cologne in Germany, admitted in the piece that he had come to rely on OpenAI’s large language model (LLM) over the last few years. According to the professor, he used it for everything from writing emails and analyzing student responses, to revising publications, planning lectures, and preparing grant applications. He knew that these chatbots “hallucinate” – or place words in a pleasing order that seems plausible, but is not true – but said that he relied upon the apparently stable workspace that the LLM provided.
That changed in August 2025, when he played around with his data consent options in the LLM.
“I temporarily disabled the ‘data consent’ option because I wanted to see whether I would still have access to all of the model’s functions if I did not provide OpenAI with my data,” Bucher wrote in his Nature piece. “At that moment, all of my chats were permanently deleted and the project folders were emptied – two years of carefully structured academic work disappeared. No warning appeared. There was no undo option. Just a blank page.”
Having spent several years being reasonably happy with artificial intelligence (AI) responses, Bucher now sought some human help.
“When I contacted OpenAI’s support, the first responses came from an AI agent,” he added. “Only after repeated enquiries did a human employee respond, but the answer remained the same: the data were permanently lost and could not be recovered.”
OpenAI explained to Bucher that the issue was actually a privacy feature. Once a user chooses to deactivate sharing their data, their chat history is deleted and cannot be recovered.
With many people concerned about AI plagiarizing the work of the people it is trained on, the piece did not draw much sympathy online.
“Maybe next time, actually do the work you are paid to do *yourself*, instead of outsourcing it to the climate-killing, suicide-encouraging plagiarism machine,” one BlueSky user wrote, while another joked “All My Apes Gone, academia edition.”
Tomi Engdahl says:
Anthropic prepares to release Security Center for Claude Code
Anthropic is set to launch Security Center for Claude Code, offering users an overview of security scans, detected issues, and manual scan options in one place.
https://www.testingcatalog.com/anthropic-prepares-to-release-security-center-for-claude-code/#google_vignette
Tomi Engdahl says:
AI Is Causing Cultural Stagnation, Researchers Find
“No new data was added. Nothing was learned. The collapse emerged purely from repeated use.”
https://futurism.com/artificial-intelligence/ai-cultural-stagnation
Generative AI relies on a massive body of training material, primarily made up of human-authored content haphazardly scraped from the internet.
Scientists are still trying to better understand what will happen when these AI models run out of that content and have to rely on synthetic, AI-generated data instead, closing a potentially dangerous loop. Studies have found that AI models start cannibalizing this AI-generated data, which can eventually turn their neural networks into mush. As the AI iterates on recycled content, it starts to spit out increasingly bland and often mangled outputs.
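The recursive-training loop described above can be mimicked with a deliberately crude toy: repeatedly fit a Gaussian to samples drawn from the previous generation's fit, never adding fresh data. The distribution's spread collapses purely from the recursion, which is the qualitative point of the collapse studies; the specific numbers here say nothing about any real model:

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

def collapse_demo(generations=200, n=5, mu=0.0, sigma=1.0):
    """Each generation refits (mean, std) to n samples drawn from the
    previous generation's fit; no fresh data ever enters the loop."""
    stds = [sigma]
    for _ in range(generations):
        samples = [random.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        stds.append(sigma)
    return stds
```

Running it shows the estimated spread shrinking toward zero over the generations: the model ends up describing only its own outputs, an analogue of the "increasingly bland" drift described above.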
Tomi Engdahl says:
There’s also the question of what will happen to human culture as AI systems digest and produce AI content ad infinitum. As AI executives promise that their models are capable enough to replace creative jobs, what will future models be trained on?
Tomi Engdahl says:
AI Agents Are Mathematically Incapable of Doing Functional Work, Paper Finds
“There is no way they can be reliable.”
https://futurism.com/artificial-intelligence/ai-agents-incapable-math
A months-old but until now overlooked study recently featured in Wired claims to mathematically prove that large language models “are incapable of carrying out computational and agentic tasks beyond a certain complexity” — that level of complexity being, crucially, pretty low.
The paper, which has not been peer reviewed, was written by Vishal Sikka, a former CTO at the German software giant SAP, and his son Varin Sikka. Sikka senior knows a thing or two about AI: he studied under John McCarthy, the Turing Award-winning computer scientist who literally founded the entire field of artificial intelligence, and in fact helped coin the very term.
“There is no way they can be reliable,” Vishal Sikka told Wired.
When asked by the interviewer, Sikka also agreed that we should forget about AI agents running nuclear power plants and other strident promises thrown around by AI boosters.
Ignore the rhetoric that tech CEOs spew onstage and pay attention to what the researchers that work for them are finding, and you’ll find that even the AI industry agrees that the tech has some fundamental limitations baked into its architecture. In September, for example, OpenAI scientists admitted that AI hallucinations, in which LLMs confidently make up facts, were still a pervasive problem even in increasingly advanced systems, and that model accuracy would “never” reach 100 percent.
That would seemingly put a big dent in the feasibility of so-called AI agents, which are models designed to autonomously carry out tasks without human intervention.
AI leaders insist that stronger guardrails external to the AI models can filter out the hallucinations. They may always be prone to hallucinating, but if these slip-ups are rare enough, then eventually companies will trust them to start doing tasks that they once entrusted to flesh and blood grunts. In the same paper that OpenAI researchers conceded that the models would never reach perfect accuracy, they also dismissed the idea that hallucinations are “inevitable,” because LLMs “can abstain when uncertain.”
“Our paper is saying that a pure LLM has this inherent limitation — but at the same time it is true that you can build components around LLMs that overcome those limitations,”
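The Sikkas' formal argument is more involved, but the basic reliability concern compounds arithmetically: if each step of an agent's task succeeds independently with probability p, a chain of n steps succeeds with probability p^n. The independence assumption is a simplification for illustration, not a claim from the paper:

```python
def chain_success(p, n):
    """Probability that an n-step chain succeeds, if each step
    succeeds independently with probability p."""
    return p ** n

# Even a 99%-reliable step fails most 100-step tasks:
# chain_success(0.99, 100) is roughly 0.37
```

This is why per-step error rates that sound impressive in a demo can still doom long multi-step workflows, and why the guardrail-and-abstention approach mentioned above targets the steps rather than the chain.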
Tomi Engdahl says:
The five biggest AI shifts of 2026: AI agents pick up speed and the rise of robots begins
https://digia.com/blogi/viisi-suurinta-teko%C3%A4lymuutosta-2026-ai-agenttien-vauhti-kiihtyy-ja-robottien-nousu-alkaa
AI development and adoption continue at an accelerating pace. What effects will that have on business as AI agents become more common and their uses diversify? Meanwhile, regulation and the international environment are shaping how AI is used. The rise of robots is also beginning.
1. AI agents become co-workers
Last year's biggest shift in enterprise AI was the adoption of AI agents.
2. Hybrid use of models of different sizes becomes common, partly out of necessity
New, disruptive technology is often first used in very simple ways, for example by exploiting only a single capability.
3. Regulation becomes part of the design process
In recent years the EU has prepared a great deal of AI-related regulation, and it is starting to show up more and more in business operations.
“AI developers and solution vendors have to think ever more carefully about building regulatory compliance into every delivery,” Juppo says.
4. The importance of security grows
In today's risk-laden world, AI is also playing an ever stronger role in security and defense.
“Here we are seeing big leaps internationally, but also nationally. For example, the Finnish Defence Forces are investing heavily in AI and have announced they will open an AI center together with partners at the beginning of next year,” says Juhana Juppo.
5. Physical AI systems appear on factory floors
Until now, AI has mostly been treated as just another branch of IT. But rapid progress is also happening in so-called physical AI, that is, robotics. The combination of humanoid robots and large language models is shaping up to be a very effective performer of physical work.
“The recent progress in humanoid robots has been simply incredible.”
Using robots costs roughly the same everywhere in the world.
How did last year's predictions turn out?
A year ago, Juhana Juppo and Sami Paihonen predicted that AI agents would spread rapidly, and so they have. They also predicted that the number of AI models would grow and that “sustainable AI” would become a trend. Especially where energy is concerned, the sustainability of AI still has room for improvement.
The third prediction concerned AI's impact on creative industries. So far, at least, no major upheaval has materialized.
Fourth, Juppo and Paihonen estimated that industry-specific AI killer apps would begin to emerge and disrupt current ways of working. Those are perhaps still more widely awaited.
The fifth prediction concerned today's unsettled state of society. AI is also a tool for criminals and scammers, and its malicious use is increasing. On that point, Juppo and Paihonen were exactly right.
The most important thing is getting AI into production, safely.
Tomi Engdahl says:
Everything Claude Code: The Repo That Won Anthropic Hackathon (Here’s a Breakdown)
https://medium.com/@joe.njenga/everything-claude-code-the-repo-that-won-anthropic-hackathon-33b040ba62f3
Tomi Engdahl says:
https://thenewstack.io/llms-create-a-new-blind-spot-in-observability/
Tomi Engdahl says:
The era of agentic AI demands a data constitution, not better prompts
https://venturebeat.com/infrastructure/the-era-of-agentic-ai-demands-a-data-constitution-not-better-prompts
The industry consensus is that 2026 will be the year of “agentic AI.” We are rapidly moving past chatbots that simply summarize text. We are entering the era of autonomous agents that execute tasks. We expect them to book flights, diagnose system outages, manage cloud infrastructure and personalize media streams in real-time.
As a technology executive overseeing platforms that serve 30 million concurrent users during massive global events like the Olympics and the Super Bowl, I have seen the unsexy reality behind the hype: Agents are incredibly fragile.
Tomi Engdahl says:
The City of Helsinki trialed AI and sped up customer service by 50 percent
Anna Helakallio, 26 Jan 2026 09:00 (updated 09:31)
The trial found that using an AI assistant sped up information retrieval by more than 50 percent.
https://www.tivi.fi/uutiset/a/ca5fa50b-0fd3-4949-8e2d-716d0f5f438b
Generative AI sped up the information retrieval needed in the City of Helsinki's customer guidance by about 50 percent, Digia reports.
Tomi Engdahl says:
Sam Altman said OpenAI is planning to ‘dramatically slow down’ its pace of hiring
https://www.businessinsider.com/sam-altman-said-openai-plan-dramatically-slow-down-hiring-ai-2026-1
Tomi Engdahl says:
Claude Cowork turns Claude from a chat tool into shared AI infrastructure
https://venturebeat.com/orchestration/claude-cowork-turns-claude-from-a-chat-tool-into-shared-ai-infrastructure
Claude Cowork is now available to more Claude users, alongside new updates aimed at team workflows.
Anthropic made Claude Cowork accessible to users on Team and Enterprise plans, and it brings the platform closer to being a collaborative AI infrastructure. For enterprise teams, the change matters less as a feature update than as a shift in how Claude is meant to be used. Cowork reframes Claude as a shared, persistent workspace where context, files, and tasks live beyond a single user session. This aligns more closely with how teams actually operate than one-off chat interactions.
Tomi Engdahl says:
A wild prediction from the Claude boss about where AI might yet lead
Anthropic's crown jewel is its Claude AI.
https://muropaketti.com/tietotekniikka/tietotekniikkauutiset/tekoalypomo-ehdottaa-yhteista-hyvaa-kaikille-kansalaisille/#google_vignette
Dario Amodei, CEO of the AI company Anthropic, warns that the economic growth brought by AI may become concentrated in too few hands unless governments step in. According to Amodei, governments should create mechanisms that avert the harms and ensure the wealth the technology generates benefits society as a whole, not just the tech giants.
He outlined a scenario for The Wall Street Journal in which the economy grows 5–10 percent while unemployment rises at the same time. Amodei also said that ten million people, mostly in Silicon Valley, could decouple from the rest of society and enjoy growth of as much as 50 percent while the rest of the population falls behind.
Tomi Engdahl says:
Claude Code Tasks Are Here (New Update Turns Claude Code ToDos to Tasks)
https://medium.com/@joe.njenga/claude-code-tasks-are-here-new-update-turns-claude-code-todos-to-tasks-a0be00e70847
Claude Code Tasks is a new feature that is an upgrade of Todos, a little-known feature that most users skip over.
Tasks aren’t new to other AI coding tools; Google Antigravity, Augment Code, and others have featured them prominently for a while, and I’ve written about them before.
Tomi Engdahl says:
Building AI Agents in 2026: Chatbots to Agentic Architectures
This is the engineering blueprint for building production-ready agentic systems that actually work.
https://levelup.gitconnected.com/the-2026-roadmap-to-ai-agent-mastery-5e43756c0f26
Tomi Engdahl says:
OpenAI is generating over $1 billion from something that has nothing to do with ChatGPT
https://www.businessinsider.com/openai-1-billion-a-month-api-business-chatgpt-sam-altman-2026-1
OpenAI has made more than $1 billion from something other than ChatGPT.
That revenue comes “just from our API business,” Sam Altman said.
His comments come as OpenAI looks beyond model subscriptions to help cover soaring compute costs.
OpenAI has pulled in a billion-dollar month from something other than ChatGPT.
Sam Altman said in a post on X on Thursday that OpenAI added more than $1 billion in annual recurring revenue in the past month “just from our API business.”
“People think of us mostly as ChatGPT, but the API team is doing amazing work!” the OpenAI CEO wrote.
OpenAI’s API enables other companies and developers to embed its models into their own products, from internal productivity software to coding tools.
Many of Silicon Valley’s high-profile startups rely on OpenAI’s models as core infrastructure. Perplexity uses OpenAI’s models to power parts of its AI search and answer engine. Harvey, one of the fastest-growing legal tech startups, is built on OpenAI’s models to assist lawyers with research and drafting.
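As a sketch of what "built on OpenAI's models" means mechanically: a product typically wraps the vendor's HTTP API behind a thin function of its own. The endpoint shape below follows OpenAI's publicly documented chat-completions API, but treat the model name, system prompt, and key handling as illustrative placeholders:

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(user_text, model="gpt-4o-mini",
                  system="You are a legal research assistant."):
    """Assemble (url, headers, body) for a chat-completions call;
    actually sending it is left to whatever HTTP client the product uses."""
    headers = {
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder; load real keys from config
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_text},
        ],
    })
    return API_URL, headers, body
```

Everything a startup like Harvey or Perplexity layers on top (retrieval, citations, domain prompts) ultimately funnels through calls of roughly this shape, which is why API revenue scales with other companies' products.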
Last week, the company said it is gearing up to test ads inside ChatGPT as it faces about $1.4 trillion in spending commitments over the coming years.
It’s a notable shift for a company that once treated ads as taboo. Less than two years ago, Altman said advertising was a “last resort.”
Tomi Engdahl says:
Claude Code is turning non-programmers into builders. Here’s how to start.
From an 8-year-old making games to a Google engineer completing a year’s work in one hour. And how a $20 subscription might change your life too.
https://blog.devgenius.io/claude-code-is-turning-non-programmers-into-builders-heres-how-to-start-6a70d06cdcfd
Tomi Engdahl says:
https://venturebeat.com/technology/memrl-outperforms-rag-on-complex-agent-benchmarks-without-fine-tuning
Tomi Engdahl says:
https://thenewstack.io/developer-proves-ai-agents-can-be-reprogrammed-via-new-exploit/
Tomi Engdahl says:
Build an agent into any app with the GitHub Copilot SDK
Now in technical preview, the GitHub Copilot SDK can plan, invoke tools, edit files, and run commands as a programmable layer you can use in any application.
https://github.blog/news-insights/company-news/build-an-agent-into-any-app-with-the-github-copilot-sdk/
Tomi Engdahl says:
https://events.aiunleashedglobalsummit.com/ai-summit-europe-jan-29-26?fbclid=Iwb21leAPhQdBleHRuA2FlbQIxMQBzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6MndG-dP9pOajI3f_OQOlzd9MW7lbm1Dcbwk2LwURuE9QRyc-Kep-MWd5HTQ_aem_YDTHkWn3BoWZMuNZAVdEDg&utm_source=Facebook+Ad&utm_medium=120239293904340681&utm_campaign=2026.01.20_AI+Unleashed+Summit+Europe+1&utm_content=2026.01.20_AI+Unleashed+Summit+Europe+1_A%2B+Expanded+AI+Interests+Audience_Photo+Ad_H3+T3+I58_1.16&Placement=Facebook_Mobile_Feed&fbc_id=120239293904460681&h_ad_id=120239293904400681
Tomi Engdahl says:
Tech Billionaires Have No Answer for What’ll Happen If AI Takes All Jobs
“It’s clear that a lot of jobs are going to disappear: it’s not clear that it’s going to create a lot of jobs to replace that.”
https://futurism.com/future-society/ai-corporations-labor-jobs
At this point, tech corporations have made it no secret that their end goal is to replace all jobs with AI — thus cementing themselves as indispensable to the world economy. But what happens if we actually get to that point?
Either they don’t have a clue, or they don’t want to say.
Tomi Engdahl says:
Overrun with AI slop, cURL scraps bug bounties to ensure “intact mental health”
The onslaught includes LLMs finding bogus vulnerabilities and code that won’t compile.
https://arstechnica.com/security/2026/01/overrun-with-ai-slop-curl-scraps-bug-bounties-to-ensure-intact-mental-health/
The project developer for one of the Internet’s most popular networking tools is scrapping its vulnerability reward program after being overrun by a spike in the submission of low-quality reports, much of it AI-generated slop.
“We are just a small single open source project with a small number of active maintainers,” Daniel Stenberg, the founder and lead developer of the open source app cURL, said Thursday. “It is not in our power to change how all these people and their slop machines work. We need to make moves to ensure our survival and intact mental health.”
Tomi Engdahl says:
https://www.mandatumtrader.fi/sisallot/artikkelit/outrageous-predictions-2026/
Tomi Engdahl says:
Internal emails show Bank of America having difficulties with Nvidia’s AI Factory, showing the challenges of integrating AI in regulated industries.
Leaked emails show Bank of America’s struggles with Nvidia AI: ‘You have to help us as local car mechanics drive the race car!’ : https://mrf.lu/LJ2T
Tomi Engdahl says:
Bank of America struggled with adopting Nvidia’s AI software, internal emails showed.
Its challenges with Nvidia AI Factory highlight difficulties for regulated industries like banking.
AI deployment hurdles are common across sectors, experts say.
https://www.businessinsider.com/bank-of-america-nvidia-ai-internal-emails-2026-1?fbclid=IwdGRjcAPl8d5jbGNrA-Xxv2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHhIgzhf0-2NH2UTsTiVCUTokDsNkb1ukqGTQ5SHXKBBTTBZ0d9cGs4ZClGHz_aem_fzKwqH9jEJOfT_tZxABa7A&utm_campaign=mrf-insider-marfeel-headline-graphic&mrfcid=202601276978dbd3c91ef4487df7e13a
Tomi Engdahl says:
AI is winning over consumers — but it’s way behind with businesses, say Goldman Sachs analysts
https://www.businessinsider.com/ai-tech-bubble-consumer-impact-business-adoption-goldman-sachs-2025-11?fbclid=IwVERDUAPl8h9leHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5T_bPBnqfUSGMnTNyRqYiHHH72N9BB03nLq-xK7wcD54Ub5zKnqOWICrTF3Q_aem_XuWhuUyWKp6majV55kIgyQ&utm_campaign=mrf-insider-marfeel-headline-graphic
Tomi Engdahl says:
https://www.facebook.com/share/p/1J1L9dbwLu/
A German university professor lost two full years of academic work after a single setting change inside ChatGPT permanently erased his saved conversations and project folders, with no recovery option available from OpenAI, according to a report published by Nature.
Marcel Bucher, a professor of plant sciences at the University of Cologne, had been using ChatGPT Plus as a central workspace for a wide range of professional tasks. These included drafting grant proposals, preparing lectures and exams, revising academic papers, organizing teaching materials, and analysing student responses. Over time, the chat history and project folders inside ChatGPT effectively became an informal archive of his ongoing research and teaching output.
The data loss occurred when Bucher attempted to disable ChatGPT’s data consent option to see whether the service would continue to function without retaining his information. Instead of merely limiting data usage, the action immediately deleted all of his chats and emptied his project folders. There was no warning explaining the consequences, no confirmation dialog that clearly stated the deletion was irreversible, and no undo option. The interface simply refreshed to a blank workspace.
Initially assuming it was a glitch, Bucher checked multiple browsers, devices, and networks. He cleared caches, reinstalled applications, and even reverted the setting change, but nothing restored the missing content. Partial backups existed for some materials he had manually saved elsewhere, but large portions of his work were lost permanently.
Why did ChatGPT delete the professor’s work?
ChatGPT deleted the professor’s work when he disabled the data consent option: the system treated the settings change as a request to delete all of his chats and project folders. With no warning or confirmation dialog flagging the deletion as irreversible, two years of academic work were lost permanently.
See here: https://wonderfulengineering.com/professor-loses-two-years-of-research-work-after-clicking-the-wrong-button-on-chatgpt/
Tomi Engdahl says:
The company is looking to “dramatically slow down” hiring. https://trib.al/aYTGIKQ
Sam Altman Says OpenAI Is Slashing Its Hiring Pace as Financial Crunch Tightens
https://futurism.com/artificial-intelligence/sam-altman-openai-slashing-hiring?utm_sf_cserv_ref=352364611609411&utm_sf_post_ref=660413792&fbclid=IwdGRjcAPl-4ZjbGNrA-X7YmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHocklm_SkfvWZ0CYgmKhiSqgjFmsWu_5MMFnaXHdhlytxaG2yo91Xokgl-X7_aem_cEeEA8dbBNXvLFM3VHa5Uw
Tomi Engdahl says:
https://www.facebook.com/share/p/1Fcdqi8pqT/
A North Korean threat group is using AI-generated PowerShell malware to target blockchain developers through phishing campaigns in Japan, Australia, and India.
#NorthKorea #cybersecurity
You shouldn’t have PowerShell on your desktops or servers if it isn’t needed or used, and its permissions should be limited. Apply the Principles of Least Privilege and Attack Surface Reduction: don’t give attackers the tools they need to live off your land…
Tomi Engdahl says:
The demonstration is claimed to be the world’s first autonomous deployment of humanoid robots, with units emerging from shipping containers and moving in coordination. https://bit.ly/49XhOVn
Tomi Engdahl says:
Lawslop
Trump Department Responsible for Airline Safety Using AI to Write New Regulations, So They Can Be Churned Out as Fast as Possible
“We’re flooding the zone.”
https://futurism.com/artificial-intelligence/trump-regulations-ai?utm_sf_post_ref=660442205&utm_sf_cserv_ref=352364611609411&fbclid=IwdGRjcAPmGU1jbGNrA-YZLWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHhN8wFbx9LZ9otNGozwfXEitOyN505Zz-iffpUKGpvlSkFICBh47tLX4ayBj_aem_011l-tbR8QuyFYMCf1aiMA
The Department of Defense might be the first government agency to roll out a department-wide AI chatbot, but the Department of Transportation is about to be the first to draft actual binding regulations with the tech.
According to a new investigation by ProPublica, the top transportation agency has tapped Google Gemini to help write new regulations affecting aviation, automotive, railroad, and maritime safety. In internal communications from DoT attorney Daniel Cohen, agency staffers were presented with the plan along with a demonstration of AI’s “potential to revolutionize the way we draft rulemakings.”
Tomi Engdahl says:
https://www.facebook.com/share/p/1Mzr8P1g5i/
The interim head of America’s cyber defense agency decided it was A-okay to upload sensitive documents into ChatGPT. Read more @2600net irc.2600.net/#secnews
#CISA #ChatGPT #AI
For the article: https://cybernews.com/security/madhu-gottumukkala-cisa-chatgpt/
Proof, in a way, that even if you have policy, tooling, and other controls, they may not work as expected.
Tomi Engdahl says:
MCP shipped without authentication. Clawdbot shows why that’s a problem.
https://venturebeat.com/security/mcp-shipped-without-authentication-clawdbot-shows-why-thats-a-problem
Model Context Protocol has a security problem that won’t go away.
When VentureBeat first reported on MCP’s vulnerabilities last October, the data was already alarming. Pynt’s research showed that deploying just 10 MCP plug-ins creates a 92% probability of exploitation — with meaningful risk even from a single plug-in.
The core flaw hasn’t changed: MCP shipped without mandatory authentication. Authorization frameworks arrived six months after widespread deployment. As Merritt Baer, chief security officer at Enkrypt AI, warned at the time: “MCP is shipping with the same mistake we’ve seen in every major protocol rollout: insecure defaults. If we don’t build authentication and least privilege in from day one, we’ll be cleaning up breaches for the next decade.”
Three months later, the cleanup has already begun — and it’s worse than expected.
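The 92% figure from Pynt’s research can be sanity-checked with a back-of-envelope model. Assuming (our simplification, not Pynt’s methodology) that each plug-in carries an equal, independent exploitation probability p, the combined risk for n plug-ins is 1 − (1 − p)^n, and the cited numbers imply a per-plug-in risk of roughly 22%:

```python
# Back-of-envelope check of the compounding-risk figure cited above.
# Assumption (ours): plug-in risks are equal and independent, so the
# combined risk for n plug-ins is 1 - (1 - p)^n.

def combined_risk(p: float, n: int) -> float:
    """Probability that at least one of n independent plug-ins is exploited."""
    return 1 - (1 - p) ** n

def implied_per_plugin_risk(total: float, n: int) -> float:
    """Solve 1 - (1 - p)^n = total for the per-plug-in probability p."""
    return 1 - (1 - total) ** (1 / n)

p = implied_per_plugin_risk(0.92, 10)
print(f"implied per-plug-in risk: {p:.1%}")  # ~22.3%
```

Under that independence assumption, even a single plug-in would carry that same ~22% risk, which matches the article’s claim of “meaningful risk even from a single plug-in.”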
Tomi Engdahl says:
How an AI Agent Chooses What to Do Under Tokens, Latency, and Tool-Call Budget Constraints?
https://www.marktechpost.com/2026/01/23/how-an-ai-agent-chooses-what-to-do-under-tokens-latency-and-tool-call-budget-constraints/
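The idea of budgeted action selection can be sketched in a few lines. Note that all names below (`Action`, `Budget`, `plan`) are our own illustration, not code from the linked article: a greedy selector takes the highest value-per-token candidate that still fits the remaining token, latency, and tool-call budgets.

```python
# Illustrative sketch (names are ours, not from the article): greedily pick
# actions that fit remaining token, latency, and tool-call budgets.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    value: float      # estimated usefulness of the action
    tokens: int       # expected token cost
    latency: float    # expected wall-clock seconds
    tool_calls: int   # tool invocations consumed

@dataclass
class Budget:
    tokens: int
    latency: float
    tool_calls: int

    def allows(self, a: Action) -> bool:
        return (a.tokens <= self.tokens and a.latency <= self.latency
                and a.tool_calls <= self.tool_calls)

    def spend(self, a: Action) -> None:
        self.tokens -= a.tokens
        self.latency -= a.latency
        self.tool_calls -= a.tool_calls

def plan(candidates: list[Action], budget: Budget) -> list[Action]:
    """Greedily take the highest value-per-token action that still fits."""
    chosen = []
    for a in sorted(candidates, key=lambda a: a.value / max(a.tokens, 1),
                    reverse=True):
        if budget.allows(a):
            budget.spend(a)
            chosen.append(a)
    return chosen
```

For example, with a budget of 1,500 tokens, 4 seconds, and 2 tool calls, a cheap summarize step and a search step might both fit while an expensive browse step is skipped. A real agent would re-estimate costs after each step rather than plan once up front.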
Tomi Engdahl says:
Is that allowed? Authentication and authorization in Model Context Protocol
Learn how to protect MCP servers from unauthorized access and how authentication of MCP clients to MCP servers works.
https://stackoverflow.blog/2026/01/21/is-that-allowed-authentication-and-authorization-in-model-context-protocol/
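As a minimal illustration of the kind of check an MCP server should perform (this is our sketch, not the MCP specification — the actual MCP authorization spec builds on OAuth 2.1, and `EXPECTED_TOKEN` is a stand-in for proper token validation):

```python
# Minimal illustrative sketch of rejecting unauthenticated requests to an
# MCP-style HTTP endpoint. A real server would validate an OAuth 2.1 access
# token; the static EXPECTED_TOKEN here is a hypothetical placeholder.
import hmac

EXPECTED_TOKEN = "replace-with-issued-token"  # placeholder, not a real scheme

def authorize(headers: dict[str, str]) -> bool:
    """Return True only when the Authorization header carries the expected bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking token contents via timing.
    return hmac.compare_digest(token, EXPECTED_TOKEN)
```

The point of the sketch is simply that requests with no credentials at all should be rejected by default, which is exactly the insecure-defaults gap the MCP articles above describe.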
Tomi Engdahl says:
https://towardsdatascience.com/5x-agentic-coding-performance-with-few-shot-prompting/
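The few-shot idea behind the linked article can be sketched as follows (the helper names and example pairs are ours, not from the article): worked input/output demonstrations are prepended to the new task so the model imitates the demonstrated style.

```python
# Sketch of few-shot prompt assembly: demonstrations first, new task last.
# The example pairs below are invented for illustration.
EXAMPLES = [
    ("Rename variable x to count in: x = 0; x += 1",
     "count = 0; count += 1"),
    ("Add type hints to: def add(a, b): return a + b",
     "def add(a: int, b: int) -> int: return a + b"),
]

def few_shot_prompt(task: str, examples=EXAMPLES) -> str:
    """Assemble a few-shot prompt: each demo as Task/Answer, then the new task."""
    parts = [f"Task: {req}\nAnswer: {resp}" for req, resp in examples]
    parts.append(f"Task: {task}\nAnswer:")
    return "\n\n".join(parts)
```

The prompt ends with an open `Answer:` slot, so the model’s completion is the answer to the new task in the same format as the demonstrations.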
Tomi Engdahl says:
Gemini 3.5 Tested: Shows Fast Deep Thinking, Builds Big Apps in a Single Prompt
https://www.geeky-gadgets.com/gemini-3-5-benchmarks/
Tomi Engdahl says:
https://venturebeat.com/orchestration/claude-codes-tasks-update-lets-agents-work-longer-and-coordinate-across
Tomi Engdahl says:
Windsurf vs. Cursor – which AI coding app is better?
An honest review of Windsurf
https://www.thepromptwarrior.com/p/windsurf-vs-cursor-which-ai-coding-app-is-better
Conclusion
I think Windsurf is actually the better IDE for beginners.
If I were to start out coding today, Windsurf would be a great choice. You don’t need to think about context much, and the Windsurf agent will guide you through the code, helping you write everything.
Cursor by contrast has a bit of a steeper learning curve.
But if you’re aiming to write production-ready code, e.g. applications that have a working backend, payments integration, and authentication, the more fine-grained control that you get in Cursor will result in higher quality code.
For professional purposes, I would currently still choose Cursor over Windsurf.
Tomi Engdahl says:
Becoming Redundant
Wave of Suicides Hits as India’s Economy Is Ravaged by AI
“Very alarming.”
https://futurism.com/artificial-intelligence/suicides-india-economy-ai?fbclid=IwdGRjcAPndoZjbGNrA-d2W2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHtsIr2deQxdMvZAlKZ0Tvx1sX27vtS-ZMO6xy-Oh2Z-_SJ3OzmlpLyAsTrI0_aem_BUNvuC6zz3x7vxVqLc2Sag
For decades, tech companies have relied immensely on India’s vast workforce, from entry-level call center jobs to software engineers and high-ranking managerial positions.
But with the advent of advanced AI, and with employers greatly cutting back on hiring in the hope of eventually automating tasks entirely, India’s tech workers are having to cope with a vastly different reality in 2026.
As Rest of World reports, rising anxiety over the influence of AI, on top of already-grueling 90-hour workweeks, has proven devastating for workers. While it’s hard to single out a definitive cause, a troubling wave of suicides among tech workers highlights these unsustainable conditions.
Complicating the picture is a lack of clear government data on the tragic deaths. While it’s impossible to tell whether they are more prevalent among IT workers, experts told Rest of World that the mental health situation in the tech industry is nonetheless “very alarming.”
The prospect of AI making their careers redundant is a major stressor, with tech workers facing a “huge uncertainty about their jobs,” as Indian Institute of Technology Kharagpur senior professor of computer science and engineering Jayanta Mukhopadhyay told Rest of World.