AI trends 2025

AI is developing rapidly. Below are picks from several articles on what is expected to happen in and around AI in 2025. The excerpts have been edited and, in some cases, translated for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as they did when moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could solve only 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take for example that task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
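The routing idea above can be sketched in a few lines of Python. The model names, prices, and latency figures below are illustrative assumptions, not real vendor quotes; a production router would also weigh context length, data residency, and fallback behavior.

```python
# Minimal multi-model routing sketch: pick the cheapest model that meets
# the capability tier a task requires. All figures are illustrative.

MODELS = [
    # (name, capability tier, $ per 1K tokens, typical latency in seconds)
    ("small-local-model", 1, 0.0001, 0.2),
    ("mid-tier-model",    2, 0.002,  0.8),
    ("frontier-model",    3, 0.03,   2.5),
]

def route(required_tier: int, max_latency: float = None) -> str:
    """Return the cheapest model meeting the tier (and latency) requirement."""
    candidates = [
        m for m in MODELS
        if m[1] >= required_tier and (max_latency is None or m[3] <= max_latency)
    ]
    if not candidates:
        raise ValueError("no model satisfies the constraints")
    return min(candidates, key=lambda m: m[2])[0]

# A simple task goes to the cheap model; complex reasoning escalates to the
# frontier model; a latency cap can override raw capability ordering.
print(route(1))                    # small-local-model
print(route(3))                    # frontier-model
print(route(2, max_latency=1.0))   # mid-tier-model
```

The same structure extends naturally to per-vendor quotas or outage fallback: add fields to the model tuples and constraints to the candidate filter.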
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
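The hand-off pattern described here (the AI agent resolves routine order changes; complex ones go to a human) can be sketched as a simple triage rule. The request categories below are hypothetical illustrations, not Agentforce’s actual logic.

```python
# Hedged sketch of an agent-to-human escalation rule for order changes.
# Which request types count as "complex" is an assumption for illustration.

SIMPLE_CHANGES = {"quantity", "shipping_address", "delivery_date"}
COMPLEX_CHANGES = {"reconfiguration", "venue_change"}

def triage(change_type: str) -> str:
    """Decide whether the AI agent resolves a request or escalates it."""
    if change_type in SIMPLE_CHANGES:
        return "ai_agent"     # resolved automatically in seconds
    if change_type in COMPLEX_CHANGES:
        return "human_rep"    # pushed up to a human representative
    return "human_rep"        # unknown cases default to a human for safety

print(triage("quantity"))       # ai_agent
print(triage("venue_change"))   # human_rep
```

Defaulting unknown request types to a human is the conservative choice: the agent only automates what it has been explicitly cleared to handle.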

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.
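The economics Altman describes can be made concrete with a back-of-envelope calculation. The per-query costs below are illustrative assumptions (the article cites up to $1,000 only for the most advanced models; typical queries cost far less); the point is simply that heavy usage at a flat $200/month quickly exceeds the subscription price.

```python
# Back-of-envelope sketch: flat-rate subscription vs. per-query compute cost.
# All cost figures are illustrative assumptions, not OpenAI's actual numbers.

SUBSCRIPTION_PRICE = 200.0  # USD per month for ChatGPT Pro

def monthly_margin(queries_per_month: int, cost_per_query: float) -> float:
    """Revenue minus compute cost for one subscriber; negative = loss."""
    return SUBSCRIPTION_PRICE - queries_per_month * cost_per_query

# A light user at $0.05/query is profitable for the provider...
print(monthly_margin(1000, 0.05))   # 150.0
# ...but a heavy user of expensive reasoning queries is a loss.
print(monthly_margin(500, 2.00))    # -800.0
```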

AI requires ever-faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

How the chip market will fare next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence and HPC computing continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating.
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025 further advancements in MCUs using lightweight AI models are very likely. The adoption of AI acceleration features is a big step in the development of microcontrollers; having begun in 2024, these features and their tools are expected to develop further in 2025.

AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new AI models based on general-purpose AI (GPAI) standards must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance: EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
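The distinction drawn here (the same tool posing very different risks depending on the data and the task) can be expressed as a simple risk matrix. The categories and scores below are illustrative assumptions for a governance sketch, not the EU AI Act’s actual classification scheme.

```python
# Hedged sketch: score an AI usage event by data sensitivity and task type,
# in the spirit of "same tool, different risk". Categories are assumptions.

DATA_RISK = {"public": 0, "internal": 1, "sensitive": 2}
TASK_RISK = {"draft_marketing": 0, "summarize_docs": 1, "automated_decision": 2}

def classify(data: str, task: str) -> str:
    """Combine data and task risk into a coarse review level."""
    score = DATA_RISK[data] + TASK_RISK[task]
    if score >= 3:
        return "high-risk: requires formal review"
    if score >= 1:
        return "medium-risk: log and monitor"
    return "low-risk: allowed"

# Drafting marketing copy from public info vs. summarizing sensitive documents:
print(classify("public", "draft_marketing"))    # low-risk: allowed
print(classify("sensitive", "summarize_docs"))  # high-risk: requires formal review
```

Even a coarse matrix like this forces an inventory question a compliance team must answer anyway: for each tool, what data flows in and what decisions flow out.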
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

3,950 Comments

  1. Tomi Engdahl says:

    https://www.tivi.fi/uutiset/a/ad085143-78b5-4219-aec1-d5abc2e656b0
    Linus Torvalds doesn’t completely dismiss vibe coding – but used this way, it is still a “terrible, terrible idea”
    Suvi Korhonen, 19 Nov 2025 08:59 | updated 19 Nov 2025 08:59 | Linux, AI, software development
    Torvalds is already looking forward to the time when the hype around AI dies down.

  2. Tomi Engdahl says:

    https://www.tivi.fi/uutiset/a/edebea8e-8054-4fb4-bcf4-e5dbb1927dfe
    ChatGPT gets a revolutionary feature
    Up to 20 users can be invited to group chats

  3. Tomi Engdahl says:

    Do you do your work like this? AI may soon replace you
    Kaisa Paastela
    Published 17 Nov 2025 | 13:21
    Updated 17 Nov 2025 | 13:21
    Working life
    According to the non-fiction author, AI can produce unlimited amounts of what people typically do remotely.
    https://www.verkkouutiset.fi/a/teetko-toitasi-nain-tekoaly-voi-syrjayttaa-sinut-pian/#fb485f67

    Non-fiction author and communications expert Katleena Kortesuo weighs in on the recent remote-work debate, which arose when it was reported that Kela will in future require its employees to come to the office at least four days a month. According to Kortesuo, the question can be settled with a quick test.

    – Can you do your work remotely for weeks on end, without a single work-related live meeting? Kortesuo asks.

    – In that case, your job can probably be replaced by AI within 2–5 years.

    Kortesuo points out that AI can produce unlimited amounts of exactly what people do remotely: code, text, decisions, images, calculations, videos, summaries, music…

    – If, on the other hand, your work requires physical touch, visiting customers, live negotiations, stage appearances, brainstorming sessions, or face-to-face interaction with others in general, you are safe for longer, she states.

    Kortesuo continues that if a Kela benefits officer can, by their own account, work remotely for weeks following some repetitive process or flowchart, it is clear that their work can be handed over to AI quite soon.

    Kela’s Director General Lasse Lehtonen has previously stated that the IT system reform automating Kela’s operations means 15–20 percent less staff will be needed over a five-year horizon.

    Meetings, seminars and face-to-face encounters, even brainstorming sessions, can be held remotely. How can this still come as a surprise to anyone?

  4. Tomi Engdahl says:

    Founder of the world’s fastest-growing company: “Europe is a better place for a startup than Silicon Valley”
    This year’s Slush was opened by the world’s fastest-growing company, AI startup Lovable, and its CEO Anton Osika.
    https://www.kauppalehti.fi/uutiset/a/b15317bf-c5a6-40fb-bb30-83a34444ea3f

    According to Anton Osika, CEO of Sweden’s Lovable, a kind of “Silicon Valhalla” is emerging in the Nordics.

  5. Tomi Engdahl says:

    “Nokia changed the world in its time by connecting people – now we will do it again by connecting intelligent solutions. As a trusted Western provider of secure and advanced network solutions, we are creating the conditions for the AI supercycle. Our technology solutions, in both mobile infrastructure and fixed networks, support our customers’ value creation. I am proud of our ability to lead the era of next-generation networks,” commented Nokia CEO Justin Hotard.
    https://mobiili.fi/2025/11/19/nokia-kertoi-suurista-muutoksista-uusi-strategia-uudelleenjarjestely-ja-liiketoimintoja-myyntilistalle-mobiiliverkoista-vastannut-johtaja-jattaa-yhtion/

  6. Tomi Engdahl says:

    Should we be worried? Most knowledge workers use AI on their own initiative
    https://demokraatti.fi/pitaako-huolestua-tietotyolaisista-valtaosa-kayttaa-tekoalya-ominpain

    According to a new survey, three out of four Finnish knowledge workers already use AI in their work, but most do so on their own, without employer support or training.

    This comes from a survey commissioned by technology company HP, answered by just over a thousand Finnish knowledge workers across various industries.

    According to the survey, more than half of knowledge workers use AI weekly, but only about a third have received training or guidance for it from their workplace. Most have taught themselves to use the new tools.

    ICT professionals said AI saves time and improves efficiency and the quality of work, but at the same time uncertainty about AI’s reliability and data security is slowing its wider adoption.

    According to respondents, AI makes everyday work smoother and frees up time for concentration and ideation. Nearly half of respondents would like to use AI in their work more than they currently do.

  7. Tomi Engdahl says:

    AI devours so much money that Amazon resorted to a $15 billion bond issue
    The bonds are the online retail giant’s first issue in three years.
    https://www.tivi.fi/uutiset/a/beaee2c9-cf93-4531-ad25-b7c8ee562b05

    Amazon has issued $15 billion worth of bonds. This is the first time in three years that the online retail giant has issued bonds, reports news agency Bloomberg.

    The move follows bond issues by several other technology companies, which have raised tens of billions of dollars. Earlier this month, Alphabet issued $25 billion in bonds, Meta $30 billion and Oracle $18 billion.

    According to experts, the money is going into AI development and building AI infrastructure. Amazon is already the world’s largest provider of cloud computing capacity.

    Amazon CEO Andy Jassy stated earlier this year that the company’s data center capacity has doubled since 2022. Jassy expects capacity to double again by 2027. In early November, the company announced a $38 billion deal with Nvidia.

    Reply
  8. Tomi Engdahl says:

Espoo-based IXI has again made an impression in the smart glasses market. The company has unveiled the world’s lightest smart glasses prototype, weighing just 22 grams with all the electronics built into the frame. Lenses have not yet been added, but the frame alone shows how far the company has come in combining lightness, comfort, and technical integration.
https://suomimobiili.fi/suomalainen-ixi-esitteli-ennatyskevyet-22-gramman-alylasit/

IXI rose to prominence as the developer of the world’s first autofocus glasses. The company’s technology uses a liquid-crystal autofocus that adapts to eye movements and automatically focuses near and far. The newly unveiled frame prototype is further evidence that the company is moving toward the mass market, this time backed by European manufacturing partners.

    Reply
  9. Tomi Engdahl says:

Anyone can build great software – Lovable charmed Slush again
https://etn.fi/index.php/13-news/18200-kuka-tahansa-voi-kehittaeae-upeita-ohjelmistoja-lovable-hurmasi-slushin-jaelleen

Lovable, launched at Slush a year ago, quickly became the talk of the whole event, and now the same has happened again with even greater force. On stage, founder Anton Osika underlined the core of the vision: anyone can build great software. To that end, Lovable has built a platform that has grown as fast as the AI boom itself.

Lovable’s story is exceptional. In just eight months the service was valued at $100 million, and in the run-up to Slush the talk was already of $200 million. The community has grown to 9 million.

More than 100,000 new applications are created on Lovable’s platform every day, and over five million people use its tools. The developer community has swelled past 100,000 members – even though, according to Osika, only 0.5 percent of people are traditionally developers. That is exactly what he wants to change.

Osika, who previously studied physics and AI at CERN, said it was clear from the start how fundamentally AI would revolutionize software development. Even so, Lovable’s pace surprised its founders: a year ago the team had eight people; now the company has grown into a global player, though its headquarters remain in Stockholm. Osika stressed that global technology companies can be built from Europe and that the region has plenty of AI talent.

    Reply
  10. Tomi Engdahl says:

Gartner: there won’t be enough electricity for AI data centers
https://etn.fi/index.php/13-news/18189-gartner-saehkoe-ei-riitae-ai-datakeskuksiin

The rapid growth of AI will lead to a steep rise in data center electricity consumption, Gartner warns in a new analysis. According to the firm, data centers will consume a total of 448 terawatt-hours of electricity in 2025, but the figure will nearly double to 980 terawatt-hours by 2030. The growth is driven above all by AI computing, which increases energy demand far faster than traditional IT loads.

According to Gartner research director Linglan Wang, AI-optimized servers in particular will become the central challenge in managing energy consumption. Their electricity use will grow five-fold by 2030, from 93 terawatt-hours to as much as 432 terawatt-hours. By 2030, AI servers will account for 44 percent of all electricity consumed by data centers and for 64 percent of the increase in consumption.

Regionally, growth is concentrated in the United States and China, which together account for two thirds of global data center electricity demand. According to Gartner, China is in a somewhat better position thanks to more energy-efficient servers and more systematic capacity build-out. In the United States, data centers’ share of total electricity consumption will rise from 4 percent to 7.8 percent by 2030. In Europe the rise is more moderate but still clearly visible: from 2.7 percent to 5 percent.

The growing need for electricity is forcing data center operators to look for new ways to secure their energy supply. Gartner estimates that the current model, in which most backup power is based on fossil fuels, is not sustainable in the long run. Alternative ”clean power” technologies such as green hydrogen, geothermal energy, and small modular nuclear reactors are only emerging, but Gartner predicts they will become viable in data center microgrids toward the end of this decade.
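The Gartner projections quoted above are internally consistent; a quick sanity check of the quoted figures:

```python
# Consistency check of the Gartner projections quoted above
# (448 -> 980 TWh total; AI servers 93 -> 432 TWh by 2030).
total_2025, total_2030 = 448, 980       # TWh, all data centers
ai_2025, ai_2030 = 93, 432              # TWh, AI-optimized servers

growth = total_2030 / total_2025        # ~2.19x: total nearly doubles
ai_growth = ai_2030 / ai_2025           # ~4.6x: roughly the stated five-fold
ai_share_2030 = ai_2030 / total_2030    # ~0.44: 44% of all data-center power
ai_share_of_increase = (ai_2030 - ai_2025) / (total_2030 - total_2025)  # ~0.64

print(f"{growth:.2f}x total, {ai_growth:.1f}x AI, "
      f"{ai_share_2030:.0%} of 2030 total, {ai_share_of_increase:.0%} of increase")
```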

    Reply
  11. Tomi Engdahl says:

Use of generative AI nearly doubled
https://www.uusiteknologia.fi/2025/11/12/generatiivisen-tekoalyn-kaytto-lahes-tuplaantui/

In Finland, according to Statistics Finland, 41 percent of people aged 16–89 had used generative AI in the past three months, up from 23 percent a year earlier. Generative AI was used more in the Helsinki metropolitan area than elsewhere in Finland.

According to Statistics Finland’s latest media use statistics, the fastest-growing use of AI was information search, which was also the most popular use. A third of the population had searched for information with generative AI. The second most popular use is producing and improving text, which one in four respondents had used AI for.

”The rapid adoption of AI services has been driven by the low threshold to start using them, especially chatbots, and by their practicality in people’s studies, work, and everyday life,” says Statistics Finland senior researcher Rauli Kohvakka.

According to Kohvakka, young people and young adults in particular have been the most common adopters of generative AI. Slightly more men than women use the services, but the gender gap narrowed from previous years in all age groups.

    Reply
  12. Tomi Engdahl says:

AI already at the speed of light
https://www.uusiteknologia.fi/2025/11/15/tekoalya-valon-nopeudella/

A team led by Aalto University has performed so-called tensor computation at the speed of light in a single computation pass. The result is an important step toward future AI hardware based on optical computing instead of traditional electronics.

The branch of arithmetic known as tensor computation is the backbone of nearly all modern technologies, AI in particular, but its applications reach far beyond basic mathematics.

Think of everything involved in turning, rotating, and rearranging a Rubik’s cube in different dimensions. Where humans and conventional computers perform these operations step by step, light can do them all at once.

Today, all AI functionality, from image recognition to language processing, rests on tensor computations. The explosive growth of data, however, has pushed traditional digital computing systems such as graphics processing units (GPUs) to their limits in speed, scalability, and energy consumption alike.
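The kind of operation in question can be sketched in NumPy (shapes and values here are arbitrary illustration): a batched tensor contraction that a GPU executes as many multiply-accumulate steps, and that the optical approach aims to perform in a single pass of light.

```python
import numpy as np

# A batched tensor contraction of the kind AI workloads run constantly:
# contract a batch of matrices A (b,i,j) with B (b,j,k) into C (b,i,k).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3, 5))
B = rng.standard_normal((4, 5, 2))

# One logical operation; digitally it expands into b*i*j*k multiply-adds.
C = np.einsum('bij,bjk->bik', A, B)
print(C.shape)  # (4, 3, 2)
```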

    Reply
  13. Tomi Engdahl says:

Jolla launched a new AI enterprise server
https://www.uusiteknologia.fi/2025/11/19/jolla-toi-slushiin-uuden-ai-yrityspalvelimen/

Jolla, the startup known for its mobile devices and several reinventions since, is unveiling a new enterprise AI server at the Slush event that opens today. The product is built around an Nvidia graphics accelerator.

With Jolla’s new Mind2 Enterprise AI server, companies can use AI locally without moving data abroad or to the cloud. At the same time, Jolla and its sister company Venho.ai are presenting a software solution that enables AI without dependence on international cloud services.

The Jolla server makes it possible, for example, to process sensitive data under the company’s own control and on the terms of EU data regulation. ”80 percent of companies’ AI needs can be handled locally with open models – without moving sensitive data to a foreign cloud,” says Antti Saarnio, CEO of Venho.ai and chairman of the board of Jolla Group.

At the start of 2025, Jolla already brought to market the Mind2 AI device aimed at the developer community. The new Mind2 Enterprise AI server takes performance to a new level. It packs an Nvidia RTX 5090 GPU and 64 gigabytes of GDDR7 memory. According to the company, performance reaches 2,600 teraflops, and the server can handle AI models of up to 72 billion parameters.
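For scale, a hedged back-of-envelope on the 72-billion-parameter claim (the 4-bit quantization figure is an assumption for illustration, not from the article):

```python
# How a 72B-parameter model could fit in 64 GB of GPU memory,
# assuming weights quantized to ~4 bits (0.5 bytes per parameter).
params = 72e9
bytes_per_param = 0.5                          # 4-bit quantization (assumed)
weights_gb = params * bytes_per_param / 1e9    # ~36 GB of weights

print(f"{weights_gb:.0f} GB of weights, leaving headroom for activations/KV cache")
```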

    Reply
  14. Tomi Engdahl says:

A rising trend: AI doesn’t understand, and users don’t understand or care. AI tools are used to churn out junk for publication. For example, 90 percent of the electronics circuits I see on Facebook appear to be AI-generated garbage, full of obvious errors, with no chance of working as the title or attached description claims. Some are outright dangerous contraptions. For many of them, even ChatGPT can tell they don’t look functional. The emerging trend seems to be AS + HS: Artificial Stupidity produces junk content, and Human Stupidity publishes that junk and likes the worthless garbage.

    Reply
  15. Tomi Engdahl says:

    OpenAI needs to raise at least $207bn by 2030 so it can continue to lose money, HSBC estimates
    A burning platform
    https://www.ft.com/content/23e54a28-6f63-4533-ab96-3756d9c88bad?fbclid=IwdGRjcAOTBZtjbGNrA5MFRWV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHl9yr5FYkC3PL-EnCea_vKAm19sUYzTzDI0NOIY8-uFMuOT-umgq_RmGiVtN_aem_nXY0HV7OAkRxNYE9BSyuJg

    OpenAI is a money pit with a website on top. That much we know already, but since OpenAI is a private company, there’s a lot of guesswork required when estimating the depth of the pit.

    HSBC’s US software and services team has today updated its OpenAI model to include the company’s $250bn rental of cloud compute from Microsoft, announced late in October, and its $38bn rental of cloud compute from Amazon announced less than a week later. The latest two deals add an extra four gigawatts of compute power to OpenAI’s requirements, bringing the contracted amount to 36 gigawatts.

    Based on a total cumulative deal value of up to $1.8tn, OpenAI is heading for a data centre rental bill of about $620bn a year — though only a third of the contracted power is expected to be online by the end of this decade.

HSBC’s starting point is to put user numbers on an S-curve that by 2030 reaches 3bn, “equivalent to 44 per cent of the world’s adult population” ex China. That’s versus an estimated total user base of approximately 800mn last month.

    LLM subscriptions will become “as ubiquitous and useful as Microsoft 365”, HSBC says. It models that by 2030, 10 per cent of OpenAI users will be paying customers, versus an estimated 5 per cent currently.

    The team also assumes LLM companies will capture 2 per cent of the digital advertising market in revenue, from slightly more than zero currently.

    For what it’s worth, we can summarise a few of the assumptions HSBC is making for the estimates above:

    Total consumer AI revenue will be $129bn by 2030, of which $87bn comes from search and $24bn comes from advertising.

    OpenAI’s consumer market share slips to 56 per cent by 2030, from around 71 per cent this year. Anthropic and xAI are both given market shares in the single digits, a mystery “others” is assigned 22 per cent, and Google is excluded entirely.

    Enterprise AI will be generating $386bn in annual revenue by 2030, though OpenAI’s market share is set at 37 per cent from about 50 per cent currently.

    The bottom line is that, for OpenAI, it’s nowhere close to enough.

    OpenAI’s cumulative free cash flow to 2030 may be about $282bn, it forecasts, while Nvidia’s promised cash injections and the disposal of AMD shares can bring in another $26bn.

    Each extra 500mn users OpenAI can grab will add about $36bn to cumulative revenue between now and 2030, while converting 20 per cent of the customers to paid subscriptions might bring in an additional $194bn over the same period, HSBC says.

    Given the interlaced relationships between AI LLM, cloud, and chips companies, we see a case for some degree of flexibility at least from the larger players (less so for the neo clouds): less capacity would always be better than a liquidity crisis.

    We expect AI to penetrate every production process and every vertical, with a great potential for productivity gains at a global level.

    Some AI assets may be overvalued, some may be undervalued too. But eventually, a few incremental basis points of economic growth (productivity-driven) on a USD110trn+ world GDP could dwarf what is often seen as unreasonable capex spending at present.
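The subscription assumptions above reduce to simple arithmetic; a minimal sketch using only figures quoted in the piece:

```python
# Back-of-envelope on the HSBC assumptions quoted above (illustrative only).
users_now, paying_rate_now = 800e6, 0.05      # ~800mn users, ~5% paying today
users_2030, paying_rate_2030 = 3e9, 0.10      # S-curve to 3bn users, 10% paying

paying_now = users_now * paying_rate_now      # ~40mn paying subscribers today
paying_2030 = users_2030 * paying_rate_2030   # ~300mn paying by 2030

annual_rent = 620e9                           # full run-rate data-centre bill, $/yr
cum_fcf = 282e9                               # HSBC's cumulative FCF estimate to 2030

print(f"{paying_now/1e6:.0f}mn -> {paying_2030/1e6:.0f}mn paying users")
print(f"one year of full rent exceeds cumulative FCF to 2030: {annual_rent > cum_fcf}")
```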

    Reply
  16. Tomi Engdahl says:

    Who Owns AI-Generated Content? The Murky Future of Copyright in the Age of AI
    https://lasoft.org/blog/who-owns-ai-generated-content-the-murky-future-of-copyright-in-the-age-of-ai/?utm_source=facebook&utm_medium=paid&utm_campaign=Blog%20Posts&utm_content=Who%20Owns%20AI-Generated%20Content&utm_term=Education&hsa_acc=2681760161854042&hsa_cam=120224469662910500&hsa_grp=120236724364380500&hsa_ad=120236724746720500&hsa_src=fb&hsa_net=facebook&hsa_ver=3&fbclid=IwdGRjcAOTCHBleHRuA2FlbQEwAGFkaWQBqyqqebMktHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHpsmqIXZ2EWZOMW7KS77xPb1CtgidaWK_rfpbLeUTiKSNRe3vEVHp3azKk9T_aem_wAAz2XFwlOLerKLzdKbGZQ&utm_id=120224469662910500

    Generative AI tools like ChatGPT, Midjourney, and Claude have made it easier than ever to produce articles, images, music, and even code — sometimes with just a few clicks. But as businesses rush to automate content creation, one critical question remains: Who actually owns AI-generated work?

    According to the U.S. Copyright Office, works created entirely by AI without human authorship are not eligible for copyright protection. That means if your article, design, or video was made solely by a machine, you can’t legally stop others from copying or reselling it. This principle was reinforced in 2023, when a federal judge ruled against granting copyright to a work created by an AI system (Thaler v. Perlmutter).

    That doesn’t mean AI-assisted work is off-limits — but the rules are fuzzy, and getting it wrong could cost you more than you think.

    What the Law Says — and What It Doesn’t
    The U.S. Copyright Office has been clear: copyright protection only applies to works with sufficient human creativity. In other words, if an AI tool generates content without meaningful human involvement, that content is in the public domain from the start. This was underscored in the Thaler v. Perlmutter ruling, where the court stated:

    “Human authorship is a bedrock requirement of copyright.”

    However, there’s a gray zone: what counts as “meaningful” human input? For example:

    Editing AI-generated text significantly
    Selecting, combining, and curating multiple outputs
    Writing highly specific prompts that guide the result in a creative direction

    In such cases, parts of the work — especially the arrangement or final edited form — may qualify for copyright.

    Outside the U.S., approaches vary:

    In the UK, copyright for computer-generated works is granted to the person who made the “arrangements necessary” for creation (CDPA 1988, s. 9(3)).
    In the EU, human authorship remains essential, and the European Parliament is working on broader AI regulations, including content ownership issues.

    Bottom line: there’s no one-size-fits-all answer — and if you’re using AI for anything public-facing, you need to know where your legal exposure starts.

    AI-generated blog posts
Imagine you publish a blog post written entirely by ChatGPT with minimal edits. You might think you “own” it, but legally, you don’t. Anyone could copy and republish it without infringing copyright. Worse, platforms like Google may devalue purely AI content if it lacks originality or human insight.

    Designs from Midjourney or DALL·E
    Creating a logo or ad banner using Midjourney? Those images typically have no copyright protection unless you’ve heavily modified them. In fact, some AI art platforms (like Stability AI) don’t even guarantee you exclusive rights to what you generate. That means your competitor could use a nearly identical image — and you’d have no legal ground to stop them.

    AI-composed music or voiceovers
    Tools like Suno or ElevenLabs can generate music and synthetic voices in seconds. But again, ownership is murky. Many platforms give users a license to use the outputs, but not full copyright. Plus, music with no human composition or recording input is likely unprotected under copyright law.

    Code written by Copilot or similar tools
    GitHub Copilot has raised red flags in the open-source community, with claims it may reproduce licensed code. Microsoft, GitHub, and OpenAI were even sued in 2022 over potential violations (Copilot lawsuit info). If your business ships software generated by such tools, you could unknowingly violate licensing terms — or fail to own the rights to what you deploy.

    Using generative AI without understanding the legal landscape can backfire — fast. Here’s why relying on AI-generated content without human oversight can be risky for your business:

    No Legal Ownership
    If your marketing materials, product descriptions, or visuals are generated solely by AI, you likely don’t hold enforceable rights. That means you can’t stop others from copying or even monetizing what you thought was “yours.”

    Legal Liability
    If your AI tool accidentally reuses licensed or copyrighted material — like snippets of code, brand elements, or media — you may be held accountable, not the AI provider. The Copilot lawsuits show how unclear the boundaries are between “trained on” and “copied from.”

    SEO & Brand Risk
    Search engines are catching up. Google has stated that AI-generated content designed to manipulate rankings could violate its spam policies. Moreover, AI-written copy often lacks voice, originality, or depth — which can hurt trust and engagement.

    Confidentiality and Data Risks
    Uploading sensitive internal info to AI platforms (like ChatGPT or Jasper) could lead to data exposure if you’re not using enterprise-secure models. Always check the platform’s terms — some reserve the right to store or train on your input.

    Replacing Teams = Losing Expertise
    AI is fast — but it lacks domain knowledge, ethical reasoning, emotional intelligence, and brand nuance. Replacing your creative, marketing, or legal teams with AI might save costs short-term, but it can lead to generic content, reputation risks, or compliance issues in the long run.

    What You Can Do: Safe and Smart Use of AI
    AI isn’t the enemy — but it needs to be handled thoughtfully. Here are practical steps businesses can take to stay creative, efficient, and legally safe:

    Use AI as a co-creator, not the sole author
    The safest route to copyright protection is combining AI output with human creativity. That could mean:

    Editing and rewriting AI-generated text
    Reworking AI-generated designs
    Curating and structuring multiple outputs into something original
    This turns the final product into a human-authored work, increasing your chances of legal ownership.

    Document your process
    If you’re using AI to assist with content creation, keep a record of your prompts, edits, and human contributions. This can serve as evidence of human authorship if your ownership is ever challenged.

    Review platform terms
    Not all AI tools offer the same usage rights. Some platforms allow commercial use and grant full rights (e.g. ChatGPT’s paid plans), while others retain ownership or offer limited licenses. Always read the fine print — especially for visuals, music, and code.

    Focus on originality and brand alignment
    Even if you use AI to generate drafts, make sure the final product reflects your brand voice, values, and strategic intent. AI is great at helping with efficiency — but humans still drive differentiation.

    Consult legal or IP experts
    If your business relies heavily on AI-generated content — especially in regulated industries or IP-sensitive sectors — talk to an IP lawyer. It’s far cheaper to get clarity early than to fight disputes later.

    AI Can Help — But Ownership Starts With You
    Generative AI is transforming how we create — but it hasn’t rewritten the rules of ownership. Without human input, AI-generated content often floats in a legal gray zone, leaving your business exposed.

    That doesn’t mean you shouldn’t use it. It means you should use it smartly.

    Reply
  17. Tomi Engdahl says:

In the US Q3 earnings season, AI sets the direction – see the earnings calendar
AI is not just one market theme among others but the decisive factor of the US Q3/2025 earnings season. The earnings calendar shows when Nvidia, Apple, and Microsoft, among others, report their results.
https://www.op-media.fi/sijoittaminen/osakesijoittaminen/yhdysvaltojen-q3-tuloskaudella-tekoaly-maarittaa-suunnan–katso-q3-tuloskalenteri/

Judging by forecasts, the July–September 2025 earnings season on the US stock markets looks traditionally strong.

– In the United States, analysts expect the companies in the world-famous S&P 500 index to post average earnings growth of 8–13 percent year on year. That would make it the ninth consecutive quarter of growth, says Joona Heinola, OP’s specialist in international equities and ETFs.

It is worth remembering, however, that forecasts are generally set slightly low so that they can be beaten when results are published.

Banks and big technology companies, such as the so-called “Magnificent 7″, are the key drivers of the earnings season. The Magnificent 7 companies are Apple, Microsoft, Alphabet, Amazon, Meta, Nvidia, and Tesla.

    Reply
  18. Tomi Engdahl says:

    Godfather of AI Predicts Total Breakdown of Society
    Tech billionaires “are really betting on AI replacing a lot of workers.”
    https://futurism.com/artificial-intelligence/godfather-ai-breakdown-society

    Geoffrey Hinton, one of the three so-called “godfathers” of AI, never misses an opportunity to issue foreboding proclamations about the tech he helped create.

During an hour-long public conversation with Senator Bernie Sanders at Georgetown University last week, the British computer scientist laid out all the alarming ways he forecasts AI will completely upend society for the worse, seemingly leaving little room for human contrivances like optimism. One reason is that AI’s rapid deployment will be completely unlike past technological revolutions, which created new classes of jobs, he said.

    “The people who lose their jobs won’t have other jobs to go to,” Hinton said, as quoted by Business Insider. “If AI gets as smart as people — or smarter — any job they might do can be done by AI.”

    Hinton pioneered the deep learning techniques that are foundational to the generative AI models fueling the AI boom today. His work on neural networks earned him a Turing Award in 2018, alongside University of Montreal researcher Yoshua Bengio and the former chief AI scientist at Meta Yann LeCun. The trio are considered to be the “godfathers” of AI.

    All three scientists have been outspoken about the tech’s risks, to varying degrees. But it was Hinton who first began to turn the most heads when he said he regretted his life’s work after stepping down from his role at Google in 2023.

The multibillionaires spearheading AI, like Elon Musk, Mark Zuckerberg, and Larry Ellison, haven’t really “thought through” the fact that “if the workers don’t get paid, there’s nobody to buy their products,” he said, per BI.

Previously, Hinton has said it wouldn’t be “inconceivable” that humankind gets wiped out by AI. He also believes we’re not that far away from achieving an artificial general intelligence, or AGI.

    “Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI,” Hinton said in 2023. “And now I think it may be 20 years or less.”

    Reply
  19. Tomi Engdahl says:

While leading large language models are trained on a corpus of data vastly exceeding what a human could ever learn, many experts would disagree that this means the AI actually “knows” what it’s talking about. Moreover, many efforts to replace workers with semi-autonomous models called AI agents have often failed embarrassingly, including in customer support roles that many predicted were the most vulnerable to being outmoded. In other words, it’s not quite set in stone that the tech will be able to replace even low-paying jobs so easily.
    https://futurism.com/artificial-intelligence/godfather-ai-breakdown-society

    Reply
  20. Tomi Engdahl says:

Nvidia’s share price slid on reports that Meta is teaming up with Google on chips
Markets | Investors sold Nvidia on rumors that Meta is negotiating a multibillion-dollar chip deal with Google.
    https://www.hs.fi/talous/art-2000011650691.html

    Reply
  21. Tomi Engdahl says:

    Your Next ‘Large’ Language Model Might Not Be Large After All
    A 27M-parameter model just outperformed giants like DeepSeek R1, o3-mini, and Claude 3.7 on reasoning tasks
    https://towardsdatascience.com/your-next-large-language-model-might-not-be-large-afterall-2/

    Reply
  22. Tomi Engdahl says:

Surprising research result: there is enough electricity for AI data centers after all
https://etn.fi/index.php/13-news/18213-yllaettaevae-tutkimustulos-saehkoe-riittaeaekin-ai-datakeskuksille

One of the biggest concerns around AI has been the fear that the growing number of data centers and the use of AI models will eat up the world’s electricity generation capacity. A new study turns that picture on its head. Calculations by environmental economists at Canada’s University of Waterloo and Georgia Tech show that AI’s impact on energy consumption remains surprisingly small at the scale of the whole economy – even negligible.

The study combined US economic data with estimated levels of AI use across industries. According to the results, large-scale adoption of AI would raise US energy consumption by about 28 petajoules per year. That is roughly the same as Iceland’s annual electricity consumption, but only 0.03 percent of US energy consumption. Carbon dioxide emissions would correspondingly rise by 0.02 percent. The changes are measurable, but they do not tip the national energy balance one way or the other.

The researchers’ conclusion is clear: more electricity will indeed be used, but AI will not crash power grids or derail climate targets. What matters far more is where data centers are located and what energy sources they use. Locally the impact can be substantial, since a data center can even double a region’s electricity use. At the national level, however, the increase disappears into background noise.
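A quick unit-conversion check on the 28-petajoule figure (the US total of roughly 95,000 PJ/yr used for the percentage is an assumption here, not a number from the article):

```python
# Sanity check on the 28 PJ/yr increase reported above. 1 TWh = 3.6 PJ.
ai_increase_pj = 28
ai_increase_twh = ai_increase_pj / 3.6     # ~7.8 TWh of electricity-equivalent

us_total_pj = 95_000                       # assumed US primary energy use, PJ/yr
share = ai_increase_pj / us_total_pj       # ~0.0003, i.e. ~0.03%

print(f"{ai_increase_twh:.1f} TWh, {share:.2%} of assumed US energy use")
```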

    Reply
  23. Tomi Engdahl says:

    Vibe coding startups face a big copycat risk, says a founder who sold his company for $80 million
    https://www.businessinsider.com/base44-vibecoding-tools-easy-to-copy-maor-shlomo-risk-2025-11

    A vibe coding startup founder says vibe coding tools are easy to clone.
    Startups built only on prompting an LLM or light fine-tuning could struggle to defend their business.
    Maor Shlomo’s comments come as vibe coding platforms continue to gain traction.

“It’s relatively easy to create a vibe coding tool,” he said.

    Vibe coding tools enable anyone to build software by simply prompting AI. But Shlomo said the part users see — the magic moment when an interface appears — is the easiest piece to replicate.

    Shlomo founded Base44 as a bootstrapped project that quickly hit hundreds of thousands of users. In June, it was acquired by Wix, a web development company, for about $80 million.

    Shlomo said on the podcast that startups that rely on clever prompting or fine-tuning an existing LLM could struggle to defend their business. “It’s going to be hard to have a moat,” he said.

    What is difficult to replicate is the underlying infrastructure behind a tool, such as a built-in database, authentication system, user management, and analytics, he added.

    “It’s very, very, very hard to create a platform that could help people build products they’ll actually use, that are functional, that are complex enough for real-world use cases,” Shlomo said.

    A recent a16z analysis of startup spending shows a noticeable shift toward vibecoding platforms. According to the October report, Replit, Cursor, Loveable, and Emergent were among the top 50 AI-native applications based on spending data. Replit ranked third in total spend, behind OpenAI and Anthropic.

    “Vibe coding is no mere consumer trend — it has landed in workplaces,” wrote the three a16z staff who authored the report.

    Investors are also betting big on vibecoding. Replit announced in September that it raised a $250 million round at a $3 billion valuation, nearly tripling its valuation since its last round in 2023. Lovable closed a $200 million Series A in July that valued it at $1.8 billion, according to PitchBook. Cursor, one of the biggest players in the vibe coding space, announced earlier this month that it had raised a $2.3 billion round at a $29.3 billion valuation.

    Reply
  24. Tomi Engdahl says:

    Rhyme is the key to set AIs free when verse outsmarts security
    Poetry proves potent jailbreak tool for today’s top models
    https://www.theregister.com/2025/11/21/poetry_llm_guardrails/

    Are you a wizard with words? Do you like money without caring how you get it? You could be in luck now that a new role in cybercrime appears to have opened up – poetic LLM jailbreaking.

    A research team in Italy published a paper this week, with one of its members saying that the “findings are honestly wilder than we expected.”

    Researchers found that when you try to bypass top AI models’ guardrails – the safeguards preventing them from spewing harmful content – attempts to do so composed in verse were vastly more successful than typical prompts.

    Reply
  25. Tomi Engdahl says:

    McKinsey explains why AI won’t take your job, even though it can already automate 57% of all U.S. work hours
    https://fortune.com/2025/11/25/why-ai-wont-take-your-job-partnership-agents-robots-mckinsey/

    A new report from McKinsey Global Institute tackles one of the most pressing fears of the modern economy: the sweeping job displacement threatened by artificial intelligence. While McKinsey’s research indicates that current technologies could, in theory, automate about 57% of U.S. work hours, the consulting firm concludes that this high figure measures technical potential in tasks, not the inevitable loss of jobs.

    Reply
  26. Tomi Engdahl says:

    With consumers adopting AI and social media to shop and spend, retailers must offer new experiences

    Hyper-personal shopping: how agentic AI is changing online retail
    Online shopping is getting personal as agentic AI reimagines the entire purchasing process, allowing consumers to discover new ways to spend
    https://www.ft.com/content/da86d455-01c8-4d95-a14f-926fc805e235?%3Futm_source=FB&utm_medium=digital_transformation&utm_content=paid&fbclid=IwdGRjcAOSduFleHRuA2FlbQEwAGFkaWQBqyfNcfbSxnNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHkVFI4lr3wXf-AYkALtR4KQOSlwhDolABWBRQ7fKme21nFEUlqe236NhqSgx_aem_YnBaxEoP64-nynXfjFHbDw&utm_source=fb&utm_id=120233577211270278&utm_term=120233577238360278&utm_campaign=120233577211270278

    Reply
  27. Tomi Engdahl says:

    The agent frenzy is taking over Windows faster than anyone expected
    Windows is bringing the overhaul to users despite dissenting voices.
    https://muropaketti.com/tietotekniikka/tietotekniikkauutiset/windows-mullistuu-taysin-ja-nopeammin-kuin-kuviteltiin/#google_vignette

    Microsoft has clarified its plans for the so-called agentic operating system, which has been regarded as a futuristic vision. According to the company, Windows 11 will in fact get the overhaul considerably sooner than expected.

    The changes will arrive in the near future in the operating system's preview builds through the Windows Insider program.

    Reply
  28. Tomi Engdahl says:

    ‘We could have asked ChatGPT’: students fight back over course taught by AI
    Staffordshire students say signs material was AI-generated included suspicious file names and rogue voiceover accent
    https://www.theguardian.com/education/2025/nov/20/university-of-staffordshire-course-taught-in-large-part-by-ai-artificial-intelligence

    Students at the University of Staffordshire have said they feel “robbed of knowledge and enjoyment” after a course they hoped would launch their digital careers turned out to be taught in large part by AI.

    Reply
  29. Tomi Engdahl says:

    The Comet browser leverages AI company Perplexity's own intelligent search engine, which quickly produces comprehensive summaries from freely worded search queries while also revealing the sources to the user. There is also the Comet Assistant, which performs a wide range of tasks on the user's behalf, capable of, for example, classifying emails and organizing calendar events.
    https://mobiili.fi/2025/11/21/tekoaly-yhtio-perplexity-toi-comet-selaimensa-androidille/

    Reply
  30. Tomi Engdahl says:

    Google boss: a CEO's job can easily be replaced by AI
    AI development is advancing so fast that even the executives of tech giants estimate their own jobs will soon be automatable.
    https://www.tivi.fi/uutiset/a/7bbfb872-4c55-40fb-8e03-dd65bc2b1859

    Reply
  31. Tomi Engdahl says:

    An AI-generated Christmas painting proved so horrifying that it was removed
    London | An exhausted Santa Claus crouched on all fours in the shallows, accompanied by a dog with a bird's beak.
    https://www.hs.fi/maailma/art-2000011643847.html

    Reply
  32. Tomi Engdahl says:

    Google reads all your Gmail emails and trains its AI on them – here's how to turn it off
    https://dawn.fi/uutiset/2025/11/21/opas-gmail-tekoaly-sahkoposti-esto#google_vignette

    Google has begun, by default, reading all the emails of everyone who uses Gmail.

    The company reads the emails and uses them as training material for its own AI.

    According to Malwarebytes, which reported on the matter, Google uses the email messages for, among other things, training Gmail's new smart AI features. In practice, by reading all of its users' emails, Google can offer, say, quick ways to reply to an incoming message.

    Reply
  33. Tomi Engdahl says:

    In this respect Google has pulled a nasty trick on users: either you hand your email over to the AI, or you cannot use AI in Gmail at all.

    Fortunately, the feature can be switched off. We could not find the setting in our own Android Gmail app, but through the browser at least it was easy to find.

    https://dawn.fi/uutiset/2025/11/21/opas-gmail-tekoaly-sahkoposti-esto

    Reply
  34. Tomi Engdahl says:

    Google CEO: If an AI bubble pops, no one is getting out clean
    Sundar Pichai says no company is immune if AI bubble bursts, echoing dotcom fears.
    https://arstechnica.com/ai/2025/11/googles-sundar-pichai-warns-of-irrationality-in-trillion-dollar-ai-investment-boom/

    Reply
  35. Tomi Engdahl says:

    Giving AI the boot
    19.11.2025 15:30
    AI is everywhere, and often uninvited. You don't have to stand helpless before the onslaught. With these instructions you can remove AI from Google's, Microsoft's, and Meta's services.
    https://www.iltalehti.fi/oppaat/a/62d2e4fa-0715-43d5-a8e4-582a0c2b6044

    Reply
  36. Tomi Engdahl says:

    Vibe coding lifted TypeScript to the top
    Kenneth Falck | 21.11.2025 06:00 | AI, Software development, Programming
    Twenty or so years ago, Kenneth Falck himself opposed type checking in programming languages because it made programming cumbersome.
    https://www.tivi.fi/uutiset/a/7168d02f-83a8-43e6-aa06-a86e4868fdec

    Reply
  37. Tomi Engdahl says:

    Those who use AI are extending their lead over the rest – the downside is overload
    Heli Kyläinpää | 21.11.2025 10:45 | updated 21.11.2025 10:45 | AI, Working life
    Employees who use AI daily report clearly greater gains in productivity, pay, and job security than those who use it less often. At the same time, overload and financial pressure cloud many people's working days.
    https://www.tivi.fi/uutiset/a/92d4daed-3749-4f8b-974d-7822a8777913

    Reply
  38. Tomi Engdahl says:

    The Tool Every Engineer Should Try: A Free AI IDE That Actually Works
    https://www.elektormagazine.com/articles/google-antigravity-ide?fbclid=Iwb21leAOTKwRjbGNrA5Mq-2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHhOWtU6sSEIT6-v5f3SHXGSz6a8UD5XgYFCMpNZ6z7waJvo-Iq8YMzIKpLaG_aem_8Ovu_28akKdG7VrWy7hYNA

    Google Antigravity IDE: an AI-Driven Developer Environment — Free, Powerful, and Built to Supercharge Engineering Workflows

    Introduction
    Google Antigravity is a developer environment that is built around AI from the ground up. It’s like a super version of VS Code, but it works very differently inside. Instead of writing every line of code yourself, Antigravity uses an agent-first development model. You explain what you need, and the system creates agents that carry out the work. Because the tool is free, anyone can try it. After using it myself, it feels like a real upgrade for any engineer’s workstation.

    What Makes Antigravity Different
    Antigravity understands your entire project the moment you open a folder. It scans your files, understands the structure, and learns how the pieces connect. From there, you can ask it to add features, refactor older code, document missing parts, or even build entire modules. It also works very well with embedded and hardware workflows. You can drag in a sensor datasheet and ask it to generate a full driver for Arduino, Espressif ESP32, or STM32. It reads the registers, timing diagrams, and modes, then builds the whole library with examples.

    The real strength, however, is the agent system. Antigravity lets you create small agents that work like “mini employees.” Each one can take care of a single task. One agent can test the UI and fix bugs. Another can refactor the routes, and another one can update the documentation. While they work, you can focus on things that need your attention. It really feels like having a small team inside your IDE.

    These agents do not just read your code. They also run and interact with your app. They open the web preview, click buttons, test forms, submit data, and report the results. If you approve their changes, they write the updates for you.

    Reply
  39. Tomi Engdahl says:

    OpenAI Says Boy’s Death Was His Own Fault for Using ChatGPT Wrong
    The boy’s family’s lawyer called the response “disturbing.”
    https://futurism.com/artificial-intelligence/openai-boy-death-using-chatgpt-wrong?utm_social_handle_id=352364611609411&utm_social_post_id=603083771&fbclid=IwdGRjcAOVSwhjbGNrA5VK02V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHswRqs2v0tjKDBJ5ajYrEsq-cO3Sw-D61w_AfBkIa4Y4UM4qLR0nW8KCW7dJ_aem_-0XuGx4RQS3SXhUE-xjhyQ

    OpenAI has shot back at a family that’s suing the company over the suicide of their teenage son, arguing that the 16-year-old used ChatGPT incorrectly and that his tragic death was his own fault.

    The family filed the lawsuit in late August, arguing that the AI chatbot had coaxed their son Adam Raine into killing himself.

    “To the extent that any ’cause’ can be attributed to this tragic event,” the filing reads, “Plaintiffs’ alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine’s misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”

    “They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing,” he wrote. “That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide.’”

    “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note,” he added.

    Reply
  40. Tomi Engdahl says:

    Major AI conference flooded with peer reviews written fully by AI
    Controversy has erupted after 21% of manuscript reviews for an international AI conference were found to be generated by artificial intelligence.
    https://www.nature.com/articles/d41586-025-03506-6?fbclid=IwdGRjcAOXvS5jbGNrA5e8-WV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHn3Z6eZQg4pTSXK9tufVgWhrIk0BQnVuttcpSE7kluobo9HTLLtWmAJ1aPFD_aem_tu9cLUtSi4P6kYKcwOEnow

    What can researchers do if they suspect that their manuscripts have been peer reviewed using artificial intelligence (AI)? Dozens of academics have raised concerns on social media about manuscripts and peer reviews submitted to the organizers of next year’s International Conference on Learning Representations (ICLR), an annual gathering of specialists in machine learning. Among other things, they flagged hallucinated citations and suspiciously long and vague feedback on their work.

    Pangram screened all 19,490 studies and 75,800 peer reviews submitted for ICLR 2026, which will take place in Rio de Janeiro, Brazil, in April. Neubig and more than 11,000 other AI researchers will be attending.

    Pangram’s analysis revealed that around 21% of the ICLR peer reviews were fully AI-generated, and more than half contained signs of AI use.

    The conference organizers say they will now use automated tools to assess whether submissions and peer reviews breached policies on using AI in submissions and peer reviews. This is the first time that the conference has faced this issue at scale, says Bharath Hariharan, a computer scientist at Cornell University in Ithaca, New York, and senior programme chair for ICLR 2026. “After we go through all this process … that will give us a better notion of trust.”

    Pangram Predicts 21% of ICLR Reviews are AI-Generated
    https://www.pangram.com/blog/pangram-predicts-21-of-iclr-reviews-are-ai-generated?fbclid=IwVERDUAOXvlRleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR69y6GlEe4XfoCx3vbkrY1HTgSXHqm5FvYDVqAYY7Q5_iOsg-U8vK2t-kSZ2Q_aem_dO_soUmvhGqVjpH-_S98AA

    Are authors using LLMs to write AI research papers? Are peer reviewers outsourcing the writing of their reviews of these papers to generative AI tools? To find out, we analyzed all 19,000 papers and 70,000 reviews from the International Conference on Learning Representations (ICLR).

    In all seriousness, many ICLR authors and reviewers have been noticing cases of blatant AI-related scientific misconduct, such as an LLM-generated paper with completely hallucinated references, and many authors claiming to receive completely AI-generated reviews.

    One author even reported that a reviewer asked 40 AI-generated questions in their peer review!

    ICLR has a very clear and descriptive policy on what is allowed and disallowed in terms of LLM usage in both papers and reviews.

    ICLR also has guidelines that authors should follow when using LLMs in their papers and reviews. To summarize:

    Authors are allowed to use LLMs to help with drafting their papers and as a research assistant, but must disclose this usage and are accountable for the scientific integrity of their paper.
    Reviewers are allowed to use LLMs to assist with spelling and grammar in their reviews, but using an LLM to write the entire review is potentially a Code of Ethics violation, both for misrepresenting an external opinion of the paper as the reviewer's own and for violating confidentiality.

    We instead wish to draw attention to the amount of AI usage in the papers and peer review, and highlight that fully AI-generated reviews (which indeed, are likely to be Code of Ethics violations) are a much more widespread problem than many realize.

    We found that using a regular PDF parser such as PyMuPDF was insufficient for the ICLR papers, as line numbers, images, and tables were often not handled correctly. Therefore, in order to extract the main text of the paper, we used Mistral OCR to parse the main text of the paper from the PDF as Markdown. Because AI tends to prefer markdown output as well, in order to mitigate false positives coming from the formatting alone, we then reformatted the Markdown as plain text.
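    The post does not publish the reformatting step itself; as a rough sketch of the idea only (an assumed helper, not Pangram's actual code), a Markdown-to-plain-text pass that discards the formatting cues an AI detector might latch onto could look like this:

```python
import re

def markdown_to_plain_text(md: str) -> str:
    """Strip common Markdown syntax so detector scores are driven by the
    text itself, not by formatting (hypothetical sketch of the mitigation
    described in the post)."""
    text = re.sub(r"```.*?```", "", md, flags=re.DOTALL)        # drop fenced code blocks
    text = re.sub(r"^#{1,6}\s*", "", text, flags=re.MULTILINE)  # drop heading markers
    # unwrap bold/italics, keeping the inner text
    text = re.sub(r"\*\*([^*]+)\*\*|\*([^*]+)\*",
                  lambda m: m.group(1) or m.group(2), text)
    text = re.sub(r"\[([^\]]+)\]\([^)]+\)", r"\1", text)        # links -> anchor text
    text = re.sub(r"^[-*+]\s+", "", text, flags=re.MULTILINE)   # drop bullet markers
    return text.strip()

print(markdown_to_plain_text(
    "## Results\n**21%** of [reviews](https://example.com) were flagged."))
# -> Results
#    21% of reviews were flagged.
```

    A real pipeline (as the post notes) would first need OCR-quality extraction, since line numbers, figures, and tables in the PDFs defeat plain parsers.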

    We also checked the peer reviews for AI using our new EditLens model. EditLens can not only detect the presence of AI but also describe the degree to which AI was involved in the editing process. EditLens predicts that a text falls into one of five categories:

    Fully human-written
    Lightly AI-edited or AI-assisted
    Medium AI-edited or AI-assisted
    Heavy AI-edited or AI-assisted
    Fully AI-generated

    EditLens is currently only available to customers in our private beta

    We found 21%, or 15,899 reviews, were fully AI-generated. We found over half of the reviews had some form of AI involvement, either AI editing, assistance, or full AI-generation.

    Paper submissions, on the other hand, are still mostly human-written (61% were mostly human-written). However, we did find several hundred fully AI-generated papers, though they appear to be outliers, and 9% of submissions had over 50% AI content. As a caveat, some fully AI-generated papers had already been desk rejected and removed from OpenReview before we had a chance to perform the analysis.
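    The headline figures quoted above can be sanity-checked directly (all numbers are taken from the Pangram post; the script just does the arithmetic):

```python
# Figures reported in the Pangram / ICLR 2026 analysis
fully_ai_reviews = 15_899   # reviews flagged as fully AI-generated
total_reviews = 75_800      # peer reviews screened
total_papers = 19_490       # paper submissions screened

share = fully_ai_reviews / total_reviews
print(f"Fully AI-generated reviews: {share:.1%}")            # ~21.0%

# "9% of submissions had over 50% AI content"
print(f"Submissions with >50% AI content: ~{round(0.09 * total_papers)}")
```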

    AI usage in papers is correlated with lower reviews
    Contrary to a previous study that showed that LLMs often prefer their own outputs to human writing when used as a judge, we find the opposite: the more AI-generated text present in a submission, the worse the reviews are.

    One possible explanation is that the more AI is used in a paper, the less well thought out and executed the paper is overall. It is possible that when AI is used in scientific writing, it is more often used for offloading and shortcutting rather than as an additive assistant. Additionally, fully AI-generated papers receiving lower scores potentially indicates that AI-generated research is still low-quality slop, and not a real contribution to science (yet).

    We find the more AI is present in a review, the higher the score is. This is problematic: it means rather than reframing the reviewer’s own opinion using AI as the frame (if this were the case, we’d expect the average score to be the same for AI reviews and human reviews), reviewers are actually outsourcing the judgement of the paper to AI as well. Misrepresenting the LLM’s opinion as a reviewer’s own actual opinion is a clear violation of the Code of Ethics. We know that AI tends to be sycophantic, which means it says things that people want to hear and are pleasing rather than giving an unbiased opinion: a completely undesirable property when applied to peer review! This could explain the positive bias in scores among AI reviews.

    AI-generated reviews are longer and have a lot of “filler content” in them.

    According to Shaib et al., in a research paper called Measuring AI Slop in Text, one property of AI “slop” is that it has low information density, which means the AI uses a lot of words to say very little in terms of actual content.

    We find this to be true in the LLM reviews as well: AI is using a lot of words but not actually giving very high information dense feedback. We argue this is problematic because authors have to waste time parsing a long review and answering vacuous questions that don’t actually contain much helpful feedback. It is also worth mentioning that most authors will probably ask a large language model for a review of their submission before they actually submit it. In these cases, the feedback from an LLM review is largely redundant and unhelpful, because the author has already seen the obvious criticisms that an LLM will make.

    While Pangram’s false positive rate is extremely low, it is non-zero.

    How can you tell if you received an AI peer review?
    If you’re an author who suspects you’ve received an AI-generated review, there are several telltale signs you can look for. While Pangram can detect AI-generated text, you can also spot the signs of AI reviews by eye.

    Header styles: AI-generated peer reviews love to create bold section headers with 2-3 word summary tags followed by a colon.

    Shallow nit-picks rather than genuine analysis: AI-generated reviews tend to focus on surface-level issues rather than real concerns with the scientific integrity of the paper. Typical AI criticisms include requests for additional ablations very similar to the ablations already presented, requests to increase the size of the test set or the number of controls, or asking for more clarification or more examples.

    Saying a lot of words that say very little: AI reviews often exhibit low information density, using verbose language to make points that could be expressed more concisely. This verbosity creates extra work for authors who must parse through lengthy reviews to extract the actual substantive critiques.
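    The first of these tells, the bold 2-3 word header followed by a colon, can even be matched mechanically. As an illustration only (a naive heuristic, not Pangram's detector):

```python
import re

def looks_like_ai_header(line: str) -> bool:
    """Flag a review line that is a bold 2-3 word summary tag followed by a
    colon, the header style the post describes as typical of AI reviews.
    Crude heuristic for illustration; real detection needs a model."""
    pattern = r"^\*\*([A-Za-z][A-Za-z-]*(?:\s+[A-Za-z-]+){1,2})\*\*:"
    return re.match(pattern, line.strip()) is not None

print(looks_like_ai_header("**Experimental Rigor**: The ablations are insufficient."))  # True
print(looks_like_ai_header("The ablations are insufficient."))                          # False
```

    A heuristic like this would of course also flag some human-written reviews, which is why eyeballing all three signs together is more reliable than any one of them.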

    Why are AI papers and AI peer reviews harmful to the scientific process?

    The biggest issue with poor-quality AI-generated papers is that they simply waste time and resources that are in limited supply. According to our analysis, AI-generated papers are simply not as good as human-written papers, and even more problematically, they can be generated cheaply by dishonest reviewers and paper mills that “spray and pray” (submit a high volume of submissions to a conference in hopes that one of them will get accepted by chance). If AI-generated papers are allowed to flood the peer review system, review quality will continue to decline, and reviewers will be less motivated when they have to read “slop” papers instead of real research.

    However, the question remains: if AI can generate helpful feedback, why should we prohibit fully AI-generated reviews? University of Chicago economist Alex Imas articulates the core issue in a recent tweet: the answer depends on whether we want human judgment involved in scientific peer review.

    If we believe current AI models are sufficient to replace human judgment entirely, then conferences should simply automate the entire review process—feed papers through an LLM and assign scores automatically. But if we believe human judgment should remain part of the process, then fully AI-generated content must be sanctioned. Imas identifies two key problems: first, a pooling equilibrium where AI-generated content (being easier to produce) will quickly crowd out human judgment within a few review cycles; and second, a verification problem where determining if an AI review is actually good requires the same effort as reviewing the paper yourself—so if LLMs can generate better reviews than humans, why not automate the entire process?

    Expert opinions are more useful than LLMs because their opinions are shaped by experience, context, and a perspective that is curated and refined over time. LLMs are powerful, but their reviews often lack taste, judgment, and therefore feel “flat.”

    Conclusion
    The rise of AI-generated content in academic peer review represents a critical challenge for the scientific community. Our analysis shows that fully AI-generated peer reviews represent a significant proportion of the overall ICLR review population, and the number of AI-generated papers is also rising. Yet, these AI-generated papers are more often slop than genuine research contributions.

    We argue that this trend is problematic and harmful for science, and we call on conferences and publishers to embrace AI detection as a solution to deter abuse and preserve scientific integrity.

    Reply
