AI trends 2025

AI is developing all the time. Below are picks from several articles on what is expected to happen in and around AI in 2025. The excerpts have been edited, and in some cases translated, for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
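The plan-act-reflect cycle that the article attributes to advanced agents can be sketched as a toy loop. Everything here is illustrative: the `plan` function and the tool table are stand-ins for what would be LLM calls in a real agent framework.

```python
# Toy sketch of an agentic plan-act-reflect loop.
# The "tools" and the planner are hypothetical stand-ins for LLM calls.

def plan(goal):
    # A real agent would ask an LLM to decompose the goal;
    # here we return a fixed plan for illustration.
    return ["search", "summarize"]

TOOLS = {
    "search": lambda state: state + ["found 3 documents"],
    "summarize": lambda state: state + ["summary ready"],
}

def run_agent(goal, max_iters=5):
    state = []
    for step in plan(goal)[:max_iters]:
        state = TOOLS[step](state)        # act: invoke a tool
        if "summary ready" in state:      # reflect: objective reached?
            return state
    return state

result = run_agent("answer a customer question")
print(result)
```

The point of the sketch is the structure, not the tools: plan, act, check progress, stop when the objective is met.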
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality, for instance in robotics. PINNs are set to gain more importance because they will enable autonomous robots to navigate and execute tasks in the real world.
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
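The physics-informed idea mentioned above can be illustrated in miniature without any neural network: fit a polynomial to a differential equation by penalizing the physics residual at collocation points, rather than fitting measured data. This is a simplified sketch of the principle, not a PINN implementation.

```python
import numpy as np

# Physics-informed fit: approximate the ODE y' + y = 0 with y(0) = 1
# on [0, 2] by a polynomial y(x) = 1 + sum_k c_k x^k, choosing c to
# minimize the ODE residual at collocation points (no data, only "physics").

xs = np.linspace(0.0, 2.0, 50)           # collocation points
degree = 6

# Residual at x: y'(x) + y(x) = 1 + sum_k c_k * (k x^{k-1} + x^k),
# which is linear in c, so least squares solves it directly.
A = np.stack([k * xs**(k - 1) + xs**k for k in range(1, degree + 1)], axis=1)
b = -np.ones_like(xs)
c, *_ = np.linalg.lstsq(A, b, rcond=None)

def y(x):
    return 1.0 + sum(ck * x**k for k, ck in enumerate(c, start=1))

# The exact solution is exp(-x); the residual-trained fit tracks it closely.
print(abs(y(1.0) - np.exp(-1.0)))
```

A real PINN replaces the polynomial with a neural network and the least-squares solve with gradient descent on the same kind of residual loss.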
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as when companies started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models, specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could solve only 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the earlier waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
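A minimal sketch of the routing idea described above. The model names, prices, and capability sets are invented for illustration; a real router would call actual provider APIs and use measured quality and latency data.

```python
# Toy multi-model router: pick an LLM per request based on task type
# and a cost budget. Model names and prices are illustrative only.

MODELS = {
    "small-fast":  {"cost_per_1k": 0.1, "good_at": {"classify", "extract"}},
    "large-smart": {"cost_per_1k": 2.0, "good_at": {"reason", "code",
                                                    "classify", "extract"}},
}

def route(task, budget_per_1k):
    # Prefer the cheapest model that fits the budget and is known
    # to handle this task type.
    candidates = [
        (spec["cost_per_1k"], name)
        for name, spec in MODELS.items()
        if task in spec["good_at"] and spec["cost_per_1k"] <= budget_per_1k
    ]
    if not candidates:
        raise ValueError(f"no model fits task={task!r} within budget")
    return min(candidates)[1]

print(route("classify", budget_per_1k=0.5))  # cheap task goes to the small model
print(route("reason", budget_per_1k=5.0))    # hard task goes to the large model
```

Because the routing decision is isolated in one function, swapping providers in or out does not touch application code, which is exactly the lock-in hedge the CIOs quoted above are after.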
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
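The handle-or-escalate pattern described here can be sketched as a simple triage function. The intent names and the routing rules are invented for illustration; Agentforce's actual configuration is not shown.

```python
# Toy sketch of agent triage: handle routine order changes
# automatically, escalate complex ones to a human rep.
# Intent names and rules are hypothetical.

SIMPLE_INTENTS = {"change_quantity", "update_address"}
COMPLEX_INTENTS = {"reconfigure_order", "change_venue"}

def handle_request(intent):
    if intent in SIMPLE_INTENTS:
        return "handled_by_agent"        # AI adjusts the order itself
    if intent in COMPLEX_INTENTS:
        return "escalated_to_human"      # push the matter to a human rep
    return "clarify_with_customer"       # unknown intent: ask a follow-up

print(handle_request("change_quantity"))
print(handle_request("change_venue"))
```

In a production system the intent would come from an LLM classifier rather than a fixed set, but the escalation boundary is the same design decision.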

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

AI requires ever faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

This is how the chip market will fare next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The demand for high-performance computing (HPC) for artificial intelligence continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market in 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating.
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025 it is very likely that there will be more advances in MCUs running lightweight AI models.
The adoption of AI acceleration features is a big step in the development of microcontrollers. The inclusion of AI features in microcontrollers started in 2024, and in 2025 these features and their tools are likely to develop further.

AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new AI models based on general-purpose AI (GPAI) standards must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance: EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
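The document-summarizing vs. marketing-copy contrast above amounts to risk-tiering each usage event, not just each tool. A toy sketch of that triage follows; the categories and rules are invented for illustration and are not a legal classification under the EU AI Act.

```python
# Toy risk triage for an AI-usage inventory: the same tool can be
# low- or high-risk depending on the task and the data involved.
# Rules and categories below are illustrative, not legal guidance.

def classify_usage(tool, task, data_sensitivity):
    if data_sensitivity == "sensitive":
        return "high-risk"      # e.g. summarizing internal documents
    if task in {"hiring", "credit_scoring"}:
        return "high-risk"      # examples of candidate high-risk uses
    return "low-risk"           # e.g. drafting marketing content

inventory = [
    ("ChatGPT", "summarize", "sensitive"),
    ("ChatGPT", "draft_marketing", "public"),
]
for tool, task, sensitivity in inventory:
    print(tool, task, "->", classify_usage(tool, task, sensitivity))
```

Note that the tool name never decides the tier on its own, which mirrors the article's point: an accurate AI asset inventory has to record how each tool is used, not merely that it exists.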
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

2,540 Comments

  1. Tomi Engdahl says:

    Maxwell Zeff / TechCrunch:
    OpenAI details why “emergent misalignment”, where training on wrong answers in one area can lead to misalignment in others, happens and how it can be mitigated

    OpenAI found features in AI models that correspond to different ‘personas’
    https://techcrunch.com/2025/06/18/openai-found-features-in-ai-models-that-correspond-to-different-personas/

  2. Tomi Engdahl says:

    Codegen:
    Ship with AI: The Future of Coding is Here — Use natural language in Slack to trigger real PRs, docs, dashboards, infra & more. Codegen connects GitHub, Linear, Notion, and beyond.

    https://codegen.com/blog/ship-with-ai-the-future-of-coding-is-here?utm_source=techmeme&utm_medium=banner&utm_campaign=June2025

  3. Tomi Engdahl says:

    Alex Weprin / The Hollywood Reporter:
    At Cannes, YouTube CEO Neal Mohan says YouTube will integrate Veo 3 into Shorts later this summer and that AI tech “will push the limits of human creativity” — The Veo 3 video generator is capable of creating both videos and sound based on text prompts, with YouTube CEO Neal Mohan saying the …

    YouTube to Add Google’s Veo 3 to Shorts in Move That Could Turbocharge AI on the Video Platform
    https://www.hollywoodreporter.com/business/digital/youtube-add-google-veo-3-shorts-ai-1236293135/

    The Veo 3 video generator is capable of creating both videos and sound based on text prompts, with YouTube CEO Neal Mohan saying the “AI technology will push the limits of human creativity.”

  4. Tomi Engdahl says:

    Jackie Davalos / Bloomberg:
    White House crypto and AI czar David Sacks says China has grown adept at evading US export controls and is at most two years behind US chip design capabilities

    https://www.bloomberg.com/news/articles/2025-06-18/trump-adviser-david-sacks-says-china-adept-at-evading-chip-curbs

  5. Tomi Engdahl says:

    AI in the studio – a threat or an aid to making music?
    https://www.silentsound.fi/post/teko%C3%A4ly-studiossa-uhka-vai-apu-musiikintekoon

    ...about the future: about how important it is to learn to combine the old and the new, and how tools can surprise you. AI as a tool in the studio and in music-making is no longer utopia; it is a concrete part of how the work is done today.

    In the studio, among artists, and on social media there is a lively discussion, even a proper argument, about whether AI will take music or creativity away from us. In my view the most important thing is this: do not deny its existence. It is here, and it will change a lot of things. The question is not whether it stops us from creating, but how we want to work alongside it and what we can do that it cannot.

    I have tried various AI tools for music-making myself. One I want to highlight: AIVA is an AI designed as a composition tool that generates songs in MIDI, WAV, and MP3 formats.

    The idea is interesting, but the results vary.

    By comparison, if I ask ChatGPT for a chord progression based on a short description, I get several options to start from in moments. That is efficient, provided you know what you are doing.

    This does not mean AI is the answer to everything. But it is a tool, and a good one if you know how to kick it in the right direction... Especially for those with no prior background in music, AI can be a key to creating something new. For those of us who have made music for a long time, it can offer a new angle, a fresh starting impulse, or a vehicle for experimentation. And it is always worth experimenting.

    But one thing is clear: AI does not yet produce music with a soul. I have discussed AI-made music with several musicians and music listeners, and they all say the same thing: the sorrow, the longing, or the love does not come through. It is often like an unripe fruit, and that is exactly why it can be refined further.

    Music is not disappearing. The most important thing is that it keeps being made. AI can be part of that story, but the heart and the feeling still come from a human being.

  6. Tomi Engdahl says:

    https://www.silentsound.fi/post/%C3%A4%C3%A4nitysstudion-arki-teko%C3%A4ly-ja-miksauksen-tulevaisuus?utm_source=fb&utm_medium=paid&utm_campaign=120227957449980615&utm_term=120227957450670615&utm_content=120227957450930615&utm_id=120227957449980615&fbclid=IwZXh0bgNhZW0BMABhZGlkAasiv3l-fKcBHn9WxJT-GSk5sI7DOYZIzkhKrqM4ihFuiNcWQRTHLSehqOz5jjUa4ySJHHyi_aem_RDUKActTk5NvzpC3npa6HA

    Lately I have been thinking a lot about music and what it will bring in 2025. AI has become part of music-making in a way that few could have predicted or expected. Many already use AI for composing, some for mixing, and some even for building live sets. And that is actually great. I like that music is now more accessible to people who previously did not know how to get started. At the very least, AI offers a way in to experiment.

    Of course a delusion travels along with it: "I'll make a good AI song and become famous." But the truth is that AI music still often sounds... a bit like a demo. And that demo-like feel is exactly what keeps it from breaking through.

    I have spent months learning how AI-generated music should be mixed. It does not follow the same laws as "traditional" music. The timbres, dynamics, and structures are different.

    And that is exactly where AI can also be a good tool for mixing and mastering. I use plugins in which AI opens up 8 or 9 effects and sets the initial levels roughly right, faster than I could open them one at a time manually. From there it is quick to carry on. It does not take away the profession; it just speeds up the routine work. But the danger is that you grow deaf to it and start trusting the AI's decisions too much.

    The most important thing is to stay open-minded. The combination of a human and AI can be a really strong combo. But remember this too: not everything that looks like AI is AI, and vice versa.

    It is also worth keeping in mind that today's AI is cheap. But it will not stay that way.

    If you imagine that AI will stay free forever, that may turn out to be an expensive illusion.

    That is why now is a good time to get to know it. To experiment. To learn. And above all, to make music.

  7. Tomi Engdahl says:

    Experts are warning of an imminent “AI model collapse” as ChatGPT-fueled content overwhelms the web.

    And they note that “cleaning is going to be prohibitively expensive, probably impossible.”

    The rapid proliferation of generative AI tools like ChatGPT has sparked a new and unexpected crisis: the internet is becoming saturated with AI-generated content, and it’s already undermining the future of AI development itself. Researchers warn that as AI models increasingly learn from data tainted by previous AI outputs, the quality and reliability of future models may spiral downward — a phenomenon known as “model collapse.” The clean, human-created data that once formed the backbone of machine learning is being buried under layers of synthetic content, complicating efforts to train new models without recursive errors.

    Maurice Chiodo from the University of Cambridge likens the dilemma to the demand for “low-background steel” — uncontaminated metal sourced from pre-nuclear era battleships. In AI terms, data from before 2022 is now seen as “clean,” while post-2022 data is suspect. The problem is already affecting advanced AI systems like retrieval-augmented generation (RAG), which depend on real-time web data that is increasingly polluted. Without enforceable regulations to label or filter AI-generated content, experts caution that the AI industry could stall itself by drowning in its own output.

    learn more https://www.theregister.com/2025/06/15/ai_model_collapse_pollution/
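    The “low-background steel” analogy suggests a concrete pipeline step: treat documents with a verified pre-2022 crawl date as clean training data and quarantine everything else. A minimal sketch of that idea (the cutoff date and record shape are assumptions for illustration, not anything from the article):

```python
from datetime import date

# ChatGPT's public launch; documents crawled before this date cannot
# contain its output (assumed cutoff for illustration).
CUTOFF = date(2022, 11, 30)

def partition_corpus(records):
    """Split crawled records into 'clean' (pre-cutoff) and 'suspect' piles.

    Each record is a (text, crawl_date) tuple; records with no known
    date are treated as suspect, since their provenance cannot be verified.
    """
    clean, suspect = [], []
    for text, crawled in records:
        if crawled is not None and crawled < CUTOFF:
            clean.append(text)
        else:
            suspect.append(text)
    return clean, suspect

corpus = [
    ("hand-written forum post", date(2019, 5, 1)),
    ("possibly synthetic blog spam", date(2024, 2, 10)),
    ("undated scrape", None),
]
clean, suspect = partition_corpus(corpus)
```

    In practice the hard part is the provenance check itself, which is exactly why researchers call for enforceable labeling of AI-generated content.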

  8. Tomi Engdahl says:

    Self-Evolving AI: New MIT AI Rewrites its Own Code and it’s Changing Everything
    https://www.geeky-gadgets.com/ai-rewriting-its-own-code/

  9. Tomi Engdahl says:

    The Interpretable AI playbook: What Anthropic’s research means for your enterprise LLM strategy
    https://venturebeat.com/ai/the-interpretable-ai-playbook-what-anthropics-research-means-for-your-enterprise-llm-strategy/

    Anthropic CEO Dario Amodei made an urgent push in April for the need to understand how AI models think.

    This comes at a crucial time. As Anthropic battles in global AI rankings, it’s important to note what sets it apart from other top AI labs. Since its founding in 2021, when seven OpenAI employees broke off over concerns about AI safety, Anthropic has built AI models that adhere to a set of human-valued principles, a system they call Constitutional AI. These principles ensure that models are “helpful, honest and harmless” and generally act in the best interests of society. At the same time, Anthropic’s research arm is diving deep to understand how its models think about the world, and why they produce helpful (and sometimes harmful) answers.

    Anthropic’s flagship model, Claude 3.7 Sonnet, dominated coding benchmarks when it launched in February, proving that AI models can excel at both performance and safety. And the recent release of Claude 4.0 Opus and Sonnet again puts Claude at the top of coding benchmarks. However, in today’s rapid and hyper-competitive AI market, Anthropic’s rivals like Google’s Gemini 2.5 Pro and OpenAI’s o3 have their own impressive showings for coding prowess, while they’re already dominating Claude at math, creative writing and overall reasoning across many languages.

    If Amodei’s thoughts are any indication, Anthropic is planning for the future of AI and its implications in critical fields like medicine, psychology and law, where model safety and human values are imperative. And it shows: Anthropic is the leading AI lab that focuses strictly on developing “interpretable” AI, which are models that let us understand, to some degree of certainty, what the model is thinking and how it arrives at a particular conclusion.

    Amazon and Google have already invested billions of dollars in Anthropic even as they build their own AI models

    Sayash Kapoor, an AI safety researcher, suggests that while interpretability is valuable, it is just one of many tools for managing AI risk. In his view, “interpretability is neither necessary nor sufficient” to ensure models behave safely — it matters most when paired with filters, verifiers and human-centered design. This more expansive view sees interpretability as part of a larger ecosystem of control strategies, particularly in real-world AI deployments where models are components in broader decision-making systems.
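    Kapoor’s “larger ecosystem” view is easy to express in code: interpretability is one check among several, layered around the model rather than inside it. A minimal sketch with a stand-in model and hypothetical check functions (none of this is a real vendor API):

```python
def model(prompt: str) -> str:
    # Stand-in for an LLM call; a real deployment would call an API here.
    return f"answer to: {prompt}"

def input_filter(prompt: str) -> bool:
    """Block obviously disallowed requests before the model sees them."""
    return "jailbreak" not in prompt.lower()

def output_verifier(answer: str) -> bool:
    """Independent check on the model's output (e.g. a policy or fact check)."""
    return answer.startswith("answer to:")

def guarded_call(prompt: str) -> str:
    """Layered control: filter the input, call the model, verify the output,
    and escalate to a human when any layer cannot vouch for the result."""
    if not input_filter(prompt):
        return "refused"
    answer = model(prompt)
    if not output_verifier(answer):
        return "escalated to human review"
    return answer
```

    The point of the sketch is the layering: even a perfectly interpretable model would still sit inside filters, verifiers and human oversight in a real decision-making system.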

    Until recently, many thought AI was still years from advancements like those that are now helping Claude, Gemini and ChatGPT boast exceptional market adoption.

    Amodei fears that when an AI responds to a prompt, “we have no idea… why it chooses certain words over others, or why it occasionally makes a mistake despite usually being accurate.” Such errors — hallucinations of inaccurate information, or responses that do not align with human values — will hold AI models back from reaching their full potential. Indeed, we’ve seen many examples of AI continuing to struggle with hallucinations and unethical behavior.

    For Amodei, the best way to solve these problems is to understand how an AI thinks: “Our inability to understand models’ internal mechanisms means that we cannot meaningfully predict such [harmful] behaviors, and therefore struggle to rule them out … If instead it were possible to look inside models, we might be able to systematically block all jailbreaks, and also characterize what dangerous knowledge the models have.”

    Amodei also sees the opacity of current models as a barrier to deploying AI models in “high-stakes financial or safety-critical settings, because we can’t fully set the limits on their behavior, and a small number of mistakes could be very harmful.” In decision-making that affects humans directly, like medical diagnosis or mortgage assessments, legal regulations require AI to explain its decisions.

    Imagine a financial institution using a large language model (LLM) for fraud detection — interpretability could mean explaining a denied loan application to a customer as required by law. Or a manufacturing firm optimizing supply chains — understanding why an AI suggests a particular supplier could unlock efficiencies and prevent unforeseen bottlenecks.

  10. Tomi Engdahl says:

    The Silent Arrival Of AGI: Civilization Is Changing, And We Haven’t Noticed
    https://www.forbes.com/councils/forbestechcouncil/2025/06/16/the-silent-arrival-of-agi-civilization-is-changing-and-we-havent-noticed/

    Artificial general intelligence is not something we are waiting for; rather, it is something we are already experiencing. The shift is not theatrical. There will be no public unveiling or dramatic singularity. The reality is quieter, more gradual and far more impactful.

  11. Tomi Engdahl says:

    MAGI provides a network of AI agents, each built for transparency, collaboration and control. It avoids the trap of a single master model and offers instead a cooperative architecture with shared goals and human oversight.

    The key components of MAGI are:

    • Scaffolding: Agents that perform specific roles within clear yet separate boundaries.

    • Coordination: Systems that interoperate through shared objectives and protocols.

    • Auditability: Full visibility into what was asked, what was used and how a conclusion was reached.
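    The three components can be sketched as plain objects: role-scoped agents (scaffolding), a coordinator that routes work through a shared protocol (coordination), and a log recording what was asked and how each conclusion was reached (auditability). All class and method names below are hypothetical illustrations, not MAGI’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Scaffolding: an agent confined to one clearly bounded role."""
    role: str

    def handle(self, task: str) -> str:
        return f"[{self.role}] handled: {task}"

@dataclass
class Coordinator:
    """Coordination: routes tasks to role-scoped agents via one protocol."""
    agents: dict
    audit_log: list = field(default_factory=list)

    def dispatch(self, role: str, task: str) -> str:
        result = self.agents[role].handle(task)
        # Auditability: record what was asked, who acted, what came back.
        self.audit_log.append({"asked": task, "agent": role, "conclusion": result})
        return result

magi = Coordinator(agents={"research": Agent("research"), "review": Agent("review")})
draft = magi.dispatch("research", "summarize agent safety literature")
checked = magi.dispatch("review", draft)
```

    Because every hop passes through the coordinator, human overseers can replay the audit log instead of trusting a single opaque master model.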

  12. Tomi Engdahl says:

    https://github.blog/changelog/2025-06-17-visual-studio-17-14-june-release/
    Agent mode is now generally available with MCP tools support in Visual Studio

  13. Tomi Engdahl says:

    ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study
    https://time.com/7295195/ai-chatgpt-google-learning-school/

    Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results.

    The study divided 54 subjects—18 to 39 year-olds from the Boston area—into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

  14. Tomi Engdahl says:

    The paper suggests that the usage of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small.
    https://time.com/7295195/ai-chatgpt-google-learning-school/

  15. Tomi Engdahl says:

    The MIT Media Lab has recently devoted significant resources to studying different impacts of generative AI tools. Studies from earlier this year, for example, found that generally, the more time users spend talking to ChatGPT, the lonelier they feel.

    https://www.engadget.com/ai/joint-studies-from-openai-and-mit-found-links-between-loneliness-and-chatgpt-use-193537421.html

  16. Tomi Engdahl says:

    Google: No AI System Currently Uses LLMs.txt
    https://www.seroundtable.com/google-ai-llms-txt-39607.html

    There has been a lot, I mean, a lot, of chatter around whether one should add an LLMs.txt to their website. Many are starting to add it while others have not added it yet. Well, John Mueller of Google chimed in and wrote on Bluesky, “FWIW no AI system currently uses llms.txt.”

    Yoast already added support to create your llms.txt file, they have a page about it over here. Yoast is a super popular WordPress plugin/extension. Yoast wrote, “Generate an llms.txt file automatically with a click. Offer to guide LLMs like ChatGPT to your most important content, helping them understand and represent your business more accurately.”

    But again, no one is really using it yet.

    https://yoast.com/features/llms-txt/
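    For context, the llms.txt proposal describes a plain markdown file served at the site root: an H1 with the site name, a blockquote summary, then sections of annotated links to the pages an LLM should read first. A hypothetical example of the general shape (all names and URLs below are made up; tools like Yoast generate a file along these lines automatically):

```markdown
# Example Co

> Example Co makes industrial sensors. The key pages for LLMs are linked below.

## Docs

- [Product overview](https://example.com/overview.md): what we sell and to whom
- [API reference](https://example.com/api.md): REST endpoints and auth

## Optional

- [Company history](https://example.com/history.md): background, lower priority
```

    Whether any of this gets read is, per Mueller’s comment above, still an open question.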

  17. Tomi Engdahl says:

    The launch of ChatGPT polluted the world forever, like the first atomic weapons tests
    Academics mull the need for the digital equivalent of low-background steel
    https://www.theregister.com/2025/06/15/ai_model_collapse_pollution/

    For artificial intelligence researchers, the launch of OpenAI’s ChatGPT on November 30, 2022, changed the world in a way similar to the detonation of the first atomic bomb.

    The Trinity test, in New Mexico on July 16, 1945, marked the beginning of the atomic age. One manifestation of that moment was the contamination of metals manufactured after that date – as airborne particulates left over from Trinity and other nuclear weapons permeated the environment.

  18. Tomi Engdahl says:

    Janina Javanainen 30.5.2025 11:04
    Do you exist if AI does not cite you? This is how you optimize your content for the AI era
    https://blog.netprofile.fi/oletko-olemassa-jos-tekoaly-ei-siteeraa-sinua-nain-optimoit-sisaltosi-ai-aikakaudelle

  19. Tomi Engdahl says:

    Anthropic Open-Sources Tool to Trace the “Thoughts” of Large Language Models
    https://www.infoq.com/news/2025/06/anthropic-circuit-tracing/

  20. Tomi Engdahl says:

    Inside the AI Party at the End of the World
    At a mansion overlooking the Golden Gate Bridge, a group of AI insiders met to debate one unsettling question: If humanity ends, what comes next?
    https://www.wired.com/story/ai-risk-party-san-francisco/

  21. Tomi Engdahl says:

    Inside Amsterdam’s high-stakes experiment to create fair welfare AI
    The Dutch city thought it could break a decade-long trend of implementing discriminatory algorithms. Its failure raises the question: can these programs ever be fair?
    https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/

  22. Tomi Engdahl says:

    Google continues to roll out access to Project Mariner, an agentic browser assistant, for Ultra plan subscribers, although access is still limited to select countries and regions. Project Mariner has been under experimental development within Google Labs and is positioned as a browser agent that interacts with the user’s active Chrome tabs via a dedicated extension.
    https://www.testingcatalog.com/google-expands-project-mariner-access-to-more-ultra-subscribers/

  23. Tomi Engdahl says:

    Tech companies are requiring employees to learn and use AI at work—here’s the best way to do that, experts say
    https://www.cnbc.com/2025/05/30/leadership-experts-how-to-get-your-workplace-to-embrace-ai.html

  24. Tomi Engdahl says:

    OpenAI CEO Says We’ve Already Passed the ‘Superintelligence Event Horizon’
    Sam Altman believes AI has crossed a key threshold into a new digital era of superintelligence.
    https://decrypt.co/324532/openai-ceo-says-weve-already-passed-the-superintelligence-event-horizon

  25. Tomi Engdahl says:

    News Sites Are Getting Crushed by Google’s New AI Tools
    Chatbots are replacing Google’s traditional search, devastating traffic for some publishers
    https://www.wsj.com/tech/ai/google-ai-news-publishers-7e687141

  26. Tomi Engdahl says:

    https://wonderish.ai/
    Describe what you want to build. Wonderish makes beautiful websites, landing pages, funnels and more.

  27. Tomi Engdahl says:

    Analysis
    How much information do LLMs really memorize? Now we know, thanks to Meta, Google, Nvidia and Cornell
    https://venturebeat.com/ai/how-much-information-do-llms-really-memorize-now-we-know-thanks-to-meta-google-nvidia-and-cornell/

    Most people interested in generative AI likely already know that Large Language Models (LLMs) — like those behind ChatGPT, Anthropic’s Claude, and Google’s Gemini — are trained on massive datasets: trillions of words pulled from websites, books, codebases, and, increasingly, other media such as images, audio, and video. But why?

    From this data, LLMs develop a statistical, generalized understanding of language, its patterns, and the world — encoded in the form of billions of parameters, or “settings,” in a network of artificial neurons (which are mathematical functions that transform input data into output signals).
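    The parenthetical definition of a parameter can be made literal: a single artificial neuron is just a weighted sum of its inputs pushed through a nonlinearity, and each weight is one of those billions of “settings.” A toy example (the specific numbers are arbitrary):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a nonlinearity.

    Each weight (and the bias) is one trainable parameter; an LLM has
    billions of these spread across many stacked layers.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# z = 1.0*2.0 + 0.0*(-1.0) - 2.0 = 0.0, and sigmoid(0) = 0.5
out = neuron([1.0, 0.0], [2.0, -1.0], -2.0)
```

    Training nudges those weights so that, across trillions of examples, the network’s outputs match the statistical patterns of its data.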

  28. Tomi Engdahl says:

    This new tool lets artists ‘poison’ their artwork to deter AI companies from using it to train their models—here’s how it works
    https://www.cnbc.com/2023/10/27/new-tool-lets-artists-poison-their-artwork-to-deter-ai-companies.html

