AI trends 2025

AI is developing all the time. Below are picks from several articles on what is expected to happen in and around AI in 2025. The texts are excerpts from the articles, edited and in some cases translated for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
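The plan/act/reflect loop described above can be sketched in a few lines. This is a hypothetical toy, not any vendor's agent framework: the "tools" are stub functions, and a rule-based loop stands in for the LLM planner a real agentic system would use.

```python
# Minimal sketch of an agentic loop: plan, act via a tool, reflect, repeat.
# The tools and the rule-based "planner" are hypothetical stand-ins; a real
# system would delegate planning and reflection to an LLM.

def search_orders(customer: str) -> list[str]:
    """Hypothetical tool: look up a customer's open orders."""
    fake_db = {"acme": ["order-17", "order-42"]}
    return fake_db.get(customer, [])

def cancel_order(order_id: str) -> str:
    """Hypothetical tool: cancel one order."""
    return f"{order_id} cancelled"

TOOLS = {"search_orders": search_orders, "cancel_order": cancel_order}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Iterate until the objective is met or the step budget runs out.
    The goal string is illustrative only; this toy planner ignores it."""
    log, pending = [], None
    for _ in range(max_steps):
        if pending is None:                 # plan: find the orders first
            pending = TOOLS["search_orders"]("acme")
            log.append(f"found {len(pending)} orders")
        elif pending:                       # act: work through them
            log.append(TOOLS["cancel_order"](pending.pop(0)))
        else:                               # reflect: objective achieved
            log.append("done")
            break
    return log
```

The step budget is the important design detail: an agent that reflects on its own progress still needs a hard cap so it cannot loop indefinitely.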
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate predictions grounded in physical reality, as needed in fields like robotics. PINNs are set to gain importance because they will enable autonomous robots to navigate and execute tasks in the real world.
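The core idea behind PINNs can be shown on a toy problem without any neural network library: alongside fitting data, the training loss penalizes violations of a governing equation. The sketch below (assumptions: the "physics" is the simple ODE u'(t) = -u(t), and derivatives are taken by finite differences rather than autodiff) scores two candidate functions by their physics residual.

```python
import math

# Toy illustration of the PINN principle: score candidate solutions by how
# badly they violate the governing equation u'(t) = -u(t). A real PINN adds
# this residual to the data-fitting loss and trains a network to minimize it.

def physics_residual(u, ts, h=1e-5):
    """Mean squared residual of u'(t) + u(t) = 0, via central differences."""
    total = 0.0
    for t in ts:
        du = (u(t + h) - u(t - h)) / (2 * h)  # numerical derivative
        total += (du + u(t)) ** 2             # residual of u' = -u
    return total / len(ts)

ts = [0.1 * i for i in range(1, 20)]
good = lambda t: math.exp(-t)   # exact solution: residual is ~0
bad = lambda t: 1.0 - t         # violates the physics

r_good = physics_residual(good, ts)
r_bad = physics_residual(bad, ts)
```

The exact solution exp(-t) produces a residual near zero, while the straight line is penalized heavily; this residual term is what "grounds" a PINN's predictions in physical law.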
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry's unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as they did when they started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience, moving from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier to use. AI won’t be limited to one app; it might even replace apps one day. With AI, the boundaries between frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill and are too expensive, and too slow, for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could only solve 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
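A router of this kind can be as simple as a lookup over a model catalog. The sketch below is hypothetical: the model names, per-token prices, and latency figures are invented for illustration, and a production router would call real provider APIs rather than return a name.

```python
# Hypothetical sketch of multi-model routing: pick an LLM per request based
# on task type, latency budget, and cost. All figures here are made up.

MODELS = {
    "large":  {"cost_per_1k": 0.030,  "latency_ms": 900, "good_at": {"reasoning", "code"}},
    "medium": {"cost_per_1k": 0.003,  "latency_ms": 300, "good_at": {"summarize", "code"}},
    "small":  {"cost_per_1k": 0.0004, "latency_ms": 80,  "good_at": {"classify", "summarize"}},
}

def route(task: str, max_latency_ms: int = 1000) -> str:
    """Return the cheapest model that handles the task within the latency
    budget; fall back to the largest model for unfamiliar or urgent tasks."""
    candidates = [
        name for name, m in MODELS.items()
        if task in m["good_at"] and m["latency_ms"] <= max_latency_ms
    ]
    if not candidates:
        return "large"
    return min(candidates, key=lambda n: MODELS[n]["cost_per_1k"])
```

Because the catalog is data rather than code, swapping a provider in or out is a one-line change, which is exactly the flexibility the vendor-lock concern above is about.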
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
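The handoff pattern described here, where the AI handles routine order changes and escalates complex ones, reduces to a simple decision rule. The sketch below is a stand-in: the keyword check substitutes for a real intent classifier, and Agentforce's actual API is not shown.

```python
# Sketch of the AI-to-human escalation pattern: routine requests are handled
# by the agent, complex ones are pushed up to a human rep. The keyword list
# is a hypothetical stand-in for an LLM-based intent classifier.

COMPLEX_INTENTS = ("reconfigure", "reconfiguration", "venue change")

def handle_request(message: str) -> str:
    text = message.lower()
    if any(keyword in text for keyword in COMPLEX_INTENTS):
        return "escalated to human rep"
    return "handled by AI agent"
```

The design point is that the escalation threshold lives in one place, so expanding what the agent may handle autonomously is a policy change rather than a rewrite.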

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

AI requires ever-faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

Here is what will happen in the chip market next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The demand for high-performance computing (HPC) for artificial intelligence continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025 there will very likely be further advancements in MCUs using lightweight AI models. The adoption of AI acceleration features is a big step in the development of microcontrollers: their inclusion began in 2024, and the features and tooling around them are likely to develop further in 2025.

AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new AI models based on general-purpose AI (GPAI) standards must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance: EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
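One way to operationalize this is to record each observed AI usage event with its tool, data class, and task, and assign a risk tier. The sketch below is illustrative only: the categories and tiering rules are invented for the example and are not taken from the EU AI Act's own risk classes.

```python
# Hypothetical sketch of an AI-usage inventory with risk tiering, capturing
# the point above: the same tool carries different risk depending on what
# data it touches. Categories and rules are invented for illustration.

from dataclasses import dataclass

@dataclass
class UsageEvent:
    tool: str        # e.g. "chatgpt", or a SaaS app with embedded AI
    data_class: str  # "public" | "internal" | "sensitive"
    task: str        # e.g. "summarize contracts", "draft marketing copy"

def risk_tier(event: UsageEvent) -> str:
    """Tier by data sensitivity: the driver of risk in the example above."""
    if event.data_class == "sensitive":
        return "high"
    if event.data_class == "internal":
        return "medium"
    return "low"

inventory = [
    UsageEvent("chatgpt", "sensitive", "summarize internal documents"),
    UsageEvent("chatgpt", "public", "draft marketing content"),
]
tiers = [risk_tier(e) for e in inventory]
```

Note that both events use the same tool; only the data class separates a high-risk use from a low-risk one, which is why tool-level inventories alone are insufficient.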
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

2,775 Comments

  1. Tomi Engdahl says:

    David Sacks / @davidsacks:
    Doomer predictions of a rapid, monopolistic AGI were wrong, as recent AI model releases resemble a Goldilocks scenario with competitive, specialized models

    A BEST CASE SCENARIO FOR AI?
    https://x.com/davidsacks/status/1954244614304739360

    The Doomer narratives were wrong. Predicated on a “rapid take-off” to AGI, they predicted that the leading AI model would use its intelligence to self-improve, leaving others in the dust, and quickly achieving a godlike superintelligence. Instead, we are seeing the opposite:
    — the leading models are clustering around similar performance benchmarks;
    — model companies continue to leapfrog each other with their latest versions (which shouldn’t be possible if one achieves rapid take-off);
    — models are developing areas of competitive advantage, becoming increasingly specialized in personality, modes, coding and math as opposed to one model becoming all-knowing.

  2. Tomi Engdahl says:

    Leslie Liang / Rest of World:
    Western pharma giants have struck multibillion-dollar deals with Chinese biotech firms using AI, signaling confidence in China’s AI drug discovery research — Multibillion-dollar partnerships show China’s rising influence in AI-driven pharmaceuticals. — Western pharmaceutical giants …

    China’s AI drug discovery companies land huge deals with Big Pharma
    Multibillion-dollar partnerships show China’s rising influence in AI-driven pharmaceuticals.
    https://restofworld.org/2025/ai-drug-discovery-startups-big-pharma-china/

  3. Tomi Engdahl says:

    Natasha Singer / New York Times:
    CS grads struggle to land jobs as tech companies lay off workers and embrace AI; CS majors face a 6.1% unemployment rate in the US among grads aged 22 to 27 — Growing up near Silicon Valley, Manasi Mishra remembers seeing tech executives on social media urging students to study computer programming.

    Goodbye, $165,000 Tech Jobs. Student Coders Seek Work at Chipotle.
    https://www.nytimes.com/2025/08/10/technology/coding-ai-jobs-students.html?unlocked_article_code=1.dE8.fZy8.I7nhHSqK9ejO

    As companies like Amazon and Microsoft lay off workers and embrace A.I. coding tools, computer science graduates say they’re struggling to land tech jobs.

  4. Tomi Engdahl says:

    Bloomberg:
    Investors are increasingly divesting from companies they fear are at risk of AI disruption, like Wix and Shutterstock, which are down at least 30% in 2025 — Artificial intelligence’s imprint on US financial markets is unmistakable. Nvidia Corp. is the most valuable company in the world at nearly $4.5 trillion.

    Traders Are Fleeing Stocks Feared to Be Under Threat From AI
    https://www.bloomberg.com/news/articles/2025-08-09/traders-are-fleeing-stocks-feared-to-be-under-threat-from-ai

  5. Tomi Engdahl says:

    Bloomberg:
    AI startups like Anthropic and OpenAI are ramping up efforts to recruit quant researchers from Wall Street firms, offering competitive pay and benefits

    https://www.bloomberg.com/news/articles/2025-08-08/open-ai-perplexity-make-pitch-to-recruit-quant-traders-from-banks

  6. Tomi Engdahl says:

    Kevin Roose / New York Times:
    Hands-on with Alexa+: fun to talk to and good at handling multistep requests, but it is buggy, unreliable, and worse at some basic tasks than the original Alexa

    Alexa Got an A.I. Brain Transplant. How Smart Is It Now?
    https://www.nytimes.com/2025/08/09/business/alexa-artificial-intelligence-amazon.html?unlocked_article_code=1.c08.AM7L.3hvo5WbPW0JG&smid=url-share

    It took Amazon several years to overcome technical hurdles as it remade its voice assistant with new artificial intelligence technology.

    For the last few years, I’ve been waiting for Alexa’s A.I. glow-up.

    I’ve been a loyal user of Alexa, the voice assistant that powers Amazon’s home devices and smart speakers, for more than a decade. I have five Alexa-enabled speakers scattered throughout my house, and while I don’t use them for anything complicated — playing music, setting timers and getting the weather forecast are basically it — they’re good at what they do.

    But since 2023, when ChatGPT added an A.I. voice mode that could answer questions in a fluid, conversational way, it has been obvious that Alexa would need a brain transplant — a new A.I. system built around the same large language models, or L.L.M.s, that power ChatGPT and other products. L.L.M.-based systems are smarter and more versatile than older systems. They can handle more complex requests, making them an obvious pick for a next-generation voice assistant.

    Amazon agrees. For the last few years, the company has been working feverishly to upgrade the A.I. inside Alexa. It has been a slog. Replacing the A.I. technology inside a voice assistant isn’t as easy as swapping in a new model, and the Alexa remodel was reportedly delayed by internal struggles and technical challenges along the way. L.L.M.s also aren’t a perfect match for this kind of product, which not only needs to work with tons of pre-existing services and millions of Alexa-enabled devices, but also needs to reliably perform basic tasks.

    Reply
  7. Tomi Engdahl says:

    Tom Warren / The Verge:
    Microsoft launches Copilot 3D, a free AI-powered tool allowing users to transform 2D images into 3D models without a text prompt, available in Copilot Labs

    Microsoft’s new Copilot 3D feature is great for Ikea, bad for my dog
    https://www.theverge.com/hands-on/756587/microsoft-copilot-3d-feature-hands-on

    Copilot 3D can convert 2D images into 3D models that can be used in design tools.

    Reply
  8. Tomi Engdahl says:

    Robert Frank / CNBC:
    CB Insights: the number of AI unicorns reached 498 with a combined value of $2.7T, 100 founded since 2023, and over 1,300 AI startups valued above $100M

    AI is creating new billionaires at a record pace
    https://www.cnbc.com/2025/08/10/ai-artificial-intelligence-billionaires-wealth.html

    Key Points

    The artificial intelligence boom is quickly becoming the largest wealth creation spree in recent history.
    That’s boosted in part by blockbuster fundraising rounds this year for Anthropic, Safe Superintelligence, OpenAI, Anysphere and other AI startups, which have helped mint new billionaires.
    With time, and IPOs, many of today’s private AI fortunes will eventually become more liquid, providing a historic opportunity for wealth management firms.

    Artificial intelligence startups have minted dozens of new billionaires this year, adding to an AI boom that’s quickly becoming the largest wealth creation spree in recent history.

    Blockbuster fundraising rounds this year for Anthropic, Safe Superintelligence, OpenAI, Anysphere and other startups have created vast new paper fortunes and propelled valuations to record levels. There are now 498 AI “unicorns,” or private AI companies with valuations of $1 billion or more, with a combined value of $2.7 trillion, according to CB Insights. Fully 100 of them were founded since 2023. There are more than 1,300 AI startups with valuations of over $100 million, the firm said.

    Combined with the soaring stock prices of Nvidia, Meta, Microsoft and other publicly traded AI-related firms, along with the infrastructure companies that are building data centers and computing power and the huge payouts for AI engineers, AI is creating personal wealth on a scale that makes the past two tech waves look like warmups.

    “Going back over 100 years of data, we have never seen wealth created at this size and speed,” said Andrew McAfee, principal researcher at MIT. “It’s unprecedented.”

    A new crop of billionaires is rising with sky-rocketing valuations. In March, Bloomberg estimated that four of the largest private AI companies had created at least 15 billionaires with a combined net worth of $38 billion. More than a dozen unicorns have been crowned since then.

    Granted, most of the AI wealth creation is in private companies, making it difficult for equity holders and founders to cash out. Unlike the dot-com boom of the late 1990s, when a flood of companies went public, today’s AI startups can stay private for longer given the constant investment from venture capital funds, sovereign wealth funds, family offices and other tech investors.

    At the same time, the rapid growth of secondary markets is allowing equity owners of private companies to sell their shares to other investors and provide liquidity. Structured secondary sales or tender offers are becoming widespread. Many founders can also borrow against their equity.

    “It’s astonishing how geographically concentrated this AI wave is,” said McAfee, who is also co-director of MIT’s Initiative on the Digital Economy. “The people who know how to found and fund and grow tech companies are there. I’ve heard people say for 25 years ‘This is the end of the Silicon Valley’ or some other place is ‘the new Silicon Valley.’ But Silicon Valley is still Silicon Valley.”

    With time, and initial public offerings, many of today’s private AI fortunes will eventually become more liquid, providing a historic opportunity for wealth management firms. All of the major private banks, wirehouses, independent advisors and boutique firms are cozying up to the AI elite in hopes of winning their business, according to tech advisors.

    Like the dot-com millionaires, however, luring the AI wealthy may be challenging for traditional wealth management companies.

    “I would say a much higher percentage of the ultimate wealth being created is illiquid,” Krinsky said. “There are ways of getting liquidity, but it’s tiny compared to being employed at Meta or Google” or another megacap publicly traded tech company.

    Eventually, those fortunes will become liquid and prized by wealth management firms. Krinsky said the AI wealthy are likely to follow similar client patterns as the newly rich dot-commers of the 1990s. Initially, the dot-commers used their excess liquidity and assets to invest in similar tech companies they knew through their networks, colleagues or shared investors. He said the same is likely true for the AI wealthy.

    “Everybody turned around and invested with their friends in the same kind of companies that created their own wealth,” he said.

    After discovering the perils of having all their wealth concentrated in one highly volatile and speculative industry, the dot-commers turned to wealth management.

    Krinsky said today’s AI entrepreneurs are likely to follow the same path, with huge potential for AI to disrupt — if not replace — many of the traditional functions of wealth management.

    Ultimately, however, the ultra-wealthy AI founders will discover the need for the traditional, personalized service that only dedicated wealth management teams can provide, whether it’s around taxes, inheritances and estate planning, or philanthropy advice and portfolio construction.

    “After people were beaten up or bruised up in the early 2000s, they came around to appreciating some degree of diversification and maybe hiring a professional manager to protect them from themselves,” Krinsky said. “I anticipate a similar trend with the AI group.”

    Reply
  9. Tomi Engdahl says:

    Peter Rudegeair / Wall Street Journal:
    AI-focused hedge funds raise billions, such as ex-OpenAI researcher Leopold Aschenbrenner’s Situational Awareness that amassed $1.5B+ for a “brain trust on AI”

    Billions Flow to New Hedge Funds Focused on AI-Related Bets
    A 23-year-old former OpenAI researcher quickly amassed more than $1.5 billion for ‘brain trust on AI’
    https://www.wsj.com/finance/investing/billions-flow-to-new-hedge-funds-focused-on-ai-related-bets-48d97f41?st=ksFaHy

    Reply
  10. Tomi Engdahl says:

    Reuters:
    Rumble says it plans to acquire German AI cloud group Northern Data in an all-stock deal; estimates show the deal could be valued at $1.17B

    Rumble weighs near $1.2 billion bid for German AI cloud firm Northern Data
    https://www.reuters.com/business/rumble-weighs-near-12-billion-bid-german-ai-cloud-firm-northern-data-2025-08-11/

    Reply
  11. Tomi Engdahl says:

    Uneven intelligence from Switzerland
    Lumo is a welcome addition to Proton’s services and to the European AI scene, and it also understands Finnish. It still lacks a feature, however, that is becoming standard equipment in AI assistants.
    https://www.iltalehti.fi/ohjelmat/a/09df3019-0578-48bb-bc05-3e9894369905

    Serious European AI services have been scarce so far. ChatGPT, Gemini and Claude come from the United States, while Deepseek and Qwen come from China.

    At the end of July, Lumo, developed by the Swiss company Proton, joined their ranks. In its introduction, Proton points out that Lumo has no ties to Chinese or US AI companies and that it is based on open source.

    Reply
  12. Tomi Engdahl says:

    Is AI the great bubble of our time? Read Hannu Angervuo’s thoughts.
    My own observations:
    Flops are typical in the technology sector. Most recently blockchain and virtual reality.
    Big investments in capabilities do not automatically mean growth in productivity or benefits.
    The investments and hiring have a strong dot-com-era feel.
    China appears to be using the AI bubble as a weapon in the trade war by offering the same features cheaper or for free, perhaps aiming to crash the stock market and weaken the dollar?
    Will AI services become a necessary evil that devours huge amounts of money and resources without generating business returns?
    With these thoughts we would be building the biggest bust of all time. All-time highs in crypto, housing, etc. in the US as well, and interest rates high… https://www.salkunrakentaja.fi/2025/08/kymmenen-suurinta-sp500/?fbclid=IwY2xjawMGtjVleHRuA2FlbQIxMQABHgsDPW5U3cOM2f2PIJPjnmYxvxwLxcTqu5PgE3cAwkAknToEeBnbrZi4zInd_aem_vYgEPgN5pOja2ok3ugGb9w

    Source https://www.facebook.com/share/p/16v32BdCqR/

    Reply
  13. Tomi Engdahl says:

    Alex Knapp / Forbes:
    Tahoe Therapeutics, which is building AI models of living cells, raised $30M led by Amplify Partners at a $120M valuation, taking its total funding to $42M

    Biotech Startup Tahoe Therapeutics Raised $30 Million To Build AI Models Of Living Cells
    https://www.forbes.com/sites/alexknapp/2025/08/11/biotech-startup-tahoe-therapeutics-raised-30-million-to-build-ai-models-of-living-cells/

    The Palo Alto, California-based company, now valued at $120 million, has developed a scalable way to quickly generate the crucial biological data needed for AI models, and to use those models to find new cures for cancer.

    Reply
  14. Tomi Engdahl says:

    Managing the Trust-Risk Equation in AI: Predicting Hallucinations Before They Strike
    https://www.securityweek.com/managing-the-trust-risk-equation-in-ai-predicting-hallucinations-before-they-strike/

    New physics-based research suggests large language models could predict when their own answers are about to go wrong — a potential game changer for trust, risk, and security in AI-driven systems.

    Hallucinations are a continuing and inevitable problem for LLMs because they are a byproduct of operation rather than a bug in design. But what if we knew when and why they happen?

    “Hallucinations – the generation of plausible but false, fabricated, or nonsensical content – are not just common, they are mathematically unavoidable in all computable LLMs… hallucinations are not bugs, they are inevitable byproducts of how LLMs are built, and for enterprise applications, that’s a death knell,” wrote Srini Pagidyala (co-founder of Aigo AI) on LinkedIn.

    Neil Johnson, a professor of physics at GWU, goes further. “More worrying,” he says, “is that output can mysteriously tip mid-response from good (correct) to bad (misleading or wrong) without the user noticing.”

    Using AI is a trust/risk balance. Its benefits to cybersecurity cannot be ignored, but there is always the potential for a response to be wrong. Johnson is trying to bring predictability to the unpredictable hallucination with the help of mathematics. His latest paper (Multispin Physics of AI Tipping Points and Hallucinations) extends arguments made in an earlier paper.

    “Establishing a mathematical mapping to a multispin thermal system, we reveal a hidden tipping instability at the scale of the AI’s ‘atom’ (basic Attention head),” he writes. That tipping is the point at which the mathematical inevitability becomes the practical reality. His work will not eliminate hallucinations but could add visibility and potentially reduce the incidence of hallucinations in the future.
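    Johnson’s actual analysis maps Attention heads onto a multispin thermal system; the details are in his paper. As a generic, hypothetical illustration of what a “tipping instability” in such a system looks like (this is not the paper’s model), the classic mean-field spin equation below has two stable states, and a small bias flips the system abruptly from one to the other once it crosses a threshold:

```python
import math

def magnetization(T, h, J=1.0, iters=500):
    """Solve the mean-field fixed point m = tanh((J*m + h)/T) by iteration.

    m plays the role of an order parameter with a 'good' (positive) and a
    'bad' (negative) stable state; h is a small external bias.
    """
    m = 0.5  # start in the positive ("good") basin
    for _ in range(iters):
        m = math.tanh((J * m + h) / T)
    return m

# Below the critical temperature T_c = J, a tiny opposing bias h is not
# enough to leave the positive basin; past a threshold (a spinodal point),
# the state tips abruptly into the negative basin.
for h in [0.0, -0.05, -0.15]:
    print(f"h={h:+.2f}  m={magnetization(T=0.8, h=h):+.3f}")
```

Running this shows m staying large and positive for h = 0.0 and h = -0.05, then jumping to a large negative value at h = -0.15: a discontinuous good-to-bad flip driven by a small parameter change, which is the qualitative behavior the tipping-point argument describes.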

    Given the increasing use of AI and the tendency to trust AI output over human expertise, he warns: “Harms and lawsuits from unnoticed good-to-bad output tipping look set to skyrocket globally across medical, mental health, financial, commercial, government and military AI domains.”

    Reply
  15. Tomi Engdahl says:

    Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ for Enterprise
    https://www.securityweek.com/red-teams-breach-gpt-5-with-ease-warn-its-nearly-unusable-for-enterprise/

    Researchers demonstrate how multi-turn “storytelling” attacks bypass prompt-level filters, exposing systemic weaknesses in GPT-5’s defenses.

    Reply
  16. Tomi Engdahl says:

    James O’Donnell / MIT Technology Review:
    A look at potential issues as US judges join lawyers in testing generative AI to speed up legal research, summarize cases, draft routine orders, and more

    Meet the early-adopter judges using AI
    As the line between helping and judging blurs, the cost of errors is steep.
    https://www.technologyreview.com/2025/08/11/1121460/meet-the-early-adopter-judges-using-ai/

    The propensity for AI systems to make mistakes and for humans to miss those mistakes has been on full display in the US legal system as of late. The follies began when lawyers—including some at prestigious firms—submitted documents citing cases that didn’t exist. Similar mistakes soon spread to other roles in the courts. In December, a Stanford professor submitted sworn testimony containing hallucinations and errors in a case about deepfakes, despite being an expert on AI and misinformation himself.

    The buck stopped with judges, who—whether they or opposing counsel caught the mistakes—issued reprimands and fines, and likely left attorneys embarrassed enough to think twice before trusting AI again.

    But now judges are experimenting with generative AI too. Some are confident that with the right precautions, the technology can expedite legal research, summarize cases, draft routine orders, and overall help speed up the court system, which is badly backlogged in many parts of the US. This summer, though, we’ve already seen AI-generated mistakes go undetected and cited by judges. A federal judge in New Jersey had to reissue an order riddled with errors that may have come from AI, and a judge in Mississippi refused to explain why his order too contained mistakes that seemed like AI hallucinations.

    The results of these early-adopter experiments make two things clear. One, the category of routine tasks—for which AI can assist without requiring human judgment—is slippery to define. Two, while lawyers face sharp scrutiny when their use of AI leads to mistakes, judges may not face the same accountability, and walking back their mistakes before they do damage is much harder.

    Reply
  17. Tomi Engdahl says:

    Surbhi Misra / Reuters:
    Elon Musk alleges Apple is violating antitrust laws by “making it impossible” for any AI firm but OpenAI to top its App Store, says xAI will take legal action

    Musk says xAI to take legal action against Apple over App Store rankings
    https://www.reuters.com/sustainability/boards-policy-regulation/musk-says-xai-take-legal-action-against-apple-over-app-store-rankings-2025-08-12/

    Reply
  18. Tomi Engdahl says:

    Casey Newton / The Verge:
    Q&A with Notion CEO Ivan Zhao on Notion’s evolution into an “AI workspace”, being profitable, B2B vs. B2C, usage-based pricing for AI, and more

    Notion CEO Ivan Zhao wants you to demand better from your tools

    The head of Notion on productivity, LEGO, and what he learned from Kyoto’s craft

    https://www.theverge.com/decoder-podcast-with-nilay-patel/756736/notion-ceo-ivan-zhao-productivity-software-design-ai-interview

    Reply
  19. Tomi Engdahl says:

    VideoCardz.com:
    Nvidia announces two low-power Blackwell workstation GPUs, the RTX PRO 4000 SFF and RTX PRO 2000, both featuring compact designs and available later this year

    https://videocardz.com/newz/nvidia-launches-rtx-pro-4000-sff-and-rtx-pro-2000-blackwell-workstation-gpus-with-70w-tdp

    Reply
  20. Tomi Engdahl says:

    Rebecca Szkutak / TechCrunch:
    Nvidia debuts new Omniverse SDKs and Cosmos world foundation models for robotics devs, including Cosmos Reason, a 7B-parameter reasoning vision language model

    https://techcrunch.com/2025/08/11/nvidia-unveils-new-cosmos-world-models-other-infra-for-physical-applications-of-ai/

    Reply
  21. Tomi Engdahl says:

    Jay Peters / The Verge:
    Reddit says it will block the Internet Archive from indexing most of its pages after it caught AI companies scraping its data from the Wayback Machine

    Reddit will block the Internet Archive
    https://www.theverge.com/news/757538/reddit-internet-archive-wayback-machine-block-limit

    The company says that AI companies have scraped data from the Wayback Machine, so it’s going to limit what the Wayback Machine can access.

    Reply
  22. Tomi Engdahl says:

    Zac Hall / 9to5Mac:
    Anthropic adds a memory feature for Claude to reference information from past chats, available now for Max, Team, and Enterprise plans, and soon for other plans — Anthropic has introduced a helpful new feature for Claude that solves a problem similar to one ChatGPT already addressed.

    Claude just learned a useful ChatGPT trick
    https://9to5mac.com/2025/08/11/claude-memory-feature/

    Reply
  23. Tomi Engdahl says:

    Generate videos with Veo 3 in Google Gemini — Create high-quality, 8-second videos with Veo 3, our state-of-the-art AI video generator. Try it with a Google AI Pro plan or get the highest access with the Ultra plan.

    https://gemini.google/overview/video-generation/?utm_source=techmeme&utm_medium=paid&utm_campaign=paid_aitl_q3_veo3_hp&dclid=CPu40NPihI8DFecoogMdyWwPgA&gad_source=7

    Reply
  24. Tomi Engdahl says:

    Omair Pall / Mashable India:
    xAI makes Grok 4 free for all users worldwide after making Grok Imagine free for all US users; Grok 4 Heavy remains exclusive to SuperGrok Heavy subscribers

    Elon Musk’s xAI Releases Grok 4 For Free Globally, Challenges OpenAI’s GPT-5 Launch
    xAI releases Grok 4 free, rivals GPT-5 launch.
    https://in.mashable.com/tech/98367/elon-musks-xai-releases-grok-4-for-free-globally-challenges-openais-gpt-5-launch

    Reply
  25. Tomi Engdahl says:

    Andrew Deck / Nieman Lab:
    The Yomiuri Shimbun, Japan’s largest newspaper by circulation, sues Perplexity, alleging unauthorized reproduction of its articles, and seeks $14.7M in damages

    Japan’s largest newspaper, Yomiuri Shimbun, sues AI startup Perplexity for copyright violations
    https://www.niemanlab.org/2025/08/japans-largest-newspaper-yomiuri-shimbun-sues-perplexity-for-copyright-violations/

    The Yomiuri Shimbun, Japan’s largest newspaper by circulation, has sued the generative AI startup Perplexity for copyright infringement. The lawsuit, filed in Tokyo District Court on August 7, marks the first copyright challenge by a major Japanese news publisher against an AI company.

    The filing claims that Perplexity accessed 119,467 articles on Yomiuri’s site between February and June of this year, based on an analysis of its company server logs. Yomiuri alleges the scraping has been used by Perplexity to reproduce the newspaper’s copyrighted articles in responses to user queries without authorization.

    In particular, the suit claims Perplexity has violated its “right of reproduction” and its “right to transmit to the public,” two tenets of Japanese law that give copyright holders control over the copying and distribution of their work. The suit seeks nearly $15 million in damages and demands that Perplexity stop reproducing its articles.

    Japan’s copyright law allows AI developers to train models on copyrighted material without permission. This leeway is a direct result of a 2018 amendment to Japan’s Copyright Act, meant to encourage AI development in the country’s tech sector. The law does not, however, allow for wholesale reproduction of those works, or for AI developers to distribute copies in a way that will “unreasonably prejudice the interests of the copyright owner.”

    In a statement sent to Yomiuri, a Perplexity spokesperson said, “We are deeply sorry for the misunderstanding this has caused in Japan. We are currently working hard to understand the nature of the claims. We take this very seriously, because Perplexity is committed to ensuring that publishers and journalists benefit from the new business models that will arise in the AI age.”

    Last fall, two News Corp–owned publishers, The Wall Street Journal and the New York Post, took similar legal action against Perplexity. Outside of the U.S., though, Perplexity has so far avoided much legal scrutiny. Competing generative AI companies, including OpenAI and Meta, have faced copyright infringement suits from major international publishers.

    In India, a joint copyright infringement suit against OpenAI includes some of the country’s most established news publications, including The Indian Express, The Hindu, and The India Today group. In France, the country’s leading authors and publishers associations have filed suits against Meta, alleging economic “parasitism.”

    In May, the Japan Newspaper Publishers and Editors Association published an open letter calling out AI companies for free riding off their copyrighted material and warning them to stop their scraping practices. The status quo “could cause huge damage to the business of news organizations,” the association wrote at the time.

    Reply
