AI trends 2025

AI is developing all the time. Here are picks from several articles on what is expected to happen in and around AI in 2025. The texts are excerpts from the articles, edited and in some cases translated for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
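To make the physics-informed idea concrete, here is a minimal sketch under simple assumptions (a toy ordinary differential equation and a tiny PyTorch network, not any product mentioned above): the network is penalized on the residual of the known physical law du/dx = -u with u(0) = 1, rather than trained on labeled data alone.

```python
# Minimal physics-informed neural network (PINN) sketch for the toy ODE
#   du/dx = -u,  u(0) = 1   (exact solution: u(x) = exp(-x))
# Illustrative only; real PINNs target PDEs, boundary conditions, and real sensor data.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)        # collocation points in [0, 1]
    u = net(x)
    du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()          # residual of du/dx = -u
    ic_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # initial condition u(0) = 1
    loss = physics_loss + ic_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item())              # should approach exp(-1) ≈ 0.368
```

The same pattern scales up to the partial differential equations that describe real machines and environments, which is what makes PINNs interesting for robotics and simulation.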
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as they did when they started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people to use. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
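As a rough illustration of what “small and custom” can look like in practice, the sketch below runs a compact instruction-tuned model locally with the Hugging Face transformers pipeline. The model id is a placeholder for whatever small or fine-tuned model fits the use case; whether it fits on edge hardware depends on its size and quantization.

```python
# Minimal sketch: running a small language model locally with Hugging Face
# transformers. The model id below is a placeholder, not a real model name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="your-org/your-small-instruct-model",  # placeholder: pick a small or custom model
    device_map="auto",                           # CPU, a single GPU, or an edge accelerator
)

out = generator(
    "Summarize this support ticket in one sentence: ...",
    max_new_tokens=64,
    do_sample=False,
)
print(out[0]["generated_text"])
```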
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could solve only 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
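A minimal sketch of that intent-to-steps loop, with the planner and the tools as stand-ins (no specific vendor framework is implied; in a real agent an LLM would produce the plan and the tools would be live integrations):

```python
# Minimal "understand intent -> plan steps -> act" agent loop.
# plan() and TOOLS are placeholders standing in for an LLM planner and real systems.
def plan(goal: str) -> list[dict]:
    # In practice an LLM would decompose the goal; here we return a canned plan.
    return [
        {"tool": "search_tickets", "args": {"query": goal}},
        {"tool": "draft_reply", "args": {"tone": "friendly"}},
    ]

TOOLS = {
    "search_tickets": lambda query: f"3 similar tickets found for '{query}'",
    "draft_reply": lambda tone: f"drafted a {tone} reply based on prior cases",
}

def run_agent(goal: str) -> list[str]:
    results = []
    for step in plan(goal):                     # act on each planned step in order
        results.append(TOOLS[step["tool"]](**step["args"]))
    return results

print(run_agent("customer reports error E42 after update"))
```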
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground on the earlier waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what it is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
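Conceptually, a multi-agent workflow is just agents passing messages to each other until a goal state is reached, with guardrails so they cannot loop forever. A minimal sketch, with plain functions standing in for LLM-backed agents:

```python
# Two cooperating "agents" handing work to each other, with a hard turn limit
# as a simple guardrail. In a real system each agent would wrap an LLM plus tools.
def researcher(message: str) -> str:
    return f"FINDINGS: summary of sources about '{message}'"

def writer(message: str) -> str:
    return f"DRAFT: report written from [{message}]"

def run_workflow(task: str, max_turns: int = 4) -> str:
    agents = [researcher, writer]
    message = task
    for turn in range(max_turns):          # guardrail: never run unbounded
        message = agents[turn % len(agents)](message)
        if message.startswith("DRAFT:"):   # stop once the goal state is reached
            return message
    return message

print(run_workflow("Q1 churn drivers"))
```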
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
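A minimal sketch of what such a router could look like; the model names, prices, and routing rules are illustrative assumptions, not any vendor’s actual catalog:

```python
# Multi-model routing sketch: pick a model per request based on task type,
# cost, and latency needs. All names and prices below are made up for illustration.
MODELS = {
    "small-fast":  {"cost_per_1k_tokens": 0.0002, "good_for": {"classify", "extract"}},
    "mid-general": {"cost_per_1k_tokens": 0.002,  "good_for": {"summarize", "draft"}},
    "large-smart": {"cost_per_1k_tokens": 0.02,   "good_for": {"reason", "plan"}},
}

def route(task_type: str, latency_sensitive: bool) -> str:
    candidates = [name for name, m in MODELS.items() if task_type in m["good_for"]]
    if not candidates:
        candidates = ["large-smart"]               # fall back to the most capable model
    if latency_sensitive and "small-fast" in candidates:
        return "small-fast"
    return min(candidates, key=lambda n: MODELS[n]["cost_per_1k_tokens"])

print(route("classify", latency_sensitive=True))   # -> small-fast
print(route("reason", latency_sensitive=False))    # -> large-smart
```

Routing through one thin layer like this also keeps the application loosely coupled to any single provider, which is the lock-in concern quoted above.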
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move on from experimenting with and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a wholistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
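The pattern described here — let the AI agent handle routine changes and escalate anything complex to a person — can be sketched in a few lines. The keyword rule and handle_with_ai() below are placeholders for illustration, not Salesforce Agentforce APIs:

```python
# Escalation sketch: routine order changes go to the AI agent, complex ones to a human.
COMPLEX_MARKERS = {"reconfigure", "reconfiguration", "venue change", "refund dispute"}

def handle_with_ai(text: str) -> str:
    return f"AI agent updated the order per request: '{text}'"

def handle_request(text: str) -> str:
    if any(marker in text.lower() for marker in COMPLEX_MARKERS):
        return "escalated to a human rep"          # complex issue: hand off quickly
    return handle_with_ai(text)                    # e.g. a quantity or date change

print(handle_request("please change the delivery date to May 2"))
print(handle_request("we need a full reconfiguration of the order"))
```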

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to run the data centers behind AI, not to mention the cost of building and maintaining those facilities. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

AI requires ever faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

Here is what will happen in the chip market next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The demand for high-performance computing (HPC) for artificial intelligence continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC design heats up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating
5. The mature process market recovers
6. 2nm technology breakthrough
7. Restructuring of the packaging and testing market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025, it is very likely that there will be more advancements in MCUs using lightweight AI models.
Adoption of AI acceleration features is a big step in the development of microcontrollers. The inclusion of AI features in microcontrollers started in 2024, and it is very likely that in 2025, their features and tools will develop further.
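One common step in getting a lightweight model onto an MCU is full-integer quantization. The sketch below uses the TensorFlow Lite converter on a tiny placeholder Keras model; on the device itself the resulting file would typically run under TensorFlow Lite for Microcontrollers (C++), which is outside this sketch.

```python
# Quantizing a tiny Keras model to int8 with the TensorFlow Lite converter,
# as is commonly done before deploying to a microcontroller. The model and
# calibration data are placeholders for illustration.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    for _ in range(100):                      # calibration samples for quantization
        yield [np.random.rand(1, 64).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

open("model_int8.tflite", "wb").write(converter.convert())  # flash-friendly int8 model
```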


AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new general-purpose AI (GPAI) models must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance – EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
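The penalty rule quoted above reduces to simple arithmetic, sketched here purely for illustration:

```python
# The EU AI Act fine ceiling quoted above: the larger of EUR 35 million
# or 7% of worldwide annual turnover.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

print(max_fine_eur(300_000_000))    # 35,000,000 (the fixed floor applies)
print(max_fine_eur(2_000_000_000))  # 140,000,000 (7% of turnover is higher)
```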
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
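One way to picture the inventory problem is a record per AI asset that captures exactly the distinctions drawn here: standalone versus embedded AI, how the tool is used, and what data it touches. The fields and risk rules below are illustrative assumptions, not requirements taken from the Act:

```python
# Sketch of an AI asset inventory entry with a naive risk-tier rule.
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    kind: str              # "standalone" (e.g. a chatbot) or "embedded" (AI inside a SaaS app)
    use_case: str          # e.g. "summarize internal documents", "draft marketing copy"
    data_sensitivity: str  # "public", "internal", or "confidential"

def risk_tier(asset: AIAsset) -> str:
    if asset.data_sensitivity == "confidential":
        return "high"
    if asset.kind == "embedded":
        return "medium"    # harder to see and control, so flag it for review
    return "low"

inventory = [
    AIAsset("ChatGPT", "standalone", "summarize internal documents", "confidential"),
    AIAsset("CRM copilot", "embedded", "draft marketing copy", "internal"),
]
for asset in inventory:
    print(asset.name, "->", risk_tier(asset))
```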
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

2,315 Comments

  1. Tomi Engdahl says:

    The Rise Of AI Agents And The Future Of Data
    https://www.forbes.com/councils/forbestechcouncil/2025/03/06/the-rise-of-ai-agents-and-the-future-of-data/

    Consider how the brain processes information. It doesn’t store memories in neat boxes, and thoughts don’t always follow a straight path from point A to point B. Instead, the brain relies on a vast, branching network of billions of neurons that connect stimuli, experiences and knowledge (often in real time) to make sense of the world.

    This complex, adaptive cognitive web is what makes the human brain so powerful. This is precisely why machine learning neural networks are modeled after the brain. As machine learning models—including large language models—continue to evolve and become more intelligent, applications powered by them will more closely look and sound…like us.

    This brings me to data, the information that intelligent models rely on (like our memories). Historically, databases stored data in organized tables of rows and columns—a structure resembling a spreadsheet. In this “tabular” model of data management, each cell serves a singular, specific purpose, ensuring tight control. It’s a highly organized mechanism for data storage and retrieval, but it’s inflexible and struggles to adapt to unforeseen types of information.
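    The contrast can be sketched in a few lines: a fixed-column row versus a flexible document that can absorb unforeseen information and carry an embedding for semantic retrieval. This is purely illustrative and implies no particular database.

    ```python
    # Rigid tabular record vs. a flexible document with an embedding (illustration only).
    row = ("C-1042", "Acme Corp", "2025-03-01", 1999.00)   # fixed columns: id, name, date, amount

    document = {
        "id": "C-1042",
        "name": "Acme Corp",
        "last_order": {"date": "2025-03-01", "amount": 1999.00},
        "support_notes": "Prefers email; asked about API rate limits.",  # unforeseen info fits
        "embedding": [0.12, -0.44, 0.08],   # toy vector an AI agent could use for similarity search
    }

    # The tabular row needs a schema change to hold support_notes;
    # the document (plus its embedding) simply grows to accommodate it.
    print(len(row), "fixed columns vs", len(document), "flexible fields")
    ```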

    Reply
  2. Tomi Engdahl says:

    Technique Behind ChatGPT’s AI Wins Computing’s Top Prize—But Its Creators Are Worried
    Founders Andrew Barto and Richard Sutton received the 2024 Turing Award on Wednesday, before immediately flagging concerns about AI safety.
    https://decrypt.co/308830/technique-behind-chatgpts-ai-wins-computings-top-prize-but-its-creators-are-worried

    Reply
  3. Tomi Engdahl says:

    Chinese scientists build world’s first AI chip made of carbon and it’s super fast
    Chinese researchers have developed a chip that could be a game-changer in modern computing
    https://www.scmp.com/news/china/science/article/3301229/chinese-scientists-build-worlds-first-ai-chip-made-carbon-and-its-super-fast

    Reply
  4. Tomi Engdahl says:

    IT leaders: What’s the gameplan as tech badly outpaces talent?
    https://www.cio.com/article/3841198/it-leaders-whats-the-gameplan-as-tech-badly-outpaces-talent.html

    With the pace of gen AI adoption accelerating across all business sectors and functions, required skills are increasingly in demand — but the people out there able to meet needs are in short supply.

    Dun and Bradstreet has been using AI and ML for years, and that includes gen AI, says Michael Manos, the company’s CTO. It’s a quickly-evolving field, he says, and the demand for professionals with experience in this space is exceedingly high. He’s seeing the need for professionals who can not only navigate the technology itself, but also manage increasing complexities around its surrounding architectures, data sets, infrastructure, applications, and overall security.

    “Professionals with real experience in this space are rare and can command significant compensation expectations or pursue roles of their choice,” he says. Dun & Bradstreet can attract that kind of talent, he says, because candidates are looking for roles where they can expand their own reach and scale. “We’ve been innovating with AI, ML, and LLMs for years,” he says.

    Reply
  5. Tomi Engdahl says:

    The talent shortage is particularly acute in two key areas, says Arun Chandrasekaran at Gartner. “There’s clearly a demand for more professionals,” he says. “Like someone who monitors and manages these models in production, there’s not a lot of AI engineers out there, but a mismatch between supply and demand.”

    The second area is responsible AI. “How do you build privacy, safety, security, and interoperability into the AI world?” he says. “The demand is high, even as regulations evolve in that space and the supply of professionals is limited.”

    Reply
  6. Tomi Engdahl says:

    Jaakko vented his frustration at Nordea’s chat robot – the answer left him stunned
    https://www.is.fi/taloussanomat/art-2000011100385.html

    Reply
  7. Tomi Engdahl says:

    Bloomberg:
    Baidu releases Ernie X1, an AI model that articulates its reasoning similarly to DeepSeek R1, and upgrades its flagship foundation model to Ernie 4.5, both free — The Ernie X1 model by China’s internet search leader works similarly to DeepSeek R1 — which shocked Silicon Valley …

    https://www.bloomberg.com/news/articles/2025-03-16/baidu-releases-reasoning-ai-model-to-take-on-deepseek

    Reply
  8. Tomi Engdahl says:

    Catherine Thorbecke / Bloomberg:
    The hype around AI agent Manus doesn’t represent a second DeepSeek moment, but reveals that Chinese startups can compete with US companies building AI products — The viral AI agent from a Chinese startup isn’t about research breakthroughs, it’s about creating competitive consumer products.

    https://www.bloomberg.com/opinion/articles/2025-03-13/manus-ai-pushes-the-deepseek-moment-further

    Reply
  9. Tomi Engdahl says:

    Kate Rooney / CNBC:
    Garry Tan says ~80% of YC’s W25 batch is AI focused, and the cohort is growing significantly faster than past ones, with actual revenue, thanks to “vibe coding”

    Y Combinator startups are fastest growing, most profitable in fund history because of AI
    https://www.cnbc.com/2025/03/15/y-combinator-startups-are-fastest-growing-in-fund-history-because-of-ai.html

    Y Combinator CEO Garry Tan says for about a quarter of the current YC startups, 95% of the code was written by AI.
    “What that means for founders is that you don’t need a team of 50 or 100 engineers,” Tan said. “You don’t have to raise as much. The capital goes much longer.”
    For the last nine months, the entire batch of YC companies in aggregate grew 10% per week, he said.

    Silicon Valley’s earliest stage companies are getting a major boost from artificial intelligence.

    Startup accelerator Y Combinator — known for backing Airbnb, Dropbox and Stripe — this week held its annual demo day in San Francisco, where founders pitched their startups to an auditorium of potential venture capital investors.

    Y Combinator CEO Garry Tan told CNBC that this group is growing significantly faster than past cohorts and with actual revenue.

    “It’s not just the number one or two companies — the whole batch is growing 10% week on week,” said Tan, who is also a Y Combinator alum. “That’s never happened before in early-stage venture.”

    That growth spurt is thanks to leaps in artificial intelligence, Tan said.

    Reply
  10. Tomi Engdahl says:

    Anthony Ha / TechCrunch:
    Bluesky proposes letting users indicate if their data can be used for AI training, web archiving, and more; critics see it as a reversal of its prior statements

    Bluesky users debate plans around user data and AI training
    https://techcrunch.com/2025/03/15/bluesky-users-debate-plans-around-user-data-and-ai-training/

    Social network Bluesky recently published a proposal on GitHub outlining new options it could give users to indicate whether they want their posts and data to be scraped for things like generative AI training and public archiving.

    CEO Jay Graber discussed the proposal earlier this week, while on-stage at South by Southwest, but it attracted fresh attention on Friday night, after she posted about it on Bluesky. Some users reacted with alarm to the company’s plans, which they saw as a reversal of Bluesky’s previous insistence that it won’t sell user data to advertisers and won’t train AI on user posts.

    “Oh, hell no!” the user Sketchette wrote. “The beauty of this platform was the NOT sharing of information. Especially gen AI. Don’t you cave now.”

    0008: User Intents for Data Reuse
    https://github.com/bluesky-social/proposals/tree/main/0008-user-intents

    Reply
  11. Tomi Engdahl says:

    Molly White / Citation Needed:
    The real threat of AI models training on open-access material is that they may bleed free repositories dry by draining resources and not providing attribution

    “Wait, not like that”: Free and open access in the age of generative AI
    https://www.citationneeded.news/free-and-open-access-in-the-age-of-generative-ai/

    The real threat isn’t AI using open knowledge — it’s AI companies killing the projects that make knowledge free

    The visions of the open access movement have inspired countless people to contribute their work to the commons: a world where “every single human being can freely share in the sum of all knowledge” (Wikimedia), and where “education, culture, and science are equitably shared as a means to benefit humanity” (Creative Commons [a]).

    [a] Creative Commons is a non-profit that releases the Creative Commons licenses: easily reusable licenses that broadly release some rights so that anyone can share and/or build upon the works under specified terms.

    But there are scenarios that can introduce doubt for those who contribute to free and open projects like the Wikimedia projects, or who independently release their own works under free licenses. I call these “wait, no, not like that” moments.

    When a passionate Wikipedian discovers their carefully researched article has been packaged into an e-book and sold on Amazon for someone else’s profit? Wait, no, not like that.

    When a developer of an open source software project sees a multi-billion dollar tech company rely on their work without contributing anything back? Wait, no, not like that.

    When a nature photographer discovers their freely licensed wildlife photo was used in an NFT collection minted on an environmentally destructive blockchain? Wait, no, not like that.

    And perhaps most recently, when a person who publishes their work under a free license discovers that work has been used by tech mega-giants to train extractive, exploitative large language models? Wait, no, not like that.

    These reactions are understandable. When we freely license our work, we do so in service of those goals: free and open access to knowledge and education. But when trillion dollar companies exploit that openness while giving nothing back, or when our work enables harmful or exploitative uses, it can feel like we’ve been naïve. The natural response is to try to regain control.

    This is where many creators find themselves today, particularly in response to AI training. But the solutions they’re reaching for — more restrictive licenses, paywalls, or not publishing at all — risk destroying the very commons they originally set out to build.

    The first impulse is often to try to tighten the licensing, maybe by switching away to something like the Creative Commons’ non-commercial (and thus, non-free) license. When NFTs enjoyed a moment of popularity in the early 2020s, some artists looked to Creative Commons in hopes that they might declare NFTs fundamentally incompatible with their free licenses (they didn’t [1]). The same thing happened again with the explosion of generative AI companies training models on CC-licensed works, and some were disappointed to see the group take the stance that, not only do CC licenses not prohibit AI training wholesale, AI training should be considered non-infringing by default from a copyright perspective. [2]

    But the trouble with trying to continually narrow the definitions of “free” is that it is impossible to write a license that will perfectly prohibit each possibility that makes a person go “wait, no, not like that” while retaining the benefits of free and open access. If that is truly what a creator wants, then they are likely better served by a traditional, all rights reserved model in which any prospective reuser must individually negotiate terms with them; but this undermines the purpose of free, and restricts permitted reuse only to those with the time, means, and bargaining power to negotiate on a case-by-case basis. [b]

    There’s also been an impulse by creators concerned about AI to dramatically limit how people can access their work. Some artists have decided it’s simply not worthwhile to maintain an online gallery of their work when that makes it easily accessible for AI training. Many have implemented restrictive content gates — paywalls, registration-walls, “are you a human”-walls, and similar — to try to fend off scrapers. This too closes off the commons, making it more challenging or expensive for those “every single human beings” described in open access manifestos to access the material that was originally intended to be common goods.

    Instead of worrying about “wait, not like that”, I think we need to reframe the conversation to “wait, not only like that” or “wait, not in ways that threaten open access itself”. The true threat from AI models training on open access material is not that more people may access knowledge thanks to new modalities. It’s that those models may stifle Wikipedia and other free knowledge repositories, benefiting from the labor, money, and care that goes into supporting them while also bleeding them dry. It’s that trillion dollar companies become the sole arbiters of access to knowledge after subsuming the painstaking work of those who made knowledge free to all, killing those projects in the process.

    Irresponsible AI companies are already imposing huge loads on Wikimedia infrastructure, which is costly both from a pure bandwidth perspective, but also because it requires dedicated engineers to maintain and improve systems to handle the massive automated traffic. And AI companies that do not attribute their responses or otherwise provide any pointers back to Wikipedia prevent users from knowing where that material came from, and do not encourage those users to go visit Wikipedia, where they might then sign up as an editor, or donate after seeing a request for support. (This is most AI companies, by the way. Many AI “visionaries” seem perfectly content to promise that artificial superintelligence is just around the corner, but claim that attribution is somehow a permanently unsolvable problem.)

    And while I rely on Wikipedia as an example here, the same goes for any website containing freely licensed material, where scraping benefits AI companies at often extreme cost to the content hosts. This isn’t just about strain on one individual project, it’s about the systematic dismantling of the infrastructure that makes open knowledge possible.

    Anyone at an AI company who stops to think for half a second should be able to recognize they have a vampiric relationship with the commons.

    Even if AI companies don’t care about the benefit to the common good, it shouldn’t be hard for them to understand that by bleeding these projects dry, they are destroying their own food supply.

    And yet many AI companies seem to give very little thought to this, seemingly looking only at the months in front of them rather than operating on years-long timescales.

    Some might argue that if AI companies are already ignoring copyright and training on all-rights-reserved works, they’ll simply ignore these mechanisms too. But there’s a crucial difference: rather than relying on murky copyright claims or threatening to expand copyright in ways that would ultimately harm creators, we can establish clear legal frameworks around consent and compensation that build on existing labor and contract law. Just as unions have successfully negotiated terms of use, ethical engagement, and fair compensation in the past, collective bargaining can help establish enforceable agreements between AI companies, those freely licensing their works, and communities maintaining open knowledge repositories. These agreements would cover not just financial compensation for infrastructure costs, but also requirements around attribution, ethical use, and reinvestment in the commons.

    The future of free and open access isn’t about saying “wait, not like that” — it’s about saying “yes, like that, but under fair terms”. With fair compensation for infrastructure costs. With attribution and avenues by which new people can discover and give back to the underlying commons. With deep respect for the communities that make the commons — and the tools that build off them — possible. Only then can we truly build that world where every single human being can freely share in the sum of all knowledge.

    Reply
  12. Tomi Engdahl says:

    John Gruber / Daring Fireball:
    A developer who had a one-day Swift Assist session in late 2024 says its UI is complete but the results were not very good, and it falls apart on complex tasks

    Apple Did Demo Swift Assist at WWDC Last Year, and Has Shown It, Under NDA, Since Then
    https://daringfireball.net/linked/2025/03/15/swift-assist-voorhees

    Reply
  13. Tomi Engdahl says:

    Jennifer Elias / CNBC:
    Google and other companies are considering bringing back in-person job interviews, as some startups sell AI tools that let engineers cheat in virtual interviews

    Meet the 21-year-old helping coders use AI to cheat in Google and other tech job interviews
    https://www.cnbc.com/2025/03/09/google-ai-interview-coder-cheat.html

    As artificial intelligence becomes more advanced, employers are trying to build workarounds to prevent candidates from cheating in virtual job interviews but are struggling to keep up.
    Columbia University student Chungin “Roy” Lee said he used AI to game a popular virtual interview platform used by tech companies and later received several internship offers.
    Google is among companies considering moving away from virtual interviews as AI becomes more popular among candidates as a way to cheat the process.

    Reply
  14. Tomi Engdahl says:

    Christopher Mims / Wall Street Journal:
    Directors Anthony and Joe Russo say they’re building a high-tech studio aiming to help artists use AI as a creative tool to make films, shows, and video games

    The Russo Brothers Upended Hollywood Once. Now They Aim to Do It Again.
    The ‘Avengers’ directors are building a high-tech studio to help artists use AI to make films, shows and videogames with potentially smaller budgets
    https://www.wsj.com/tech/ai/the-russo-brothers-upended-hollywood-once-now-they-aim-to-do-it-again-8611e41b?st=pzEhJS&reflink=desktopwebshare_permalink

    Reply
  15. Tomi Engdahl says:

    Emma Jacobs / Financial Times:
    A look at “coachbots”, AI-powered virtual coaches like CoachHub’s Aimy and Valence’s Nadia that advise on salary talks, work habits, and role-play conversations
    https://www.ft.com/content/ede799c4-8a1c-4c39-8a9b-01899d5b6754

    Reply
  16. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Some developers say licenses for “open” AI models like Google’s Gemma and Meta’s Llama have limits that restrict commercial use without fear of legal reprisal

    ‘Open’ AI model licenses often carry concerning restrictions
    https://techcrunch.com/2025/03/14/open-ai-model-licenses-often-carry-concerning-restrictions/

    This week, Google released a family of open AI models, Gemma 3, that quickly garnered praise for their impressive efficiency. But as a number of developers lamented on X, Gemma 3’s license makes commercial use of the models a risky proposition.

    It’s not a problem unique to Gemma 3. Companies like Meta also apply custom, non-standard licensing terms to their openly available models, and the terms present legal challenges for companies. Some firms, especially smaller operations, worry that Google and others could “pull the rug” on their business by asserting the more onerous clauses.

    “The restrictive and inconsistent licensing of so-called ‘open’ AI models is creating significant uncertainty, particularly for commercial adoption,” Nick Vidal, head of community at the Open Source Initiative, a long-running institution aiming to define and “steward” all things open source, told TechCrunch. “While these models are marketed as open, the actual terms impose various legal and practical hurdles that deter businesses from integrating them into their products or services.”

    Open model developers have their reasons for releasing models under proprietary licenses as opposed to industry-standard options like Apache and MIT. AI startup Cohere, for example, has been clear about its intent to support scientific — but not commercial — work on top of its models.

    But Gemma and Meta’s Llama licenses in particular have restrictions that limit the ways companies can use the models without fear of legal reprisal.

    Meta, for instance, prohibits developers from using the “output or results” of Llama 3 models to improve any model besides Llama 3 or “derivative works.” It also prevents companies with over 700 million monthly active users from deploying Llama models without first obtaining a special, additional license.

    Gemma’s license is generally less burdensome. But it does grant Google the right to “restrict (remotely or otherwise) usage” of Gemma that Google believes is in violation of the company’s prohibited use policy or “applicable laws and regulations.”

    These terms don’t just apply to the original Llama and Gemma models. Models based on Llama or Gemma must also adhere to the Llama and Gemma licenses, respectively. In Gemma’s case, that includes models trained on synthetic data generated by Gemma.

    Reply
  17. Tomi Engdahl says:


    Elon Musk’s Grok AI Determines Which Political Party Has Performed Better for the Economy
    “Data speaks louder than party lines.”
    https://futurism.com/elon-musk-grok-economy-party?fbclid=IwY2xjawJE7UhleHRuA2FlbQIxMQABHbDw3AvArueW783ecpuNvG2VZAsf69GCogWGPcTIWXuwhIBMtnk5onvZUg_aem_tSZMS_Wnwa6m-0pYVPNtPg

    Elon Musk’s “anti-woke” AI chatbot Grok is now apparently a Democratic operative.

    That’s our takeaway, at least, from Grok’s response to progressive activist Alex Cole after he tagged the chatbot on X and asked it whether Republicans or Democrats were “better for the economy in the last 30 years.”

    Grok was so eager to shill for Elon Musk’s opposition that instead of staying within the timeframe Cole asked, it summarized politics and economics since World War II — a period during which “Democrats have outperformed Republicans on the economy,” the chatbot noted.

    “[Gross domestic product] growth averages 4.23% under Dems vs. 2.36% under GOP,” Grok continued. “Job creation? 1.7% yearly for Dems, 1.0% for Republicans. Also, 9 of the last 10 recessions started under Republican presidents.”

    Reply
  18. Tomi Engdahl says:

    Machine learning dominates the AI market
    https://etn.fi/index.php/13-news/17278-koneoppiminen-dominoi-tekoaelymarkkinaa

    The machine learning market is growing at an unprecedented pace and will pass the $110 billion milestone in 2025. Its growth rate is 30 percent faster than the rest of the AI market, which strengthens machine learning’s position as the most central area of AI.

    According to data presented by AltIndex.com, the machine learning market has more than doubled since 2020, and the growth continues apace. As demand for AI solutions increases, machine learning has become the fastest-developing segment of the AI market, clearly exceeding the growth figures of other AI technologies.

    Although the entire AI industry has grown strongly, the development of machine learning has been exceptional. It has fundamentally transformed many industries, such as finance and healthcare, brought breakthroughs to predictive analytics and generative AI, and attracted record investments from both large technology companies and startups.

    In 2025, the market value of machine learning will rise to $113 billion according to Statista’s Market Insights research, a growth of 42.6%. Although growth slows from 2024’s staggering 56 percent, machine learning remains clearly the fastest-growing segment of the AI market.

    For comparison, natural language processing (NLP), the second-largest segment of the AI market, is growing by only 32% per year. Growth in AI robotics and autonomous systems also stays at around 32%, while sensor technology and autonomous systems grow by only 20% and computer vision by 13.5%.

    Machine Learning to Surge Past $110 Billion in 2025, Growing 30% Faster Than the Rest of the AI Market
    https://altindex.com/news/machine-learing-to-surge-growing-fast

    The machine learning market has seen explosive growth in recent years, more than doubling in size since 2020. As global demand for AI-driven solutions accelerates, machine learning remains the fastest-growing sector in the AI revolution, expanding significantly faster than the broader market.

    According to data presented by AltIndex.com, the machine learning industry is on track to surpass the $110 billion mark in 2025, growing 30% faster than the rest of the AI market.

    Reply
  19. Tomi Engdahl says:

    RAG vs. CAG: Solving Knowledge Gaps in AI Models
    https://www.youtube.com/watch?v=HdafI0t3sEY

    What if your AI can’t answer who won the Oscars last year?
    Martin Keen explains how RAG (Retrieval-Augmented Generation) and CAG (Cache-Augmented Generation) address knowledge gaps in AI.
    Discover their strengths in real-time retrieval, scalability, and efficient workflows for smarter AI systems.
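    A toy sketch of the RAG side of that comparison: retrieve the most relevant snippets, put them into the prompt, then ask the model. The keyword retriever and call_llm() below are placeholders for vector search and a hosted LLM. CAG, by contrast, preloads the knowledge into the model’s context (and its cache) ahead of time instead of retrieving per query.

    ```python
    # Toy retrieval-augmented generation (RAG) flow; retrieval and the LLM call are stand-ins.
    DOCS = [
        "The 2024 Academy Award for Best Picture went to Oppenheimer.",
        "The 2023 Academy Award for Best Picture went to Everything Everywhere All at Once.",
    ]

    def retrieve(question: str, k: int = 1) -> list[str]:
        words = question.lower().split()
        return sorted(DOCS, key=lambda d: -sum(w in d.lower() for w in words))[:k]

    def call_llm(prompt: str) -> str:
        return f"[model answer grounded in: {prompt[:60]}...]"   # stand-in for a real LLM call

    def rag_answer(question: str) -> str:
        context = "\n".join(retrieve(question))
        return call_llm(f"Context:\n{context}\n\nQuestion: {question}")

    print(rag_answer("Who won Best Picture at the 2024 Oscars?"))
    ```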

    Reply
  20. Tomi Engdahl says:

    On Tuesday, OpenAI released new tools designed to help developers and enterprises build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks.

    The tools are part of OpenAI’s new Responses API, which lets businesses develop custom AI agents that can perform web searches, scan through company files, and navigate websites, much like OpenAI’s Operator product.

    Read more from Maxwell Zeff here: https://tcrn.ch/41F1nJi
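    A short sketch of what building on the Responses API looks like, following the examples OpenAI published at launch; treat the exact parameter and tool names as subject to change, and note it requires the openai Python package and an API key.

    ```python
    # Sketch of a simple web-searching request via OpenAI's Responses API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.responses.create(
        model="gpt-4o",
        tools=[{"type": "web_search_preview"}],   # built-in web search tool
        input="Find one recent article about AI agents and summarize it in two sentences.",
    )

    print(response.output_text)
    ```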


    Reply
  21. Tomi Engdahl says:

    Bloomberg:
    Sources: Cognition, maker of AI coding assistant Devin, raised hundreds of millions led by 8VC at a ~$4B valuation, doubling its $2B valuation from April 2024
    https://www.bloomberg.com/news/articles/2025-03-18/cognition-ai-hits-4-billion-valuation-in-deal-led-by-lonsdale-s-firm

    Reply
  22. Tomi Engdahl says:

    Todd Spangler / Variety:
    In an open letter to the OSTP, Ben Stiller and 400+ Hollywood creatives urge President Trump to not let Google, OpenAI, and others “exploit” copyrighted works

    Ben Stiller, Mark Ruffalo and More Than 400 Hollywood Names Urge Trump to Not Let AI Companies ‘Exploit’ Copyrighted Works
    https://variety.com/2025/digital/news/hollywood-urges-trump-block-ai-exploit-copyrights-1236339750/

    Reply
  23. Tomi Engdahl says:

    Jay Peters / The Verge:
    Roblox open sources Cube 3D, the first version of its AI foundation model for generating 3D objects, trained on licensed and public datasets and its own data

    Roblox’s new AI model can generate 3D objects
    The model, Cube 3D, creates 3D models from a text prompt.
    https://www.theverge.com/news/630977/roblox-cube-3d-objects-mesh-ai-text-prompt

    Reply
  24. Tomi Engdahl says:

    Sean Michael Kerner / VentureBeat:
    Zoom unveils AI Companion 2.0, which adds agentic AI features, including calendar management, meeting tools, and document creation, rolling out by July 2025

    Inside Zoom’s AI evolution: From basic meeting tools to agentic productivity platform powered by LLMs and SLMs
    https://venturebeat.com/ai/inside-zooms-ai-evolution-from-basic-meeting-tools-to-agentic-productivity-platform-powered-by-llms-and-slms/

    Reply
  25. Tomi Engdahl says:

    Qianer Liu / The Information:
    Sources: Google plans to partner with Taiwanese chipmaker MediaTek to design and produce its next-gen TPUs, set for 2026, but will still work with Broadcom

    Google Taps MediaTek for Cheaper AI Chips
    https://www.theinformation.com/articles/google-taps-mediatek-cheaper-ai-chips

    Reply
  26. Tomi Engdahl says:

    Jason Koebler / 404 Media:
    AI slop, shared by thousands of prolific accounts, is brute forcing virality, and platforms like Meta embrace it; some AI videos get 350M+ views

    AI Slop Is a Brute Force Attack on the Algorithms That Control Reality
    Generative AI spammers are brute forcing the internet, and it is working.
    https://www.404media.co/ai-slop-is-a-brute-force-attack-on-the-algorithms-that-control-reality/

    Consider, for a moment, that this AI-generated video of a bizarre creature turning into a spider, turning into a nightmare giraffe inside of a busy mall has been viewed 362 million times. That means this short reel has been viewed more times than every single article 404 Media has ever published, combined and multiplied tens of times.

    Reply
  27. Tomi Engdahl says:

    The Information:
    Liam Fedus, VP of research in charge of post-training at OpenAI, is leaving the company to found a startup focused on using AI to discover new materials

    OpenAI Post-Training Head Departs
    https://www.theinformation.com/briefings/openai-post-training-head-departs

    Reply
  28. Tomi Engdahl says:

    Mike Pastore / Search Engine Land:
    Adobe: traffic from AI sources to US retail sites in February rose 1,200% from July 2024; 39% of US consumers used generative AI for shopping, 53% plan to do so

    Generative AI use surging among consumers for online shopping: Report
    Compared to non-AI traffic sources, users engage more, view more pages, and bounce at lower rates – but they are also less likely to convert.
    https://searchengineland.com/generative-ai-surging-online-shopping-report-453312

    Traffic from generative AI surged to U.S. retail sites over the holiday season and that trend has continued into 2025, according to new Adobe data.

    Between Nov. 1 and Dec. 31, traffic from generative AI sources increased by 1,300% compared to the year prior (up 1,950% YoY on Cyber Monday).

    This trend continued beyond the holiday season, Adobe found. In February, traffic from generative AI sources increased by 1,200% compared to July 2024.

    The percentages are high because generative AI tools are so new. ChatGPT debuted its research preview on Nov. 30, 2022. Generative AI traffic remains modest compared to other channels, such as paid search or email, but the growth is notable: it has doubled every two months since September 2024.
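
    As a rough back-of-the-envelope check (a sketch in Python, not Adobe's methodology): Adobe says traffic has doubled every two months since September 2024, and if a similar pace is assumed from the July 2024 baseline, seven months of doubling every two months implies roughly an 11x multiple, in the same ballpark as the reported 1,200% (13x) rise.

    # Sanity check of the reported growth figures, assuming steady exponential
    # growth from the July 2024 baseline (an assumption; the doubling claim
    # itself only covers September 2024 onward).
    months_elapsed = 7        # July 2024 -> February 2025
    doubling_period = 2       # "doubled every two months"

    implied_multiple = 2 ** (months_elapsed / doubling_period)   # ~11.3x
    implied_increase_pct = (implied_multiple - 1) * 100          # ~1,030%

    reported_increase_pct = 1200                                 # Adobe: +1,200% vs. July 2024
    reported_multiple = 1 + reported_increase_pct / 100          # 13x

    print(f"Implied by doubling every 2 months: {implied_multiple:.1f}x (+{implied_increase_pct:.0f}%)")
    print(f"Reported by Adobe: {reported_multiple:.0f}x (+{reported_increase_pct}%)")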

    By the numbers. Adobe's survey of 5,000 U.S. consumers found how shoppers are using generative AI:

    39% used generative AI for online shopping, with 53% planning to do so in 2025.
    55% use generative AI for conducting research.
    47% use it for product recommendations.
    43% use generative AI for seeking deals.
    35% for getting gift ideas.
    35% for finding unique products.
    33% for creating shopping lists.

    One of the most interesting findings from Adobe covers what happens once generative AI users land on a retail website. Compared to non-AI traffic sources (including paid search, affiliates and partners, email, organic search, social media), generative AI traffic shows:

    More engagement: Adobe found 8% higher engagement as individuals linger on the site for longer.
    More pages: Generative AI visitors browse 12% more pages per visit.
    Fewer bounces: They have a 23% lower bounce rate.

    Yes, but. While engaged traffic is good, conversions are better.

    Adobe found that traffic from generative AI sources is 9% less likely to convert than traffic from other sources.
    However, the data shows that this gap has narrowed significantly since July 2024, indicating growing consumer comfort.

    Reply
  29. Tomi Engdahl says:

    Nikkei Asia:
    PitchBook: China led Asia with 715 AI VC deals in 2024, totaling $7.3B, South Korea followed with 308 deals worth $1.8B, and India with 306 deals totaling $1.7B
    https://asia.nikkei.com/Business/Technology/China-venture-capitalists-hold-back-on-AI-deals-despite-DeepSeek-buzz

    Reply
  30. Tomi Engdahl says:

    Not a fan of AI? A recent Windows update can actually remove Microsoft’s Copilot assistant from the OS.

    As the company promotes generative AI to the public, Redmond accidentally introduced a software bug last week that can delete the Copilot program from PCs running Windows 10 or 11.

    The problem affects the March 11th updates for both operating systems. “We’re aware of an issue with the Microsoft Copilot app affecting some devices. The app is unintentionally uninstalled and unpinned from the taskbar,” the company wrote on its support pages.

    In a bit of irony, some Windows users have responded “I wish this wasn’t a bug” after learning that Microsoft has been mistakenly deleting the Copilot app.

    “Amazing, Microsoft fixes their own bloat,” joked one user on Reddit. “Finally a good feature,” wrote another.

    More at PCMag
    https://www.pcmag.com/news/oops-update-accidentally-removes-copilot-from-windows

    Reply
  31. Tomi Engdahl says:

    Bloomberg:
    Tencent releases five AI tools to turn text or images into 3D visuals and graphics, based on Hunyuan3D-2.0, and plans to integrate DeepSeek’s R1 into WeChat
    https://www.bloomberg.com/news/articles/2025-03-18/tencent-touts-open-source-ai-models-to-turn-text-into-3d-visuals

    Reply
  32. Tomi Engdahl says:

    Europe will get a total of 13 AI factories
    https://etn.fi/index.php/13-news/17285-eurooppaan-tulee-kaikkiaan-13-tekoaelytehdasta

    European AI development is taking a big leap forward as the EU’s EuroHPC Joint Undertaking (EuroHPC JU) expands its AI factory initiative with six new factories. The new AI factories will be established in Austria, Bulgaria, France, Germany, Poland, and Slovenia.

    With these new units, Europe will have a total of 13 AI factories, backed by world-class supercomputers. The factories aim to strengthen Europe’s AI expertise by giving companies, researchers, and the public sector access to advanced computing infrastructure, datasets, and training programs.

    The AI factories to be established in Germany (Jülich) and France (Paris) will draw on Europe’s first exascale supercomputers, JUPITER and Alice Recoque. The other new factories will focus on AI-optimized systems and AI application development across different industries.

    The successor to the LUMI supercomputer in Kajaani, Finland, is already part of this network. It offers Finnish and European AI researchers and companies enormous computing capacity and the infrastructure needed to develop innovative AI applications.

    Reply
  33. Tomi Engdahl says:

    AI Is Turbocharging Organized Crime, EU Police Agency Warns

    AI and other technologies “are a catalyst for crime, and drive criminal operations’ efficiency by amplifying their speed, reach, and sophistication,” the report said.

    https://www.securityweek.com/ai-is-turbocharging-organized-crime-eu-police-agency-warns/

    Reply
  34. Tomi Engdahl says:

    Tobias Mann / The Register:
    Nvidia updates the DGX Station, begins taking reservations for the DGX Spark box, formerly Project Digits, and unveils the RTX PRO workstation and server GPUs.

    Nvidia wants to put a GB300 Superchip on your desk with DGX Station, Spark PCs
    Or a 96 GB RTX PRO in your desktop or server
    https://www.theregister.com/2025/03/18/gtc_frame_nvidias_budget_blackwell/

    After a Hopper hiatus, Nvidia’s DGX Station returns, now armed with an all-new desktop-tuned Grace-Blackwell Ultra Superchip capable of churning out 20 petaFLOPS of AI performance.

    The system marks the first time Nvidia has updated its DGX Station lineup since the Ampere GPU generation. Its last DGX Station was an A100-based system with quad GPUs and a single AMD Epyc processor that was cooled by a custom refrigerant loop.

    By comparison, Nvidia’s new Blackwell-based systems are far simpler, powered by a single Blackwell Ultra GPU and, as its GB300 codename would suggest, are backed by a Grace CPU. In total, we’re told the system will feature 784 GB of “unified memory between the CPU’s LPDDR5x DRAM and the GPUs’ HBM3e.”

    Reply
  35. Tomi Engdahl says:

    Kif Leswing / CNBC:
    Nvidia unveils Blackwell Ultra, a family of AI chips shipping in 2025, and Vera Rubin, its next-gen chip featuring Nvidia’s first custom CPU, slated for H2 2026 — Nvidia announced new chips for building and deploying artificial intelligence models at its annual GTC conference on Tuesday.

    Nvidia announces Blackwell Ultra and Rubin AI chips
    https://www.cnbc.com/2025/03/18/nvidia-announces-blackwell-ultra-and-vera-rubin-ai-chips-.html

    Reply
  36. Tomi Engdahl says:

    Larry Dignan / Constellation Research:
    Nvidia announces Dynamo, an “operating system of the AI factory” that it says can boost token output by 30x per GPU when running DeepSeek-R1 on a large cluster — Nvidia launched Blackwell Ultra, which aims to boost training and test time inference, as the GPU giant makes the case …

    Nvidia launches Blackwell Ultra, Dynamo, outlines roadmap through 2027
    https://www.constellationr.com/blog-news/insights/nvidia-launches-blackwell-ultra-dynamo-outlines-roadmap-through-2027

    Mike Wheatley / SiliconANGLE:
    Nvidia debuts Nvidia AI-Q Blueprint, a framework for connecting knowledge bases to agents, Llama Nemotron AI models with advanced reasoning capabilities, more

    Nvidia’s new reasoning models and building blocks pave way for advanced AI agents
    https://siliconangle.com/2025/03/18/nvidias-new-reasoning-models-building-blocks-pave-way-next-gen-ai-agents/

    Nvidia Corp. is looking to capitalize on the agentic artificial intelligence trend not only by providing the underlying infrastructure, but also the models that power these next-generation autonomous agents.

    At its GTC 2025 annual conference today, the company unveiled a new family of Llama Nemotron AI models with advanced reasoning capabilities. Based on Meta Platforms Inc.’s renowned open-source Llama models, they’re designed to provide developers with a strong foundation on which they can build advanced AI agents that perform tasks on behalf of their users with minimal supervision.

    Nvidia explained that it basically just took Meta’s Llama models and improved them using post-training enhancement techniques to increase their multistep math, coding, complex decision-making and reasoning skills. Employing some careful refinements, Nvidia claims that the Llama Nemotron AI models are 20% more accurate than the Llama models they’re based on, while their inference speed has been increased by an impressive five times, enabling them to handle many more complex tasks with lower operational costs.

    Reply
  37. Tomi Engdahl says:

    NVIDIA on YouTube:
    A recording of the Nvidia GTC 2025 keynote at the SAP Center in San Jose, California

    GTC March 2025 Keynote with NVIDIA CEO Jensen Huang
    https://www.youtube.com/watch?v=_waPvOwL9Z8

    Reply
  38. Tomi Engdahl says:

    CNBC:
    GM partners with Nvidia to use Nvidia Drive AGX, Omniverse with Cosmos, and more across driver assistance systems and factory planning and robotics

    Nvidia, GM announce deal for AI, factories and next-gen vehicles
    https://www.cnbc.com/2025/03/18/nvidia-gm-deals-ai-factories-vehicles.html

    Key Points

    General Motors and Nvidia agreed to a strategic collaboration that includes the automaker utilizing several products and artificial intelligence services.
    GM has been using Nvidia graphics processing units, or GPUs, for training AI models across various areas, including simulation and validation. The new business expands to in-vehicle hardware, automotive plant design and other operations, the companies said.
    GM declined to disclose a cost for the new tools with Nvidia, which has been attempting to diversify its automotive business.

    Reply
  39. Tomi Engdahl says:

    Michael Peel / Financial Times:
    Microsoft partners with Swiss startup Inait to deploy an AI model that simulates mammal brain reasoning to advance fields like financial trading and robotics
    https://www.ft.com/content/37e44758-04a6-450b-abe3-f51f1d7d972a

    Reply
  40. Tomi Engdahl says:

    June Yoon / Financial Times:
    Chinese firms like Alibaba, Baidu and DeepSeek are open sourcing AI models to bypass US curbs, decentralize development, and tap global talent for refinement
    https://www.ft.com/content/13df6250-dffb-40fc-bb79-309764fa3905

    Reply
  41. Tomi Engdahl says:

    Makena Kelly / Wired:
    Current and former employees say the FTC removed 300+ business guidance blogs from the Biden era, including info on AI consumer protection and Big Tech lawsuits
    https://www.wired.com/story/federal-trade-commission-removed-blogs-critical-of-ai-amazon-microsoft/

    Reply
  42. Tomi Engdahl says:

    Kamila Wojciechowska / Android Authority:
    A source details how Google built the Pixel 10’s Tensor G5 chip without Samsung’s help, using in-house and off-the-shelf IP, and partnering with Arm and others.

    Exclusive: How Google built the Pixel 10’s Tensor G5 without Samsung’s help
    The Tensor G5 won’t be all that different from the previous generations.
    https://www.androidauthority.com/how-google-built-tensor-g5-3535489/

    Reply
  43. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Google updates its Gemini chatbot, adding Canvas, a space for users to create, refine, and share writing and coding projects, and Audio Overview from NotebookLM

    Google brings a ‘canvas’ feature to Gemini, plus Audio Overview
    https://techcrunch.com/2025/03/18/google-brings-a-canvas-feature-to-gemini-plus-audio-overview/

    Reply
  44. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Mark Zuckerberg says Meta’s Llama models have been downloaded 1B times since their 2023 debut, up from 650M downloads in early December 2024

    Mark Zuckerberg says that Meta’s Llama models have hit 1B downloads
    https://techcrunch.com/2025/03/18/mark-zuckerberg-says-that-metas-llama-models-have-hit-1b-downloads/

    In a brief message Tuesday morning on Threads, Meta CEO Mark Zuckerberg said the company’s “open” AI model family, Llama, hit 1 billion downloads. That’s up from 650 million downloads as of early December 2024 — a ~53% increase over a roughly three-month period.

    Llama, which powers Meta’s AI assistant, Meta AI, across the tech giant’s various platforms, including Facebook, Instagram, and WhatsApp, is a part of Meta’s yearslong bid to foster a wide-ranging AI product ecosystem. The company makes the models, as well as the tools required to fine-tune and customize them, available for free under a proprietary license.

    Some developers and companies have taken issue with the Llama license terms, which are somewhat commercially restrictive. Yet Llama has achieved widespread success since launching in 2023. Companies including Spotify, AT&T, and DoorDash use Llama models in production today.

    That’s not to suggest that Meta hasn’t faced setbacks.

    Reply
