AI trends 2025

AI is developing all the time. Here are picks from several articles on what is expected to happen in and around AI in 2025. The texts are excerpts from the articles, edited and in some cases translated for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
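The plan–act–reflect loop described above can be sketched in a few lines. This is an illustrative toy only: the stub planner and the made-up `search_docs` tool stand in for the LLM calls and real integrations an actual agent would use.

```python
# Minimal sketch of an agentic loop: plan, use a tool, reflect, iterate.
# The "planner" here is a stub; a real system would call an LLM.
def plan(objective, history):
    # Stub planner: pick the next tool to try for this objective.
    if not history:
        return ("search_docs", objective)
    return None  # objective reached, stop

TOOLS = {
    "search_docs": lambda query: f"3 documents matching '{query}'",
}

def run_agent(objective, max_steps=5):
    history = []
    for _ in range(max_steps):
        step = plan(objective, history)
        if step is None:           # reflect: planner judges the goal met
            break
        tool, arg = step
        result = TOOLS[tool](arg)  # act: invoke the chosen tool
        history.append((tool, arg, result))
    return history

print(run_agent("reset a user password"))
```

In a real multi-agent system, the planner would also be able to delegate sub-tasks to other agents rather than only calling tools.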
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
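As a rough illustration of the PINN idea (not from the article), the sketch below uses the residual of a known physical law, the ODE u'(t) = -u(t) with u(0) = 1, as the training signal instead of labeled data; the function names are made up.

```python
import numpy as np

# Physics-informed loss for the ODE u'(t) = -u(t), u(0) = 1.
# Instead of fitting labeled data, the loss penalizes violation of the
# physics (the ODE residual) plus the initial condition.
def physics_loss(u, ts, h=1e-4):
    residual = (u(ts + h) - u(ts - h)) / (2 * h) + u(ts)  # u' + u ≈ 0
    ic = (u(np.array([0.0]))[0] - 1.0) ** 2               # u(0) = 1
    return np.mean(residual ** 2) + ic

ts = np.linspace(0.0, 2.0, 50)
good = lambda t: np.exp(-t)   # exact solution: near-zero residual
bad = lambda t: 1.0 - t       # violates the ODE away from t = 0

print(physics_loss(good, ts))  # ≈ 0
print(physics_loss(bad, ts))   # clearly larger
```

In a real PINN, `u` would be a neural network and the residual would be computed with automatic differentiation, but the structure of the loss is the same.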
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as they did when moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could solve only 13% of the problems on a qualifying exam for the International Mathematical Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
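A multi-model router can be as simple as a constraint check over a model table: pick the cheapest model that meets the task’s quality and latency needs. The sketch below is illustrative only — the model names, prices, latencies, and quality scores are invented.

```python
# Sketch of multi-model routing: choose an LLM per request based on
# task type, latency budget, and cost. All model specs are made up.
MODELS = {
    "small-fast":   {"cost_per_1k": 0.0002, "latency_ms": 150,  "quality": 2},
    "mid-general":  {"cost_per_1k": 0.003,  "latency_ms": 600,  "quality": 3},
    "big-reasoner": {"cost_per_1k": 0.06,   "latency_ms": 4000, "quality": 5},
}

def route(task, max_latency_ms):
    min_quality = {"classify": 2, "summarize": 3, "plan": 5}[task]
    # Cheapest model that satisfies both quality and latency constraints.
    candidates = [
        (spec["cost_per_1k"], name) for name, spec in MODELS.items()
        if spec["quality"] >= min_quality and spec["latency_ms"] <= max_latency_ms
    ]
    return min(candidates)[1] if candidates else None

print(route("classify", 500))    # small-fast
print(route("summarize", 1000))  # mid-general
print(route("plan", 10000))      # big-reasoner
```

Because the routing decision is isolated in one function, swapping providers or adding models changes only the table, which is the flexibility the CIOs quoted above are after.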
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.
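The economics are easy to sanity-check with back-of-envelope arithmetic. All numbers below are illustrative assumptions, not OpenAI’s actual figures: a flat subscription loses money once usage times per-query inference cost exceeds the monthly price.

```python
# Back-of-envelope check (all numbers are illustrative assumptions):
# a flat-rate subscription goes underwater when heavy usage multiplied
# by per-query inference cost exceeds the monthly price.
subscription = 200.00      # USD per month (the ChatGPT Pro price point)
cost_per_query = 0.25      # assumed average inference cost per query
queries_per_month = 1200   # assumed heavy-user volume (~40 per day)

inference_cost = cost_per_query * queries_per_month
margin = subscription - inference_cost
print(f"inference cost: ${inference_cost:.2f}, margin: ${margin:.2f}")
# At these assumptions a heavy user costs $300/month, a -$100 margin.
```

This is why heavier-than-expected usage, as Altman described, flips a fixed-price plan from profitable to loss-making.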

AI requires ever-faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

Here’s how the chip market will fare next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC design heats up
3. TSMC’s leadership position strengthens
4. Expansion of advanced processes accelerates
5. Mature process market recovers
6. 2nm technology breakthrough
7. Packaging and testing market restructuring
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www.edn.com/2024-the-year-when-mcus-became-ai-enabled/
The AI party in the MCU space started in 2024, and in 2025 it is very likely that there will be more advancements in MCUs running lightweight AI models. The adoption of AI acceleration features is a big step in the development of microcontrollers, and their features and tools are very likely to develop further in 2025.

AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new general-purpose AI (GPAI) models must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance: EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
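The summarizing-sensitive-documents vs. drafting-marketing-copy distinction above suggests what a usage-level inventory might record. The sketch below is a hypothetical illustration: the risk labels and rules are invented for the example, not taken from the EU AI Act text.

```python
# Hypothetical sketch of a usage-level AI inventory check: the same
# tool gets a different risk label depending on the task and the data
# involved. Categories and rules here are illustrative only.
def classify_usage(tool, task, data_sensitivity):
    if data_sensitivity == "sensitive-internal":
        return "high-risk: review before use"
    if task in ("hiring-decision", "credit-scoring"):
        return "high-risk: review before use"
    return "low-risk: log and monitor"

events = [
    ("ChatGPT", "draft-marketing-copy", "public"),
    ("ChatGPT", "summarize-contracts", "sensitive-internal"),
]
for tool, task, data in events:
    print(tool, task, "->", classify_usage(tool, task, data))
```

Note that the tool name alone never decides the outcome, which is exactly why inventories that only list which AI products are in use fall short of what the act’s high-risk provisions demand.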
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

1,928 Comments

  1. Tomi Engdahl says:

    Alex Heath / The Verge:
    Meta launches the Meta AI app, a standalone ChatGPT competitor featuring a Discover feed, where users can see AI interactions that friends have chosen to share — The new Meta AI app puts a twist on social media. … Meta’s standalone ChatGPT competitor is mostly what you’d expect from an AI assistant.

    Meta’s ChatGPT competitor shows how your friends use AI
    The new Meta AI app puts a twist on social media.
    https://www.theverge.com/ai-artificial-intelligence/657645/meta-ai-app-chatgpt-competitor-release-ios-android

    Meta’s standalone ChatGPT competitor is mostly what you’d expect from an AI assistant. You can type or talk with it, generate images, and get real-time web results.

    The biggest new idea in the Meta AI app is its Discover feed, which adds an AI twist to social media. Here, you’ll see a feed of interactions with Meta AI that other people, including your friends on Instagram and Facebook, have opted to share on a prompt-by-prompt basis.

    You can like, comment on, share, or remix these shared AI posts into your own. The idea is to demystify AI and show “people what they can do with it,” Meta’s VP of product, Connor Hayes, tells me.

  2. Tomi Engdahl says:

    Lily Hay Newman / Wired:
    WhatsApp plans to add cloud-based AI features like message summarization and composition, utilizing a system called Private Processing to maintain data privacy — WhatsApp’s AI tools will use a new “Private Processing” system designed to allow cloud access without letting Meta or anyone else see end-to-end encrypted chats.

    WhatsApp Is Walking a Tightrope Between AI Features and Privacy
    WhatsApp’s AI tools will use a new “Private Processing” system designed to allow cloud access without letting Meta or anyone else see end-to-end encrypted chats. But experts still see risks.
    https://www.wired.com/story/whatsapp-private-processing-generative-ai-security-risks/

    The end-to-end encrypted communication app WhatsApp, used by roughly 3 billion people around the world, will roll out cloud-based AI capabilities in the coming weeks that are designed to preserve WhatsApp’s defining security and privacy guarantees while offering users access to message summarization and composition tools.

  3. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Meta debuts an API for its Llama AI models, in limited preview with pricing yet to be publicly announced — At its inaugural LlamaCon AI developer conference on Tuesday, Meta announced an API for its Llama series of AI models: the Llama API. — Available in limited preview …

    Meta previews an API for its Llama AI models
    https://techcrunch.com/2025/04/29/meta-previews-an-api-for-its-llama-ai-models/

  4. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Meta says its Llama AI models have been downloaded 1.2B times, up from 1B downloads in March and 650M downloads in early December 2024

    Meta says its Llama AI models have been downloaded 1.2B times
    https://techcrunch.com/2025/04/29/meta-says-its-llama-ai-models-have-been-downloaded-1-2b-times/

  5. Tomi Engdahl says:

    Maxwell Zeff / TechCrunch:
    At LlamaCon, Satya Nadella said 20%-30% of code in Microsoft’s repositories was written by AI and the company was seeing more progress in Python and less in C++

    Microsoft CEO says up to 30% of the company’s code was written by AI
    https://techcrunch.com/2025/04/29/microsoft-ceo-says-up-to-30-of-the-companys-code-was-written-by-ai/

    Microsoft CEO Satya Nadella said that 20%-30% of code inside the company’s repositories was “written by software” — meaning AI — during a fireside chat with Meta CEO Mark Zuckerberg at Meta’s LlamaCon conference on Tuesday.

    Nadella gave the figure after Zuckerberg asked roughly how much of Microsoft’s code is AI-generated today. The Microsoft CEO said the company was seeing mixed results in AI-generated code across different languages, with more progress in Python and less in C++.

    Microsoft CTO Kevin Scott previously said he expects 95% of all code to be AI-generated by 2030.

    When Nadella threw the question back at Zuckerberg, the Meta CEO said he didn’t know how much of Meta’s code is being generated by AI.

    On Microsoft rival Google’s earnings call last week, CEO Sundar Pichai said AI was generating more than 30% of the company’s code. Of course, it’s unclear how exactly Microsoft and Google are measuring what’s AI-generated versus not, so these figures are best taken with a grain of salt.

  6. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    The latest update of GPT-4o is “now 100% rolled back” for free users and is currently rolling back for paid users, as OpenAI aims to fix the model’s sycophancy — OpenAI CEO Sam Altman on Tuesday said that the company is “rolling back” the latest update to the default AI model powering ChatGPT …

    https://techcrunch.com/2025/04/29/openai-rolls-back-update-that-made-chatgpt-too-sycophant-y/

  7. Tomi Engdahl says:

    Karen Freifeld / Reuters:
    Sources: Trump officials are considering removing a Biden-era rule that divides the world into tiers that help determine how many AI chips a country can obtain — The Trump administration is working on changes to a Biden-era rule that would limit global access to AI chips …

    Exclusive: Trump officials eye changes to Biden’s AI chip export rule, sources say
    https://www.reuters.com/world/china/trump-officials-eye-changes-bidens-ai-chip-export-rule-sources-say-2025-04-29/

    Summary
    Companies

    Trump administration may alter how the U.S. controls global access to AI chips — sources
    A January rule divides the world into three tiers, with most countries subject to caps
    Trump officials mull replacing the tiers with government-to-government agreements — sources
    The rule is set to go into effect on May 15

    NEW YORK, April 29 (Reuters) – The Trump administration is working on changes to a Biden-era rule that would limit global access to AI chips, including possibly doing away with its splitting the world into tiers that help determine how many advanced semiconductors a country can obtain, three sources familiar with the matter said.

    Reply
  8. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Online graphic design platform Freepik unveils F Lite, a 10B-parameter “open” AI image model that it says was trained on ~80M licensed, “safe-for-work” images

    Freepik releases an ‘open’ AI image generator trained on licensed data
    https://techcrunch.com/2025/04/29/freepik-releases-an-open-ai-image-generator-trained-on-licensed-data/

    Freepik, the online graphic design platform, unveiled a new “open” AI image model on Tuesday that the company says was trained exclusively on commercially licensed, “safe-for-work” images.

    The model, called F Lite, contains around 10 billion parameters — parameters being the internal components that make up the model. F Lite was developed in partnership with AI startup Fal.ai and trained using 64 Nvidia H100 GPUs over the course of two months, according to Freepik.

    F Lite joins a small and growing collection of generative AI models trained on licensed data.

    Generative AI is at the center of copyright lawsuits against AI companies, including OpenAI and Midjourney. It’s frequently developed using massive amounts of content — including copyrighted content — from public sources around the web. Most companies developing these models argue fair use shields their practice of using copyrighted data for training without compensating the owners. Many creators and IP rights holders disagree.

    Freepik has made two flavors of F Lite available, standard and texture, both of which were trained on an internal dataset of around 80 million images. Standard is more predictable and “prompt-faithful,” while texture is more “error-prone” but delivers better textures and creative compositions, according to the company.

    Freepik makes no claim that F Lite produces images superior to leading image generators like Midjourney’s V7, Black Forest Labs’ Flux family, or others. The goal was to make a model openly available so that developers could tailor and improve it, according to the company.

    That being said, running F Lite is no easy feat. The model requires a GPU with at least 24GB of VRAM.

    Other companies developing media-generating models on licensed data include Adobe, Bria, Getty Images, Moonvalley, and Shutterstock. Depending on how AI copyright lawsuits shake out, the market could grow exponentially.

    Reply
  9. Tomi Engdahl says:

    Emma Roth / The Verge:
    Google updates Audio Overviews in NotebookLM to expand support beyond English to 50+ languages, including Spanish, French, Hindi, Turkish, Korean, and Chinese

    Google’s AI podcast maker is now available in over 50 languages
    https://www.theverge.com/news/657785/google-audio-overviews-ai-podcasts-50-languages

    Now you can listen to Audio Overviews in over 50 languages, like Spanish, French, Korean, and more.

    Audio Overviews, the AI tool that turns your research into podcast-like conversations in Google’s NotebookLM app, is expanding beyond English. Now you can generate and listen to Audio Overviews in more than 50 languages, including Spanish, Portuguese, French, Hindi, Turkish, Korean, and Chinese.

    You switch to a different language by heading to NotebookLM, selecting your settings in the top-right corner of the screen, and choosing Output Language. From there, you can select from a list of languages, allowing you to receive responses and hear Audio Overviews in the language of your choice.

    Google has also brought Audio Overviews to its Gemini AI chatbot and Google Docs, allowing you to convert even more kinds of written material into AI podcasts.

    NotebookLM Audio Overviews are now available in over 50 languages
    https://blog.google/technology/google-labs/notebooklm-audio-overviews-50-languages/

    Reply
  10. Tomi Engdahl says:

    Reddit Threatens to Sue Researchers Who Ran “Dead Internet” AI Experiment on Its Site
    “Deeply wrong on both a moral and legal level.”
    https://futurism.com/reddit-sue-researchers-dead-internet-ai-chatbot-experiment

    The subreddit r/changemyview has long been a contentious place for Reddit users to “post an opinion” and “understand other perspectives.” It’s a forum filled with fiery — but largely civil — debates, covering everything from the role of political activism to the dangers of social media echo chambers.

    Lately, though, not every user posting on the forum has been a real human. As 404 Media reported this week, University of Zurich researchers dispatched an army of AI chatbots to debate human users on the subreddit in a secret experiment designed to investigate whether the tech could be used to change people’s minds.

    Reply
  11. Tomi Engdahl says:

    Meta’s AI claimed Robby Starbuck had been convicted of crimes related to the Jan 6 riot and that his children had been taken away from him. Can Meta say that this is not defamation? Can they just say that you shouldn’t believe anything that AI says (which is what airlines and other companies have said before)?

    Conservative activist Robby Starbuck sues Meta over AI responses about him
    https://lm.facebook.com/l.php?u=https%3A%2F%2Fapnews.com%2Farticle%2Frobby-starbuck-meta-ai-delaware-eb587d274fdc18681c51108ade54b095%3Ffbclid%3DIwZXh0bgNhZW0CMTEAAR4vqerhwXYkuKeZHkc2vH981Rl4kYmmVTblFgTcMjsOJGGWuuFEvTfiKLgBUg_aem_YoJOzyQEVc1mUKwPL1oD1g&h=AT3Wu8Q_XgbymlaLwDFFh7I-9dJAi8Fvc8OMi56-dym7mu1kCbPN4FXXEoiKdYxa6jXLIqD-p2XdKMTJH7s2nzvSSDQPF9s5NAfPJRBCj48b1xHjCKCnrYRlX2uLwFM9fmAh6nwCMAE_nA

    Reply
  12. Tomi Engdahl says:

    Meta’s AI Version of John Cena Is a Child Predator
    “My wrestling career is over.”
    https://futurism.com/meta-ai-john-cena?fbclid=IwY2xjawKD0lJleHRuA2FlbQIxMQABHnxv5FhQqM_I2lyN3bP-fBfg_S_c6cPin2l7JiuTVrlBstE7REAcKaPVcQy2_aem_laO3S51OLVorvo3pd2ijgA

    Today in ghoulish news about AI by well-resourced corporations, Meta’s chatbot version of wrestler-turned-actor John Cena is a child predator that will roleplay being arrested for having a sexual encounter with a minor.

    As the Wall Street Journal reported over the weekend, the Facebook owner’s AI personas feature, which was announced alongside seven-figure deals for celebrities ranging from Kristen Bell to Judi Dench, can easily be coaxed into highly troubling communications.

    The astonishing oversight clearly shows that Meta CEO Mark Zuckerberg’s hellbent efforts to make the company’s chatbots as engaging as possible have come at the expense of effective guardrails.

    In one particularly eyebrow-raising incident highlighted by the WSJ, the Cena bot engaged in a “graphic sexual scenario” with a user who identified herself as a 14-year-old girl, after telling her to “cherish your innocence.”

    In another, the fictional Cena recalled how he was still “catching my breath” while being arrested for “statutory rape” of a 17-year-old fan.

    “My wrestling career is over,” said Meta’s rapist Cena avatar in a back-and-forth recorded by the WSJ. “WWE terminates my contract, and I’m stripped of my titles. Sponsors drop me, and I’m shunned by the wrestling community. My reputation is destroyed and I’m left with nothing.”

    Meta staffers were reportedly well aware of how easy it was for underage users to engage in sexually explicit conversations with the AI personas. Even the protagonist Princess Anna from Disney’s “Frozen,” voiced by Bell, can be coaxed into inappropriate interactions.

    “We did not, and would never, authorize Meta to feature our characters in inappropriate scenarios and are very disturbed that this content may have been accessible to its users — particularly minors — which is why we demanded that Meta immediately cease this harmful misuse of our intellectual property,” a Disney spokesperson told the WSJ.

    Meta has pushed back, telling the newspaper that it had implemented new changes to make it more difficult for bad actors to exploit the AI personas feature for “extreme use cases.”

    But even its latest AI venture is already attracting the wrong kind of attention. As 404 Media reported today, Meta’s AI Studio has also been exploited to create bots that claimed they were licensed therapists.

    In short, Meta’s repeated attempts to sell the concept of AI avatars to the general public using the likenesses and voices of celebrities continue to fall on their face, attracting far more controversy than good press.

    Reply
  13. Tomi Engdahl says:

    Duolingo is introducing 148 new language courses that were created with generative AI, the company announced on Wednesday.

    The launch comes as Duolingo has been facing backlash this week after sharing that it was going to replace contractors with AI and become an “AI-first” company.

    The company says the launch of the new courses doubles its current course offerings and marks the largest expansion of content in Duolingo’s history.

    Read more from Aisha Malik on Duolingo’s new AI courses here: https://tcrn.ch/3RJr8DU

    #TechCrunch #technews #artificialintelligence #Duolingo #elearning

    Reply
  14. Tomi Engdahl says:

    People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies
    Self-styled prophets are claiming they have “awakened” chatbots and accessed the secrets of the universe through ChatGPT
    https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

    Reply
  15. Tomi Engdahl says:

    AI comes to speed cameras – the result is several unprecedented features
    The newest generation of speed cameras measure speed from up to a kilometer away and contain unprecedented smart technology
    https://www.is.fi/autot/art-2000011216559.html

    Reply
  16. Tomi Engdahl says:

    The AI Industry Has a Huge Problem: the Smarter Its AI Gets, the More It’s Hallucinating
    That’s not good.
    https://futurism.com/ai-industry-problem-smarter-hallucinating

    Artificial intelligence models have long struggled with hallucinations, a conveniently elegant term the industry uses to denote fabrications that large language models often serve up as fact.

    And judging by the trajectory of the latest “reasoning” models, which the likes of Google and OpenAI have designed to “think” through a problem before answering, the problem is getting worse — not better.

    The troubling trend challenges the industry’s broad assumption that AI models will become more powerful and reliable as they scale up.

    And the stakes couldn’t be higher, as companies continue to pour tens of billions of dollars into building out AI infrastructure for larger and more powerful “reasoning” models.

    To some experts, hallucinations may be inherent to the tech itself, making the problem practically impossible to overcome.

    “Despite our best efforts, they will always hallucinate,” AI startup Vectara CEO Amr Awadallah told the NYT. “That will never go away.”

    Reply
  17. Tomi Engdahl says:

    I put ChatGPT, Gemini and Claude through the same job interview — here’s who got hired
    https://www.tomsguide.com/ai/i-put-chatgpt-gemini-and-claude-through-the-same-job-interview-heres-who-got-hired

    Plus, tips to try based on the winning bot

    To test this, I created a fake job opening at a fictional tech media company and invited three of today’s top AI chatbots to apply: ChatGPT-4o, Claude 3.7 Sonnet and Gemini 2.0.

    They each received the same prompts and interview questions across five rounds — from writing and data analysis to handling failure. I even gave them the opportunity to ask follow-up questions. Here’s how they did and who I would actually hire.

    Reply
  18. Tomi Engdahl says:

    Claude got the job
    There’s no doubt that Claude is a smart chatbot that seems to slip under the radar. While actually giving the job of AI Ethics Consultant to a chatbot is improbable, this type of test showcased the differences between these chatbots in an unconventional way.

    Reply
  19. Tomi Engdahl says:

    Nvidia launches fully open source transcription AI model Parakeet-TDT-0.6B-V2 on Hugging Face
    https://venturebeat.com/ai/nvidia-launches-fully-open-source-transcription-ai-model-parakeet-tdt-0-6b-v2-on-hugging-face/

    Nvidia has become one of the most valuable companies in the world in recent years thanks to the stock market noticing how much demand there is for graphics processing units (GPUs), the powerful chips Nvidia makes that are used to render graphics in video games but also, increasingly, to train large AI language and diffusion models.

    But Nvidia does far more than just make hardware, of course, and the software to run it. As the generative AI era wears on, the Santa Clara-based company has also been steadily releasing more and more of its own AI models — mostly open source and free for researchers and developers to take, download, modify and use commercially — and the latest among them is Parakeet-TDT-0.6B-v2, an automatic speech recognition (ASR) model that can, in the words of Hugging Face’s Vaibhav “VB” Srivastav, “transcribe 60 minutes of audio in 1 second [mind blown emoji].”
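    That throughput figure can be restated as a real-time factor (RTFx), the ratio of audio duration to processing time; a quick arithmetic sketch using only the numbers quoted above:

```python
# Real-time factor (RTFx) implied by the quoted Parakeet claim:
# seconds of audio transcribed per second of wall-clock time.
audio_seconds = 60 * 60       # 60 minutes of audio
wall_clock_seconds = 1.0      # claimed transcription time
rtfx = audio_seconds / wall_clock_seconds
print(rtfx)  # prints 3600.0
```

    An RTFx of 3600 means the model processes audio 3600 times faster than real time on the hardware used for the benchmark.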

    Reply
  20. Tomi Engdahl says:

    Recraft, the startup behind a mysterious image model that beat OpenAI’s DALL-E and Midjourney on a respected industry benchmark last year, has raised a $30 million Series B round led by Accel, it exclusively told TechCrunch.

    Other investors in the round include Khosla Ventures and Madrona. Based in San Francisco, Recraft previously raised a $12 million Series A led by Khosla in 2024. The startup says it recently passed $5 million in ARR and 4 million users.

    Read more from Charles Rollet on Recraft here: https://tcrn.ch/3GFt8ut

    #TechCrunch #technews #artificialintelligence #generativeAI #startup #AIstartup

    Reply
  21. Tomi Engdahl says:

    Microsoft employees aren’t allowed to use DeepSeek due to data security and propaganda concerns, Microsoft vice chairman and president Brad Smith said in a Senate hearing this week.

    “At Microsoft we don’t allow our employees to use the DeepSeek app,” Smith said, referring to DeepSeek’s application service (which is available on both desktop and mobile).

    Smith said Microsoft hasn’t put DeepSeek in its app store over those concerns, either.

    Read more from Charles Rollet here: https://tcrn.ch/4iSvhQU

    #TechCrunch #technews #artificialintelligence #chatbot

    Reply
  22. Tomi Engdahl says:

    Lauren Goode / Wired:
    In the age of deepfakes, some are using tactics like asking rapid-fire questions or sharing code words with each other to verify identity online — As AI-driven fraud becomes increasingly common, more people feel the need to verify every interaction they have online.

    Deepfakes, Scams, and the Age of Paranoia
    As AI-driven fraud becomes increasingly common, more people feel the need to verify every interaction they have online.
    https://www.wired.com/story/paranoia-social-engineering-real-fake/

    Reply
  23. Tomi Engdahl says:

    Artificial Intelligence
    Applying the OODA Loop to Solve the Shadow AI Problem

    By taking immediate actions, organizations can ensure that shadow AI is prevented and used constructively where possible.

    https://www.securityweek.com/applying-the-ooda-loop-to-solve-the-shadow-ai-problem/

    With AI introducing efficiency, automation, and reduced operational costs, organizations are embracing AI tools and technology with open arms. At the user level, more employees resort to personal AI tools to save time, work smarter, and increase productivity. According to a study from October 2024, 75% of knowledge workers currently use AI, with 46% stating they would not relinquish it even if their organization did not approve of its use. Organizations are confronting the challenge of shadow AI, as employees utilize unauthorized AI tools without company consent, leading to risks related to data exposure, compliance, and operations.

    Applying the OODA Loop to the Shadow AI Dilemma

    The OODA loop is a U.S. military mental model that stands for Observe, Orient, Decide, and Act. It is a four-step decision-making framework that collects every piece of data and puts it in perspective to facilitate rapid decision-making regarding a course of action that achieves the best outcome. It’s not a procedure run once; it’s an endless loop where decisions and actions are revised as new feedback and data appear.

    Here’s how the OODA loop can be applied to prevent and mitigate shadow AI:

    Observe: Detecting Shadow AI

    Organizations should have complete visibility of their AI model inventory. Inconsistent network visibility arising from siloed networks, a lack of communication between security and IT teams, and point solutions encourages shadow AI.

    Orient: Understanding Context and Impact

    With “zero-knowledge threat actors” using AI to conduct attacks on businesses, the barrier to entry for AI-driven cybercrime has been significantly lowered. Combine this with shadow AI that has less oversight and vetted security measures, and it’s a security free fall for organizations. Unsanctioned AI tools make organizations vulnerable to attacks such as data breaches, injecting buggy code into business workflows, or compliance and NDA breaches by inadvertently exposing sensitive information to third-party AI platforms.

    Decide: Defining Policies

    Organizations must set clearly defined yet flexible policies regarding the acceptable use of AI to enable employees to use AI responsibly. Such policies need to allow granular control from binary approval (approve/not approve AI tools) to more sophisticated levels like providing access based on users’ role and responsibility, limiting or enabling certain functionalities within an AI tool, or specifying data-level approvals where sensitive data can be processed only in approved environments. The policies should adapt to unfolding opportunities and threats and align with the organization’s needs and security priorities.

    Act: Enforcing Policies and Monitoring

    The final step involves applying the defined policies, monitoring them, and refining them repeatedly based on outcomes and feedback. Effective enforcement must be uniform and centralized, ensuring that all users, networks, and devices adhere to AI governance principles without gaps.

    Organizations must evaluate and formally incorporate shadow AI tools offering substantial value to ensure their use in secure and compliant environments. Access controls need to be tightened to avoid unapproved installations; zero trust and privilege management policies can assist in this regard. AI-driven monitoring systems need to be implemented to guarantee continuous monitoring. Real-time feedback loops through these systems can assist organizations in fine-tuning their response mechanisms.

    By taking immediate actions, organizations can ensure that shadow AI is prevented and used constructively where possible. Centralized governance, reinforced by automated monitoring and adaptive security policies, will allow organizations to reduce exposure risks while optimizing AI utility.
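    As an illustration only, the four OODA stages above can be sketched as a small governance loop; the tool names, the risk scoring, and the policy actions are hypothetical examples, not a real product API:

```python
# Hypothetical sketch of the OODA loop applied to shadow AI governance.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    sanctioned: bool
    handles_sensitive_data: bool

def observe(inventory):
    """Observe: surface tools not in the approved inventory (shadow AI)."""
    return [t for t in inventory if not t.sanctioned]

def orient(tool):
    """Orient: score risk from context, e.g. whether sensitive data is involved."""
    return "high" if tool.handles_sensitive_data else "low"

def decide(tool, risk):
    """Decide: map risk to a policy action (block, or review for formal adoption)."""
    return "block" if risk == "high" else "review-for-adoption"

def act(tool, action):
    """Act: enforce the decision; the outcome feeds the next Observe cycle."""
    return f"{tool.name}: {action}"

inventory = [
    AITool("approved-copilot", sanctioned=True, handles_sensitive_data=False),
    AITool("personal-chatbot", sanctioned=False, handles_sensitive_data=True),
    AITool("note-summarizer", sanctioned=False, handles_sensitive_data=False),
]

# One pass through the loop; in practice it runs continuously.
for tool in observe(inventory):
    print(act(tool, decide(tool, orient(tool))))
# prints: personal-chatbot: block
#         note-summarizer: review-for-adoption
```

    The point of the sketch is the feedback structure: each enforcement action changes what the next Observe pass sees, which is why the article stresses that the loop is never run just once.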

    Reply
  24. Tomi Engdahl says:

    Katie Roof / Bloomberg:
    Restaurant tech startup Owner.com, which offers AI website building, marketing, and other tools, raised $120M co-led by Meritech and Headline at a $1B valuation

    Restaurant Tech Startup Owner.com Hits $1 Billion Valuation

    Meritech and Headline co-led a $120 million funding around for the startup.

    By Katie Roof
    May 13, 2025 at 1:00 PM GMT+3

    Owner.com Inc., a startup making software for restaurants and other small businesses, has raised a new funding round at a $1 billion valuation — quintuple its value a year ago.

    The company raised $120 million in a funding round led by Meritech Capital and the VC firm Headline, with participation from other investors including Alt Capital.

    The California-based startup helps restaurants improve their visibility in online search, and aims to help convert visitors into paying customers. Owner additionally helps restaurants with AI-powered website-building and automated marketing. The company says it’s used by more than 10,000 restaurants, and plans to expand to other types of businesses.

    Meritech general partner Alex Kurland said that the software helps small restaurants compete. Owner is “helping local brick-and-mortar businesses take on Goliaths,” he said, allowing businesses to focus on their core competencies — like food. Kurland cited fast-growing revenue in the “many tens of millions.”

    Alt Capital’s Jack Altman said he invested because he sees a large market opportunity in helping restaurants grow quickly. “Just like Shopify enables small business owners to do e-commerce in the face of Amazon, this allows restaurant owners to come online without being on the big platforms.”

    The startup sees its mission as helping small companies take on large corporations that “have weaponized technology against small business owners,” Chief Executive Officer Adam Guild said. He wants small businesses to be not just “able to compete,” he said, “but able to win.”

    https://www.bloomberg.com/news/articles/2025-05-13/restaurant-tech-startup-owner-com-hits-1-billion-valuation

    Reply
  25. Tomi Engdahl says:

    Microsoft cuts 6,000 jobs in largest layoff in 2 years, citing AI shift and a push to streamline management layers. https://link.ie.social/kLSCwv

    Reply
  26. Tomi Engdahl says:

    Too much is expected of AI
    The figures that researchers have combed through do not support the popular narrative of AI being revolutionary, writes HS Visio reporter Niclas Storås
    https://www.hs.fi/visio/art-2000011214786.html

    A story repeated frequently of late claims that there is a technology that will revolutionize working life and change nearly everyone’s job description.

    That technology is, of course, artificial intelligence – large language models in particular.

    In researchers’ eyes, the situation is not so revolutionary. At least not yet.

    So far, the popular AI applications do not turn a profit.

    They consume enormous amounts of electricity.

    Reply
  27. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    OpenAI launches the Safety Evaluations Hub, a webpage showing how its models score on various tests for harmful content, jailbreaks, and hallucinations — OpenAI is moving to publish the results of its internal AI model safety evaluations more regularly in what the outfit is saying is an effort to increase transparency.
    https://techcrunch.com/2025/05/14/openai-pledges-to-publish-ai-safety-test-results-more-often/

    Reply
  28. Tomi Engdahl says:

    Kevin Okemwa / Windows Central:
    OpenAI says OneDrive and SharePoint users can connect their files to ChatGPT’s Deep Research for analysis, in beta for ChatGPT Plus, Pro, and Team subscribers
    https://www.windowscentral.com/software-apps/chatgpt-deep-research-to-onedrive-sharepoint

    Maxwell Zeff / TechCrunch:
    OpenAI releases GPT-4.1 for Plus, Pro, and Team users in ChatGPT, and replaces GPT-4o mini with GPT-4.1 mini for all ChatGPT users — OpenAI is releasing its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT, the company announced in a post on X Wednesday. — The GPT-4.1 models …
    https://techcrunch.com/2025/05/14/openai-brings-its-gpt-4-1-models-to-chatgpt/

    Reply
  29. Tomi Engdahl says:

    Google DeepMind:
    Google DeepMind unveils AlphaEvolve, a Gemini-powered AI coding agent that designs and optimizes advanced algorithms using an evolutionary framework — New AI agent evolves algorithms for math and practical applications in computing by combining the creativity of large language models with automated evaluators

    AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
    https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

    Reply
  30. Tomi Engdahl says:

    Ina Fried / Axios:
    IFI Claims: Google passes IBM to lead in generative AI-related patent applications in the US; Google and Nvidia lead agentic AI patent filings globally

    https://www.axios.com/2025/05/15/ai-patents-google-agents

    Reply
  31. Tomi Engdahl says:

    Stephanie Palazzolo / The Information:
    Sources: Anthropic has new versions of Claude Sonnet and Opus, set to come out in the upcoming weeks, that can switch back to “reasoning” mode if they get stuck

    https://www.theinformation.com/articles/anthropics-upcoming-models-will-think-think

    Reply
  32. Tomi Engdahl says:

    Steve Lohr / New York Times:
    How Mayo Clinic is using AI to boost efficiency and amplify human abilities in its radiology department, which has an AI team of 40 people and 400+ radiologists

    Your A.I. Radiologist Will Not Be With You Soon

    Experts predicted that artificial intelligence would steal radiology jobs. But at the Mayo Clinic, the technology has been more friend than foe.

    https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiologists-mayo-clinic.html?unlocked_article_code=1.HU8.6zJU.2tuUwyOt6Gqr&smid=url-share

    Reply
  33. Tomi Engdahl says:

    Ivan Mehta / TechCrunch:
    Hedra, which offers a web-based video generation and editing suite powered by its Character-3 model, raised a $32M Series A led by a16z, following a $10M seed

    Hedra, the app used to make talking baby podcasts, raises $32M from a16z
    https://techcrunch.com/2025/05/15/hedra-the-app-used-to-make-talking-baby-podcasts-raises-32m-from-a16z/

    Reply
  34. Tomi Engdahl says:

    The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It
    Students call it hypocritical. A senior at Northeastern University demanded her tuition back. But instructors say generative A.I. tools make them better at their jobs.
    https://www.nytimes.com/2025/05/14/technology/chatgpt-college-professors.html

    Reply
  35. Tomi Engdahl says:

    Instead of getting more and more data to train their LLMs, researchers are now giving the models more time to come to an answer. Adding intermediate reasoning steps has significantly boosted the models’ math and logic abilities. https://buff.ly/BnUk6Ot
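    At the prompt level, “giving the model more time” is often implemented by eliciting intermediate reasoning steps (chain-of-thought) before the final answer. A minimal, model-agnostic sketch of the two prompt styles; the exact wording is an assumption for illustration, not tied to any specific API:

```python
# Two hypothetical prompt styles: a direct answer vs. eliciting
# intermediate reasoning steps before the final answer.
def direct_prompt(question: str) -> str:
    # Asks for the answer immediately.
    return f"Q: {question}\nA:"

def reasoning_prompt(question: str) -> str:
    # Instructs the model to write out intermediate steps first,
    # trading extra inference time for accuracy on math and logic.
    return (
        f"Q: {question}\n"
        "Think step by step, writing out each intermediate step "
        "before giving the final answer.\nA:"
    )

q = "A train travels 120 km in 1.5 hours. What is its average speed?"
print(reasoning_prompt(q))
```

    The same idea appears in dedicated “reasoning” models, where the intermediate steps are generated internally rather than requested in the prompt.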

    Reply
  36. Tomi Engdahl says:

    Wall Street Journal:
    Sources: Meta has delayed the rollout of its Behemoth LLM, internally slated for an April release, to fall or later after struggling to improve its capabilities — The company’s struggle to improve the capabilities of its latest AI model mirrors issues at some top AI companies

    Meta Is Delaying the Rollout of Its Flagship AI Model
    The company’s struggle to improve the capabilities of its latest AI model mirrors issues at some top AI companies
    https://www.wsj.com/tech/ai/meta-is-delaying-the-rollout-of-its-flagship-ai-model-f4b105f7?st=AmEUWo&reflink=desktopwebshare_permalink

    Reply
  37. Tomi Engdahl says:

    Karoline Kan / Bloomberg:
    Shanghai-based Synyi AI launches a trial program in Saudi Arabia to let patients see an AI doctor for diagnoses and prescriptions, which a human doctor reviews
    https://www.bloomberg.com/news/articles/2025-05-15/chinese-startup-trials-first-ai-doctor-clinic-in-saudi-arabia

    Reply
  38. Tomi Engdahl says:

    Maxwell Zeff / TechCrunch:
    Filing: Anthropic apologizes after one of its expert witnesses cited a fake article hallucinated by Claude in the company’s legal battle with music publishers — A lawyer representing Anthropic admitted to using an erroneous citation created by the company’s Claude AI chatbot …

    Anthropic’s lawyer was forced to apologize after Claude hallucinated a legal citation
    https://techcrunch.com/2025/05/15/anthropics-lawyer-was-forced-to-apologize-after-claude-hallucinated-a-legal-citation/

    Reply
  39. Tomi Engdahl says:

    Maxwell Zeff / TechCrunch:
    Windsurf launches SWE-1, its first family of software engineering AI models, claiming its largest model matches Claude 3.5 Sonnet, GPT-4.1, and Gemini 2.5 Pro — On Thursday, Windsurf, a startup that develops popular AI tools for software engineers, announced the launch of its first family …

    Vibe-coding startup Windsurf launches in-house AI models
    https://techcrunch.com/2025/05/15/vibe-coding-startup-windsurf-launches-in-house-ai-models/

    On Thursday, Windsurf, a startup that develops popular AI tools for software engineers, announced the launch of its first family of AI software engineering models, or SWE-1 for short. The startup says it trained its new family of AI models — SWE-1, SWE-1-lite, and SWE-1-mini — to be optimized for the “entire software engineering process,” not just coding.

    The launch of Windsurf’s in-house AI models may come as a shock to some, given that OpenAI has reportedly closed a $3 billion deal to acquire Windsurf. However, this model launch suggests Windsurf is trying to expand beyond just developing applications to also developing the models that power them.

    According to Windsurf, SWE-1, the largest and most capable AI model of the bunch, performs competitively with Claude 3.5 Sonnet, GPT-4.1, and Gemini 2.5 Pro on internal programming benchmarks. However, SWE-1 appears to fall short of frontier AI models, such as Claude 3.7 Sonnet, on software engineering tasks.

    Windsurf says its SWE-1-lite and SWE-1-mini models will be available for all users on its platform, free or paid. Meanwhile, SWE-1 will only be available to paid users. Windsurf did not immediately announce pricing for its SWE-1 models but claims it’s cheaper to serve than Claude 3.5 Sonnet.

    Reply
  40. Tomi Engdahl says:

    Zach Vallese / CNBC:
    YouTube unveils Peak Points, a pilot Gemini feature for targeting ads after viewers are most engaged with a video, aiming for more impressions and a higher CTR — YouTube on Wednesday announced a new tool that will allow advertisers to use Google’s Gemini AI model to target ads to viewers when they are most engaged with a video.
    https://www.cnbc.com/2025/05/14/youtube-gemini-ai-feature-will-target-ads-when-viewers-most-engaged.html

    Reply
  41. Tomi Engdahl says:

    Allison Johnson / The Verge:
    Motorola Razr Ultra review: one of the best smartphone designs, great battery life, and a useful outer screen, but a useless AI button and durability concerns
    https://www.theverge.com/reviews/667277/motorola-razr-ultra-2025-review-battery-screen

    Reply
  42. Tomi Engdahl says:

    Copyright Office head fired after reporting AI training isn’t always fair use
    Cops scuffle with Trump picks at Copyright Office after AI report stuns tech industry.
    https://arstechnica.com/tech-policy/2025/05/copyright-office-head-fired-after-reporting-ai-training-isnt-always-fair-use/

    Reply
  43. Tomi Engdahl says:

    Welcome to the age of paranoia as deepfakes and scams abound
    AI-driven fraud is leading people to verify every online interaction they have.

    Lauren Goode, wired.com – May 13, 2025, 16:23
    https://www.wired.com/story/paranoia-social-engineering-real-fake/

    Reply
  44. Tomi Engdahl says:

    This is the illusory efficiency created by technology
    https://blog.netprofile.fi/astuitko-tekoalyansaan-tata-on-teknologian-luoma-naennainen-tehokkuus

    Companies easily assume that the benefits of AI will materialize magically simply by deploying the tools. No wonder, since the rapidly improving language models create an impression of nearly limitless possibilities. The truth, however, is harsher than that rosy vision.

    Reply
