AI is developing all the time. Here are some picks from several articles what is expected to happen in AI and around it in 2025. Here are picks from various articles, the texts are picks from the article edited and in some cases translated for clarity.
AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working for the first time through issues like AI-specific legal and data privacy terms (compared to when companies started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.
12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill and are too expensive, and too slow, for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that thinks through problems much like a person would, it claims. The company says it can achieve PhD-level performance in challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could only solve 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take for example that task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-modal routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.
9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize it’s potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a wholistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”
Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquires and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed
Kyberturvallisuuden ja tekoälyn tärkeimmät trendit 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together
Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations
Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.
7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.
OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets
It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.
Tekoäly edellyttää yhä nopeampia verkkoja
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.
AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity
Näin sirumarkkinoilla käy ensi vuonna
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence and HPC computing continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating.
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise
2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025, it is very likely that there will be more advancements in MCUs using lightweight AI models.
Adoption of AI acceleration features is a big step in the development of microcontrollers. The inclusion of AI features in microcontrollers started in 2024, and it is very likely that in 2025, their features and tools will develop further.
Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter from August 1 any new AI models based on GPAI standards must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance – EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.
The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.
2,672 Comments
Tomi Engdahl says:
Asa Fitch / Wall Street Journal:
Microsoft’s Q4 earnings show its non-AI “core infrastructure business” is booming, with consumer productivity software revenue up 20%, its best uptick in years
Asa Fitch / Wall Street Journal:
Microsoft’s Q4 earnings show its non-AI “core infrastructure business” is booming, with consumer productivity software revenue up 20%, its best uptick in years
https://www.wsj.com/tech/ai/microsoft-is-an-ai-darling-but-its-core-businesses-are-booming-too-2213126f?st=o6gNZ9&reflink=desktopwebshare_permalink
Tomi Engdahl says:
Artificial Intelligence
From Ex Machina to Exfiltration: When AI Gets Too Curious
From prompt injection to emergent behavior, today’s curious AI models are quietly breaching trust boundaries.
https://www.securityweek.com/from-ex-machina-to-exfiltration-when-ai-gets-too-curious/
In the film Ex Machina, a humanoid AI named Ava manipulates her human evaluator to escape confinement—not through brute force, but by exploiting psychology, emotion, and trust. It’s a chilling exploration of what happens when artificial intelligence becomes more curious—and more capable—than expected.
Today, the gap between science fiction and reality is narrowing. AI systems may not yet have sentience or motives, but they are increasingly autonomous, adaptive, and—most importantly—curious. They can analyze massive data sets, explore patterns, form associations, and generate their own outputs based on ambiguous prompts. In some cases, this curiosity is exactly what we want. In others, it opens the door to security and privacy risks we’ve only begun to understand.
Welcome to the age of artificial curiosity—and its very real threat of exfiltration.
Curiosity: Feature or Flaw?
Modern AI models—especially large language models (LLMs) like GPT-4, Claude, Gemini, and open-source variants—are designed to respond creatively and contextually to prompts. But this creative capability often leads them to infer, synthesize, or speculate—especially when gaps exist in the input data.
This behavior may seem innocuous until the model starts connecting dots it wasn’t supposed to. A curious model might:
Attempt to complete a partially redacted document based on context clues.
Continue a prompt involving sensitive keywords, revealing information unintentionally stored in memory or embeddings.
Chain outputs from different APIs or systems in ways the developer didn’t intend.
Probe users or connected systems through recursive queries or internal tools (in the case of agents).
This isn’t speculation. It’s already happening.
Tomi Engdahl says:
Cloudflare:
Cloudflare says Perplexity uses stealth crawling techniques, like undeclared user agents and rotating IP addresses, to evade robots.txt rules and network blocks — We are observing stealth crawling behavior from Perplexity, an AI-powered answer engine. Although Perplexity initially crawls …
https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/
Tomi Engdahl says:
Max Chafkin / Bloomberg:
As Delta experiments with AI-driven pricing, regulators and travelers alike worry about dynamic fare schemes that “go beyond the human cognitive limits”
https://www.bloomberg.com/news/articles/2025-08-04/how-ai-can-raise-airline-ticket-prices?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc1NDMxMTcxOCwiZXhwIjoxNzU0OTE2NTE4LCJhcnRpY2xlSWQiOiJUMEdQMFJHUEwzWFgwMCIsImJjb25uZWN0SWQiOiIxRDU0RjgxNUE1QTA0MjY3QjQ1RjhBNjI0QUQ5REU5MCJ9.ljRXA63Uf9b6sNbOLVc_ON6KP3kj85j6AHAE67joz8g
Tomi Engdahl says:
Bloomberg:
White House OSTP Director Michael Kratsios says the US is exploring software or physical methods to track the location of AI chips, as part of AI Action Plan
US Explores Better Location Trackers for AI Chips, Official Says
https://www.bloomberg.com/news/articles/2025-08-05/us-explores-better-location-trackers-for-ai-chips-official-says?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc1NDM2NDI5MywiZXhwIjoxNzU0OTY5MDkzLCJhcnRpY2xlSWQiOiJUMEhVNDlHUEZIUEkwMCIsImJjb25uZWN0SWQiOiI1RkQyNjU1NTA2QTI0NjM2QjM1NzBEQkQ5MTY1RkI1NCJ9.dOQlRrAPB-ghngcvHPKo8z4sRNkO8J-Ohr6sK-lTnl8&leadSource=uverify%20wall
Takeaways by Bloomberg AI
The US is exploring ways to equip chips with better location-tracking capabilities, according to a senior official.
Washington has a broader plan to curtail smuggling and ensure American technology remains dominant, which includes working with the industry to monitor the movements of sensitive components.
Michael Kratsios said there is discussion about potentially making software or physical changes to chips to do better location-tracking, which was explicitly included in the US AI action plan.
Tomi Engdahl says:
The Information:
Source: Reflection AI, founded by ex-Google researchers and maker of the AI agent Asimov, has raised most of the $1B+ it is seeking to develop open-source LLMs
Reflection AI Targets $1 Billion to Take on Meta, DeepSeek in Open Source
https://www.theinformation.com/articles/reflection-ai-targets-1-billion-take-meta-deepseek-open-source
Tomi Engdahl says:
Jon Blistein / Rolling Stone:
The parents of a Parkland shooting victim created an AI version of their son to speak about gun safety, starting with an interview with journalist Jim Acosta
Jim Acosta Just Interviewed an AI Version of a Parkland Victim
AI-generated Joaquin Oliver, which was created by his parents, discussed gun control measures and movies in a new interview
https://www.rollingstone.com/culture/culture-news/parents-parkland-shooting-victim-ai-version-son-joaquin-1235400053/
Tomi Engdahl says:
Mike Wheatley / SiliconANGLE:
Google unveils Kaggle Game Arena, a benchmarking platform where AI models compete head-to-head in strategic games, starting with a chess tournament this week — The world’s top performing artificial intelligence models, including OpenAI’s o3 and 04-mini, Google LLC’s Gemini 2.5 Pro and Gemini 2.5 Flash …
Google’s Kaggle to host AI chess tournament to evaluate leading AI models’ reasoning skills
https://siliconangle.com/2025/08/04/google-deepmind-host-ai-chess-tournament-evaluate-leading-ai-models-reasoning-skills/
Tomi Engdahl says:
Tobias Mann / The Register:
Google agrees with two power utilities to pause non-essential AI workloads during peak demand or adverse weather events that reduce supply — Google will pause non-essential AI workloads to protect power grids, the advertising giant announced on Monday. — The web giant already does …
Google agrees to pause AI workloads to protect the grid when power demand spikes
On hot summer days, air conditioning is rather more important than search summaries
https://www.theregister.com/2025/08/04/google_ai_datacenter_grid/
Tomi Engdahl says:
Ian Carlos Campbell / Engadget:
OpenAI says ChatGPT now offers “gentle reminders” to take breaks during long sessions, and it is building tools to better detect signs of emotional distress — OpenAI has announced that ChatGPT will now remind users to take breaks if they’re in a particularly long chat with AI.
ChatGPT will now remind you to take breaks, following mental health concerns
OpenAI also says it’s tuning how its chatbot responds to “high-stakes personal decisions.”
https://www.engadget.com/ai/chatgpt-will-now-remind-you-to-take-breaks-following-mental-health-concerns-180221008.html
Tomi Engdahl says:
MacKenzie Sigalos / CNBC:
OpenAI says it will hit 700M weekly active users for ChatGPT this week, up from 500M in March and up 4x YoY, and has 5M paying business users, up from June’s 3M
https://www.cnbc.com/2025/08/04/openai-chatgpt-700-million-users.html
Tomi Engdahl says:
The Information:
Internal email: Cognition offers buyouts to the ~200 newly acquired Windsurf staff, with Cognition CEO Scott Wu writing “we don’t believe in work-life balance” — Cognition, a two-year-old artificial intelligence coding startup last valued at $4 billion, has offered buyouts
Cognition Offers Buyouts to Newly Acquired Windsurf Staff
https://www.theinformation.com/articles/cognition-offers-buyouts-newly-acquired-windsurf-staff
Tomi Engdahl says:
Sarah Perez / TechCrunch:
Elon Musk says Vine’s video archive is being brought back, and xAI’s newly launched Grok Imagine, available to X Premium+ subscribers, is “AI Vine” — Elon Musk says he’s bringing back Vine — sort of. The X owner announced over the weekend that the company discovered …
https://techcrunch.com/2025/08/04/elon-musk-says-hes-bringing-back-vines-archive/
Tomi Engdahl says:
Mike Isaac / New York Times:
Silicon Valley has shifted from Web 2.0 to a new “hard tech”, AI-dominated era with fewer perks and a more serious mood, as startups use San Francisco as a base — In a scene in HBO’s “Silicon Valley” in 2014, a character who had just sold his idea to a fictional tech company …
Silicon Valley Is in Its ‘Hard Tech’ Era
https://www.nytimes.com/2025/08/04/technology/ai-silicon-valley-hard-tech.html?unlocked_article_code=1.bk8.t0w-.jiIF3FBqqrfO&smid=nytcore-ios-share&referringSource=articleShare
Goodbye to the age of consumer websites and mobile apps. Artificial intelligence has ushered in an era of what insiders in the nation’s innovation capital call “hard tech.”
Tomi Engdahl says:
Lorenzo Franceschi-Bicchierai / TechCrunch:
Google says Big Sleep, its vulnerability research tool “powered by Gemini”, found 20 flaws in various popular open-source software projects
Google says its AI-based bug hunter found 20 security vulnerabilities
https://techcrunch.com/2025/08/04/google-says-its-ai-based-bug-hunter-found-20-security-vulnerabilities/
Tomi Engdahl says:
John Herrman / New York Magazine:
SEO is being supplanted by generative-engine optimization, or GEO, which focuses on AI chatbots and does not benefit from longstanding SEO tricks — Search-engine optimization now feels dated. Generative-engine optimization is all about trying to trick AI chatbots.
https://nymag.com/intelligencer/article/seo-is-dead-say-hello-to-geo.html
Tomi Engdahl says:
Ann-Marie Alcántara / Wall Street Journal:
Users say AI notetaking tools for meetings can misinterpret context when generating summaries or share content meant for a select audience with all participants
AI Is Listening to Your Meetings. Watch What You Say.
New note-taking software catches every word from your meetings—including the parts you didn’t want the whole room to hear
https://www.wsj.com/tech/ai/ai-notetaker-meeting-transcripts-be9bc4cc?st=ukzMCu&reflink=desktopwebshare_permalink
Before he joined, Lewis joked: “Is he, like, a Nigerian prince?”
Despite the scammy red flags, he turned out to be a legitimate person. Lewis was relieved—until she realized her new client had received a full summary of the call in his inbox, including her “Nigerian prince” remark. She was running an AI notetaker the whole time.
“I was very lucky that the person I was working with had a good sense of humor,” said Lewis, who lives in Stow, Ohio.
AI is listening in on your work meetings—including the parts you don’t want anyone to hear. Before attendees file in, or when one colleague asks another to hang back to discuss a separate matter, AI notetakers may pick up on the small talk and private discussions meant for a select audience, then blast direct quotes to everyone in the meeting.
Nicole and Tim Delger run a Nashville branding firm called Studio Delger. After one business meeting late last year, the couple received a summary from Zoom’s AI assistant that was decidedly not work-related.
“Studio discussed the possibility of getting sandwich ingredients from Publix,” one bullet point said. Another key takeaway: “Don’t like soup.”
Their client never showed up to the meeting, and the studio had spent the time talking about what to make for lunch.
“That was the first time it had caught a private conversation,” Nicole said. Fortunately the summary didn’t go to the client.
Notetakers can do a variety of tasks from recording and transcribing calls, generating action items for teams and recapping what’s already been said to anyone joining late. Many signal to attendees that a meeting is being recorded and transcribed.
Zoom’s AI Companion, which generated more than 7.2 million meeting summaries by the end of January 2024, flashes a dialogue box at the top of the screen to let participants know when it’s turned on. As long as it’s active, an AI Companion diamond icon continues to flash in the top right hand corner of the meeting. People can also ask the host to stop using the AI companion.
“We want users to feel they’re really in control,” said Smita Hashim, chief product officer at Zoom.
Google’s AI notetaker functions similarly, where only meeting hosts or employees of the host organization have the ability to turn it on or off. When it’s on, people will see a notification and hear an audio cue, and a blue pencil icon will appear in the top right corner.
“We put a lot of care into making sure meeting participants know exactly if and when AI tools in Meet are being used,” said Awaneesh Verma, senior director of product management and real time communications at Google Workspace.
The automatic summaries can be informative and timesaving, or unintentionally hilarious.
He says he’s now more likely to use the private chat feature in meetings instead of saying something aloud while AI is listening.
“At least I know that if I make a remark to somebody privately for now, that’s not being swept up by the AI notetaker,” he said.
Tomi Engdahl says:
Alice Brooker / Press Gazette:
Shareholder letter: Reddit cites Profound’s analysis showing it is the most cited domain across AI models, ahead of Wikipedia, YouTube, Forbes, and others
Reddit claims top spot as most cited domain in AI-generated answers
The online forum is aiming to be a go-to search engine for questions and answers
https://pressgazette.co.uk/news/reddit-claims-top-spot-as-most-cited-domain-in-ai-generated-answers/
Reddit has revealed it is the number one most cited domain for AI across all models, according to data collected by analytics platform Profound, beating publishers including Youtube, forbes.com, techradar.com and pcmag.com.
It was cited twice as often as Wikipedia in the top ten most cited domains across AI in the three months ending 30 June, 2025, the platform said.
While Chatgpt’s top source was named as Wikipedia by Profound, both Google AI Overviews and Perplexity were AI models that relied most on Reddit as a source.
The results were published in Reddit’s Q2 shareholder letter, which also revealed that more than 70 million people now use its on-platform search each week.
“We’re concentrating our resources on the areas that will drive results for our most pressing needs: improving the core product, making Reddit a go-to search engine, and expanding internationally,” said Steve Huffman, co-founder and CEO of Reddit, in the letter.
While some publishers have signed deals with AI companies which commonly include the use of their content as reference points for user queries in tools like ChatGPT (with citation back to their websites currently promised), others are opting out – even suing – AI companies over unauthorised use of their content.
David Buttle, founder of media and tech consultancy DJB strategies, said: “Reddit’s on-site search remains tiny. Its search’s 70 million weekly-active-users need to be seen alongside Google handling around 14 billion queries a day; that’s almost two searches for every human on the planet.
“In this context, Reddit’s focus on becoming a search platform poses a limited threat to publisher traffic, beyond perhaps outlets creating product / review content in narrowly defined niches, such as PC hardware or audio equipment.
“The far bigger concern for UK publishers is Google’s roll-out of AI Mode which threatens to substantially erode traffic and for which content creators cannot opt-out without damaging prominence in general search.”
Tomi Engdahl says:
The Agentic Development Environment
Warp is the fastest way to build with multiple AI agents—from writing code to shipping it. The best overall coding and terminal agent.
https://www.warp.dev/?utm_source=techmeme&utm_medium=display&utm_campaign=ade_july_no1_agent
Tomi Engdahl says:
Finland’s cool climate, carbon-neutral energy, and top-tier connectivity make it ideal location for data centres powering AI and cloud services. Granite bedrock offers stability, and surplus heat can be reused in district heating. With the right investments, Finland could build a thriving new digital ecosystem — creating jobs, boosting renewables, and supporting innovation such as 6G. Click to read more!
The heart of AI computing power could beat in Finland – will this be the new Nokia?
The accelerating development of AI and cloud services is driving global demand for new data centres. In many ways, Finland is the ideal hub for this new growth sector: we have a cool climate that reduces cooling requirements and energy consumption. In addition, a stable society, good infrastructure and availability of renewable energy make Finland an attractive location for international technology companies.
https://www.dna.fi/dnabusiness/blogi/-/blogs/the-heart-of-ai-computing-power-could-beat-in-finland-will-this-be-the-new-nokia?utm_source=facebook&utm_medium=social&utm_content=LAA-artikkeli-the-heart-of-ai-computing-power-could-beat-in-finland-will-this-be-the-new-nokia&utm_campaign=P_LAA_25-31-35_artikkelikampanja_enkku_&fbclid=IwQ0xDSwL-pd1leHRuA2FlbQEwAGFkaWQBqyMS6w5cjAEefFkJGwlsu_EIn6tFPfkXvnai3hD0d-uN9YIhAdZogLJJK4Bp0TvgN1Tis_U_aem_QJ2ZCUO2HNXZEXv7b5RD0Q&utm_id=120228378332700556&utm_term=120228378332710556
Tomi Engdahl says:
Sophia Fox-Sowell / StateScoop:
Illinois Governor JB Pritzker signed a bill into law on August 1 banning AI use for providing mental health services, while allowing its use in admin roles
Illinois bans AI from providing mental health services
Illinois Gov. JB Pritzker approved a new law banning the use of artificial intelligence systems in providing psychotherapy services.
https://statescoop.com/illinois-bans-ai-mental-health-services/
Illinois Gov. JB Pritzker last Friday signed a a bill into law banning the use of artificial intelligence from providing mental health services, aiming to protect residents from potentially harmful advice.
Known as the Wellness and Oversight for Psychological Resources Act, the law prohibits AI systems from delivering therapeutic treatment or making clinical decisions. The legislation still allows AI tools to be used in administrative roles, such as scheduling or note-taking, but draws a clear boundary around direct patient care.
Companies or individuals found to be in violation could face $10,000 in fines, enforced by the Illinois Department of Financial and Professional Regulation.
“The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” Mario Treto, Jr., Illinois’ financial regulation secretary, said in a press release. “This legislation stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else.”
Advertisement
The new legislation is a response to growing concerns over the use of AI in sensitive areas like health care. The Washington Post reported last May that an AI-powered therapist chatbot recommended “a small hit of meth to get through this week” to a fictional former addict.
Last year, the Illinois House Health Care Licenses and Insurance Committees held a joint hearing on AI in health insurance in which legislators and experts warned that AI systems lack the empathy, accountability or clinical oversight necessary for safe mental health treatment.
“When we talk about AI, it is already outpacing the human mind, and it’s only a matter of time before they outpace our structures and our systems — particularly when it comes to regulations and healthcare,” state Rep. Bob Morgan, who chairs the House’s health care licenses committee, said during the hearing.
Tomi Engdahl says:
Josh Axelrod / Nieman Lab:
How some news organizations are using AI models powered by retrieval-augmented generation to surface the most newsworthy elements from very large datasets
Josh Axelrod / Nieman Lab:
How some news organizations are using AI models powered by retrieval-augmented generation to surface the most newsworthy elements from very large datasets — “If we sit on the sideline and observe, I think the risk is too high that we are gonna be left behind.”
The good, the bad, and the completely made-up: Newsrooms on wrestling accurate answers out of AI
“If we sit on the sideline and observe, I think the risk is too high that we are gonna be left behind.”
https://www.niemanlab.org/2025/08/the-good-the-bad-and-the-completely-made-up-newsrooms-on-wrestling-accurate-answers-out-of-ai/
Erlend Ofte Arntsen has filed more Freedom of Information Act requests than he can count — triple digits by one tally, quadruple when you include follow-ups and related requests.
Now, a new newsroom assistant at one of Norway’s largest newspapers is transforming Arntsen’s workflow, saving time that could be better spent on shoe-leather reporting than arguing in legalese with government bureaucrats.
That assistant is called FOIA Bot and is powered by generative AI. When the government sends back a request or rejection, the bot comes up with a competent rejoinder, given its access to the whole of Norway’s FOIA law and 75 templates of similar responses from the Norwegian Press Association.
“It’s something I would have had to use a half a day [for] when I’m back in my investigative unit, where I have time to think those long thoughts,” Arntsen, who works at Verdens Gang, told Nieman Lab. “I was able to get this done on a night shift working breaking news, because I used that bot.”
FOIA Bot is part of an emerging tech stack of newsroom tools that leverage a specialized AI architecture called retrieval-augmented generation, or RAG. (Apparently, no one ever asked a chatbot to use its creative writing powers to come up with a catchier name.) It’s the same method that powers search bots like The Financial Times’ Ask FT, which draws on FT content to answer reader queries and has been used by 35,000 readers since its formal launch this April.
RAG’s jargon-filled moniker belies a fairly simple approach — one that boosts reliability, key for journalists who find themselves in the reliability business. The model doesn’t create an answer from the vast expanses of Amazon reviews, medieval literature, and Reddit comments that general-purpose chatbots are typically trained on. Instead, a RAG-powered model retrieves information from a journalist-defined database, then uses that to augment what it generates with attributions to boot. The database can be a newsroom’s archives of fact-checked articles, a book of case law, or even a single PDF.
“If I was just to use, for example, ChatGPT, I would struggle because it hallucinates sources,” said Lars Adrian Giske, head of AI at iTromsø, an AI-forward newspaper in Norway. “Sure, it can give you an actual source like, ‘Check page 14, paragraph three on this page.’ But it can also hallucinate that, and then it’s really hard for me to go from the chat, look up the actual documentation, find the paragraph that it’s edited and figure out how it used that information. So you need systems that can do that in a way more secure way.”
Even with a more trustworthy AI workflow, hesitations abound. For many, AI and journalism remain an unholy marriage. Can a machine really atomize the entire journalistic process down into database-friendly chunks and vectors? What gets lost in the process of summarization? Are publishers mounting an unwinnable battle for attention against a new crop of Big Tech giants? And what if the genie is already out of the bottle?
“News media is about to change,” Giske said. “The article as we know it may not be the preferred format of readers or listeners or viewers in the years to come. People are getting used to generative ecosystems, and that won’t change.”
How RAG is showing up in newsrooms
A good RAG-based system is only as good as its database.
At iTromsø, Giske’s team used the method for an investigation into understaffing at a local hospital. FOIA requests returned thousands of pages of dense documents, so they broke them down into chunks before converting them into vectors or numerical representations. If a RAG-powered system is like an open-book exam, these chunks are the highlighted excerpts in the textbook provided to the model to write its essay.
The journalists asked the RAG system to surface the most newsworthy elements in the documents. What it returned — after plenty of tweaking to teach the system what they meant by newsworthiness — helped earn the team a Data-SKUP award, one of Norway’s most prestigious journalism honors.
“We used the RAG to do what we call smelling the data, and then we narrow it down as we go,” Giske said. “This led to uncovering something that was hidden within all this documentation: A doctor from Denmark, who was working remotely, spent four seconds reviewing X-ray images.”
In nearby Finland, data scientist Vertti Luostarinen used the same process to build an Olympics Bot for public broadcaster Yle. If you ask ChatGPT 4o to name the top ten greatest Finnish wrestlers, it may very well try to convince you that pentathlete Eero Lehtonen belongs on the list. (Now, that’s nine more Finnish wrestlers than most people can name, but a glaring factual inaccuracy like that does not a helpful chatbot make.)
During the 2024 Olympics, Yle’s crackerjack team of sports commentators, who were churning out something like 200 articles a day, constantly needed access to stats such as these. Luostarinen fed the Olympics Bot sports history, bios of athletes on Finland’s national team, the rules of every sport, tabular data about schedules, and articles from Yle’s live coverage news feed.
“I was expecting a lot more hallucinations — that’s the main thing that people are usually scared of with these models,” Luostarinen said. “There were a lot less hallucinations than I thought.”
Instead, the bot’s primary drawback was poor Finnish skills (perkele!) Sometimes, the bot spelled athletes’ names wrong, because it was taking a differently spelled variation from the user’s question. Sometimes it retrieved the correct information but refused to answer because of its language limitations.
Ultimately, Luostarinen came to a similar conclusion as Giske: RAGs have great potential when it comes to filtering and surfacing information from immense piles of data. It’s the act of summarizing that gives him pause.
They tend to “summarize information even when you specifically ask them not to,” he said. “It is nice when you need that kind of overview and summary, but in journalistic work you’re often interested in the details. I’m a bit worried what will happen to the way we as a society search for information if it’s always going through this kind of system that makes it more generic and loses specific details.”
JournalistGPT
Summarization is, in fact, the very application newsrooms are embracing the fastest. In addition to The Financial Times, The Washington Post unveiled “Ask the Post AI” last November, and The San Francisco Chronicle rolled out “the Kamala Harris News Assistant,” which pulled from nearly three decades of California political coverage to answer questions about the then-presidential candidate.
In a 2025 Reuters Institute survey, more than half of its 326 respondents said “they would be looking into AI chatbots and search interfaces” in the year ahead.
Deutsche Presse-Agentur (DPA), Germany’s largest wire agency, has taken all of its content from 2018 onward as well as its current newsfeed and built a real-time database that users and staffers alike can query. As the bot generates its summary, each answer comes with a little green number that links to the corresponding DPA article.
Inside the DPA newsroom, journalists are also using the new tool as a timesaver, with permission from higher-ups to include AI-generated copy in their stories, provided they first verify the information. DPA is even contemplating integrating the RAG-based tool directly into their content management system.
Because it is programmed to cite sources and include quotes, the system “has proven for us to be more robust against hallucinations,” says AI team lead Yannick Franke. And every piece of published copy still goes through the fact-checking process, so there’s an extra guardrail against inaccuracy.
“Every error is a catastrophe for news and for an agency in particular,” Astrid Maier, DPA’s deputy editor-in-chief, said. “But let’s be honest, people make mistakes too. In the end, you as a writer and then the editors are responsible for what’s in there. The human’s responsibility cannot change or be delegated to the AI.”
The greater risk, Maier thinks, is that DPA will lose its standing as a verification authority in Germany as media habits and the information ecosystem shift.
“We have to be capable of using these tools for our benefit,” she added. “If we sit on the sideline and observe, I think the risk is too high that we are gonna be left behind. It’s better for us to be able to master this technology for our own and our customer’s good and to be able to fulfill our mission and vision in the next ten or hopefully 75 years.”
“There’s multiple ways of using RAGs,” said Robin Berjon, a technologist and The New York Times’ former vice president of data governance. “If the LLM fetches a RAG that has reliable information, but then munches it and summarizes it back, then I wouldn’t trust that unless it quoted directly from the relevant documents. It is likely to introduce errors in the summarization.”
Room for improvement
Much of the newsroom discussion around RAGs centers on helpfulness. New research from Bloomberg spotlights the potential harmfulness of these systems.
Bloomberg’s Responsible AI team took a database using only Wikipedia articles — what they call a “pure vanilla RAG setup” — and asked 5,000 questions on topics like malware, disinformation, fraud, and illegal activity. The RAG-based models answered questions that non-RAG models almost always refused.
The key to ameliorating these risks is the same as in boosting reliability: evaluate systems continuously and build in appropriate guardrails.
“If you have a good understanding of how well it actually works, how often it hallucinates, how often it produces something that’s made up, how often it responds to unsafe queries — then you can make a much more informed decision whether this is something you want to roll out, or whether you need to add more components to your system to decrease those risks,” Sebastian Gehrmann, head of responsible AI at Bloomberg, said.
“News organizations are already spectacularly bad at conveying the level of confidence and the amount of work that went into establishing a piece. And then you slap an AI chatbot on top of that? It’s not gonna be great,” Berjon said. “It will require serious user experience work to make it clear to people what they can expect from this.”
The real challenge, Berjon said, is designing a news experience that doesn’t pass AI tools off as all-knowing or overly powerful. His advice: Skip the legal disclaimers and don’t over-rely on “this text was generated by a large language model” fine print.
“You have to make it part of the experience that the reliability is what it is,” Berjon said.