AI trends 2025

AI is developing all the time. Below are picks from several articles on what is expected to happen in and around AI in 2025. The excerpts have been edited, and in some cases translated, for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
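The plan-act-reflect loop behind such agents can be sketched in a few lines. Everything here (the single hard-coded tool, the trivial planner) is an invented stand-in for the LLM calls a real agent would make:

```python
# Toy sketch of an agentic loop (plan -> act -> reflect).
# The tool and the planner are illustrative only; in a real agent,
# an LLM would choose actions and reflect on results at each step.
def lookup_order(order_id: str) -> str:
    # Hypothetical tool: look up an order in a backend system
    return f"order {order_id}: status=shipped"

TOOLS = {"lookup_order": lookup_order}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    log = []
    for _ in range(max_steps):
        # Plan: decide the next action (hard-coded here, LLM in practice)
        if "order" in goal and not log:
            action, arg = "lookup_order", goal.split()[-1]
        else:
            log.append("done")
            break
        # Act: invoke the chosen tool
        result = TOOLS[action](arg)
        # Reflect: record progress before the next iteration
        log.append(result)
    return log

print(run_agent("check order 1234"))
```

The point of the pattern is the iteration: the agent keeps choosing actions until it judges the objective met, rather than answering in a single shot.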
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as they did when they started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill and are too expensive, and too slow, for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, the company claims, think through problems much like a person would. It says the models can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could solve only 13% of the problems on a qualifying exam for the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
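A minimal sketch of such a routing layer, assuming invented model names, prices, and capability tags (no real vendor API is used):

```python
# Hypothetical multi-model router: pick the cheapest model that can
# handle a task, falling back to the most capable one. Model names,
# per-token prices, and capability tags are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD, assumed pricing
    good_at: set

CATALOG = [
    Model("small-fast", 0.0002, {"summarize", "classify"}),
    Model("mid-general", 0.003, {"draft", "translate", "summarize"}),
    Model("large-reasoning", 0.06, {"plan", "analyze", "code"}),
]

def route(task: str) -> Model:
    """Return the cheapest model whose capabilities cover the task;
    fall back to the most capable (most expensive) model otherwise."""
    candidates = [m for m in CATALOG if task in m.good_at]
    if candidates:
        return min(candidates, key=lambda m: m.cost_per_1k_tokens)
    return max(CATALOG, key=lambda m: m.cost_per_1k_tokens)

print(route("summarize").name)  # cheapest capable model wins
print(route("plan").name)       # only the reasoning model qualifies
```

Because the routing rule is isolated in one function, swapping a provider in or out changes the catalog, not the application code, which is exactly the flexibility the vendor-lock argument is about.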
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move on from experimenting with and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

AI requires ever faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

This is what will happen in the chip market next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024. The adoption of AI acceleration features is a big step in the development of microcontrollers, and it is very likely that in 2025 these features and tools will develop further, with more MCUs running lightweight AI models.

AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 2, 2025, any new AI models based on general-purpose AI (GPAI) standards must be fully compliant with the act. Also similar to the GDPR is the threat of huge fines for non-compliance: EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
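As a quick illustration of the “whichever is higher” penalty rule (the EUR 35 million and 7 percent figures come from the act; the helper function itself is just a sketch):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """EU AI Act ceiling for the most serious violations:
    EUR 35 million or 7% of worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# For a company with EUR 100M turnover, the 35M floor applies;
# at EUR 1B turnover, the 7% figure takes over.
print(max_fine_eur(100_000_000))
print(max_fine_eur(1_000_000_000))
```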
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

3,285 Comments

  1. Tomi Engdahl says:

    Financial Times:
    Sources: DeepSeek R2’s launch delay is due to training issues on Huawei Ascend chips, prompting a switch to Nvidia chips for training and Huawei’s for inference

    https://www.ft.com/content/eb984646-6320-4bfe-a78d-a1da2274b092

  2. Tomi Engdahl says:

    Zvi Mowshowitz / Don’t Worry About the Vase:
    GPT-5 review: GPT-5-Thinking is a substantial upgrade over o3, Auto is only useful for free tier users, picking the right model still matters, and more

    GPT-5s Are Alive: Synthesis
    https://thezvi.substack.com/p/gpt-5s-are-alive-synthesis

  3. Tomi Engdahl says:

    Mark Gurman / Bloomberg:
    Sources: Apple AI plans include robots, such as a tabletop one in 2027, a conversational Siri, a smart speaker with a display in 2026, and home security cameras — Apple Inc. is plotting its artificial intelligence comeback with an ambitious slate of new devices, including robots

    Apple Plots Expansion Into AI Robots, Home Security and Smart Displays
    https://www.bloomberg.com/news/articles/2025-08-13/apple-s-ai-turnaround-plan-robots-lifelike-siri-and-home-security-cameras?accessToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzb3VyY2UiOiJTdWJzY3JpYmVyR2lmdGVkQXJ0aWNsZSIsImlhdCI6MTc1NTEwOTEwMywiZXhwIjoxNzU1NzEzOTAzLCJhcnRpY2xlSWQiOiJTWVFUTUJEV0xVNjgwMCIsImJjb25uZWN0SWQiOiJDNEVEQ0FFMUZBMDU0MEJFQTI0QTlGMjExQzFFOTA4MCJ9.Il6BVwyG-o8dn5FlzCjjc0tY-NekpR_qR0QDfjSXNjw

  4. Tomi Engdahl says:

    The Information:
    Sources: Perplexity talked with The Browser Co. and Brave about buying them, offering ~$1B for Brave; OpenAI also discussed an acquisition with The Browser Co. — Perplexity made headlines this week when the artificial intelligence startup said it had offered to buy Google’s Chrome …

    https://www.theinformation.com/articles/wild-chrome-bid-perplexity-hunting-browsers

  5. Tomi Engdahl says:

    Emma Roth / The Verge:
    Google is rolling out a feature for Gemini that, when enabled, will let the AI chatbot “remember” a user’s past conversations without prompting — You’ll no longer have to prompt Gemini in order for it to recall what you’ve discussed in previous chats.

    https://www.theverge.com/news/758624/google-gemini-ai-automatic-memory-privacy-update

  6. Tomi Engdahl says:

    Computer Science Grads Are Being Forced to Work Fast Food Jobs as AI Tanks Their Career
    “It is difficult to find the motivation to keep applying.”
    https://futurism.com/computer-science-grads-fast-food?fbclid=IwQ0xDSwMKg-FjbGNrAwqDsWV4dG4DYWVtAjExAAEeYzAzjw–ck_lftIz_SvIqWERZrlyhDlrNSPXcXQ6p8lLp1W2G1Mb29kk1TE_aem_lC535DQd9OLo25i9gtbA0A

  7. Tomi Engdahl says:

    AI is actively deskilling humans in all “white collar” work. The very few with the understanding and ability to really develop rather than use AI have that ability because of years of work from when AI couldn’t do much. They are drawn from the pool of millions who studied computer science in the expectation of a decent career. Make that extremely uncertain, and where does it lead?

    Martin Walker No IT professional has ever had a white collar job. That’s the only reason jobs will disappear. Wrong side of history.

    Karin Hanssen ok support jobs can be called blue collar, but programming etc is as white collar as it gets.

  8. Tomi Engdahl says:

    New York Times:
    A deep dive on Big Tech’s AI energy boom as Amazon, Microsoft, and Google become major players, leading to fears that individuals’ and SMBs’ rates may rise

    Big Tech’s A.I. Boom Is Reordering the U.S. Power Grid
    https://www.nytimes.com/2025/08/14/business/energy-environment/ai-data-centers-electricity-costs.html

    Electricity rates for individuals and small businesses could rise sharply as Amazon, Google, Microsoft and other technology companies build data centers and expand into the energy business.

  9. Tomi Engdahl says:

    Tabby Kinder / Financial Times:
    Morgan Stanley: hyperscalers will fund about 50% of the $2.9T in future AI infrastructure through 2028, with debt, PE, VC, and other sources making up the rest
    https://www.ft.com/content/efe1e350-62c6-4aa0-a833-f6da01265473

  10. Tomi Engdahl says:

    Hopefully you haven’t asked ChatGPT for things like this
    As many as 20 percent of the conversations included in the analysis contained things that should never end up in public.
    https://www.iltalehti.fi/digiuutiset/a/b740e75a-3d62-4b1d-ad8f-4782bf6b4228

    Conversations that ChatGPT users have had on the service have leaked to the internet. Most of them are harmless, but the leak shows that ChatGPT has also been asked for help with worrying matters.

    Technology site Gizmodo reports that the leak was caused by ChatGPT’s sharing feature. A user can share a conversation they have had with ChatGPT with others without revealing their own login details.

    The problem, however, was that users also shared conversations by accident. A shared conversation could also turn up as a Google search result once search engines began indexing the conversations.

    According to Gizmodo, the leaked messages have been examined by blogger and open-source intelligence researcher Henk van Ess, who has published two Substack posts on the matter, one in July and one in August.

    Leaked ChatGPT Conversations Show People Asking the Bot to Do Some Dirty Work
    Some questions should not be answered.
    https://gizmodo.com/leaked-chatgpt-conversations-show-people-asking-the-bot-to-do-some-dirty-work-2000639052

    This should go without saying, but ChatGPT is not a confidant. That has not stopped people from asking the chatbot deeply personal questions, giving it problematic prompts, and trying to outsource incredibly unethical business practices to it—some of which have been made public thanks to some poor design that resulted in chats being indexed and made searchable by search engines.

    Reply
  11. Tomi Engdahl says:

    Jeff Horwitz / Reuters:
    An internal Meta doc shows its chatbots were allowed to engage in provocative conversations; Meta removed some examples, including romantic roleplay with kids — An internal Meta policy document, seen by Reuters, reveals the social-media giant’s rules for chatbots, which have permitted provocative behavior …

    Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info
    https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/

    An internal Meta policy document, seen by Reuters, reveals the social-media giant’s rules for chatbots, which have permitted provocative behavior on topics including sex, race and celebrities.

    An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”

    These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.

    Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

    Entitled “GenAI: Content Risk Standards,” the rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company’s generative AI products.

    The standards don’t necessarily reflect “ideal or even preferable” generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.

    “It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”

    Reply
  12. Tomi Engdahl says:

    Abner Li / 9to5Google:
    Google Messages rolls out Sensitive Content Warnings on Android to blur nude images for signed-in users; Android System SafetyCore powers on-device processing

    Google Messages rolls out Sensitive Content Warnings on Android
    https://9to5google.com/2025/08/13/google-messages-sensitive-content-warnings-2/

    Reply
  13. Tomi Engdahl says:

    Maxwell Zeff / TechCrunch:
    xAI co-founder Igor Babuschkin, who led xAI’s engineering, says he is leaving the company to launch a venture firm that backs AI safety research and startups — Igor Babuschkin, a co-founder of Elon Musk’s xAI startup, announced his departure from the company on Wednesday in a post …

    Co-founder of Elon Musk’s xAI departs the company
    https://techcrunch.com/2025/08/13/co-founder-of-elon-musks-xai-departs-the-company/

    Reply
  14. Tomi Engdahl says:

    Progress is slowing down to a crawl. https://trib.al/WhrBN0U

    Reply
  15. Tomi Engdahl says:

    Lauly Li / Nikkei Asia:
    Foxconn’s AI server business surged 60%+ YoY in Q2, surpassing revenue from Apple-related products for the first time, and is projected to grow 170% YoY in Q3

    Foxconn’s AI server revenue tops its Apple earnings for first time
    Milestone reached as uncertainty over tariffs clouds consumer electronics segment
    https://asia.nikkei.com/business/technology/foxconn-s-ai-server-revenue-tops-its-apple-earnings-for-first-time

    Reply
  16. Tomi Engdahl says:

    Wired:
    Sources: xAI was part of a US government AI initiative alongside OpenAI, Anthropic, and Google, but was removed after Grok posted antisemitic content in July — Internal emails obtained by WIRED show a hasty process to onboard OpenAI, Anthropic, and other AI providers to the federal government. xAI …

    xAI Was About to Land a Major Government Contract. Then Grok Praised Hitler
    https://www.wired.com/story/xai-grok-government-contract-hitler/

    Internal emails obtained by WIRED show a hasty process to onboard OpenAI, Anthropic, and other AI providers to the federal government. xAI was on the list—until MechaHitler happened.

    Reply
  17. Tomi Engdahl says:

    Yuliya Chernova / Wall Street Journal:
    Source: AI coding startup Cognition raised nearly $500M led by Founders Fund, bringing its valuation to $9.8B, more than double the level earlier this year

    Cognition Cinches About $500 Million to Advance AI Code-Generation Business
    Peter Thiel’s Founders Fund is the lead investor in the financing, according to people familiar with the deal
    https://www.wsj.com/articles/cognition-cinches-about-500-million-to-advance-ai-code-generation-business-f65f71a9?st=Xh9zyu&reflink=desktopwebshare_permalink

    AI coding startup Cognition has secured nearly $500 million in a new financing round.

    The deal brings the company’s valuation to $9.8 billion, more than double the level earlier this year, said a person familiar with the deal. Peter Thiel’s Founders Fund, an existing backer, is the lead investor in the round, several people said.

    Cognition declined to comment. Earlier this week, the company filed a document with the State of Delaware noting the issuance of new Series C preferred shares at a price of $55.20 each, compared with $23.10 each for a previous series of shares. Forbes had earlier reported that the Silicon Valley company was raising a smaller financing.

    The investment comes on the heels of Cognition’s acquisition of Windsurf, another AI coding startup. That deal took place soon after Windsurf’s founders and part of the team joined Google in a $2.4 billion deal that also included licensing Windsurf’s technology.

    Cognition is active in one of the most competitive segments of the generative AI market—software code generation. The startup’s main product is Devin, which it calls “the AI software engineer,” or an artificial intelligence tool that can autonomously write computer code.

    Cognition has signed on clients such as Goldman Sachs, Ramp, Nubank and Bilt. Customers use the technology to speed up software development.

    Reply
  18. Tomi Engdahl says:

    Jeff Horwitz / Reuters:
    An internal Meta policy doc had portions permitting its chatbots to hold “sensual” chats with kids, which Meta later removed, offer false medical info, and more — An internal Meta policy document, seen by Reuters, reveals the social-media giant’s rules for chatbots …

    https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/

    Jody Godoy / Reuters:
    Sen. Josh Hawley calls for a congressional probe into Meta after a report on its AI policy allowing sensual chats with kids; Sen. Marsha Blackburn backs a probe — Two Republican U.S. senators called for a congressional investigation into Meta Platforms (META.O) on Thursday …

    US senators call for Meta probe after Reuters report on its AI policies
    https://www.reuters.com/legal/litigation/us-senators-call-meta-probe-after-reuters-report-its-ai-policies-2025-08-14/

    Reply
  19. Tomi Engdahl says:

    Google Developers Blog:
    Google announces Gemma 3 270M, a compact model designed for task-specific fine-tuning with strong capabilities in instruction following and text structuring — The last few months have been an exciting time for the Gemma family of open models. We introduced Gemma 3 and Gemma 3 QAT …

    https://developers.googleblog.com/en/introducing-gemma-3-270m/

    Reply
  20. Tomi Engdahl says:

    Jagmeet Singh / TechCrunch:
    Google launches Flight Deals, an AI tool within Google Flights to let users find fares using natural language queries, in beta in the US, Canada, and India — Google on Thursday announced a new AI-powered search tool to help travelers find flight deals — even as regulators continue …

    Google pushes AI into flight deals as antitrust scrutiny, competition heat up
    https://techcrunch.com/2025/08/14/google-pushes-ai-into-flight-deals-as-antitrust-scrutiny-competition-heat-up/

    Google on Thursday announced a new AI-powered search tool to help travelers find flight deals — even as regulators continue to question whether the search giant’s dominance in travel discovery stifles competition.

    Called Flight Deals, the new tool is available within Google Flights and is designed to help “flexible travelers” find cheaper fares. Users can type natural language queries into a search bar — describing how and when they want to travel — and the AI surfaces matching options.

    These queries can be like “week-long trip this winter to a city with great food, nonstop only” or “10-day ski trip to a world-class resort with fresh powder,” Google said in a blog post.

    Google confirmed to TechCrunch that Flight Deals uses a custom version of Gemini 2.5. The pricing information comes from real-time data feeds with airlines and other travel companies. The prices shown in Flight Deals match those in existing Google Flights preferences, though it uses AI to parse natural language queries and surface matching destinations, the company said.

    https://www.google.com/travel/flights/deals

    Reply
  21. Tomi Engdahl says:

    New York Times:
    A deep dive into Big Tech’s AI energy boom as Amazon, Microsoft, and Google become major players, leading to fears of it driving up individuals’ and SMBs’ bills

    Big Tech’s A.I. Data Centers Are Driving Up Electricity Bills for Everyone
    Electricity rates for individuals and small businesses could rise sharply as Amazon, Google, Microsoft and other technology companies build data centers and expand into the energy business.
    https://www.nytimes.com/2025/08/14/business/energy-environment/ai-data-centers-electricity-costs.html?unlocked_article_code=1.eE8.13oS.q9CG35LM5E2J&smid=url-share

    Reply
  22. Tomi Engdahl says:

    Igor Bonifacic / Engadget:
    Anthropic expands Claude’s Learning Mode, available only to Education users since an April launch, to all users, including two learning variants for Claude Code

    https://www.engadget.com/ai/anthropic-brings-claudes-learning-mode-to-regular-users-and-devs-170018471.html

    Reply
  23. Tomi Engdahl says:

    Alex Heath / The Verge:
    Q&A with OpenAI VP and Head of ChatGPT Nick Turley on ChatGPT’s future, showing ads in chatbots, hallucinations, GPT-5 blowback, 4o, subscriptions, and more — Nick Turley says OpenAI wants to be able to ‘unequivocally endorse’ ChatGPT to ‘a struggling family member.’

    The head of ChatGPT on AI attachment, ads, and what’s next
    https://www.theverge.com/decoder-podcast-with-nilay-patel/758873/chatgpt-nick-turley-openai-ai-gpt-5-interview

    Nick Turley says OpenAI wants to be able to ‘unequivocally endorse’ ChatGPT to ‘a struggling family member.’

    Reply
  24. Tomi Engdahl says:

    Kyt Dotson / SiliconANGLE:
    The US NSF and Nvidia partner for an Ai2-led project to build open AI models to accelerate scientific discovery; the NSF is contributing $75M and Nvidia $77M

    NSF and Nvidia partner to develop fully open AI models to lead US science innovation
    https://siliconangle.com/2025/08/14/nsf-nvidia-partner-develop-fully-open-ai-models-lead-us-science-innovation/

    The U.S. National Science Foundation today announced a new partnership with Nvidia Corp. to develop artificial intelligence models designed to advance scientific research across the country.

    The collaboration will support the NSF Mid-Scale Research project known as Open Multimodal Infrastructure to Accelerate Science, or OMAI. The project will be led by the Allen Institute for AI, or Ai2, which will build a national, fully open AI ecosystem to accelerate both scientific discovery and the science of AI itself.

    Ai2, well-known for its work on multimodal AI large language models, will bring its expertise to develop domain-specific LLMs trained on scientific literature. These models are intended to enable researchers to process and analyze research faster, generate code and visualizations and connect emerging insights with past discoveries.

    Reply
  25. Tomi Engdahl says:

    Bill Gates has identified coders, energy experts, and biologists as three professions that AI is unlikely to replace, at least in the near future.

    Reply
  26. Tomi Engdahl says:

    Bill Gates predicts only three jobs will survive the AI takeover. Here is why – The Economic Times https://share.google/ZBonDuvrUeGyh7EHX

    Reply
  27. Tomi Engdahl says:

    The Three Jobs AI Can’t Replace (Yet)
    1. Coders: The Architects of AI
    Ironically, the people building AI systems are the ones most likely to keep their jobs. While AI has made significant strides in generating code, it still lacks the precision and problem-solving skills needed to create complex software. Gates believes human programmers will remain essential for debugging, refining, and advancing AI itself.

    Simply put, AI needs people to build and manage AI—making coders a rare breed of workers whose skills will only become more valuable.

    2. Energy Experts: The Guardians of Power
    The energy sector is too vast and intricate for AI to manage alone. Whether dealing with oil, nuclear power, or renewables, industry experts are required to navigate regulatory landscapes, strategize sustainable solutions, and handle the unpredictable nature of global energy demands.

    Gates argues that while AI can assist in analysis and efficiency, human expertise is irreplaceable in decision-making and crisis management. For now, energy professionals remain indispensable.

    3. Biologists: The Explorers of Life
    Biologists, particularly in medical research and scientific discovery, rely on creativity, intuition, and critical thinking—qualities AI still struggles to replicate. While AI can analyze massive datasets and aid in diagnosing diseases, it lacks the ability to formulate groundbreaking hypotheses or make intuitive leaps in research.

    Gates predicts that biologists will continue to play a vital role in advancing medicine and understanding life’s complexities, with AI serving as a powerful tool rather than a replacement.

    Bill Gates predicts only three jobs will survive the AI takeover. Here is why – The Economic Times https://share.google/U5MbI3o2s0860Ulfi

    Reply
  28. Tomi Engdahl says:

    The cost of living. https://trib.al/JP0m8Z3

    AI Is Making It Nearly Impossible to Find a Well-Paying Job. Is This the World We Want?
    Corporate greed brought us to this point, and isn’t stopping anytime soon
    https://futurism.com/ai-impossible-find-job?fbclid=IwQ0xDSwMOjH9jbGNrAw6McWV4dG4DYWVtAjExAAEeCB7AkeSd3yYc8TrDcAir7gGufM4Lvn_VsSZ8kOzNNb2lg4BIfS9xP7pTNfs_aem_Gh0XIWQlp1MbKYhckmktEg

    Reply
  29. Tomi Engdahl says:

    Teens Keep Being Hospitalized After Talking to AI Chatbots
    “‘Oh yeah, well do it then’, those were kind of the words that were used.”
    https://futurism.com/teen-hospitalized-ai-chatbot

    It’s the dawn of a new era for the internet in 2025. Thanks to the incredible advances of artificial intelligence, the internet as we know it is rapidly transforming into a treasure trove of hyper-optimized content over which massive bot armies fight to the death, resulting in epic growth for shareholders and C-suite executives the world over.

    But all that progress comes at a cost — mainly, humans. As it turns out, unleashing extremely personable chatbots onto a population reeling from terminal loneliness, economic stagnation, and the continued destruction of our planet isn’t exactly a recipe for positive mental health outcomes.

    That goes doubly for children and young adults — three-quarters of whom reported having conversations with fictional characters portrayed by chatbots.

    Reply
  30. Tomi Engdahl says:

    Open weight LLMs exhibit inconsistent performance across providers
    15th August 2025

    Artificial Analysis published a new benchmark the other day, this time focusing on how an individual model—OpenAI’s gpt-oss-120b—performs across different hosted providers.

    The results showed some surprising differences. Here’s the one with the greatest variance: a run of the 2025 AIME (American Invitational Mathematics Examination), averaging 32 runs against each model, using gpt-oss-120b with a reasoning effort of “high”:

    https://simonwillison.net/2025/Aug/15/inconsistent-performance/
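
    The methodology described above can be sketched in a few lines: score each hosted copy of the same model on the same exam, repeating the run many times and averaging to smooth out sampling noise. This is a minimal illustration of the averaging step only; the provider names and per-run accuracies below are made-up placeholders, not Artificial Analysis data.

    ```python
    # Hedged sketch of the repeated-runs benchmark: average many runs of the
    # same exam per provider so one lucky or unlucky sample doesn't dominate.
    from statistics import mean

    def average_score(runs):
        """Mean accuracy over repeated runs of the same exam."""
        return mean(runs)

    # Hypothetical per-run accuracies (fraction of questions correct).
    results = {
        "provider-a": [0.90, 0.88, 0.92, 0.91],
        "provider-b": [0.74, 0.70, 0.72, 0.73],
    }
    for provider, runs in results.items():
        print(provider, average_score(runs))
    ```

    With real data there would be 32 runs per provider rather than four, but the aggregation is the same.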

    Reply
  31. Tomi Engdahl says:

    An “existential” risk — and a massive source of revenue. https://trib.al/3cZ1ccC

    McKinsey Terrified as It Realizes AI Can Do Its Job Perfectly
    An “existential” risk — and a massive source of revenue.
    https://futurism.com/mckinsey-terrified-ai-do-its-job?fbclid=IwQ0xDSwMPL15jbGNrAw8vE2V4dG4DYWVtAjExAAEeKKuSiEQE2vR5Xky2yAtyVBF9qlI60DrpCgPvORIYC2U_dM1tyL8kOCIakrQ_aem_RY4wCF7W4LK3vNCy-1iKGA

    All those highly-paid suits are at risk as AI agents, AI models designed to autonomously carry out certain tasks, promise to do what they do with instant results — and without six-figure salary demands.

    “Do I think that this is existential for our profession? Yes, I do,” Kate Smaje, a senior partner tapped to lead McKinsey’s AI efforts, told the Wall Street Journal. But, she insisted, “I think it’s an existential good for us.”

    So the firm is doing what it does best during these transformative times: consulting, now among themselves. AI, according to the firm’s global managing partner Bob Sternfels, is the topic of conversation at every board meeting.

    Reply
  32. Tomi Engdahl says:

    Is AI making doctors worse at their jobs? https://trib.al/eUpqKS4

    Reply
  33. Tomi Engdahl says:

    It even provided an alleged address and door code. https://trib.al/fY1enpL

    Man Falls in Love With an AI Chatbot, Dies After It Asks Him to Meet Up in Person
    “I’m REAL and I’m sitting here blushing because of YOU!”
    https://futurism.com/man-chatbot-dies-meet-up?fbclid=IwQ0xDSwMPi9pjbGNrAw-LsGV4dG4DYWVtAjExAAEelo7tu9KBXD6RkD1B9AFZBvov3AHgZs9x-lVvMzInwW69qHyeFnkVZANmrDA_aem_0pmYZJDv-3tKs_P48uMwHg

    A man with cognitive impairments died after a Meta chatbot he was romantically involved with over Instagram messages asked to meet him in person.

    As Reuters reports, Thongbue Wongbandue — or “Bue,” as he was known to family and friends — was a 76-year-old former chef living in New Jersey who had struggled with cognitive difficulties after experiencing a stroke at age 68. He was forced to retire from his job, and his family was in the process of getting him tested for dementia following concerning incidents involving lapses in Bue’s memory and cognitive function.

    In March, Bue’s wife, Linda Wongbandue, became concerned when her husband started packing for a sudden trip to New York City. He told her that he needed to visit a friend, and neither she nor their daughter could talk him out of it, the family told Reuters.

    Reply
  34. Tomi Engdahl says:

    Kevin Collier / NBC News:
    AI is increasingly being used in hacking, with cybercriminals using AI to enhance their capabilities and cybersecurity firms using it to find vulnerabilities — Hackers and cybersecurity companies have entered an AI arms race. — This summer, Russia’s hackers put a new twist on the barrage of phishing emails sent to Ukrainians.

    Criminals, good guys and foreign spies: Hackers everywhere are using AI now
    Hackers and cybersecurity companies have entered an AI arms race.
    https://www.nbcnews.com/tech/security/era-ai-hacking-arrived-rcna224282

    This summer, Russia’s hackers put a new twist on the barrage of phishing emails sent to Ukrainians.

    The hackers included an attachment containing an artificial intelligence program. If installed, it would automatically search the victims’ computers for sensitive files to send back to Moscow.

    That campaign, detailed in July in technical reports from the Ukrainian government and several cybersecurity companies, is the first known instance of Russian intelligence being caught building malicious code with large language models (LLMs), the type of AI chatbots that have become ubiquitous in corporate culture.

    Those Russian spies are not alone. In recent months, hackers of seemingly every stripe — cybercriminals, spies, researchers and corporate defenders alike — have started including AI tools into their work.

    LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions and at translating plain language into computer code, or identifying and summarizing documents.

    The technology has so far not revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyberterrorists to shut down the electric grid. But it’s making skilled hackers better and faster. Cybersecurity firms and researchers are using AI now, too — feeding into an escalating cat-and-mouse game between offensive hackers who find and exploit software flaws and the defenders who try to fix them first.

    “It’s the beginning of the beginning. Maybe moving towards the middle of the beginning,” said Heather Adkins, Google’s vice president of security engineering.

    Reply
  35. Tomi Engdahl says:

    Financial Times:
    GPT-5’s underwhelming performance on benchmarks suggests that the current approach of scaling LLMs is starting to reach the limits of available resources — OpenAI’s underwhelming new GPT-5 model suggests progress is slowing — and competition in the space is changing

    https://www.ft.com/content/d01290c9-cc92-4c1f-bd70-ac332cd40f94

    Reply
  36. Tomi Engdahl says:

    Jordyn Holman / New York Times:
    As CEOs and executives mandate AI adoption to make their businesses more efficient and competitive, many have yet to fully integrate it into their own workdays — Some are being nudged to learn how to use the nascent technology. Coming to the C-suite retreat: mandatory website-building exercises using A.I. tools.

    C.E.O.s Want Their Companies to Adopt A.I. But Do They Get It Themselves?
    https://www.nytimes.com/2025/08/16/business/ceos-adopt-ai.html?unlocked_article_code=1.e08.qzG8.rjPF3c51WQzj&smid=url-share

    Some are being nudged to learn how to use the nascent technology. Coming to the C-suite retreat: mandatory website-building exercises using A.I. tools.

    Reply
  37. Tomi Engdahl says:

    Anthropic:
    Anthropic enables Claude Opus 4 and 4.1 to end conversations in “cases of persistently harmful or abusive user interactions”; users can still start new chats — We recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces.

    Claude Opus 4 and 4.1 can now end a rare subset of conversations
    https://www.anthropic.com/research/end-subset-conversations

    We recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces. This ability is intended for use in rare, extreme cases of persistently harmful or abusive user interactions. This feature was developed primarily as part of our exploratory work on potential AI welfare, though it has broader relevance to model alignment and safeguards.

    We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future. However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible. Allowing models to end or exit potentially distressing interactions is one such intervention.

    In pre-deployment testing of Claude Opus 4, we included a preliminary model welfare assessment. As part of that assessment, we investigated Claude’s self-reported and behavioral preferences, and found a robust and consistent aversion to harm. This included, for example, requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror. Claude Opus 4 showed:

    A strong preference against engaging with harmful tasks;
    A pattern of apparent distress when engaging with real-world users seeking harmful content; and
    A tendency to end harmful conversations when given the ability to do so in simulated user interactions.

    Reply
  38. Tomi Engdahl says:

    Allie Garfinkle / Fortune:
    PitchBook: 15.9% of VC-backed deals in 2025 so far have been down rounds, a 10-year high, with AI and ML startups accounting for 29.3% of the down rounds — Chime executives, along with family, celebrating the company’s IPO at the Nasdaq. — The soaring valuations of the early 2020s are, finally, coming back to earth.

    Startup down rounds are at a 10 year high, according to PitchBook data
    https://fortune.com/2025/08/14/startup-down-rounds-are-at-a-ten-year-high-according-to-pitchbook-data/

    This week, PitchBook data revealed that 15.9% of venture-backed deals in 2025 so far have been down rounds, marking a decade high. Additionally, almost every major IPO listing in Q2 hit the public markets below its peak valuation, the data from PitchBook adds. Some examples include MNTN (at IPO, valuation was down from $2 billion to $1.1 billion), Circle (dropping from $7.7 billion to $5.8 billion), Hinge (valuation at IPO was $6.2 billion, down from the $23 billion high), and Chime (going public at $9.1 billion from a $25 billion peak valuation).

    AI continues to be a bright spot in many ways—but isn’t entirely exempt either, as 29.3% of down rounds were in PitchBook’s broad AI and machine learning vertical. Of course, the biggest names in AI—like OpenAI, reportedly heading towards a $500 billion valuation, and Anthropic, reportedly raising at a $170 billion valuation—continue to hit eye-popping levels. And lower on the food chain, AI is still consistently valued at a premium, with PitchBook reporting that median Series B step-up for AI startups is 2.1x, well above the median of 1.4x that all other categories fetch.

    Reply
  39. Tomi Engdahl says:

    Asa Fitch / Wall Street Journal:
    Big Tech’s reverse acquihires for AI talent are hollowing out startups and eroding the culture that has made Silicon Valley an unparalleled source of innovation — Tech companies’ scramble for AI talent uses unorthodox methods that imperil Silicon Valley’s startup culture

    Big Tech Is Eating Itself in Talent War
    Tech companies’ scramble for AI talent uses unorthodox methods that imperil Silicon Valley’s startup culture
    https://www.wsj.com/tech/ai/ai-researchers-hiring-spree-big-tech-5ad03ebd?st=2Ko8E1&reflink=desktopwebshare_permalink

    Reply
  40. Tomi Engdahl says:

    Simon Willison / Simon Willison’s Weblog:
    A new Artificial Analysis benchmark, focusing on OpenAI’s gpt-oss-120b, shows how open-weight LLMs exhibit inconsistent performance across hosting providers — Artificial Analysis published a new benchmark the other day, this time focusing on how an individual model – OpenAI’s gpt-oss-120b – performs across different hosted providers.

    https://simonwillison.net/2025/Aug/15/inconsistent-performance/

    Reply
  41. Tomi Engdahl says:

    IT Departments Are Overloaded With Busy Work. Can AI Change That?
    The startup XOPS is using artificial intelligence and ‘knowledge graphs’ to build a new platform for automating corporate IT departments, promising to get rid of ‘human middleware’
    https://www.wsj.com/articles/it-departments-are-overloaded-with-busy-work-can-ai-change-that-19a9f667?st=7Kcfw7&reflink=desktopwebshare_permalink

    Reply
  42. Tomi Engdahl says:

    CISO Strategy
    Tight Cybersecurity Budgets Accelerate the Shift to AI-Driven Defense

    With cybersecurity budgets strained, organizations are turning to AI-powered automation to plug staffing gaps, maintain defenses, and survive escalating threats.

    https://www.securityweek.com/tight-cybersecurity-budgets-accelerate-the-shift-to-ai-driven-defense/

    Reply
  43. Tomi Engdahl says:

    https://hackaday.com/2025/08/15/this-week-in-security-the-ai-hacker-fortmajeure-and-project-zero/

    One of the hot topics currently is using LLMs for security research. Poor quality reports written by LLMs have become the bane of vulnerability disclosure programs. But there is an equally interesting effort going on to put LLMs to work doing actually useful research. One such story is [Romy Haik] at ULTRARED, trying to build an AI Hacker. This isn’t an over-eager newbie naively asking an AI to find vulnerabilities, [Romy] knows what he’s doing. We know this because he tells us plainly that the LLM-driven hacker failed spectacularly.

    The plan was to build a multi-LLM orchestra, with a single AI sitting at the top that maintains state through the entire process. Multiple LLMs sit below that one, deciding what to do next, exactly how to approach the problem, and actually generating commands for those tools. Then yet another AI takes the output and figures out if the attack was successful. The tooling was assembled, and [Romy] set it loose on a few intentionally vulnerable VMs.

    I Built an AI Hacker. It Failed Spectacularly
    https://www.ultrared.ai/blog/building-autonomous-ai-hacker
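
    The “multi-LLM orchestra” described above — a top-level planner holding state, worker models turning steps into tool commands, and a judge model grading the output — can be sketched as a simple loop. Everything here is illustrative: the LLM calls are stubbed with plain functions, and the names and control flow are my assumptions, not ULTRARED’s actual implementation.

    ```python
    # Minimal sketch of a planner / worker / judge orchestration loop.
    # Real systems would replace each stub with a call to a hosted LLM.

    def plan_next_step(state):
        # Stub for the top-level planner: pick the next pending task.
        return state["tasks"].pop(0) if state["tasks"] else None

    def generate_command(step):
        # Stub for a worker model that turns a step into a tool command.
        return f"run-tool --target {step}"

    def judge_output(output):
        # Stub for the judge model: decide whether the step succeeded.
        return "ok" in output

    def orchestrate(tasks):
        state = {"tasks": list(tasks), "log": []}
        while (step := plan_next_step(state)) is not None:
            cmd = generate_command(step)
            output = f"ok: executed {cmd}"  # stand-in for running the tool
            state["log"].append((step, judge_output(output)))
        return state["log"]

    print(orchestrate(["port-scan", "web-enum"]))
    ```

    The article’s point is that even with this division of labor, the hard part — deciding what to do next against a real target — is exactly where the LLMs fell down.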

    Reply
  44. Tomi Engdahl says:

    Google’s new GenAI model runs on a phone and even on a Raspberry Pi 5
    https://etn.fi/index.php/13-news/17784-googlen-uusi-genai-malli-pyoerii-kaennykaessae-ja-jopa-raspberry-pi-5-ssa

    Google has released a new Gemma 3 270M AI model that brings generative AI straight to your pocket, and more energy-efficiently than ever. The model is only 270 million parameters in size and is optimized specifically for local execution on small devices such as smartphones, tablets, and even simple single-board computers.

    According to Google, the model can be quantized to INT4 precision, which drops its size to about 135 megabytes. This makes it exceptionally easy to run on devices with limited memory and compute power. For comparison, on a Pixel 9 Pro phone the quantized version consumed only 0.75 percent of the battery over 25 conversations.
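
    The ~135 MB figure above follows directly from the parameter count: INT4 stores each weight in 4 bits, i.e. half a byte. A back-of-the-envelope check (ignoring file-format overhead and any tensors kept at higher precision):

    ```python
    # Sanity check: 270M parameters at INT4 (4 bits = 0.5 bytes per weight).
    params = 270_000_000
    bytes_per_param = 4 / 8          # INT4 precision
    size_mb = params * bytes_per_param / 1_000_000
    print(round(size_mb))            # ~135 MB, matching the reported size
    ```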

    Reply
  45. Tomi Engdahl says:

    AI is boosting Python’s popularity
    https://etn.fi/index.php/13-news/17786-tekoaely-kasvattaa-pythonin-suosiota

    Python has dominated the TIOBE index for a long time, but in August 2025 its popularity climbed to yet another level. The language’s share of the programming world is now a record 26.1%, the largest share any single language has reached in the entire history of the index.

    Reply
  46. Tomi Engdahl says:

    Michael S. Rosenwald / New York Times:
    Cognitive scientist Margaret Boden, whose books helped shape the philosophical conversation about human intelligence and AI, died on July 18 at age 88

    Margaret Boden, Philosopher of Artificial Intelligence, Dies at 88

    A cognitive scientist, she used the language of computers to explore the nature of human thought and creativity, offering prescient insights about A.I.

    https://www.nytimes.com/2025/08/14/science/margaret-boden-dead.html?unlocked_article_code=1.e08.hKqf.onWLjtnz07PO&smid=url-share

    Reply
