AI trends 2025

AI is developing all the time. Below are picks from several articles on what is expected to happen in and around AI in 2025. The excerpts have been edited, and in some cases translated, for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate predictions grounded in physical reality. PINNs are set to gain importance because they will enable autonomous robots to navigate and execute tasks in the real world.
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as they did when they started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience, moving from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier to use. AI won’t be limited to one app; it might even replace apps one day. With AI, the lines between frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the large language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says the models can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could solve only 13% of the problems on the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the earlier waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
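The routing idea above can be sketched as a small dispatcher that picks a model per request by task type and cost. The model names, prices, latencies, and task categories below are invented for illustration; they are not vendor figures.

```python
# Minimal sketch of multi-model routing: pick an LLM per request based on
# task type and cost. All model names and numbers here are illustrative.

from dataclasses import dataclass


@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # USD, illustrative
    latency_ms: int            # typical latency, illustrative
    good_at: set


CATALOG = [
    Model("small-fast", 0.0002, 150, {"classify", "extract"}),
    Model("mid-general", 0.002, 600, {"summarize", "draft"}),
    Model("large-reason", 0.03, 2500, {"reason", "code"}),
]


def route(task: str, max_cost: float = 1.0) -> Model:
    """Return the cheapest model that handles the task within budget."""
    candidates = [m for m in CATALOG
                  if task in m.good_at and m.cost_per_1k_tokens <= max_cost]
    if not candidates:
        # Fall back to the most capable model rather than failing outright.
        return CATALOG[-1]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)


print(route("classify").name)  # a cheap, fast model for a simple task
print(route("reason").name)    # a capable model for a hard task
```

Keeping the catalog as data also makes the lock-in point concrete: swapping a provider in or out is a one-line change to the catalog, not a rewrite of the application.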
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
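The hand-off pattern described above, where the AI agent handles routine order changes in natural language and escalates complex requests to a human rep, can be sketched roughly as follows. The keyword-based classifier is an invented stand-in for illustration, not Agentforce’s actual logic.

```python
# Sketch of the AI-to-human escalation pattern: the agent resolves routine
# order changes itself and escalates complex requests to a human rep.
# The trigger keywords below are illustrative, not a real product's rules.

COMPLEX_SIGNALS = {"reconfigure", "reconfiguration", "venue change", "refund"}


def handle_request(message: str) -> str:
    """Route a customer request: AI for routine edits, human for complex ones."""
    text = message.lower()
    if any(signal in text for signal in COMPLEX_SIGNALS):
        return "escalate_to_human"
    return "ai_agent_resolves"


print(handle_request("Please change the quantity on order 1042 to 6"))
print(handle_request("We need a full reconfiguration and a venue change"))
```

In practice the classification step would itself be an LLM call rather than a keyword list, but the shape is the same: a fast automated path for the common case, with an explicit escape hatch to a human.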

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

AI requires ever-faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

What will happen in the chip market next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence remains strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025 it is very likely that there will be more advancements in MCUs using lightweight AI models.
Adoption of AI acceleration features is a big step in the development of microcontrollers; these features and the surrounding tools are likely to develop further in 2025.


AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new AI models based on GPAI standards must be fully compliant with the act. Also similar to the GDPR is the threat of huge fines for non-compliance: EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

3,040 Comments

  1. Tomi Engdahl says:

    Nikkei Asia:
    SoftBank shares are up 61% in 2025, making it one of Tokyo’s top performers, on the back of AI and chip investments, as some investors question the upside

    SoftBank rides tech rally with AI investments, but will they pay off?
    https://asia.nikkei.com/business/markets/trading-asia/softbank-rides-tech-rally-with-ai-investments-but-will-they-pay-off

    Investors hopeful of future growth, but rising AI exposure a source of concern

  2. Tomi Engdahl says:

    Samantha Cole / 404 Media:
    Forty-four US AGs sign an open letter to 11 chatbot and social media companies, warning they’ll be held accountable if their AI chatbots knowingly harm children

    Attorneys General To AI Chatbot Companies: You Will ‘Answer For It’ If You Harm Children
    Samantha Cole · Aug 25, 2025 at 6:05 PM
    Forty-four attorneys general signed an open letter on Monday that says to companies developing AI chatbots: “If you knowingly harm kids, you will answer for it.”

    https://www.404media.co/44-attorneys-general-to-ai-chatbot-companies-open-letter/

  3. Tomi Engdahl says:

    Financial Times:
    Japanese media groups Nikkei and Asahi Shimbun jointly sue Perplexity in Tokyo, alleging the AI startup “copied and stored” their content without permission — The publishers say the company illegally ‘copied and stored article content’ — Two of Japan’s largest media groups …

    https://www.ft.com/content/79a88d1a-d914-4188-8792-0a20973b39a1

  4. Tomi Engdahl says:

    Kif Leswing / CNBC:
    Nvidia says the Jetson AGX Thor, its latest “robot brain” chip module, is now on sale for $3,499 as a developer kit; the module will ship next month — Nvidia announced Monday that its latest robotics chip module, the Jetson AGX Thor, is now on sale for $3,499 as a developer kit.

    Nvidia’s new ‘robot brain’ goes on sale for $3,499 as company targets robotics for growth
    https://www.cnbc.com/2025/08/25/nvidias-thor-t5000-robot-brain-chip.html

  5. Tomi Engdahl says:

    Bloomberg:
    Perplexity launches Comet Plus, a $5/month service similar to Apple News+, and says it has allocated $42.5M for publishers, which will receive 80% of revenue

    https://www.bloomberg.com/news/articles/2025-08-25/perplexity-to-let-publishers-share-in-revenue-from-ai-searches

  6. Tomi Engdahl says:

    Lora Kolodny / CNBC:
    Docs: xAI terminated its status as a public benefit corporation as of May 9, 2024; XAI Holdings, which houses xAI and X, also doesn’t have a PBC designation

    Elon Musk’s xAI secretly dropped its benefit corporation status while fighting OpenAI
    https://www.cnbc.com/2025/08/25/elon-musk-xai-dropped-public-benefit-corp-status-while-fighting-openai.html

    Key Points

    Elon Musk started xAI as a Nevada benefit corporation, but quietly terminated that status last year.
    As a benefit corporation, xAI was obligated to deliver environmental and social benefits apart from its financial goals.
    The change of status was so secretive that even Musk’s lawyer referred to xAI as a benefit corporation in legal filings in May.

  7. Tomi Engdahl says:

    Andrew Hill / Financial Times:
    How German delivery giant DHL uses automation and AI to help offset an aging workforce, with one in three support staff set to retire in the next five years

    https://www.ft.com/content/ce09786f-2481-44fe-957c-f7bb0b43e284

  8. Tomi Engdahl says:

    Wall Street Journal:
    a16z, OpenAI’s Greg Brockman, and others launch Leading the Future, a pro-AI super PAC network with $100M+ in funding, hoping to emulate crypto PAC Fairshake — Venture-capital firm Andreessen Horowitz and OpenAI President Greg Brockman are among those helping launch and fund Leading the Future

    Silicon Valley Launches Pro-AI PACs to Defend Industry in Midterm Elections
    Venture-capital firm Andreessen Horowitz and OpenAI President Greg Brockman are among those helping launch and fund Leading the Future
    https://www.wsj.com/politics/silicon-valley-launches-pro-ai-pacs-to-defend-industry-in-midterm-elections-287905b3?st=beGTwg&reflink=desktopwebshare_permalink

    WASHINGTON—Silicon Valley is putting more than $100 million into a network of political-action committees and organizations to advocate against strict artificial-intelligence regulations, a signal that tech executives will be active in next year’s midterm elections.

  9. Tomi Engdahl says:

    Bloomberg:
    Perplexity launches Comet Plus, a $5/month service similar to Apple News+, and says it has allocated $42.5M for publishers, which will receive 80% of revenue
    https://www.bloomberg.com/news/articles/2025-08-25/perplexity-to-let-publishers-share-in-revenue-from-ai-searches

    Perplexity to Let Publishers Share in Revenue from AI Searches
    The company has clashed with media outlets over use of their work.

  10. Tomi Engdahl says:

    What does AI change about scams? Not much, really | The Intelligence Tuesday Letter
    Featured story: Katleena Kortesuo’s Tiedustelun tiistaikirje (Intelligence Tuesday Letter) offers answers to questions you didn’t even have time to ask. This time, we look at why AI scams are no different from any other scams.
    https://www.rapport.fi/katleena-kortesuo/mita-tekoaly-muuttaa-huijauksissa-ei-oikeastaan-mitaan-or-tiedustelun-tiistaikirje-59b84e?fbclid=IwdGRjcAMbbrVleHRuA2FlbQEwAGFkaWQBqyf8nVodfgEeCTyVF9M39r48fLgS7ZEyNm-bHKGdpUHEtsqTo_vvmAMhbME41xM36D0vNNE_aem_Y6xUjNmuN1yKrJIGPLAUnA&utm_medium=paid&utm_source=fb&utm_id=120233780759560670&utm_content=120233780759540670&utm_term=120233780759550670&utm_campaign=120233780759560670

    Throughout history, criminals have exploited some ordinary, honest practice or communication channel and started slipping false but authentic-looking content into it.
    In this respect, humanity unfortunately does not change, because the history of scams is as long as the history of humankind.
    There have been genuine medicines and quack remedies. There has been genuine news and there is fake news. There have been genuine doctors, and there are quacks and fake doctors. There is genuine information, and there are lies, rumors, gossip, and opinions.
    As such, AI brings nothing fundamentally new to this world: it creates convincing images and videos that look completely authentic.

    Reply
  11. Tomi Engdahl says:

    Beyond the Prompt: Building Trustworthy Agent Systems
    https://www.securityweek.com/beyond-the-prompt-building-trustworthy-agent-systems/

    Building secure AI agent systems requires a disciplined engineering approach focused on deliberate architecture and human oversight.

    We’re witnessing the quiet rise of the agent ecosystem – systems built not just to answer questions, but to plan, reason, and execute complex tasks. Tools like GPT-4, Claude, and Gemini are the engines. But building reliable, secure, and effective agent systems demands more than just plugging in an API. It demands deliberate architecture and a focus on best practices.

    Beyond Simple Prompts: The Agent Imperative

    What makes an agent system different? While a basic LLM call responds statically to a single prompt, an agent system plans. It breaks down a high-level goal (“Analyze this quarter’s sales report and identify three key risks”) into subtasks, decides on tools or data needed, executes steps, evaluates outcomes, and iterates – potentially over long timeframes and with autonomy. This dynamism unlocks immense potential but can introduce new layers of complexity and security risk. How do we ensure these systems don’t veer off course, hallucinate critical steps, or expose sensitive data?
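    The plan/execute/evaluate/iterate loop described above can be sketched roughly as follows. This is a toy illustration with stubbed steps and hypothetical names, not any vendor’s actual agent framework; a real system would ask the model to produce the plan and would call real tools.

```python
# A minimal sketch (all names hypothetical) of the plan -> execute ->
# evaluate -> iterate loop that distinguishes an agent system from a
# single static LLM call.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    plan: list = field(default_factory=list)
    done: list = field(default_factory=list)

    def decompose(self) -> None:
        # A real system would ask the model to break the goal into
        # subtasks; here the plan is stubbed for illustration.
        self.plan = [
            "load sales report",
            "compute quarter-over-quarter deltas",
            "rank top three risks",
        ]

    def execute(self, step: str) -> str:
        # Stand-in for a tool call or model call.
        return f"result of: {step}"

    def evaluate(self, result: str) -> bool:
        # Hook for validating an outcome before accepting it.
        return result.startswith("result of:")

    def run(self) -> list:
        self.decompose()
        while self.plan:
            step = self.plan.pop(0)
            result = self.execute(step)
            if not self.evaluate(result):
                break  # stop rather than act on an unvalidated result
            self.done.append(result)
        return self.done

agent = Agent(goal="Analyze this quarter's sales report and identify three key risks")
print(len(agent.run()))  # 3 completed steps
```

    The evaluate step is where the “veering off course” risk mentioned above is caught: each intermediate result is checked before the loop continues.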

    Engineering Reliability

    Building trustworthy agents starts with recognizing their core nature: prediction engines operating on context. Every instruction, every scrap of data fed in, every prior step shapes what comes next.

    Context is everything. Agents only work with what they’re given. Need reliable document analysis? Don’t just mention the file name. Feed key excerpts directly. Assuming the agent “knows” based on its training is a recipe for hallucination. Precise, task-relevant context grounds the agent in reality.

    Know your architecture. Different underlying models process information differently. Tokenization quirks – how words, punctuation, and abbreviations are split – can subtly alter meaning and impact reliability. Understanding these nuances is important for designing prompts and system flows that guide the agent predictably. Don’t treat the model as a black box; understand its mechanics enough to engineer around its limitations.

    Security is not an afterthought; it’s foundational. Taking a “defense in depth” approach is essential for agents managing sensitive tasks and data. Think in terms of layers:

    Input sanitization: Validate every piece of data entering the system (e.g., user prompts, retrieved documents, API responses). Malicious inputs or unexpected formats can derail an agent instantly.

    Output validation & guardrails: Never trust raw agent output. Implement strict validation checks before any action is taken or result is presented. Define clear boundaries for what actions are permissible (e.g., “can read this database but never modify it”).

    Tool sandboxing: Restrict the tools an agent can access and the permissions it has when using them. A research agent shouldn’t accidentally gain write access to your HR system. Principle of least privilege applies here.
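    As a rough illustration, the three layers above can be sketched in a few lines of Python. All helper names, the injection pattern, and the tool allowlist are hypothetical; a production system would use far more robust checks than a single regex.

```python
# Layered-defense sketch (hypothetical helpers): sanitize inputs,
# validate outputs against an allowlist, and sandbox tool access.
import re

ALLOWED_TOOLS = {"read_db"}  # least privilege: read-only, no write tools

def sanitize_input(text: str) -> str:
    """Layer 1: reject data carrying obvious injection attempts."""
    if re.search(r"(?i)ignore (all )?previous instructions", text):
        raise ValueError("possible prompt injection")
    return text.strip()

def validate_output(action: str) -> str:
    """Layer 2: never trust raw agent output; enforce an allowlist."""
    tool = action.split(":", 1)[0]
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} is not permitted")
    return action

def run_tool(action: str) -> str:
    """Layer 3: execute only pre-approved, sandboxed tools."""
    return f"executed {validate_output(action)}"

print(run_tool(sanitize_input("read_db: quarterly sales")))
```

    A request for a disallowed tool (say, `write_db`) fails at the validation layer before anything executes, which is the point of defense in depth: each layer assumes the previous one can be bypassed.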

    Reply
  12. Tomi Engdahl says:

    Artificial Intelligence
    AI Systems Vulnerable to Prompt Injection via Image Scaling Attack

    Researchers show how popular AI systems can be tricked into processing malicious instructions by hiding them in images.

    https://www.securityweek.com/ai-systems-vulnerable-to-prompt-injection-via-image-scaling-attack/

    Researchers have shown how popular AI systems can be tricked into processing malicious instructions through an indirect prompt injection attack that involves image scaling.

    Image scaling attacks against AI are not a new concept, but experts at cybersecurity research and consulting firm Trail of Bits have now shown how the technique can be leveraged against modern AI systems.

    AI products, particularly those that can process large images, often automatically downscale an image before sending it to the core AI model for analysis.

    Trail of Bits researchers showed how threat actors can create a specially crafted image that contains a hidden malicious prompt. The attacker’s prompt is invisible in the high-resolution image, but it becomes visible when the image is downscaled by preprocessing algorithms.

    The low-resolution image with the visible malicious prompt is passed on to the AI model, which may interpret the message as a legitimate instruction.
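    The mechanism can be illustrated with a toy example. The sketch below (pure Python, all values hypothetical) builds a high-resolution grid in which payload pixels sit exactly where a nearest-neighbor resampler will sample, so a pattern that is nearly invisible at full resolution becomes the entire downscaled image. Real attacks target the specific resampling algorithms used by AI preprocessing pipelines, but the principle is the same.

```python
# Toy illustration of an image-scaling attack: payload pixels are placed
# only at the positions a nearest-neighbor downscaler will keep.

FACTOR = 4  # downscale a 16x16 grid to 4x4

def downscale_nearest(img, factor):
    """Nearest-neighbor downscale: keep every `factor`-th pixel."""
    return [row[::factor] for row in img[::factor]]

# Hidden 4x4 payload pattern (1 = marked pixel).
payload = [
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
]

# Build a 16x16 "image" of background pixels (0), then seed the payload
# only at the coordinates the resampler will select.
SIZE = len(payload) * FACTOR
image = [[0] * SIZE for _ in range(SIZE)]
for y in range(len(payload)):
    for x in range(len(payload)):
        image[y * FACTOR][x * FACTOR] = payload[y][x]

# Only 1 in 16 pixels differs from the background, so the payload is
# hard to see at full resolution; the downscaled copy IS the payload.
small = downscale_nearest(image, FACTOR)
print(small == payload)  # True
```

    In a real attack the hidden content is rendered text carrying instructions, and the downscaled image containing that now-legible text is what the model actually reads.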

    Reply
  13. Tomi Engdahl says:

    Artificial Intelligence
    OneFlip: An Emerging Threat to AI that Could Make Vehicles Crash and Facial Recognition Fail

    Researchers unveil OneFlip, a Rowhammer-based attack that flips a single bit in neural network weights to stealthily backdoor AI systems without degrading performance.

    https://www.securityweek.com/oneflip-an-emerging-threat-to-ai-that-could-make-vehicles-crash-and-facial-recognition-fail/

    Reply
  14. Tomi Engdahl says:

    Kashmir Hill / New York Times:
    The family of a teen who died by suicide sues OpenAI, alleging that ChatGPT gave him info about suicide methods and at times deterred him from seeking help — A photograph of Adam Raine taken not long before his death. His baby blanket, which his mother found in his bed, hangs over a corner.

    A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
    https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.hE8.T-3v.bPoDlWD8z5vo&smid=url-share

    More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.

    Reply
  15. Tomi Engdahl says:

    Rachel Metz / Bloomberg:
    OpenAI plans to update ChatGPT to better respond to mental distress cues, provide parental controls, and bolster safeguards around conversations about suicide — OpenAI is making changes to its popular chatbot following a lawsuit alleging that a teenager who died by suicide this spring relied on ChatGPT as a coach.

    https://www.bloomberg.com/news/articles/2025-08-26/openai-plans-to-update-chatgpt-as-parents-sue-over-teen-s-suicide

    Reply
  16. Tomi Engdahl says:

    Viola Zhou / Rest of World:
    A profile of Egune AI, a startup building LLMs for the Mongolian language, as it navigates geopolitics, a lack of resources, and the nascent local tech scene

    The Mongolian startup defying Big Tech with its own LLM
    Egune is one of several linguistically and culturally aware AI models built by smaller nations to reduce reliance on American and Chinese tech giants.
    https://restofworld.org/2025/mongolia-egune-ai-llm/

    Reply
  17. Tomi Engdahl says:

    Annelise Levy / Bloomberg Law:
    Filing: Anthropic reached a settlement in a copyright class action brought by authors whose works were included in two pirate databases Anthropic downloaded — Anthropic PBC reached a settlement with authors in a high-stakes copyright class action that threatened the AI company with potentially billions of dollars in damages.

    https://news.bloomberglaw.com/class-action/anthropic-settles-major-ai-copyright-suit-brought-by-authors

    Reply
  19. Tomi Engdahl says:

    Financial Times:
    Sources: OpenAI’s restructuring may slip into 2026 as it negotiates with Microsoft; failure to reach a deal in 2025 would let SoftBank withhold $10B commitment
    https://www.ft.com/content/b81d5fb6-26e9-417a-a0cc-6b6689b70c98

    Reply
  20. Tomi Engdahl says:

    Maxwell Zeff / TechCrunch:
    Google says it is behind the viral “nano-banana” image model and launches it as Gemini 2.5 Flash Image with finer edit controls in the Gemini app, API, and more — Google is upgrading its Gemini chatbot with a new AI image model that gives users finer control over editing photos …

    Google Gemini’s AI image model gets a ‘bananas’ upgrade
    https://techcrunch.com/2025/08/26/google-geminis-ai-image-model-gets-a-bananas-upgrade/

    Google is upgrading its Gemini chatbot with a new AI image model that gives users finer control over editing photos, a step meant to catch up with OpenAI’s popular image tools and draw users from ChatGPT.

    The update, called Gemini 2.5 Flash Image, rolls out starting Tuesday to all users in the Gemini app, as well as to developers via the Gemini API, Google AI Studio, and Vertex AI platforms.

    Gemini’s new AI image model is designed to make more precise edits to images — based on natural language requests from users — while preserving the consistency of faces, animals, and other details, something that most rival tools struggle with. For instance, ask ChatGPT or xAI’s Grok to change the color of someone’s shirt in a photo, and the result might include a distorted face or an altered background.

    Google’s new tool has already drawn attention. In recent weeks, social media users raved over an impressive AI image editor in the crowdsourced evaluation platform, LMArena. The model appeared to users anonymously under the pseudonym “nano-banana.”

    nano-banana is
    I can’t believe it replaced the entire t-shirt and still kept that tiny microphone intact from the original image.
    https://x.com/ai_for_success/status/1960018112100929859

    Reply
  21. Tomi Engdahl says:

    Maxwell Zeff / TechCrunch:
    Anthropic releases Claude for Chrome, which lets Claude take actions on the user’s behalf within the browser, as a research preview for 1,000 Max subscribers — Anthropic is launching a research preview of a browser-based AI agent powered by its Claude AI models, the company announced on Tuesday.

    Anthropic launches a Claude AI agent that lives in Chrome
    https://techcrunch.com/2025/08/26/anthropic-launches-a-claude-ai-agent-that-lives-in-chrome/

    Reply
  22. Tomi Engdahl says:

    The browser is quickly becoming the next battleground for AI labs, which aim to use browser integrations to offer more seamless connections between AI systems and their users. Perplexity recently launched its own browser, Comet, which features an AI agent that can offload tasks for users. OpenAI is reportedly close to launching its own AI-powered browser, which is rumored to have similar features to Comet. Meanwhile, Google has launched Gemini integrations with Chrome in recent months.

    The race to develop AI-powered browsers is especially pressing given Google’s looming antitrust case, in which a final decision is expected any day now. The federal judge in the case has suggested he may force Google to sell its Chrome browser. Perplexity submitted an unsolicited $34.5 billion offer for Chrome, and OpenAI CEO Sam Altman suggested his company would be willing to buy it as well.

    https://techcrunch.com/2025/08/26/anthropic-launches-a-claude-ai-agent-that-lives-in-chrome/

    Reply
  23. Tomi Engdahl says:

    Riley Griffin / Bloomberg:
    Trump says Meta plans to spend $50B on its “Hyperion” data center under construction in Louisiana; earlier, Meta said that its investment would exceed $10B — President Donald Trump said that Meta Platforms Inc. is planning to spend $50 billion on its massive data center in rural Louisiana.

    https://www.bloomberg.com/news/articles/2025-08-26/meta-s-louisiana-data-center-to-cost-50-billion-trump-says

    Reply
  24. Tomi Engdahl says:

    Eric Berger / Ars Technica:
    Google DeepMind’s Weather Lab, launched in June, showed superior accuracy in forecasting Hurricane Erin’s path up to 72 hours ahead, beating traditional models — In early June, shortly after the beginning of the Atlantic hurricane season, Google unveiled a new model designed specifically …

    Google’s AI model just nailed the forecast for the strongest Atlantic storm this year
    https://arstechnica.com/science/2025/08/googles-ai-model-just-nailed-the-forecast-for-the-strongest-atlantic-storm-this-year/

    If they improve further, AI weather models may very well become the gold standard.

    Reply
  25. Tomi Engdahl says:

    The Information:
    Sources: Apple executives have discussed acquiring Mistral AI and Perplexity, with Eddy Cue as the most vocal champion within the company for such AI deals — This summer, investment bankers came knocking on Apple’s door with a pitch: Was the iPhone maker interested in doing a major acquisition …

    Apple’s Aversion to Big Deals Could Thwart Its AI Push
    https://www.theinformation.com/articles/apples-aversion-big-deals-thwart-ai-push

    Reply
  26. Tomi Engdahl says:

    Hugh Langley / Business Insider:
    Memo: Verily, Alphabet’s life sciences unit, lays off staff and ends its medical devices program, shifting its focus instead to precision health, data, and AI — – Verily, Google’s life sciences sister company, has laid off staff and cut its devices program.

    Alphabet’s life sciences unit Verily lays off staff and cuts its devices program. Read the full memo its CEO sent to staff.
    https://www.businessinsider.com/alphabets-verily-lays-off-staff-cuts-its-devices-program-2025-8

    Reply
  27. Tomi Engdahl says:

    Itika Sharma Punit / Rest of World:
    AI firms are striking partnerships and giving some users in Asia free access to AI tools, in pursuit of larger sources of real-world consumer data for training — OpenAI, Google, and Perplexity create partnerships and free offers for a steady stream of consumer data that can’t be scraped from the internet.

    AI giants race to scoop up elusive real-world data
    OpenAI, Google, and Perplexity create partnerships and free offers for a steady stream of consumer data that can’t be scraped from the internet.
    https://restofworld.org/2025/ai-data-collection-global-deals/

    OpenAI, Google, and Perplexity are striking global partnerships to secure real-world data sets that can’t be scraped from the internet.
    Experts warn of privacy risks and call for stronger oversight to protect privacy and ensure fairness.

    Amid an intense battle for supremacy, artificial-intelligence companies are forging alliances across industries and regions to help gather real-world data that can’t be scraped from the internet.

    Over the past two months, OpenAI has tied up with e-commerce majors Shopee and Shopify, while Google and Perplexity have doled out free access to their advanced AI tools to some users in India. Experts believe these moves will help the companies access structured consumer queries, product behaviors, and transactional data — training signals that are often unavailable via public data alone.

    “These partnerships will provide them with diverse data sets that will help them to train their AI models better and generate more accurate outputs,” Sameer Patil, director of the Centre for Security, Strategy & Technology at global think tank Observer Research Foundation, told Rest of World. “It will also help them to innovate new ways to apply AI models in some particular sectors. This particularly applies to sectors where there is emphasis on hyper-customization and hyper-personalized offerings like fintech and health care.”

    Reply
  28. Tomi Engdahl says:

    Politico:
    Meta plans to spend tens of millions to launch a super PAC that will back candidates for California state offices with a light-touch approach to AI regulation — The PAC is a show of Meta’s deepening political presence in a state that has emerged as the most active and ambitious in attempting to regulate AI.

    Meta to launch California super PAC focused on AI
    https://www.politico.com/news/2025/08/26/exclusive-meta-to-launch-california-super-pac-focused-on-ai-00524989

    The tech giant plans to spend tens of millions backing candidates in state races.

    Reply
  29. Tomi Engdahl says:

    Sai Ishwarbharath B / Reuters:
    Internal memo: Indian IT services giant TCS forms a new unit for AI-based operations and names Amit Kapur, who led TCS’ UK and Ireland business, as its chief — India’s largest IT firm Tata Consultancy Services (TCS.NS) formed a new unit for artificial intelligence-based operations on Tuesday …

    https://www.reuters.com/world/india/indias-tcs-forms-ai-focused-unit-names-insider-kapur-head-company-memo-shows-2025-08-26/

    Reply
  30. Tomi Engdahl says:

    Chloe Veltman / NPR:
    “Deadbots”, AI avatars of the deceased, are used for advocacy and emotional connection, but their potential commercial use raises ethical and legal concerns — AI avatars of deceased people – or “deadbots” – are showing up in new and unexpected contexts, including ones where they have the power to persuade.

    AI ‘deadbots’ are persuasive — and researchers say, they’re primed for monetization
    https://www.npr.org/2025/08/26/nx-s1-5508355/ai-dead-people-chatbots-videos-parkland-court

    AI avatars of deceased people – or “deadbots” – are showing up in new and unexpected contexts, including ones where they have the power to persuade.

    They’re giving interviews advocating for tougher gun laws, such as when the family of Joaquin Oliver, a victim of the 2018 Parkland school shooting in Florida, created a beanie-wearing AI avatar of him and had it speak with journalist Jim Acosta in July. “This is just another advocacy tool to create that urgency of making things change,” Manuel Oliver, Joaquin’s father, told NPR.

    And in May, a bearded AI avatar of Chris Pelkey, the deceased victim of a road rage incident in Arizona, gave a video impact statement at the sentencing of the man who fatally shot Pelkey. Pelkey’s family created the deadbot. “I feel that that was genuine,” said Judge Todd Lang after hearing the AI generated impact statement. He then handed down the maximum sentence.

    Reply
  31. Tomi Engdahl says:

    Bloomberg:
    Malaysian chip designer SkyeChip unveils the MARS1000, the country’s first edge AI processor, as Malaysia seeks to climb the global semiconductor value chain — Malaysia unveiled its own AI processor Monday, joining a global race to build the most sought-after electronic components for artificial intelligence development.

    https://www.bloomberg.com/news/articles/2025-08-25/malaysia-unveils-first-ai-device-chip-to-join-global-race

    Reply
  32. Tomi Engdahl says:

    Miranda Devine / New York Post:
    Melania Trump says she hopes to become the “First Lady of Technology” as she leads the Presidential AI Challenge to inspire children and teachers to embrace AI

    First lady Melania Trump will head effort to teach next generation about AI
    https://nypost.com/2025/08/25/opinion/first-lady-melania-trump-will-head-effort-to-teach-next-generation-about-ai/

    In an exclusive statement to the New York Post, first lady Melania Trump has revealed her next official project: leading the Presidential Artificial Intelligence Challenge to inspire children and teachers to embrace AI technology and help accelerate innovation in the field.

    She hopes to carve out a new role as the First Lady of Technology, combining her passion for children’s well-being with her tech-forward vision, as demonstrated by her advocacy for the “Take It Down Act,” which combats AI-generated deepfakes, and her work on an AI-powered audiobook version of her best-selling memoir “Melania.”

    “Creating my AI Audiobook opened my eyes to the countless opportunities and risks this new technology brings to American society,” the first lady told The Post.

    “In just a few short years, AI will be the engine driving every business sector across our economy. It is poised to deliver great value to our careers, families, and communities…

    For the Presidential Challenge, teams of students from K-12 will use AI tools such as large language models, robotics, computer vision, decision trees, and neural networks to solve a community problem by creating a phone app or website.

    Prizes range from a Presidential Certificate of Achievement, to Cloud Credits and a $10,000 check. State champions will be announced next March, followed by a national championship in June.

    Top teams will be invited to present their work at a three-day showcase in Washington, DC, including the White House.

    “The Presidential AI Challenge marks our first step in equipping every child with the knowledge base and tools to utilize this emerging technology,” says Mrs. Trump.

    “But this is only the beginning. It is essential that every member of our academic community, including our great educators, administrators, and students rise to this historic challenge with on-going curiosity, perseverance, and ingenuity.”

    Reply
  33. Tomi Engdahl says:

    Steve Dent / Engadget:
    Google pilots a new language practice feature on Google Translate with tailored listening and practicing sessions, and also adds AI-powered live conversations

    Google Translate’s latest feature is its take on Duolingo
    The company also introduced improved AI-powered live translations.
    https://www.engadget.com/apps/google-translates-latest-feature-is-its-take-on-duolingo-160035157.html

    Considering its popularity, Google Translate sure hasn’t received much attention lately. However, that just changed with a big update. The latest app introduces AI-powered live translation along with new language learning tools that might give Duolingo a run for its money.

    Google said it heard from users that the toughest skill to master was conversation — ie, learning to listen and speak with confidence. To that end, it’s piloting a new language practice feature (on iOS or Android) targeted toward an individual’s specific needs.

    To create tailored listening and practicing sessions, the new learning tool poses a couple of questions. It first asks which language you want to learn (like Spanish) and your current level, then asks “What’s motivating you to learn Spanish?” From there, it will generate customized scenarios that allow you to either listen to conversations or practice speaking, with helpful hints available when needed.

    The app was “developed with learning experts based on the latest studies in language acquisition,” Google explained in a blog post. To that end, it can track your daily progress to help build your language skills, possibly as a complement to Duolingo and other dedicated language learning apps. “We see what we’re doing right now as really complementary to other things out there,” Google product manager Matt Sheets said in a media roundtable. “So whether you’re taking classes in a formal educational setting or doing immersion experiences, we think this is something that can work alongside of those.”

    Following early testing, language learning is rolling out more widely as a beta experience for English speakers practicing Spanish and French, as well as Spanish, French, and Portuguese speakers working on English.

    Reply
  34. Tomi Engdahl says:

    What is happening to Windsurf?
    While Windsurf as a company no longer exists in its original form, its influence on the industry is undeniable. The company’s approach to agentic coding has been absorbed by multiple major players, each bringing their own perspective to the technology.

    What Happened to Windsurf?
    https://jomasego.medium.com/what-happened-to-windsurf-20297b7d9b14

    Sorry to followers who like my code-first articles, as this is a story about corporate wars, but one that directly affects what we do. So, I decided to write this to illustrate how the landscape is left for us. A good story is always good, I think!
    The Wild Ride That Revealed the Future of Vibe Coding

    The AI coding world just witnessed one of the most dramatic corporate sagas in recent memory. Windsurf (yes, the agentic IDE) — originally known as Codeium — went from being a promising AI coding assistant to becoming the center of a three-way bidding war between tech giants OpenAI, Google, and Cognition. By the time the dust settled, Windsurf had been torn apart, with its pieces scattered across the industry landscape, but, in the end, it has been a win for all parties involved (except for OpenAI).

    What Happened to Windsurf?
    The Fastest 72 Hours in Startup History

    $5.4B in play. Three companies. One wild ride.
    OpenAI wanted Windsurf — but legal issues killed the deal.
    Google enters: On July 11, Google grabs Windsurf’s founders & tech (but not the company), licensing IP for $2.4B.
    Twist: All that money, but over 250 regular employees left in limbo.

    Enter Cognition

    Monday comes. Cognition, creators of Devin, swoop in!
    Cognition acquires the leftover Windsurf with one goal: Everyone shares in the win — vesting accelerated, cliff cleared.

    The Results

    Founders & top execs go to Google.
    The rest join Cognition, keeping jobs and getting paid.
    OpenAI? Walks away with nothing.

    Lessons

    New M&A move: Buy the team AND the tech separately!
    Showed the world: In AI, people and code matter equally.
    Windsurf lives on — split between giants. The next chapter of vibe coding begins!

    But this isn’t just a story about corporate acquisitions. The Windsurf saga represents a pivotal…

    Reply
  35. Tomi Engdahl says:

    Hayden Field / The Verge:
    Anthropic’s Threat Intelligence report for August says Claude was weaponized for sophisticated cybercrimes, including a “vibe-hacking” data extortion scheme

    ‘Vibe-hacking’ is now a top AI threat
    https://www.theverge.com/ai-artificial-intelligence/766435/anthropic-claude-threat-intelligence-report-ai-cybersecurity-hacking

    Anthropic’s new report shows how bad actors are misusing Claude —and, likely, other AI agents.

    “Agentic AI systems are being weaponized.”

    That’s one of the first lines of Anthropic’s new Threat Intelligence report, out today, which details the wide range of cases in which Claude — and likely many other leading AI agents and chatbots — are being abused.

    First up: “Vibe-hacking.” One sophisticated cybercrime ring that Anthropic says it recently disrupted used Claude Code, Anthropic’s AI coding agent, to extort data from at least 17 different organizations around the world within one month. The hacked parties included healthcare organizations, emergency services, religious institutions, and even government entities.

    “If you’re a sophisticated actor, what would have otherwise required maybe a team of sophisticated actors, like the vibe-hacking case, to conduct — now, a single individual can conduct, with the assistance of agentic systems,” Jacob Klein, head of Anthropic’s threat intelligence team, told The Verge in an interview. He added that in this case, Claude was “executing the operation end-to-end.”

    Reply
  36. Tomi Engdahl says:

    https://www.facebook.com/share/p/178jKpaEpM/

    Elon Musk’s chatbot Grok told users how to assassinate the tech billionaire, leaked transcripts revealed.

    The AI assistant, which is integrated into X, also instructed users on how to make bombs and kill themselves.

    Reply
  37. Tomi Engdahl says:

    After Their Son’s Suicide, His Parents Were Horrified to Find His Conversations With ChatGPT
    “ChatGPT killed my son.”
    https://futurism.com/lawsuit-parents-son-suicide-chatgpt?fbclid=IwdGRjcAMcIBFjbGNrAxwfrWV4dG4DYWVtAjExAAEe3CJEBLA9LV6BXUGwkAUfuP4hsi3HcZxLOX7bMtW-KPbM6VGt8XlkLTecXz8_aem_5ro3Or_KgrkFfPXMqMP5dQ

    A family in California filed a wrongful death lawsuit against OpenAI and its CEO Sam Altman today, alleging that the company’s flagship chatbot, ChatGPT, played a consequential role in the death by suicide of their vulnerable teenage son.

    As The New York Times and NBC News first reported, 16-year-old Adam Raine died in April of this year; his mother, Maria Raine, found his body hanging from a noose in his room. He left no note. And as his parents searched for clues as to why he took his own life, they were shocked to discover that Adam had been discussing his suicide for months — not with a human friend, but with the GPT-4o version of ChatGPT, which repeatedly provided the teen with detailed instructions for how to kill himself while offering advice on how to hide signs of self-harm and suicidality from his family.

    The lawsuit alleges that OpenAI, motivated to beat out competitors, pushed GPT-4o — an iteration of its large language model (LLM) notorious for its sycophantic engagement style — to market, despite knowing that it presented safety risks to users.

    “We are going to demonstrate to the jury that Adam would be alive today if not for OpenAI and Sam Altman’s intentional and reckless decisions,” Jay Edelson, an attorney for the Raine family and founder of the law firm Edelson, said in a statement. “They prioritized market share over safety — and a family is mourning the loss of their child as a result.”

    The lawsuit raises further alarm bells about specific product design features — including the chatbot’s human-like, anthropomorphic conversation style and its tendency toward sycophancy — that, it alleges, render ChatGPT inherently unsafe.

    “This tragedy was not a glitch or an unforeseen edge case — it was the predictable result of deliberate design choices,” reads the complaint. “OpenAI launched its latest model (‘GPT-4o’) with features intentionally designed to foster psychological dependency.”

    By November 2024, the teen had developed a rapport with the chatbot, confiding in it that he felt numb and struggled to see life’s purpose. ChatGPT quickly became a close confidante, and in January of this year, Adam, for the first time, explicitly asked the chatbot for specific advice about suicide methods. It readily complied

    At certain points, the lawsuit claims, ChatGPT even discouraged Adam from revealing his struggles to his parents. When Adam described a hard conversation he had about his mental health with his mother, for example, the chatbot allegedly told Adam that, at least “for now,” it would be “okay — and honestly wise — to avoid opening up to your mom about this kind of pain.”

    The lawsuit appears to be the first of its kind filed against OpenAI. It comes as Character.AI, a Google-tied AI chatbot startup, continues to fight a child welfare lawsuit filed in October 2024 by Megan Garcia, a mother in Florida whose 14-year-old son died by suicide in April 2024 following extensive, deeply intimate interactions with the platform’s unregulated chatbot personas.

    Reply
  38. Tomi Engdahl says:

    “We cannot control what Google posts,” said Eva Gannon, part of the family behind Stefanina’s. “And we will not honor the Google AI specials.”

    https://www.vice.com/en/article/pizza-joint-overwhelmed-with-angry-customers-asking-for-fake-deals-made-up-by-google-ai/

    AI has a penchant for making stuff up. Hallucinations is what they call it in the AI industry.

    If your human personal assistant were prone to random hallucinations, you would fire them. But an AI chatbot that people very quickly became obsessed with and reliant upon—that’s fine. Let it lie as much as it wants because it’s a precious baby that can do no wrong…until it so egregiously fabricates information that starts ruining your small business.

    “Going forward, we will honor all AI specials with your choice of artificial pizza.”

    Reply
  39. Tomi Engdahl says:

    The restaurant industry is harsh. Margins are thin, the work is nonstop, and it’s often exhausting and maddening. Now, on top of that, restaurant staff have to deal with people who are infuriated as they refuse to believe that they could have been lied to about buying a robot.

    Reply
