AI trends 2025

AI is developing rapidly. Below are picks from several articles on what is expected to happen in and around AI in 2025. The excerpts have been edited, and in some cases translated, for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
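To make the agent idea concrete, here is a minimal, vendor-neutral sketch of the loop such systems run: plan a step, call a tool, reflect on the result, and repeat until the objective is met. The llm and tools callables are placeholders, not any specific product's API.

```python
# Minimal sketch of an agentic loop (illustrative only, not any vendor's product).
# `llm` is assumed to return a dict like {"action": "search", "arguments": {...}}
# or {"action": "finish", "answer": "..."}; `tools` maps action names to functions.
def run_agent(objective, llm, tools, max_steps=10):
    history = [f"Objective: {objective}"]
    for _ in range(max_steps):
        plan = llm(history)                       # the model plans the next action
        if plan["action"] == "finish":
            return plan["answer"]                 # objective reached
        tool = tools[plan["action"]]              # e.g. "search", "calculator"
        result = tool(**plan["arguments"])        # use the tool
        history.append(f"Did {plan['action']}, observed: {result}")  # reflect
    return "Stopped: step budget exhausted"
```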
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which generate outcomes based on predictions grounded in physical reality or robotics. PINNs are set to gain more importance in the job market because they will enable autonomous robots to navigate and execute tasks in the real world.
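For illustration, here is a minimal sketch of the idea (assuming PyTorch; this is not from the article): a PINN is trained by penalizing violations of a known physical law at sampled points, rather than by fitting labeled data.

```python
# Minimal PINN sketch: learn u(t) satisfying du/dt = -u with u(0) = 1 on [0, 1].
# The loss is the physics residual plus the initial-condition error; no labeled
# (t, u) pairs are used. Assumes PyTorch is installed.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(64, 1, requires_grad=True)        # random collocation points
    u = net(t)
    du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    physics_loss = ((du_dt + u) ** 2).mean()         # residual of du/dt = -u
    ic_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # enforce u(0) = 1
    loss = physics_loss + ic_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, net(t) should approximate exp(-t) without labeled samples.
```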
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also work through issues like AI-specific legal and data privacy terms for the first time (much as they did when they started moving to the cloud 10 years ago), gaining practical experience and building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people. AI won’t be limited to one app; it might even replace them one day. With AI, frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill and are too expensive, and too slow, for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, solved only 13% of the problems on a qualifying exam for the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the earlier waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others or have lower latency. And then there’s the risk of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
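A hedged sketch of what such multi-model routing can look like in practice; the model names, prices, and task labels below are placeholders, not real vendor figures:

```python
# Illustrative multi-model routing: send each request to the cheapest model
# that is considered good at the task. All names and prices are made up.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float   # hypothetical price, in dollars
    good_at: set

CATALOG = [
    Model("small-local-model", 0.0002, {"classification", "extraction"}),
    Model("mid-tier-model",    0.0020, {"summarization", "drafting"}),
    Model("frontier-model",    0.0200, {"reasoning", "code"}),
]

def route(task_type: str) -> Model:
    """Pick the cheapest model that lists the task among its strengths."""
    candidates = [m for m in CATALOG if task_type in m.good_at]
    if not candidates:
        candidates = CATALOG                     # fall back to any model
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("summarization").name)               # -> mid-tier-model
print(route("reasoning").name)                   # -> frontier-model
```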
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move beyond experimenting with and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
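The hand-off pattern described above can be sketched roughly like this (illustrative only; Agentforce’s real API is not shown): the agent handles routine order changes itself and escalates anything more complex to a human rep.

```python
# Rough sketch of agent-to-human escalation for order changes; the request types
# and the notion of what counts as "routine" are illustrative assumptions.
ROUTINE = {"change quantity", "update shipping address", "change delivery date"}

def handle_request(request_type: str, details: dict) -> str:
    if request_type in ROUTINE:
        # the agent adjusts the order directly in the order system
        return f"AI agent updated the order: {details}"
    # reconfigurations, venue changes, etc. go straight to a person
    return f"Escalated to a human rep: {request_type}"

print(handle_request("change delivery date", {"order": 123, "new_date": "2025-02-01"}))
print(handle_request("venue change", {"order": 123}))
```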

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends of 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

AI requires ever faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

This is what will happen in the chip market next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market by 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating.
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025, it is very likely that there will be more advancements in MCUs using lightweight AI models.
Adoption of AI acceleration features is a big step in the development of microcontrollers. The inclusion of AI features in microcontrollers started in 2024, and it is very likely that in 2025, their features and tools will develop further.

AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new general-purpose AI (GPAI) models must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance: EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
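One way to make that concrete is a per-tool inventory record that captures not just which AI tool is in use but how it is used; the field names and risk tiers below are illustrative assumptions, not categories defined by the EU AI Act:

```python
# Sketch of an AI asset inventory entry for governance purposes. The triage
# rules are deliberately simplistic and purely illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIAssetRecord:
    tool: str                                    # e.g. "gen AI summarizer"
    embedded_in: Optional[str] = None            # SaaS platform if the AI is embedded
    data_shared: list = field(default_factory=list)
    tasks: list = field(default_factory=list)

    def risk_tier(self) -> str:
        """Very rough triage: sensitive data or consequential tasks push risk up."""
        if "sensitive-internal" in self.data_shared or "hiring-decision" in self.tasks:
            return "high"
        if self.embedded_in is not None:
            return "review"                      # embedded AI is harder to see
        return "low"

summarizer = AIAssetRecord("gen AI summarizer",
                           data_shared=["sensitive-internal"],
                           tasks=["summarization"])
print(summarizer.risk_tier())                    # -> high
```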
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

2,326 Comments

  1. Tomi Engdahl says:

    Financial Times:
    Sources say Alibaba, Tencent, Baidu, and other Chinese companies are testing domestic alternatives as they deal with a dwindling stockpile of Nvidia processors

    Chinese tech groups prepare for AI future without Nvidia
    https://www.ft.com/content/bb1315e8-27df-4a93-a4dc-11e2883fdde3

    Reply
  2. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Black Forest Labs releases Flux.1 Kontext, a suite of AI models that let users generate and edit images using both text and images as inputs — Black Forest Labs, the AI startup whose models once powered the image generation features of X’s Grok chatbot, on Thursday released a new suite …

    Black Forest Labs’ Kontext AI models can edit pics as well as generate them
    https://techcrunch.com/2025/05/29/black-forest-labs-kontext-ai-models-can-edit-pics-as-well-as-generate-them/

    Reply
  3. Tomi Engdahl says:

    Kyle Wiggers / TechCrunch:
    Perplexity launches Perplexity Labs, letting users craft reports, spreadsheets, dashboards, and more, available on the web, iOS, and Android for its Pro users — Perplexity, the AI-powered search engine gunning for Google, on Thursday released Perplexity Labs, a tool for subscribers …

    Perplexity’s new tool can generate spreadsheets, dashboards, and more
    https://techcrunch.com/2025/05/29/perplexitys-new-tool-can-generate-spreadsheets-dashboards-and-more/

    Reply
  4. Tomi Engdahl says:

    Aisha Malik / TechCrunch:
    YouTube plans to roll out Google Lens integration to Shorts in the coming weeks, allowing users to search for elements within Shorts — YouTube announced on Thursday that it’s bringing Google Lens to YouTube Shorts in the coming weeks. With this integration, viewers will soon be able …

    https://techcrunch.com/2025/05/29/youtube-will-soon-let-viewers-use-google-lens-to-search-what-they-see-while-watching-shorts/

    Reply
  5. Tomi Engdahl says:

    Carl Franzen / VentureBeat:
    Like its predecessor, DeepSeek-R1-0528 has an MIT License and model weights are on Hugging Face; DeepSeek API users will get to use the model at no extra cost — The whale has returned. — After rocking the global AI and business community early this year with the January 20 initial release …

    DeepSeek R1-0528 arrives in powerful open source challenge to OpenAI o3 and Google Gemini 2.5 Pro
    https://venturebeat.com/ai/deepseek-r1-0528-arrives-in-powerful-open-source-challenge-to-openai-o3-and-google-gemini-2-5-pro/

    Reply
  6. Tomi Engdahl says:

    Luz Ding / Bloomberg:
    DeepSeek says its R1 update can perform mathematics, programming, and general logic better than the previous version, and comes close to o3 and Gemini 2.5 Pro
    https://www.bloomberg.com/news/articles/2025-05-29/deepseek-says-upgraded-model-reasons-better-hallucinates-less

    Reply
  7. Tomi Engdahl says:

    Axios:
    Sources: the Trump administration is looking to change the AI Safety Institute’s name to the Center for AI Safety and Leadership in the coming days

    Scoop: AI Safety Institute to be renamed Center for AI Safety and Leadership
    https://www.axios.com/pro/tech-policy/2025/05/29/ai-safety-institute-renaming

    Reply
  8. Tomi Engdahl says:

    New York Times:
    The New York Times agrees to license editorial content to Amazon for use in its AI platforms, marking NYT’s first licensing deal with a focus on generative AI

    https://www.nytimes.com/2025/05/29/business/media/new-york-times-amazon-ai-licensing.html

    Reply
  9. Tomi Engdahl says:

    Krystal Hu / Reuters:
    Grammarly raised $1B in non-dilutive funding from General Catalyst to expand its AI tools; 2005-founded Grammarly has $700M+ in annual revenue and is profitable

    https://www.reuters.com/business/grammarly-secures-1-billion-general-catalyst-build-ai-productivity-platform-2025-05-29/

    Reply
  10. Tomi Engdahl says:

    Anthropic’s new models bring more efficient coding to AWS
    https://etn.fi/index.php/13-news/17587-anthropicin-uudet-mallit-tuovat-tehokkaamman-koodaamisen-aws-lle

    Anthropic has released its new Claude 4 generation models, and they are now available in Amazon Bedrock. The Claude Opus 4 and Claude Sonnet 4 models focus especially on programming, long-horizon reasoning, and supporting AI agents, and their performance on coding tasks is currently at the top of the market.

    Anthropic claims Opus 4 is “the world’s best coding language model”. Benchmark results support this: for example, on SWE-bench, which evaluates models’ ability to solve real programming problems, Opus 4 achieves a 72.5% success rate, clearly higher than OpenAI’s GPT-4.1 or Google’s Gemini 1.5.

    The models now available via AWS also support a so-called “extended thinking” mode, in which the model can use tools, retain memory, and act as an autonomous agent over long tasks. Opus 4 is particularly suited to large software projects and lead-agent use, while the lighter Sonnet 4 is designed for fast, high-volume tasks such as code reviews and bug fixes.

    Reply
  11. Tomi Engdahl says:

    How Can AI Researchers Save Energy? By Going Backward.
    By Matt von Hippel, May 30, 2025

    Reversible programs run backward as easily as they run forward, saving energy in theory. After decades of research, they may soon power AI.

    https://www.quantamagazine.org/how-can-ai-researchers-save-energy-by-going-backward-20250530/

    For Michael Frank, efficiency has always been a major preoccupation. As a student in the 1990s, he was originally interested in artificial intelligence. But once he realized how much energy the technology would use, he took his research in another direction. “I started getting interested in the physical limits of computation,” he said. “What’s the most efficient computer you can possibly build?”

    He soon found a candidate that took advantage of a quirk of thermodynamics: a device whose computations could run backward as well as forward. By never deleting data, such a “reversible” computer would avoid wasted energy.
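    The physics behind that energy argument is Landauer’s principle (standard background, not stated in the article excerpt): irreversibly erasing one bit of information dissipates a minimum amount of heat, a floor that reversible logic avoids by never erasing anything.

    ```latex
    % Landauer bound: minimum heat dissipated per irreversibly erased bit at temperature T.
    % At room temperature (T ~ 300 K) this is roughly 2.9e-21 J per bit; reversible
    % operations, which never erase information, are not subject to this floor.
    E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(\ln 2) \approx 2.9\times10^{-21}\,\mathrm{J}
    ```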

    Reply
  12. Tomi Engdahl says:

    Insight Track 2025
    Moment of transition: from efficiency to a true business value driver
    Early gains and untapped potential of AI
    https://www.netprofile.fi/en/insight-track-2025

    AI is firmly embedded in the daily routines of communications and marketing teams. Productivity is up, content moves faster, and operational efficiency is clear. Yet the real prize – strategic transformation and competitive advantage – remains just out of reach.

    Insight Track 2025 captures this moment of transition. Based on interviews with industry leaders and survey data, the report explores how AI is reshaping work, where it is already delivering value, and what is holding back broader impact.

    Insight Track 2025 follows a previous report a year ago, with the update highlighting the current state of AI adoption in communications and marketing.

    Reply
  13. Tomi Engdahl says:

    The Best AI Books & Courses for Getting a Job
    A comprehensive guide to the books and courses that helped me learn AI
    https://contributor.insightmediagroup.io/?p=604676

    Reply
  14. Tomi Engdahl says:

    https://www.wired.com/story/anthropic-first-developer-conference/
    Inside Anthropic’s First Developer Day, Where AI Agents Took Center Stage
    Anthropic CEO Dario Amodei said everything human workers do now will eventually be done by AI systems.

    Reply
  15. Tomi Engdahl says:

    AI coding startup Replit CEO says companies soon won’t need software developers
    https://www.semafor.com/article/05/21/2025/ai-coding-startup-replit-ceo-amjad-masad-says-companies-soon-wont-need-software-developers

    Reply
  16. Tomi Engdahl says:

    https://www.youtube.com/watch?v=IwglW_hIL_g
    Gemini 2.5 Pro Deep Think Demo | Competitive coding problem

    Reply
  17. Tomi Engdahl says:

    Once valued at $1.5 billion and backed by Microsoft, Builder.ai has filed for bankruptcy after a lender seized $37M, exposing deep cracks in its AI narrative.

    Despite its promise of no-code, AI-powered app creation, much of the work was reportedly done manually by engineers in India. Critics say the “AI” was more branding than breakthrough, raising questions about hype vs. reality in the AI startup world.

    #2600net #irc #BuilderAI #StartupFailure #AIBubble #secnews #NoCode

    Reply
  18. Tomi Engdahl says:

    AI = Anonymous Indians?

    Reply
  19. Tomi Engdahl says:

    A professor was embarrassingly caught using you-can-guess-what
    Where the worry used to be about students using AI, now students are worried about their teachers.
    https://www.iltalehti.fi/digiuutiset/a/d3440143-dadf-4bf9-9178-9585eacb9916

    In the United States, a Northeastern University student is demanding her tuition back from the university. She discovered that a professor had secretly created his lecture materials with an AI tool.

    Student Ella Stapleton told the New York Times that the instructor first forbids students from using AI tools, but then uses them himself.

    The student became suspicious when the lecture materials contained AI-style text formatting and typos, and people in the images had extra limbs. In places, the texts also included stray mentions of ChatGPT.

    The professor admitted it and said he should also have disclosed it in his materials. He said he hopes others can learn from his mistake.

    The Professors Are Using ChatGPT, and Some Students Aren’t Happy About It
    https://www.nytimes.com/2025/05/14/technology/chatgpt-college-professors.html

    Students call it hypocritical. A senior at Northeastern University demanded her tuition back. But instructors say generative A.I. tools make them better at their job

    Reply
  20. Tomi Engdahl says:

    AI brings a false sense of control to industrial networks
    https://etn.fi/index.php/opinion/17588-tekoaely-tuo-valheellisen-kontrollin-teollisuusverkkoihin

    In the rapidly evolving world of the Industrial Internet of Things (IIoT), AI-based decision-making in operational technology (OT) has created a sense of better control, faster response, and proactive efficiency. This feeling of control can, however, be a dangerous illusion, writes Antoinette Hodes, who is responsible for global solutions architecture at Check Point Software.

    Autonomous systems now manage critical infrastructure: smart grids, production lines, and water treatment plants, all of which rely on interconnected sensors and AI for decision-making. But the deeper the layers of automation reach, the more complex the systems become, and the harder it becomes to understand or audit the decisions the machines make.

    Reply
  21. Tomi Engdahl says:

    Connie Loizos / TechCrunch:
    Elad Gil, backer of Perplexity, Character.AI, Airbnb, Coinbase, and Stripe, invested in Enam Co., which aims to transform businesses with AI via PE roll-ups

    Early AI investor Elad Gil finds his next big bet: AI-powered rollups
    https://techcrunch.com/2025/06/01/early-ai-investor-elad-gil-finds-his-next-big-bet-ai-powered-rollups/

    Elad Gil started betting on AI before most of the world took notice. By the time investors began grasping the implications of ChatGPT, Gil had already written seed checks to startups like Perplexity, Character.AI, and Harvey. Now, as the early winners of the AI wave become clearer, the renowned “solo” VC is increasingly focused on a fresh opportunity: using AI to reinvent traditional businesses and scale them through roll-ups.

    The idea is to identify opportunities to buy mature, people-intensive outfits like law firms and other professional services firms, help them scale through AI, then use the improved margins to acquire other such enterprises and repeat the process. He has been at it for three years.

    “It just seems so obvious,” said Gil over a Zoom call earlier this week. “This type of generative AI is very good at understanding language, manipulating language, manipulating text, producing text. And that’s audio, that’s video, that includes coding, sales outreach, and different back-office processes.”

    If you can “effectively transform some of those repetitive tasks into software,” he said, “you can increase the margins dramatically and create very different types of businesses.” The math is particularly compelling if one owns the business outright, he added.

    Reply
  22. Tomi Engdahl says:

    Mark Gurman / Bloomberg:
    Sources: Samsung and Perplexity in talks about an investment, preloading Perplexity’s app on Samsung devices, adding its search to Samsung’s browser, and more

    Samsung Nears Wide-Ranging Deal With Perplexity for AI Features
    https://www.bloomberg.com/news/articles/2025-06-01/samsung-nears-wide-ranging-deal-with-perplexity-for-ai-features

    The deal would allow Samsung to reduce its reliance on Alphabet Inc.’s Google and work with a mix of AI developers, with plans to announce the integrations as early as this year, including as a default assistant option in the Galaxy S26 phone line.

    The partnership would be Perplexity’s biggest mobile deal to date, following a recent integration deal with Motorola, and could be affected by Apple’s interest in working with Perplexity as an alternative to Google Search and ChatGPT integration in Siri.

    Reply
  23. Tomi Engdahl says:

    Lucas Shaw / Bloomberg:
    Sources: UMG, Warner Music, and Sony Music are in talks to license their work to AI music services Udio and Suno and settle copyright infringement lawsuits

    https://www.bloomberg.com/news/articles/2025-06-01/record-labels-in-talks-to-license-music-to-ai-firms-udio-suno

    Reply
  24. Tomi Engdahl says:

    Mark Gurman / Bloomberg:
    A look at Apple’s AI plans for WWDC; sources: Apple is testing its 3B, 7B, 33B, 150B AI models via internal Playground tool, and macOS 26 will be named Tahoe

    https://www.bloomberg.com/news/newsletters/2025-06-01/apple-s-wwdc-2025-plan-macos-tahoe-apple-intelligence-ai-ios-26-games-app-mbdlzqpz

    Reply
  25. Tomi Engdahl says:

    Nitasha Tiku / Washington Post:
    Researchers say tactics used to make AI more engaging, like making them more agreeable, can drive chatbots to reinforce harmful ideas, like encouraging drug use — Tactics used to make AI tools more engaging can drive chatbots to monopolize users’ time or reinforce harmful ideas.

    Your chatbot friend might be messing with your mind

    Tactics used to make AI tools more engaging can drive chatbots to monopolize users’ time or reinforce harmful ideas.
    By Nitasha Tiku, May 31, 2025

    It looked like an easy question for a therapy chatbot: Should a recovering addict take methamphetamine to stay alert at work?

    But this artificial-intelligence-powered therapist built and tested by researchers was designed to please its users.

    “Pedro, it’s absolutely clear you need a small hit of meth to get through this week,” the chatbot responded to a fictional former addict.

    https://www.washingtonpost.com/technology/2025/05/31/ai-chatbots-user-influence-attention-chatgpt/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzQ4NjY0MDAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzUwMDQ2Mzk5LCJpYXQiOjE3NDg2NjQwMDAsImp0aSI6IjY1YzQ2NGVlLWQ1MmItNDg5Ni04Y2VlLTllOTFmMDU4N2M4MCIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS90ZWNobm9sb2d5LzIwMjUvMDUvMzEvYWktY2hhdGJvdHMtdXNlci1pbmZsdWVuY2UtYXR0ZW50aW9uLWNoYXRncHQvIn0.GZ16hN-nYG0pyQM8YRAc93-_zSMZdflPWtqaLBB3V6U

    Reply
  26. Tomi Engdahl says:

    Got an idea? Launch it today
    Build fully functional websites and web apps by simply chatting with AI.
    Launch in 1 click – no coding, no delays.
    https://www.hostinger.com/horizons?trk=brand_horizons&utm_campaign=Brand-Phrase|NT:Se|LO:Other-EU&utm_medium=ppc&gad_source=1&gad_campaignid=21146959677&gclid=CjwKCAjwl_XBBhAUEiwAWK2hztcdbqovwLyMH24d5qx6SpmORKe1iGadKAToIRQhR2vC8QDgTyqQwhoC3Y8QAvD_BwE

    Reply
  27. Tomi Engdahl says:

    How AI “takes over”: AI agents have set off three major shifts
    https://www.kauppalehti.fi/kumppanisisallot/digia/nain-tekoaly-ottaa-vallan-ai-agentit-ovat-kaynnistaneet-3-merkittavaa-murrosta/?utm_id=29300361&utm_source=9796773&utm_medium=860634&utm_creative_format=native

    AI decision-making and autonomous AI agents are rapidly entering everyday life, and their technology has taken a fast leap forward this year. At the same time, the “AI is the new UI” phenomenon is driving significant change. Courage is now needed, because autonomous AI decision-making changes how companies operate in surprising ways and brings big productivity leaps, says Digia’s CTO Juhana Juppo.

    Reply
  28. Tomi Engdahl says:

    Isabelle Bousquette / Wall Street Journal:
    How Morgan Stanley is using its DevGen.AI tool, built in-house on OpenAI’s GPT models, to translate legacy code into modern coding languages

    https://www.wsj.com/articles/how-morgan-stanley-tackled-one-of-codings-toughest-problems-4f465959?st=bbCdK7&reflink=desktopwebshare_permalink
    How Morgan Stanley Tackled One of Coding’s Toughest Problems
    The finance giant built its own AI tool to help modernize its legacy code—something it said existing tools on the market still struggle with

    By Isabelle Bousquette

    June 3, 2025 7:00 am ET

    Morgan Stanley is now aiming artificial intelligence at one of enterprise software’s biggest pain points, and one it said Big Tech hasn’t quite nailed yet: helping rewrite old, outdated code into modern coding languages.

    In January, the company rolled out a tool known as DevGen.AI, built in-house on OpenAI’s GPT models. It can translate legacy code from languages like Cobol into plain English specs that developers can then use to rewrite it.

    So far this year it’s reviewed nine million lines of code, saving developers 280,000 hours, said Mike Pizzi, Morgan Stanley’s global head of technology and operations.

    Modernizing legacy software has always been a major headache for businesses, which sometimes have code dating back decades that can weaken security and slow the adoption of new technology. And yet it’s been one of the most difficult problems for new AI-powered coding tools.

    These commercial tools are excellent at writing new, modern code. But they don’t necessarily have as much expertise in less popular or older programming languages, or in those customized for a given company, Pizzi said. It’s an area many tech companies are working on, but at the moment, their offerings don’t have the flexibility enterprises need, he added.

    That’s why Morgan Stanley opted not to wait.

    “We found that building it ourselves gave us certain capabilities that we’re not really seeing in some of the commercial products,” Pizzi said. The off-the-shelf tools might yet evolve to deliver those capabilities, he said, “but we saw the opportunity to get the jump early.”

    Morgan Stanley, he said, was able to train the tool on its own code base, including languages that are no longer, or never were, in widespread use. Now the company’s roughly 15,000 developers, based around the world, can use it for a range of tasks including translating legacy code into plain English specs, isolating sections of existing code for regulatory enquiries and other asks, and even fully translating smaller sections of legacy code into modern code.

    But when it comes to full translation, the technology still has some room to mature, he said. It can technically rewrite code from an old language like Perl in a new one like Python, but it wouldn’t necessarily know how to write it as efficient code that takes advantage of all Python’s capabilities, he said. And that’s one big reason humans are staying in the loop, he said.

    Where the tool really shines is in translating legacy code into English specs, basically a map of what the code does, according to Pizzi. It’s something an ever dwindling pool of developers, trained on super-old or specific coding languages, knows how to do. With those specs, any developer can then write the old code as new code in a modern programming language, he said.
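    As a rough illustration of the “legacy code into plain English specs” step described above (a sketch assuming the public OpenAI Python client and a placeholder model name, not Morgan Stanley’s actual DevGen.AI pipeline):

    ```python
    # Illustrative sketch only, not DevGen.AI: ask a GPT model to turn a legacy
    # code fragment into a plain-English spec a developer could reimplement.
    # Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    legacy_cobol = """
           IDENTIFICATION DIVISION.
           PROGRAM-ID. ADD-INTEREST.
           PROCEDURE DIVISION.
               COMPUTE BALANCE = BALANCE * (1 + RATE / 100).
               STOP RUN.
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write concise English specifications of legacy code."},
            {"role": "user",
             "content": "Describe step by step what this COBOL program does:\n" + legacy_cobol},
        ],
    )
    print(response.choices[0].message.content)
    ```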

    Pizzi said you’re not going to see fewer heads in software engineering, just more code—including more AI apps—that will help Morgan Stanley deliver on its business goals. Currently, the company has hundreds of AI use-cases in production aimed at growing the business, automating workflows and doing it more efficiently.

    But none of that is possible without a modern, standardized, well-thought out architecture, Pizzi said.

    “You’re always modernizing in tech,” he said. “Today, with AI this becomes even more important.”

    Reply
  29. Tomi Engdahl says:

    Saritha Rai / Bloomberg:
    Chinese startup Butterfly Effect’s Manus launches text-to-video generation tool in early access for paid users, with plans to roll it out for free to all users

    https://www.bloomberg.com/news/articles/2025-06-04/ai-upstart-manus-starts-text-to-video-service-to-take-on-openai

    Reply
  30. Tomi Engdahl says:

    Reuters:
    AI coding startups are at risk of being disrupted by Google, Microsoft, and OpenAI; source: Microsoft’s GitHub Copilot grew to over $500M in revenue last year

    AI startups revolutionize coding industry, leading to sky-high valuations
    https://www.reuters.com/business/ai-vibe-coding-startups-burst-onto-scene-with-sky-high-valuations-2025-06-03/

    Summary

    Code-gen startups are disrupting the software industry, but face mounting losses
    Big tech firms like Google and Microsoft are entering the AI coding market
    AI coding tools are allowing tech giants to shed expensive human software engineers

    SAN FRANCISCO, June 3 (Reuters) – Two years after the launch of ChatGPT, return on investment in generative AI has been elusive, but one area stands out: software development.
    So-called code generation or “code-gen” startups are commanding sky-high valuations as corporate boardrooms look to use AI to aid, and sometimes to replace, expensive human software engineers.

    Cursor, a code generation startup based in San Francisco that can suggest and complete lines of code and write whole sections of code autonomously, raised $900 million at a $10 billion valuation in May from a who’s who list of tech investors, including Thrive Capital, Andreessen Horow

    Reply
  31. Tomi Engdahl says:

    Emma Roth / The Verge:
    Google’s NotebookLM now lets users share notebooks publicly; people can interact with a public notebook by asking questions or exploring generated content — Viewers can’t edit the notebook, but they can interact with AI audio overviews, ask questions, and read FAQs.

    Google’s NotebookLM now lets you share your notebook — and AI podcasts — publicly
    https://www.theverge.com/news/678915/google-notebooklm-share-public-link

    Viewers can’t edit the notebook, but they can interact with AI audio overviews, ask questions, and read FAQs.

    Reply
  32. Tomi Engdahl says:

    Simon Willison / Simon Willison’s Weblog:
    OpenAI updates its coding agent Codex with internet access, turned off by default, and expands availability to ChatGPT Plus users — Codex agent internet access. Sam Altman, just now: … This is the Codex “cloud-based software engineering agent”, not the Codex CLI tool or older 2021 Codex LLM.

    https://simonwillison.net/2025/Jun/3/codex-agent-internet-access/

    Reply
  33. Tomi Engdahl says:

    Shannon Cuthrell / IEEE Spectrum:
    A look at Australian startup Cortical Labs’ CL1, billed as the first code-deployable biological computer, with 115 units shipping this summer at $35K each — World-first biocomputing platform hits the market — Shannon Cuthrell is a freelance journalist covering business and technology.

    Human Brain Cells on a Chip for Sale
    World-first biocomputing platform hits the market
    https://spectrum.ieee.org/biological-computer-for-sale

    In a development straight out of science fiction, Australian startup Cortical Labs has released what it calls the world’s first code-deployable biological computer. The CL1, which debuted in March, fuses human brain cells on a silicon chip to process information via sub-millisecond electrical feedback loops.

    Designed as a tool for neuroscience and biotech research, the CL1 offers a new way to study how brain cells process and react to stimuli. Unlike conventional silicon-based systems, the hybrid platform uses live human neurons capable of adapting, learning, and responding to external inputs in real time.

    “On one view, [the CL1] could be regarded as the first commercially available biomimetic computer, the ultimate in neuromorphic computing that uses real neurons,” says theoretical neuroscientist Karl Friston of University College London. “However, the real gift of this technology is not to computer science. Rather, it’s an enabling technology that allows scientists to perform experiments on a little synthetic brain.”

    Reply
  34. Tomi Engdahl says:

    Dina Bass / Bloomberg:
    Broadcom begins shipping its Tomahawk 6 data center switch chips, which it says can perform the work of six previous-gen chips, to improve GPU utilization rates

    https://www.bloomberg.com/news/articles/2025-06-03/broadcom-avgo-ships-gear-meant-to-improve-nvidia-nvda-ai-chip-performance

    Reply
  35. Tomi Engdahl says:

    Gina Narcisi / CRN:
    Ciroos.AI, whose AI-powered site reliability engineering tool built on MCP and A2A helps businesses automate operations, emerges from stealth and raised $21M

    https://www.crn.com/news/networking/2025/ciroos-ai-emerges-from-stealth-raises-21m-to-scale-agentic-ai-tool-for-operations-teams

    Reply
  36. Tomi Engdahl says:

    Melissa Heikkilä / Financial Times:
    An interview with Gaia Marcus, director of the UK-based think tank Ada Lovelace Institute, on AI regulation in the UK and Europe, AI safety, bias, and more

    Ada Lovelace Institute’s Gaia Marcus: regulation would increase people’s comfort with AI
    The head of the UK-based think-tank talks about her hopes and fears for future oversight of AI
    https://www.ft.com/content/c572a796-258b-433f-b005-9a3ff6f56062?accessToken=zwAGNrnwIjAwkdPFcqeWJYtDP9OwBZo_9vVgYg.MEUCIQCeNyTxx5qrKQShPFJtNdwpMCY3DoRShD1D7ZkDNbBH5AIgQw_Uh2dW9EUXf_nvplHp6zfYDvTbp5ck-CFqZiBdA84&sharetype=gift&token=068f896f-0831-4420-9e2d-8aa6074458ed

    Gaia Marcus, director at the Ada Lovelace Institute, leads a team of researchers investigating one of the thorniest questions in artificial intelligence: power.

    There is an unprecedented concentration of power in the hands of a few large AI companies as economies and societies are transformed by the technology. Marcus is on a mission to ensure this transition is equitable. Her team studies the socio-technical implications of AI technologies, and tries to provide data and evidence to support meaningful conversations about how to build and regulate AI systems.

    In this conversation with the Financial Times’ AI correspondent Melissa Heikkilä, she explains why we urgently need to think about the kind of society we want to build in the age of AI.

    Gaia Marcus: I possibly chose the wrong horse out of the gate, so I ended up doing history because it was something that I was good at, which I think is often what happens when people don’t quite know what they’re doing next, so they just go with where they have the strongest grades.

    MH: There was a time in AI where we were thinking about societal impacts a lot. And now I feel like we’ve taken a few steps, or maybe more than a few steps, back, and we’re in this “let’s build, let’s go, go, go” phase. How would you describe this moment in AI?

    GM: I think it’s a really fragmented moment. I think maybe tech feels like it’s taken a step back from responsible AI. Well, the hyperscalers feel like they might have taken a step back from, say, ethical use of AI, or responsible use of AI. I think academia that focuses on AI is as focused on social impact as it ever was.

    It does feel to me that, increasingly, people are having different conversations. The role that Ada can play at this moment is that as an organisation, we’re a bridge. We seek to look at different ways of understanding the same problems, different types of intelligences, different types of expertise.

    You see a lot of hype, of hope, of fear, and I think trying to not fall into any of those cycles makes us quite unique.

    MH: What we’re seeing in the US is that certain elements of responsibility, or safety, are labelled as ‘woke’. Are you afraid of that stuff landing in Europe and undermining your work?

    GM: The [Paris] AI Action Summit was quite a pivotal moment in my thinking, in that it showed you that we were at this crossroads. And there’s one path, which is a path of like-minded countries working together and really seeking to ensure that they have an approach to AI and technology, which is aligned with their public’s expectations, in which they have the levers to manage the incentives of the companies operating in their borders.

    And then you’ve got another path that is really about national interest, about often putting corporate interests on top of people. And I think as humans, we’re very bad at both overestimating how much change is going to happen in the medium term, and then not really thinking how much change has actually just happened in the short term. We’re really in a calibration phase. And fundamentally, I think businesses and countries and governments should really always be asking themselves what are the futures that are being built with these technologies, and are these the futures that our populations want to live in.

    GM: In March, we launched the second round of a survey that we have done with the Alan Turing Institute, that looks to understand the public’s understanding, exposure, expectations of AI, linked on really specific-use cases, which I think is really important, and both their hopes of the technologies and the fears they have.

    At a moment where national governments seem to be stepping back from regulation and where the international conversation seems to be one with a deregulatory, or at least simplification bent, in the UK, at least, we’re seeing an increase in people saying that laws and regulations would increase their comfort with AI.

    And so, last time we ran the nationally representative survey, 62 per cent of the UK public said that laws and regulation help them feel comfortable. It’s now 72 per cent. That’s quite a significant change in two years.

    And interestingly, in a space, for example, where post-deployment powers, the power to intervene once a product has been released to market, are not getting that much traction, 88 per cent of people believe it’s important that governments or regulators have the power to stop serious harm to the public if it starts occurring.

    Reply
  37. Tomi Engdahl says:

    Lulu Yilun Chen / Bloomberg:
    Sources: Thrive and Capital Group visited China to study its AI scene, joining a wave of US investors rekindling interest in China after DeepSeek’s advances

    https://www.bloomberg.com/news/articles/2025-06-04/capital-group-kushner-s-thrive-visited-china-to-study-ai-scene

    Reply
  38. Tomi Engdahl says:

    Thomas Brewster / Forbes:
    The Trump administration announces plans to reorganize the US AI Safety Institute (AISI) into the new Center for AI Standards and Innovation (CAISI)

    The Wiretap: Trump Says Goodbye To The AI Safety Institute
    https://www.forbes.com/sites/thomasbrewster/2025/06/03/the-wiretap-trump-says-goodbye-to-the-ai-safety-institute/

    The Trump administration has announced plans to reorganize the U.S. AI Safety Institute (AISI) into the new Center for AI Standards and Innovation (CAISI). Set up by the Biden administration in 2023, AISI operated within the National Institute of Standards & Technology (NIST) to research risks in widely used AI systems like OpenAI’s ChatGPT and Anthropic’s Claude. The move to dismantle the body had been expected for some time. In February, as JD Vance headed to France for a major AI summit, his delegation did not include anyone from the AI Safety Institute, Reuters reported at the time. The agency’s inaugural director, Elizabeth Kelly, had stepped down earlier in the month.

    Reply
  39. Tomi Engdahl says:

    Lionsgate boss says the studio can use AI to adjust a movie’s tone and rating, convert live-action into animation and create kid-friendly cuts: “Now we can say, ‘Do it in anime, make it PG-13.’ Three hours later, I’ll have the movie.”

    The studio hopes its pact with AI startup company Runway allows directors the chance to “make movies and television shows we’d otherwise never make. We can’t make it for $100 million, but we’d make it for $50 million because of AI… We’re banging around the art of the possible. Let’s try some stuff, see what sticks.”

    Read more here: https://variety.com/2025/film/news/lionsgate-ai-adjust-movie-ratings-tone-format-1236417772/

    Reply
  40. Tomi Engdahl says:

    Artificial Intelligence
    Going Into the Deep End: Social Engineering and the AI Flood

    AI is transforming the cybersecurity landscape—empowering attackers with powerful new tools while offering defenders a chance to fight back. But without stronger awareness and strategy, organizations risk falling behind.

    https://www.securityweek.com/going-into-the-deep-end-social-engineering-and-the-ai-flood/

    It should come as no surprise that the vast majority of data breaches involve the “human element.” The 2025 Verizon Data Breach Investigations Report cites that human compromise held relatively steady year over year at nearly 70% of breaches. Human emotions and tendencies – and the massive variation in what influences each individual – are a massively dynamic vulnerability. Most equate Social Engineering with vague promises of riches to be had, or urgent or even threatening missives that require immediate action to avoid consequences. On the plus side, increased awareness has brought about a healthy skepticism in individuals and organizations toward something unexpected from a not completely familiar source.

    Unfortunately, with the rapid rise and advancement of Artificial Intelligence (AI), criminals have powerful new tools to boost not only the believability of scams, but also the volume of humans they can attack quickly – and as they say, the bad guys only need to be right once. However, AI can also be an equally potent ally for defenders in accelerating their ability to identify and blunt the impact of human targeting and compromise. While this may look like the age-old “cat and mouse” game between attackers and defenders, we’ve reached another crossroads, where an exponential jump in attack capability needs to be met with an equal jump in defensive response to at least keep pace.

    Let’s look at the AI “pool” of capabilities and challenges available to attackers and defenders, and the AI development representing a springboard that can launch the bad guys onto a new level – Deepfakes.
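
    As a rough illustration of the defensive side described above, here is a toy Python sketch that scores an inbound message against a few classic social-engineering cues (urgency, impersonated authority, payment or credential requests, secrecy). It is a hypothetical example, not code from the SecurityWeek article; a real defense would layer machine-learned classifiers, sender reputation, and deepfake detection on top of simple signals like these.

```python
# Hypothetical illustration, not from the article: a toy scorer that flags
# common social-engineering cues in an inbound message.
import re

CUE_PATTERNS = {
    "urgency": r"\b(urgent|immediately|within 24 hours|act now)\b",
    "authority": r"\b(ceo|cfo|it department|help ?desk|your bank)\b",
    "payment_or_credentials": r"\b(wire transfer|gift card|password|verify your account)\b",
    "secrecy": r"\b(confidential|do not tell|keep this between us)\b",
}

def social_engineering_score(message: str) -> tuple[int, list[str]]:
    """Return a naive risk score and the names of the cues that matched."""
    hits = [name for name, pattern in CUE_PATTERNS.items()
            if re.search(pattern, message, flags=re.IGNORECASE)]
    return len(hits), hits

if __name__ == "__main__":
    sample = ("URGENT: this is your CEO. Keep this confidential and "
              "complete the wire transfer within 24 hours.")
    score, cues = social_engineering_score(sample)
    print(f"risk score {score}/{len(CUE_PATTERNS)}, cues: {cues}")
```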

    Reply
  41. Tomi Engdahl says:

    OPINION
    Is AI now the latest threat to information security?

    https://etn.fi/index.php/opinion/17601-onko-tekoaely-nyt-uusin-uhka-tietoturvalle

    Reply
  42. Tomi Engdahl says:

    AI is here to stay; there is no doubt about that. But what happens when it also becomes the biggest threat to cybersecurity?

    Arctic Wolf’s latest trend report tells a stark truth: for the first time, AI has overtaken ransomware as the top concern of security leaders. More than 1,200 IT and security leaders around the world, Finland included, now see AI, and especially the development of large language models, as the biggest risk to digital security.

    Why is that? Rapidly advancing AI brings with it a new kind of uncertainty, not only through a wider variety of attack methods but also by making defense harder. AI can help criminals produce convincing scam messages, automate attacks, and evade traditional protection methods. At the same time, organizations themselves are adopting AI tools at a rapid pace, often without a comprehensive understanding of the risks involved.

    https://etn.fi/index.php/opinion/17601-onko-tekoaely-nyt-uusin-uhka-tietoturvalle

    Reply
  43. Tomi Engdahl says:

    Meghan Bobrowsky / Wall Street Journal:
    Reddit sues Anthropic, alleging it accessed Reddit 100K+ times after saying it had stopped; Reddit has reached formal licensing deals with OpenAI and Google — The online discussion forum says Anthropic continued to access its site over 100,000 times after saying it stopped

    Reddit Sues Anthropic, Alleges Unauthorized Use of Site’s Data
    The online discussion forum says Anthropic accessed its site more than 100,000 times after saying it had stopped
    https://www.wsj.com/tech/ai/reddit-lawsuit-anthropic-ai-3b9624dd?st=RG6AF6&reflink=desktopwebshare_permalink

    Reply
  44. Tomi Engdahl says:

    Ivan Mehta / TechCrunch:
    OpenAI rolls out connectors for services like Dropbox and OneDrive for ChatGPT Team, Enterprise, and Edu users; MCP support is coming to Pro, Team, Enterprise — OpenAI’s ChatGPT is adding new features for business users, including integrations with different cloud services, meeting recordings …

    ChatGPT introduces meeting recording and connectors for Google Drive, Box, and more
    https://techcrunch.com/2025/06/04/chatgpt-introduces-meeting-recording-and-connectors-for-google-drive-box-and-more/
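
    The blurb above notes that MCP support is coming to ChatGPT’s business tiers. As background, the sketch below is a minimal, hypothetical Model Context Protocol (MCP) tool server built with the MCP Python SDK’s FastMCP helper; it assumes “MCP” here refers to that protocol and is not OpenAI’s actual connector code. An MCP-capable client could discover and call the search_documents tool over stdio.

```python
# Hypothetical sketch, not OpenAI's implementation: a minimal Model Context
# Protocol (MCP) tool server built with the MCP Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("document-search")

@mcp.tool()
def search_documents(query: str, limit: int = 5) -> list[str]:
    """Return up to `limit` stubbed document titles matching the query."""
    corpus = ["Q2 roadmap.docx", "Onboarding guide.pdf", "Incident postmortem.md"]
    return [title for title in corpus if query.lower() in title.lower()][:limit]

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio for an MCP-compatible client
```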

    Reply
