AI is developing all the time. Here are picks from several articles on what is expected to happen in and around AI in 2025. The texts are excerpts from the articles, edited and in some cases translated for clarity.
AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from emerging technologies to impacting how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take, for example, physics-informed neural networks (PINNs), which produce predictions constrained by the laws of physics, for instance in robotics. PINNs are set to gain importance because they will enable autonomous robots to navigate and execute tasks in the real world.
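To make the PINN idea concrete, here is a minimal, hypothetical sketch in PyTorch (my illustration, not from the article): the training loss combines a data-fitting term with a physics residual, here an assumed exponential-decay ODE du/dt = -k*u with a made-up constant k.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
k = 1.5  # assumed decay constant in the illustrative ODE du/dt = -k*u

def pinn_loss(t_data, u_data, t_collocation):
    # Data term: fit the few measurements we actually have.
    data_loss = ((net(t_data) - u_data) ** 2).mean()
    # Physics term: penalize the ODE residual du/dt + k*u at collocation points.
    t = t_collocation.clone().requires_grad_(True)
    u = net(t)
    du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    physics_loss = ((du_dt + k * u) ** 2).mean()
    return data_loss + physics_loss

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
t_data = torch.tensor([[0.0], [1.0]])
u_data = torch.exp(-k * t_data)                       # toy "measurements"
t_coll = torch.linspace(0.0, 2.0, 50).reshape(-1, 1)  # points where physics is enforced
for _ in range(2000):
    optimizer.zero_grad()
    loss = pinn_loss(t_data, u_data, t_coll)
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.4f}")
```

The physics term pushes the network toward physically plausible outputs even where no measurements exist, which is what makes the approach attractive for robotics and simulation.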
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and their value for organizations and individuals alike, 2025 will see the industry’s unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also gain practical experience working through issues like AI-specific legal and data privacy terms for the first time (much as they did when they started moving to the cloud 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means disrupting the classical user experience, moving from system-led interactions to intent-based, people-led conversations with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people to use. AI won’t be limited to one app; it might even replace them one day. With AI, the boundaries between frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.
12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could only solve 13% of the problems on a qualifying exam for the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the earlier waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take for example that task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
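As an illustration of the idea, here is a minimal, hypothetical routing sketch in Python; the model names, task categories, and per-token prices are made-up placeholders, not real offerings.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    provider: str
    cost_per_1k_tokens: float   # illustrative numbers, not real prices
    good_at: set

CATALOG = [
    Model("small-fast", "provider-a", 0.0002, {"classification", "extraction"}),
    Model("mid-general", "provider-b", 0.002, {"summarization", "chat"}),
    Model("large-reasoning", "provider-c", 0.02, {"reasoning", "code"}),
]

def route(task_type: str, max_cost: float) -> Model:
    """Pick the cheapest model that claims to handle the task within budget."""
    candidates = [m for m in CATALOG
                  if task_type in m.good_at and m.cost_per_1k_tokens <= max_cost]
    if not candidates:
        # Fall back to the last (most capable) catalog entry if nothing qualifies.
        return CATALOG[-1]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("summarization", max_cost=0.01).name)  # -> mid-general
print(route("reasoning", max_cost=0.001).name)     # -> large-reasoning (fallback)
```

In practice the catalog and the routing rule would come from your own benchmarks and contracts, and the fallback policy is a design choice rather than anything standard.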
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.
9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move from experimenting and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”
Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.
Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed
The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together
Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations
Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.
7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.
OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets
It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.
AI requires ever faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.
AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity
What will happen in the chip market next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The demand for high-performance computing (HPC) for artificial intelligence continues to grow strongly, with the market set to expand by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market in 2025.
1. AI growth accelerates
2. Asia-Pacific IC design heats up
3. TSMC’s leadership position strengthens
4. The expansion of advanced processes accelerates
5. The mature process market recovers
6. 2nm technology breakthrough
7. Restructuring of the packaging and testing market
8. Advanced packaging technologies on the rise
2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024, and in 2025 there will very likely be more advancements in MCUs using lightweight AI models.
The adoption of AI acceleration features is a big step in the development of microcontrollers. The inclusion of these features began in 2024, and their capabilities and tooling are likely to develop further in 2025.
AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new AI model based on general-purpose AI (GPAI) standards must be fully compliant with the act. Also similar to GDPR is the threat of huge fines for non-compliance – EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
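To make the "inventory plus usage context" idea more concrete, here is a hypothetical sketch of what an AI asset record might capture; the field names, risk tiers, and example entries are my assumptions for illustration, not something prescribed by the EU AI Act or the article.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AIAssetRecord:
    tool: str                        # standalone tool or an embedded AI feature
    owner: str                       # accountable team or person
    use_case: str                    # what the AI is actually doing
    data_categories: list = field(default_factory=list)  # e.g. ["internal docs", "PII"]
    embedded_in: str = ""            # host SaaS platform if this is "embedded AI"
    risk_tier: RiskTier = RiskTier.MINIMAL

inventory = [
    AIAssetRecord(tool="generative AI chatbot", owner="Marketing",
                  use_case="draft marketing content",
                  data_categories=["public product info"],
                  risk_tier=RiskTier.MINIMAL),
    AIAssetRecord(tool="generative AI chatbot", owner="Finance",
                  use_case="summarize sensitive internal documents",
                  data_categories=["confidential internal docs"],
                  risk_tier=RiskTier.HIGH),
]

# Same tool, different usage context, different risk tier.
for record in inventory:
    if record.risk_tier is RiskTier.HIGH:
        print(f"Review needed: {record.owner} uses {record.tool} to {record.use_case}")
```

The point of the structure is exactly the article's: the tool name alone is not enough; the usage context and the data it touches determine the risk tier.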
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.
The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.
Tomi Engdahl says:
https://cybersecuritynews.com/top-10-gpt-tools/#google_vignette
Top 10 GPT Tools For Hackers, Penetration Testers, and Security Analysts
Tomi Engdahl says:
What Happens When People Don’t Understand How AI Works
Despite what tech CEOs might say, large language models are not smart in any recognizably human sense of the word.
https://www.theatlantic.com/culture/archive/2025/06/artificial-intelligence-illiteracy/683021/
Tomi Engdahl says:
Vibe Coding as a software engineer
There’s a lot of talk about “vibe coding”, but is it just a vague term for prototyping, or could vibes change how we build software?
https://newsletter.pragmaticengineer.com/p/vibe-coding-as-a-software-engineer
Tomi Engdahl says:
Stop Building AI Platforms
When small and medium companies achieve success in building Data and ML platforms, building AI platforms is now profoundly challenging
https://towardsdatascience.com/stop-building-ai-platforms/
Tomi Engdahl says:
https://www.cnbc.com/2023/10/27/new-tool-lets-artists-poison-their-artwork-to-deter-ai-companies.html
Tomi Engdahl says:
https://www.xda-developers.com/using-notebooklm-with-obsidian/
Tomi Engdahl says:
Google AI Introduces Multi-Agent System Search MASS: A New AI Agent Optimization Framework for Better Prompts and Topologies
https://www.marktechpost.com/2025/06/07/google-ai-introduces-multi-agent-system-search-mass-a-new-ai-agent-optimization-framework-for-better-prompts-and-topologies/
Tomi Engdahl says:
Need Help Getting Organized? You Can Now Schedule Actions in Google Gemini
Google says the new feature can be used for everything from ‘staying informed on your favorite sports team’ to ‘scheduling one-off tasks.’
https://uk.pcmag.com/ai/158473/organized-you-can-now-schedule-actions-in-google-gemini
Tomi Engdahl says:
https://blog.langchain.com/benchmarking-multi-agent-architectures/
Tomi Engdahl says:
A Knockout Blow for LLMs?
LLM “reasoning” is so cooked, they turned my name into a verb.
https://cacm.acm.org/blogcacm/a-knockout-blow-for-llms/
Apple has a new paper; it’s pretty devastating to LLMs, a powerful followup to one from many of the same authors last year.
There’s actually an interesting weakness in the new argument—which I will get to below—but the overall force of the argument is undeniably powerful. So much so that LLM advocates are already partly conceding the blow while hinting at, or at least hoping for, happier futures ahead.
Tomi Engdahl says:
https://www.forbes.com/sites/rachelwells/2025/06/16/turn-this-chatgpt-prompt-into-a-six-figure-passive-income-stream/
Tomi Engdahl says:
https://www.xda-developers.com/replaced-google-home-home-assistant-local-llm/
Tomi Engdahl says:
https://venturebeat.com/ai/ai-is-rewriting-the-data-playbook-and-knowledge-graphs-are-page-one/
Tomi Engdahl says:
https://www.infoworld.com/article/4008535/openais-o3-price-plunge-changes-everything-for-vibe-coders.html
Tomi Engdahl says:
10 best open source ChatGPT alternative that runs 100% locally
#ai #programming #chatgpt #javascript
AI chatbots have taken the world by storm—and leading the charge is OpenAI’s ChatGPT. But as powerful as it is, ChatGPT comes with limitations: it runs on the cloud, raises privacy concerns, and isn’t open source.
https://dev.to/therealmrmumba/10-best-open-source-chatgpt-alternative-that-runs-100-locally-jdc
Tomi Engdahl says:
How energy and utilities companies can greatly benefit from AI-powered predictive maintenance and computer vision?
The energy and utilities sectors face growing complexity, regulatory demands, and rising expectations for reliability and efficiency. From managing distributed energy systems to maintaining critical water infrastructure, performance and cost control are constant challenges. When supported by high-quality data, AI technologies such as machine learning and advanced analytics enable predictive maintenance, demand forecasting, smart asset management, and much more. These capabilities offer a powerful path to improve efficiency, reduce risk, and unlock new value across operations, says Etteplan’s Artur Mroczkowski.
https://www.etteplan.com/about-us/insights/how-energy-and-utilities-companies-can-greatly-benefit-from-ai-powered-predictive-maintenance-and-computer-vision/
Tomi Engdahl says:
From AI agent hype to practicality: Why enterprises must consider fit over flash
https://venturebeat.com/ai/from-ai-agent-hype-to-practicality-why-enterprises-must-consider-fit-over-flash/
As we step fully into the era of autonomous transformation, AI agents are transforming how businesses operate and create value. But with hundreds of vendors claiming to offer “AI agents,” how do we cut through the hype and understand what these systems can truly accomplish and, more importantly, how we should use them?
The answer is more complicated than creating a list of tasks that could be automated and testing whether an AI agent can achieve those tasks against benchmarks. A jet can move faster than a car, but it’s the wrong choice for a trip to the grocery store.
Tomi Engdahl says:
https://www.zdnet.com/article/what-are-ai-agents-how-to-access-a-team-of-personalized-assistants/
Tomi Engdahl says:
AI Models from Google, OpenAI, Anthropic Solve 0% of ‘Hard’ Coding Problems
Despite claims of AI models surpassing elite humans, ‘a significant gap still remains, particularly in areas demanding novel insights.’
https://analyticsindiamag.com/global-tech/ai-models-from-google-openai-anthropic-solve-0-of-hard-coding-problems/
If you’ve heard the phrase ‘coding is dead’ for a mind-numbingly high number of times, take a deep breath and pause. A new benchmark from researchers across notable universities in the United States and Canada has sparked a twist in the tale.
It turns out that AI is far from solving some of the most complex coding problems today.
A study by New York University, Princeton University, the University of California, San Diego, McGill University, and others indicates a significant gap between the coding capabilities of present-day LLMs and elite human intelligence.
Tomi Engdahl says:
https://www.anthropic.com/research/open-source-circuit-tracing
In our recent interpretability research, we introduced a new method to trace the thoughts of a large language model. Today, we’re open-sourcing the method so that anyone can build on our research.
Our approach is to generate attribution graphs, which (partially) reveal the steps a model took internally to decide on a particular output. The open-source library we’re releasing supports the generation of attribution graphs on popular open-weights models—and a frontend hosted by Neuronpedia lets you explore the graphs interactively.
Tomi Engdahl says:
https://www.geeky-gadgets.com/ai-rewriting-its-own-code/
Tomi Engdahl says:
9 AI Tools That Will Separate Winners from Losers in 2025
https://www.geeky-gadgets.com/ai-tools-for-career-success-2025/#google_vignette
The Four Core Skills of AI Generalists
To thrive in an AI-dominated era, mastering four essential capabilities is crucial. These skills will enable you to maximize the potential of AI tools, whether you’re a student, employee, or entrepreneur.
1. The Power to Build
AI tools like Lovable AI and Cursor empower you to create apps and websites without requiring advanced coding skills. By using “vibe coding,” you can describe your ideas in plain language, and the AI translates them into functional software. This democratization of software development allows anyone to turn concepts into reality quickly and efficiently, breaking down barriers that once limited innovation to technical experts.
2. The Power to Automate
Repetitive tasks no longer need to consume your time. Tools such as Relevance AI, N8N, and Postman streamline workflows, automate customer follow-ups, and organize tasks. By mastering skills like prompt engineering and API integration, you can design systems that handle routine work, freeing you to focus on strategic and creative endeavors. Automation not only increases efficiency but also enhances productivity across various industries.
3. The Power to Create
Content creation is no longer the exclusive domain of specialists. AI tools like ChatGPT-4 Image Generation, Runway, and Recraft simplify graphic design, video editing, and UI/UX design. For audio and video editing, Descript offers professional-grade capabilities that are accessible to anyone. Whether you’re crafting visuals for a project, producing marketing materials, or designing user interfaces, AI makes high-quality creation achievable for all, regardless of technical expertise.
4. The Power to Connect
Effective communication is a cornerstone of success in today’s digital world. Tools like Poppy AI help you write essays, newsletters, courses, and brand communications tailored to your audience. By using AI for personalized communication, you can build stronger relationships and convey your ideas more effectively. Whether in business, education, or personal projects, the ability to connect with others through tailored messaging is a skill that will set you apart.
Tomi Engdahl says:
Microsoft AI Introduces Code Researcher: A Deep Research Agent for Large Systems Code and Commit History
https://www.marktechpost.com/2025/06/14/microsoft-ai-introduces-code-researcher-a-deep-research-agent-for-large-systems-code-and-commit-history/
Tomi Engdahl says:
https://github.blog/changelog/2025-06-20-upcoming-deprecation-of-o1-gpt-4-5-o3-mini-and-gpt-4o/
Tomi Engdahl says:
https://www.marktechpost.com/2025/06/18/why-small-language-models-slms-are-poised-to-redefine-agentic-ai-efficiency-cost-and-practical-deployment/
Tomi Engdahl says:
“The human touch remains irreplaceable in many interactions.” https://trib.al/2CoE6Dj
Companies That Replaced Humans With AI Are Realizing Their Mistake
https://futurism.com/companies-replaced-workers-ai?fbclid=IwY2xjawLEjrxleHRuA2FlbQIxMQABHkI7-QVVAUSMQafqE3YVAOKiCVRedTX2TqEci8RbEAssxDMMv9vcxz0Z_WXD_aem_m1L41wTuVHVHdbOBKuNB1g
According to tech billionaire and OpenAI CEO Sam Altman, 2025 was supposed to be the year “when AI agents will work.”
Despite widespread hype, so-called “AI agents” — a software product that’s supposed to complete human-level tasks autonomously — have yet to live up to their name. As of April, even the best AI agent could only finish 24 percent of the jobs assigned to it. Still, that didn’t stop business executives from swarming to the software like flies to roadside carrion, gutting entire departments worth of human workers to make way for their AI replacements.
But as AI agents have yet to even pay for themselves — spilling their employer’s embarrassing secrets all the while — more and more executives are waking up to the sloppy reality of AI hype.
A recent survey by the business analysis and consulting firm Gartner, for instance, found that out of 163 business executives, a full half said their plans to “significantly reduce their customer service workforce” would be abandoned by 2027.
This is forcing corporate PR spinsters to rewrite speeches about AI “transcending automation,” instead leaning on phrases like “hybrid approach” and “transitional challenges” to describe the fact that they still need humans to run a workplace.
“The human touch remains irreplaceable in many interactions, and organizations must balance technology with human empathy and understanding,” said Kathy Ross, Gartner’s senior director of customer service and support analysis.
That’s a vibe employees have been feeling for a while now. Another report, this one by IT firm GoTo and research agency Workplace Intelligence, found that 62 percent of employees are currently saying that AI is “significantly overhyped.”
Likewise, only 45 percent of corporate IT managers reported having a formal AI policy in place, suggesting a scattered and hasty rollout of the tech. Of those IT leaders, 56 percent said “security concerns” and “integration challenges” were the main barriers to AI adoption.
The reports come as a number of businesses have already made the humiliating walk-back of shame in recent weeks.
Finance startup Klarna, for example, reduced its workforce by 22 percent throughout 2024 ahead of the long-promised AI revolution. But then the company did an about-face on its AI strategy back in May, announcing a “recruitment drive” to bring all those meat bags back to work.
According to tech critic Ed Zitron, the whole agentic charade can be explained by the fact that “it isn’t obvious what any of these AI-powered products do, and when you finally work it out, they don’t seem to do that much.”
“These ‘agents’ are branded to sound like intelligent lifeforms that can make intelligent decisions,” Zitron writes, “but are really just trumped-up automations that require enterprise customers to invest time programming them.”
Tomi Engdahl says:
Google Adds Button to Generate Error-Laden AI Podcast About Your Search Results Instead of Just Reading Them Like a Functioning Member of Society
We’re opting out.
https://futurism.com/google-button-generate-ai-podcast-search-results?fbclid=IwY2xjawLEkL5leHRuA2FlbQIxMQABHkyS2EWKGu3L2Ji3enpCVxZYnSXboRFJ_Aa1NQoaAQbAhczflBTMZXjuZvE2_aem_6XvzDttPsvnPMDRVyBU-ow
Google has released a baffling new AI feature that turns your web search into a podcast.
Why anybody would want to enable the feature is unclear. Why be plagued by misleading and hallucinated AI Overviews search results when you can have a robotic voice read them out loud instead? Have we really lost the ability as a species to parse written information, nevermind original sources?
The opt-in feature — which currently lives inside Google’s experimental “Labs” section and has to be manually turned on — harnesses the power of the company’s Gemini AI model to turn a search query into “quick, conversational audio overviews.”
According to the tech giant, an “audio overview can help you get a lay of the land, offering a convenient, hands-free way to absorb information whether you’re multitasking or simply prefer an audio experience.”
But is this anything anybody really asked for? Having two fake podcast hosts rant about a subject you’re researching — likely with a smattering of hallucinations — sounds like an incredibly counterintuitive and needlessly obtuse way to get quick access to information.
Tomi Engdahl says:
The feature first surfaced last year as part of Google’s NotebookLM, a note-taking tool that uses AI to help users organize their thoughts and summarize notes. An “Audio Overviews” feature can then take your notes and turn them into AI-generated podcasts, with often unintentionally hilarious results.
Tomi Engdahl says:
Particularly when it comes to search results, where speed has conventionally trumped anything else, turning AI summaries into rambling audio snippets sounds pretty exhausting.
Tomi Engdahl says:
“Are there ever going to be enough GPUs?” https://trib.al/Naz9mGM
Sam Altman Says “Significant Fraction” of Earth’s Total Electricity Should Go to Running AI
“Are there ever going to be enough GPUs?”
https://futurism.com/openai-altman-electricity-ai?fbclid=IwY2xjawLEkkVleHRuA2FlbQIxMQABHq-qzoR_OCiwawp89Hgy9MErNbJkv0eqyrfkgC2574PB82y5laNzJ-fwBb5I_aem_j1TzB_aYxBtiVNMG4pSmJg
During a recent public appearance, OpenAI CEO Sam Altman admitted that he wants a large chunk of the world’s power grid to help him run artificial intelligence models.
As Laptop Mag flagged, he dropped that bomb during AMD’s AI conference last week after Lisa Su, the CEO of the hosting firm, who counts Altman as a client and friend, mentioned ChatGPT’s recent outages.
Though OpenAI hasn’t revealed the exact causes of its massive June outage, there’s a good chance it had to do with running out of computing power. This seems all the more probable given that Altman admitted earlier this year that the company had run out of graphics processing units or GPUs, the high-end computer chips that AMD sells and companies like OpenAI use to power their large language models (LLMs).
“Theoretically, at some points, you can see that a significant fraction of the power on Earth should be spent running AI compute,” Altman said. “And maybe we’re going to get there.”
To reiterate: the CEO of the world’s largest AI company said he believes a “significant fraction” of the electricity on this planet should be used to run AI — and said so to the CEO of a company whose GPUs he recently committed to purchasing, too.
Perhaps most upsetting about Altman’s flippant admission is the environmental impact he so casually ignored. Conventional electric generation often relies on the combustion of fossil fuels, which have been killing our planet since way before OpenAI was a twinkle in Altman’s eye.
Add in a new electricity-guzzling industry like AI to a power grid already stretched to the brink, and you’ve got a serious problem — one that Altman, Su, and everyone else who boosts AI seems to not want to face full-on.
In a new blog post in which the OpenAI CEO claimed that the world is approaching what he calls a “gentle singularity,” or the point at which artificial intelligence meets or surpasses the capabilities of humans, Altman attempted to explain how much power ChatGPT uses — but his description fell short.
“People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes,” the CEO wrote. “It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.”
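(For scale, a quick check of that comparison: 0.34 Wh is about 1,200 joules, which is what an appliance drawing roughly 1.2 kW uses in one second, so the oven analogy holds for a fairly modest oven element.)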
Tomi Engdahl says:
https://www.facebook.com/share/p/1EEwdMtbZM/
In a surprising and somewhat amusing twist, ChatGPT, one of today’s most advanced AI chatbots, was soundly defeated in a game of chess by the Atari 2600, a home video game console that dates back to 1977. Despite its initial confidence and self-proclaimed chess knowledge, ChatGPT struggled to keep up with the decades-old machine. The Atari’s simple game, Video Chess, operates with extremely limited computing power—only able to analyze a few moves ahead—yet it still consistently outperformed ChatGPT.
The match, conducted as a fun experiment by a Citrix engineer, quickly revealed that ChatGPT, while powerful in language understanding and generation, is not built to handle complex game strategy like a specialized chess engine. It made several blunders, misinterpreted piece positions, and failed to grasp the board’s state correctly. Observers joked that it played worse than a beginner-level human player.
This entertaining loss serves as a humbling reminder: ChatGPT is a language model, not a chess-playing AI. While it excels in communication, writing, and problem-solving through text, it’s not designed for tasks that require precise logic and spatial reasoning like traditional chess engines.
#ChatGPT #Atari2600 #AIvsRetro #ChessChallenge #FunnyTechFails #AIlimitations #LanguageModel #VintageVictory
Tomi Engdahl says:
Image understanding
Gemini models are built to be multimodal from the ground up, unlocking a wide range of image processing and computer vision tasks including but not limited to image captioning, classification, and visual question answering without having to train specialized ML models.
https://ai.google.dev/gemini-api/docs/image-understanding?lang=python
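A minimal sketch of the kind of call the docs describe, assuming the google-generativeai Python package; the API key, model choice, and image filename are placeholders.

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")          # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")

image = Image.open("board.jpg")                  # placeholder local image
response = model.generate_content(
    [image, "Describe this image and list the main objects you can see."]
)
print(response.text)
```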
Tomi Engdahl says:
Generate images with Gemini Apps
You can create captivating images in seconds with Gemini Apps. From work, play, or anything in between, Gemini Apps can help you generate images to help bring your imagination to life.
https://support.google.com/gemini/answer/14286560?hl=en&co=GENIE.Platform%3DDesktop
Unleash your imagination with Pixel Studio.
It’s easy to turn ideas into reality – with a little help from Google AI on Pixel
https://store.google.com/intl/en/ideas/articles/pixel-image-gen/
Tomi Engdahl says:
How to Use Google Gemini to Generate Images?
https://clickup.com/blog/gemini-image-generation/
Tomi Engdahl says:
https://www.continue.dev/
Amplified developers, AI-native development: create, share, and use custom AI code assistants with our open-source IDE extensions and hub of rules, tools, and models.
Tomi Engdahl says:
AGI / ASI is coming. Maybe? Soon? ”Why Big Tech cannot agree on artificial general intelligence” https://on.ft.com/40aSNlu
Tomi Engdahl says:
Automated Network Packet Analysis with Gemini AI and Scapy
https://medium.com/the-last/automated-network-packet-analysis-with-gemini-ai-and-scapy-d049763d40b3
This project was developed as part of a graduate course at UVU, aiming to address a challenging problem in packet analysis. The assignment involved scanning and analyzing packets within a large PCAP file, with the objective of reviewing each packet and identifying potential vulnerabilities. Given that the file contained over 500 lines, I decided to automate the process by creating a tool to specifically detect HTTP requests like POST, PUT, DELETE, and other significant operations.
This automation enables efficient inspection between packets to identify any activity that warrants attention. When a relevant packet is detected, the tool generates a detailed report containing the following information: source IP, destination IP, protocol, payload size, a brief summary, an explanation of the issue, potential solutions, and recommended actions.
Manual packet analysis can be time-consuming, especially when dealing with large datasets.
This script reads PCAP files, extracts packet details, and uses Google’s Gemini AI to explain packet behavior and suggest security solutions.
Scapy is a powerful library for packet manipulation and analysis. The script reads packets from .pcap files
Using Gemini AI (gemini-pro model), the script generates a human-readable explanation for each packet summary.
The script dynamically asks Gemini AI to suggest potential security fixes or improvements based on the explanation.
The details, explanations, and solutions are formatted into a comprehensive report for each packet.
The script can process all PCAP files in a specified folder and save detailed reports in an output folder.
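For readers who want a feel for how such a pipeline fits together, here is a hedged sketch (not the author's script; the GitHub link below has the real one) that combines Scapy packet reading with a Gemini prompt. The model name follows the article's mention of gemini-pro; the API key and PCAP path are placeholders.

```python
from scapy.all import rdpcap, TCP, Raw
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # placeholder key
model = genai.GenerativeModel("gemini-pro")      # model named in the article

METHODS = (b"POST", b"PUT", b"DELETE")           # HTTP methods the article singles out

def analyze(pcap_path: str) -> None:
    for pkt in rdpcap(pcap_path):
        # Only look at TCP packets that carry an application-layer payload.
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = bytes(pkt[Raw].load)
            if payload.startswith(METHODS):
                summary = pkt.summary()
                prompt = ("Explain this network packet and suggest possible security "
                          f"improvements.\nSummary: {summary}\n"
                          f"Payload (truncated): {payload[:200]!r}")
                response = model.generate_content(prompt)
                print(summary)
                print(response.text)
                print("-" * 60)

analyze("capture.pcap")                          # placeholder PCAP file
```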
Why Use This Tool?
Efficient Analysis: Process multiple PCAP files quickly.
AI-Powered Insights: Gemini AI helps explain complex packets and generate tailored security recommendations.
Automated Reporting: Generates organized reports to simplify your workflow.
Final Thoughts
Automating network analysis with AI can save valuable time and enhance your ability to detect and mitigate security issues. This script combines the power of Scapy for packet analysis and Gemini AI to generate insights and solutions.
Ready to give it a try? Clone or adapt this script — https://github.com/hvaandres/PcapAnalyzer/blob/dev/pcap_formatted.py for your own cybersecurity projects and start analyzing PCAP files smarter and faster!
Tomi Engdahl says:
https://image2prompt.net/
Tomi Engdahl says:
Synthesia made making training videos too easy for more than 50,000 L&D teams. This is why:
Because you can:
- Skip the production studio
- Update videos in seconds
- Don’t even need to be on camera
Just select your presenter, type in the script and hit “Generate video.”
That’s it. No video editing skills needed.
Try it out for FREE
https://www.synthesia.io/ads/meta/learning-and-development
P.S. Used by 50,000+ teams and rated 4.7/5
Tomi Engdahl says:
How to Choose the Right AI Model for Your Use Case (Without Going Crazy)
https://dev.to/mhamadelitawi/how-to-choose-the-right-ai-model-for-your-use-case-without-going-crazy-1ko0
You’re building with AI — maybe a chatbot, an agent, a writing assistant, or something more experimental. The code is coming together, the idea is taking shape… and then the real question hits:
“Which model should I actually use?”
Suddenly, you’re lost in a jungle of names: GPT-4, GROK, Mistral, Claude, LLaMA, Gemma… Some are open source. Some are locked behind APIs. Some are fast, others smart, all of them marketed like they’re magic.
And every source seems to offer conflicting advice. The truth is:
It’s not about picking the best model in the world — it’s about picking the best model for your job.
Before diving into model comparisons, define what success looks like for your application. Not hype-worthy demos. What matters is what works for your users — and your goals
Ask yourself:
What kind of results do I need? (Accuracy, creativity, safety, etc.)
What are my non-negotiables? (Privacy, low latency, low cost?)
What kind of hardware or budget do I have?
Do I want to use an API or run the model myself?
This might seem obvious, but skipping this step is why so many teams waste time testing the wrong models.
Picking a model isn’t a one-time thing. You’ll probably test and switch models multiple times as your app grows.
For example, you might start testing with a big, fancy model to see if your idea even works, then try smaller, cheaper models to save cost. Maybe later, you want to fine-tune a model for better results.
Here’s the core process most teams follow:
Find the best achievable performance
Map models along cost–performance trade-offs
Choose the best model for your needs and budget
Build or Buy? Use APIs or Run Your Own Model?
Here’s the classic question:
Should I use a commercial model through an API, or host an open-source model myself?
Using Commercial APIs (like OpenAI, Anthropic, etc.)
Pros:
Easy to get started
No server headaches
Great performance, usually
Cons:
You don’t control the model
Can’t tweak everything
Expensive at scale
Privacy/legal concerns
Hosting Open Source Models
Pros:
Full control
Better privacy (data stays with you)
You can finetune or modify as needed
Cons:
Harder to set up
You need infra, GPUs, and time
May not match top commercial models in raw power
Ask Yourself:
How sensitive is your data?
Do you need full control or flexibility?
What’s your team’s technical skill level?
How fast do you need to scale?
Licensing: The Fine Print That Can Mess You Up
Not all “open-source” models are created equal. Some only share their weights (how the model behaves), but not the training data (what it learned from).
Before using a model, ask:
Can I use this model for commercial stuff?
Can I use its output to train other models?
Are there limits on user count or distribution?
Read the license (or ask your lawyer). Some models seem open, but have tricky clauses. Better safe than sorry.
Leaderboards are helpful to narrow down options, not to pick your final model.
Once you’ve picked a few promising models, the best thing to do is run your own tests, using your own data.
Steps:
Pick real tasks your model needs to handle.
Write test prompts (e.g., customer questions, documents to summarize).
Define what good looks like (Accuracy? Speed? Tone?)
Compare models side-by-side.
Don’t rely only on numbers—look at outputs with your own eyes. Real-world behavior matters more than benchmark charts.
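As a concrete starting point, here is a minimal, provider-agnostic comparison harness in Python; the model names, stand-in responses, and keyword-hit scoring are illustrative assumptions to be replaced with your real API calls and your own evaluation criteria.

```python
import time

def compare(models: dict, test_cases: list) -> None:
    for case in test_cases:
        print(f"\n=== Prompt: {case['prompt'][:60]} ===")
        for name, generate in models.items():
            start = time.perf_counter()
            output = generate(case["prompt"])
            latency = time.perf_counter() - start
            # Crude automatic check: does the output mention the expected keywords?
            hits = sum(kw.lower() in output.lower() for kw in case["expect_keywords"])
            print(f"{name:>12} | {latency:5.2f}s | "
                  f"keyword hits {hits}/{len(case['expect_keywords'])}")
            print(f"{'':>12} | {output[:120]}")

# Stand-in model functions; in real use these would wrap API calls or local models.
models = {
    "model-small": lambda p: "Refunds are processed within 14 days of the return.",
    "model-large": lambda p: "Our policy allows refunds within 14 days; contact support to start a return.",
}
test_cases = [
    {"prompt": "Customer asks: how long do refunds take?",
     "expect_keywords": ["14 days", "refund"]},
]
compare(models, test_cases)
```

Automatic keyword checks only narrow things down; as the article says, the human review of the actual outputs is the part that matters.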
Tomi Engdahl says:
https://www.oreilly.com/radar/designing-collaborative-multi-agent-systems-with-the-a2a-protocol/
Tomi Engdahl says:
https://www.fastcompany.com/91355910/perplexitys-new-ai-features-are-a-game-changer-heres-how-to-make-the-most-of-them
Tomi Engdahl says:
Generative AI
Recalculating the Costs and Benefits of Gen AI
https://hbr.org/2025/06/recalculating-the-costs-and-benefits-of-gen-ai
Summary. Gen AI has the potential to create a tremendous amount of value for organizations and individual employees. By focusing on the value of the output that AI produces, however, we tend not to think…
Everywhere we look, we’re surrounded by messaging about the profound impact of generative AI. A steady stream of articles outlines how organizations can increase the speed and scale of work or automate and streamline processes, while our inboxes and social media feeds are bombarded with “top 10 AI tools” lists that promise to make us individually more efficient and to take menial or mundane tasks off our plates.
Tomi Engdahl says:
https://developers.googleblog.com/en/google-cloud-donates-a2a-to-linux-foundation/
Tomi Engdahl says:
https://dev.to/pradumnasaraf/run-mcp-servers-in-seconds-with-docker-1ik5
Tomi Engdahl says:
https://www.xda-developers.com/trying-notebooklm-competitors/
Tomi Engdahl says:
Vibe Coder Gets Legal Notice From DocuSign
https://analyticsindiamag.com/ai-news-updates/vibe-coder-gets-legal-notice-from-docusign/
“I never stole anything from DocuSign or made misleading statements,” said the developer.
DocuSign, a platform that provides digital signature services for documents, has sent a legal notice to Michael Luo, a developer who built a website offering a free alternative with a similar feature suite.
Luo built a free e-sign tool using ChatGPT, Cursor, and Lovable—platforms that help developers write code using natural language prompts. He built a product called Inkless in two days, which lets users sign unlimited documents for free.
“Just as DocuSign respects the intellectual property rights of third parties, we expect third parties to do the same with our intellectual property,” the company said in a cease-and-desist letter sent to Luo. The company said it is also concerned about how Luo was “disseminating” false and misleading statements regarding its product. This likely refers to Luo expressing his inspiration to create a free alternative to Docusign’s high costs.
Luo added that despite receiving the legal notice, he is continuing to build the platform and ship new features to make the “free product even better”.
DocuSign allows users to sign and send back unlimited documents for free. However, if one is collecting signatures, they can only send three documents with a free account.
Over the last few months, using AI to build applications has quickly gained popularity, and platforms serving these capabilities have observed unprecedented growth rates. Andrej Karpathy, a former researcher at OpenAI, calls this phenomenon ‘vibe coding’.
Tomi Engdahl says:
https://www.llamaindex.ai/blog/does-mcp-kill-vector-search
The Model Context Protocol (MCP) has sparked significant excitement in the AI community. It gives every data‑ or SaaS‑owner a universal “USB‑C port” that any agent can discover at run‑time. As standardized tool endpoints proliferate — the official MCP servers repository features everything from Auth0 to Zapier — developers are asking important questions about whether they should bother using vector indexing and retrieval pipelines, i.e. RAG pipelines. If agents can route queries directly to specialized MCP servers each owned by an external software/data provider (what we’ll call “federated MCP”), do we still need the traditional approach of crawling, indexing, and retrieving from centralized knowledge bases?
Tomi Engdahl says:
https://forum.cursor.com/t/cursor-new-unlimited-update/107729