AI trends 2025

AI is developing all the time. Below are picks from several articles on what is expected to happen in and around AI in 2025. The excerpts have been edited, and in some cases translated, for clarity.

AI in 2025: Five Defining Themes
https://news.sap.com/2025/01/ai-in-2025-defining-themes/
Artificial intelligence (AI) is accelerating at an astonishing pace, quickly moving from an emerging technology to one that impacts how businesses run. From building AI agents to interacting with technology in ways that feel more like a natural conversation, AI technologies are poised to transform how we work.
But what exactly lies ahead?
1. Agentic AI: Goodbye Agent Washing, Welcome Multi-Agent Systems
AI agents are currently in their infancy. While many software vendors are releasing and labeling the first “AI agents” based on simple conversational document search, advanced AI agents that will be able to plan, reason, use tools, collaborate with humans and other agents, and iteratively reflect on progress until they achieve their objective are on the horizon. The year 2025 will see them rapidly evolve and act more autonomously. More specifically, 2025 will see AI agents deployed more readily “under the hood,” driving complex agentic workflows.
In short, AI will handle mundane, high-volume tasks while the value of human judgement, creativity, and quality outcomes will increase.
2. Models: No Context, No Value
Large language models (LLMs) will continue to become a commodity for vanilla generative AI tasks, a trend that has already started. LLMs are drawing on an increasingly tapped pool of public data scraped from the internet. This will only worsen, and companies must learn to adapt their models to unique, content-rich data sources.
We will also see a greater variety of foundation models that fulfill different purposes. Take physics-informed neural networks (PINNs), which generate predictions grounded in physical reality and are especially relevant to robotics. PINNs are set to gain importance because they will enable autonomous robots to navigate and execute tasks in the real world.
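
To make the idea concrete, here is a minimal sketch of a physics-informed loss (my own illustration, not from the SAP article), training a small PyTorch network to satisfy the toy ODE du/dt = -u with u(0) = 1:

    import torch

    # Small network approximating u(t); the physics term penalizes violations
    # of the assumed law du/dt = -u at randomly sampled collocation points.
    net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        t = torch.rand(64, 1, requires_grad=True)                       # points in [0, 1]
        u = net(t)
        du_dt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
        physics_loss = ((du_dt + u) ** 2).mean()                        # ODE residual
        boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()    # u(0) = 1
        loss = physics_loss + boundary_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

The trained network approximates u(t) = exp(-t) without any labeled data, which is the property that makes PINNs attractive for physically grounded tasks such as robotics.
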
Models will increasingly become more multimodal, meaning an AI system can process information from various input types.
3. Adoption: From Buzz to Business
While 2024 was all about introducing AI use cases and demonstrating their value for organizations and individuals alike, 2025 will see unprecedented adoption of AI specifically for businesses. More people will understand when and how to use AI, and the technology will mature to the point where it can deal with critical business issues such as managing multi-national complexities. Many companies will also, for the first time, work through issues such as AI-specific legal and data privacy terms (much as they did when they started moving to the cloud about 10 years ago), building the foundation for applying the technology to business processes.
4. User Experience: AI Is Becoming the New UI
AI’s next frontier is seamlessly unifying people, data, and processes to amplify business outcomes. In 2025, we will see increased adoption of AI across the workforce as people discover the benefits of humans plus AI.
This means shifting the classical user experience from system-led interactions to intent-based, people-led conversations, with AI acting in the background. AI copilots will become the new UI for engaging with a system, making software more accessible and easier for people to use. AI won’t be limited to one app; it might even replace apps one day. With AI, the boundaries between frontend, backend, browser, and apps are blurring. This is like giving your AI “arms, legs, and eyes.”
5. Regulation: Innovate, Then Regulate
It’s fair to say that governments worldwide are struggling to keep pace with the rapid advancements in AI technology and to develop meaningful regulatory frameworks that set appropriate guardrails for AI without compromising innovation.

12 AI predictions for 2025
This year we’ve seen AI move from pilots into production use cases. In 2025, they’ll expand into fully-scaled, enterprise-wide deployments.
https://www.cio.com/article/3630070/12-ai-predictions-for-2025.html
1. Small language models and edge computing
Most of the attention this year and last has been on the big language models — specifically on ChatGPT in its various permutations, as well as competitors like Anthropic’s Claude and Meta’s Llama models. But for many business use cases, LLMs are overkill: too expensive and too slow for practical use.
“Looking ahead to 2025, I expect small language models, specifically custom models, to become a more common solution for many businesses,”
2. AI will approach human reasoning ability
In mid-September, OpenAI released a new series of models that, it claims, think through problems much like a person would. The company says they can achieve PhD-level performance on challenging benchmark tests in physics, chemistry, and biology. For example, the previous best model, GPT-4o, could solve only 13% of the problems on a qualifying exam for the International Mathematics Olympiad, while the new reasoning model solved 83%.
If AI can reason better, then it will make it possible for AI agents to understand our intent, translate that into a series of steps, and do things on our behalf, says Gartner analyst Arun Chandrasekaran. “Reasoning also helps us use AI as more of a decision support system,”
3. Massive growth in proven use cases
This year, we’ve seen some use cases proven to have ROI, says Monteiro. In 2025, those use cases will see massive adoption, especially if the AI technology is integrated into the software platforms that companies are already using, making it very simple to adopt.
“The fields of customer service, marketing, and customer development are going to see massive adoption,”
4. The evolution of agile development
The agile manifesto was released in 2001 and, since then, the development philosophy has steadily gained ground over the previous waterfall style of software development.
“For the last 15 years or so, it’s been the de-facto standard for how modern software development works,”
5. Increased regulation
At the end of September, California governor Gavin Newsom signed a law requiring gen AI developers to disclose the data they used to train their systems, which applies to developers who make gen AI systems publicly available to Californians. Developers must comply by the start of 2026.
There are also regulations about the use of deep fakes, facial recognition, and more. The most comprehensive law, the EU’s AI Act, which went into effect last summer, is also something that companies will have to comply with starting in mid-2026, so, again, 2025 is the year when they will need to get ready.
6. AI will become accessible and ubiquitous
With gen AI, people are still at the stage of trying to figure out what gen AI is, how it works, and how to use it.
“There’s going to be a lot less of that,” he says. But gen AI will become ubiquitous and seamlessly woven into workflows, the way the internet is today.
7. Agents will begin replacing services
Software has evolved from big, monolithic systems running on mainframes, to desktop apps, to distributed, service-based architectures, web applications, and mobile apps. Now, it will evolve again, says Malhotra. “Agents are the next phase,” he says. Agents can be more loosely coupled than services, making these architectures more flexible, resilient and smart. And that will bring with it a completely new stack of tools and development processes.
8. The rise of agentic assistants
In addition to agents replacing software components, we’ll also see the rise of agentic assistants, adds Malhotra. Take, for example, the task of keeping up with regulations.
Today, consultants get continuing education to stay abreast of new laws, or reach out to colleagues who are already experts in them. It takes time for the new knowledge to disseminate and be fully absorbed by employees.
“But an AI agent can be instantly updated to ensure that all our work is compliant with the new laws,” says Malhotra. “This isn’t science fiction.”
9. Multi-agent systems
Sure, AI agents are interesting. But things are going to get really interesting when agents start talking to each other, says Babak Hodjat, CTO of AI at Cognizant. It won’t happen overnight, of course, and companies will need to be careful that these agentic systems don’t go off the rails.
Companies such as Sailes and Salesforce are already developing multi-agent workflows.
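
As a rough sketch of what such a multi-agent hand-off can look like (illustrative only; the Agent class, the fake_llm stand-in, and the prompts are placeholders, not any vendor's API):

    class Agent:
        """Toy agent: a name, a role prompt, and a call to some LLM backend."""
        def __init__(self, name, system_prompt, llm):
            self.name = name
            self.system_prompt = system_prompt
            self.llm = llm                                  # callable: (system_prompt, task) -> str

        def run(self, task):
            return self.llm(self.system_prompt, task)

    def fake_llm(system_prompt, task):
        # Stand-in for a real model call so the sketch runs on its own.
        return f"[{system_prompt.split('.')[0]}] {task[:60]}"

    researcher = Agent("researcher", "Collect key facts as bullet points.", fake_llm)
    writer = Agent("writer", "Turn the bullet points into a short customer email.", fake_llm)

    facts = researcher.run("Summarize the new EU AI Act compliance deadlines")
    email = writer.run(facts)                               # one agent's output feeds the next
    print(email)

The point is the loose coupling: each agent only needs the other's output, not a fixed API contract, which is also where guardrails around each hand-off come in.
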
10. Multi-modal AI
Humans and the companies we build are multi-modal. We read and write text, we speak and listen, we see and we draw. And we do all these things through time, so we understand that some things come before other things. Today’s AI models are, for the most part, fragmentary. One can create images, another can only handle text, and some recent ones can understand or produce video.
11. Multi-model routing
Not to be confused with multi-modal AI, multi-model routing is when companies use more than one LLM to power their gen AI applications. Different AI models are better at different things, and some are cheaper than others, or have lower latency. And then there’s the matter of having all your eggs in one basket.
“A number of CIOs I’ve spoken with recently are thinking about the old ERP days of vendor lock,” says Brett Barton, global AI practice leader at Unisys. “And it’s top of mind for many as they look at their application portfolio, specifically as it relates to cloud and AI capabilities.”
Diversifying away from using just a single model for all use cases means a company is less dependent on any one provider and can be more flexible as circumstances change.
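
A minimal sketch of the routing idea (model names, prices, and latencies below are made-up placeholders, not real vendor figures):

    # Route each request to the cheapest model that can handle it within a latency budget.
    MODELS = {
        "small-fast":  {"cost_per_1k_tokens": 0.0002, "latency_ms": 300,  "good_at": {"classify", "extract"}},
        "large-smart": {"cost_per_1k_tokens": 0.0100, "latency_ms": 2500, "good_at": {"classify", "extract", "draft", "reason"}},
    }

    def route(task_type, latency_budget_ms):
        candidates = [
            (spec["cost_per_1k_tokens"], name)
            for name, spec in MODELS.items()
            if task_type in spec["good_at"] and spec["latency_ms"] <= latency_budget_ms
        ]
        return min(candidates)[1] if candidates else "large-smart"   # fall back to the most capable model

    print(route("classify", 500))    # -> small-fast
    print(route("reason", 5000))     # -> large-smart

Because the routing decision is made per request, swapping in a new provider or retiring an old one only changes the table, not the application code.
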
12. Mass customization of enterprise software
Today, only the largest companies, with the deepest pockets, get to have custom software developed specifically for them. It’s just not economically feasible to build large systems for small use cases.
“Right now, people are all using the same version of Teams or Slack or what have you,” says Ernst & Young’s Malhotra. “Microsoft can’t make a custom version just for me.” But once AI begins to accelerate the speed of software development while reducing costs, it starts to become much more feasible.

9 IT resolutions for 2025
https://www.cio.com/article/3629833/9-it-resolutions-for-2025.html
1. Innovate
“We’re embracing innovation,”
2. Double down on harnessing the power of AI
Not surprisingly, getting more out of AI is top of mind for many CIOs.
“I am excited about the potential of generative AI, particularly in the security space,”
3. And ensure effective and secure AI rollouts
“AI is everywhere, and while its benefits are extensive, implementing it effectively across a corporation presents challenges. Balancing the rollout with proper training, adoption, and careful measurement of costs and benefits is essential, particularly while securing company assets in tandem,”
4. Focus on responsible AI
The possibilities of AI grow by the day — but so do the risks.
“My resolution is to mature in our execution of responsible AI,”
“AI is the new gold and in order to truly maximize its potential, we must first have the proper guardrails in place. Taking a human-first approach to AI will help ensure our state can maintain ethics while taking advantage of the new AI innovations.”
5. Deliver value from generative AI
As organizations move beyond experimenting with and testing generative AI use cases, they’re looking for gen AI to deliver real business value.
“As we go into 2025, we’ll continue to see the evolution of gen AI. But it’s no longer about just standing it up. It’s more about optimizing and maximizing the value we’re getting out of gen AI,”
6. Empower global talent
Although harnessing AI is a top objective for Morgan Stanley’s Wetmur, she says she’s equally committed to harnessing the power of people.
7. Create a holistic learning culture
Wetmur has another talent-related objective: to create a learning culture — not just in her own department but across all divisions.
8. Deliver better digital experiences
Deltek’s Cilsick has her sights set on improving her company’s digital employee experience, believing that a better DEX will yield benefits in multiple ways.
Cilsick says she first wants to bring in new technologies and automation to “make things as easy as possible,” mirroring the digital experiences most workers have when using consumer technologies.
“It’s really about leveraging tech to make sure [employees] are more efficient and productive,”
“In 2025 my primary focus as CIO will be on transforming operational efficiency, maximizing business productivity, and enhancing employee experiences,”
9. Position the company for long-term success
Lieberman wants to look beyond 2025, saying another resolution for the year is “to develop a longer-term view of our technology roadmap so that we can strategically decide where to invest our resources.”
“My resolutions for 2025 reflect the evolving needs of our organization, the opportunities presented by AI and emerging technologies, and the necessity to balance innovation with operational efficiency,”
Lieberman aims to develop AI capabilities to automate routine tasks.
“Bots will handle common inquiries ranging from sales account summaries to HR benefits, reducing response times and freeing up resources for strategic initiatives,”

Not just hype — here are real-world use cases for AI agents
https://venturebeat.com/ai/not-just-hype-here-are-real-world-use-cases-for-ai-agents/
Just seven or eight months ago, when a customer called in to or emailed Baca Systems with a service question, a human agent handling the query would begin searching for similar cases in the system and analyzing technical documents.
This process would take roughly five to seven minutes; then the agent could offer the “first meaningful response” and finally begin troubleshooting.
But now, with AI agents powered by Salesforce, that time has been shortened to as few as five to 10 seconds.
Now, instead of having to sift through databases for previous customer calls and similar cases, human reps can ask the AI agent to find the relevant information. The AI runs in the background and allows humans to respond right away, Russo noted.
AI can serve as a sales development representative (SDR) to send out general inquiries and emails, have a back-and-forth dialogue, then pass the prospect to a member of the sales team, Russo explained.
But once the company implements Salesforce’s Agentforce, a customer needing to modify an order will be able to communicate their needs with AI in natural language, and the AI agent will automatically make adjustments. When more complex issues come up — such as a reconfiguration of an order or an all-out venue change — the AI agent will quickly push the matter up to a human rep.

Open Source in 2025: Strap In, Disruption Straight Ahead
Look for new tensions to arise in the New Year over licensing, the open source AI definition, security and compliance, and how to pay volunteer maintainers.
https://thenewstack.io/open-source-in-2025-strap-in-disruption-straight-ahead/
The trend of widely used open source software moving to more restrictive licensing isn’t new.
In addition to the demands of late-stage capitalism and impatient investors in companies built on open source tools, other outside factors are pressuring the open source world. There’s the promise/threat of generative AI, for instance. Or the shifting geopolitical landscape, which brings new security concerns and governance regulations.
What’s ahead for open source in 2025?
More Consolidation, More Licensing Changes
The Open Source AI Debate: Just Getting Started
Security and Compliance Concerns Will Rise
Paying Maintainers: More Cash, Creativity Needed

The most important cybersecurity and AI trends for 2025
https://www.uusiteknologia.fi/2024/11/20/kyberturvallisuuden-ja-tekoalyn-tarkeimmat-trendit-2025/
1. Cyber infrastructure will be centered on a single, unified security platform
2. Big data will give an edge against new entrants
3. AI’s integrated role in 2025 means building trust, governance engagement, and a new kind of leadership
4. Businesses will adopt secure enterprise browsers more widely
5. AI’s energy implications will be more widely recognized in 2025
6. Quantum realities will become clearer in 2025
7. Security and marketing leaders will work more closely together

Presentation: For 2025, ‘AI eats the world’.
https://www.ben-evans.com/presentations

Just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity.
https://www.securityweek.com/ai-implementing-the-right-technology-for-the-right-use-case/
If 2023 and 2024 were the years of exploration, hype and excitement around AI, 2025 (and 2026) will be the year(s) that organizations start to focus on specific use cases for the most productive implementations of AI and, more importantly, to understand how to implement guardrails and governance so that it is viewed as less of a risk by security teams and more of a benefit to the organization.
Businesses are developing applications that add Large Language Model (LLM) capabilities to provide superior functionality and advanced personalization
Employees are using third party GenAI tools for research and productivity purposes
Developers are leveraging AI-powered code assistants to code faster and meet challenging production deadlines
Companies are building their own LLMs for internal use cases and commercial purposes.
AI is still maturing
However, just like other technologies that have gone before, such as cloud and cybersecurity automation, right now AI lacks maturity. Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization.
There is also regulation that will come into force, such as the EU AI Act, which is a comprehensive legal framework that sets out rules for the development and use of AI.
AI certainly won’t solve every problem, and it should be used like automation, as part of a collaborative mix of people, process and technology. You simply can’t replace human intuition with AI, and many new AI regulations stipulate that human oversight is maintained.

7 Splunk Predictions for 2025
https://www.splunk.com/en_us/form/future-predictions.html
AI: Projects must prove their worth to anxious boards or risk defunding, and LLMs will go small to reduce operating costs and environmental impact.

OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.
https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
Sources: OpenAI, Google, and Anthropic are all seeing diminishing returns from costly efforts to build new AI models; a new Gemini model misses internal targets

It Costs So Much to Run ChatGPT That OpenAI Is Losing Money on $200 ChatGPT Pro Subscriptions
https://futurism.com/the-byte/openai-chatgpt-pro-subscription-losing-money?fbclid=IwY2xjawH8epVleHRuA2FlbQIxMQABHeggEpKe8ZQfjtPRC0f2pOI7A3z9LFtFon8lVG2VAbj178dkxSQbX_2CJQ_aem_N_ll3ETcuQ4OTRrShHqNGg
In a post on X-formerly-Twitter, CEO Sam Altman admitted an “insane” fact: that the company is “currently losing money” on ChatGPT Pro subscriptions, which run $200 per month and give users access to its suite of products including its o1 “reasoning” model.
“People use it much more than we expected,” the cofounder wrote, later adding in response to another user that he “personally chose the price and thought we would make some money.”
Though Altman didn’t explicitly say why OpenAI is losing money on these premium subscriptions, the issue almost certainly comes down to the enormous expense of running AI infrastructure: the massive and increasing amounts of electricity needed to power the facilities that power AI, not to mention the cost of building and maintaining those data centers. Nowadays, a single query on the company’s most advanced models can cost a staggering $1,000.

AI requires ever faster networks
https://etn.fi/index.php/opinion/16974-tekoaely-edellyttaeae-yhae-nopeampia-verkkoja
A resilient digital infrastructure is critical to effectively harnessing telecommunications networks for AI innovations and cloud-based services. The increasing demand for data-rich applications related to AI requires a telecommunications network that can handle large amounts of data with low latency, writes Carl Hansson, Partner Solutions Manager at Orange Business.

AI’s Slowdown Is Everyone Else’s Opportunity
Businesses will benefit from some much-needed breathing space to figure out how to deliver that all-important return on investment.
https://www.bloomberg.com/opinion/articles/2024-11-20/ai-slowdown-is-everyone-else-s-opportunity

Here is what will happen in the chip market next year
https://etn.fi/index.php/13-news/16984-naein-sirumarkkinoilla-kaey-ensi-vuonna
The growing demand for high-performance computing (HPC) for artificial intelligence and other HPC workloads continues to be strong, with the market set to grow by more than 15 percent in 2025, IDC estimates in its recent Worldwide Semiconductor Technology Supply Chain Intelligence report.
IDC predicts eight significant trends for the chip market in 2025.
1. AI growth accelerates
2. Asia-Pacific IC Design Heats Up
3. TSMC’s leadership position is strengthening
4. The expansion of advanced processes is accelerating.
5. Mature process market recovers
6. 2nm Technology Breakthrough
7. Restructuring the Packaging and Testing Market
8. Advanced packaging technologies on the rise

2024: The year when MCUs became AI-enabled
https://www-edn-com.translate.goog/2024-the-year-when-mcus-became-ai-enabled/?fbclid=IwZXh0bgNhZW0CMTEAAR1_fEakArfPtgGZfjd-NiPd_MLBiuHyp9qfiszczOENPGPg38wzl9KOLrQ_aem_rLmf2vF2kjDIFGWzRVZWKw&_x_tr_sl=en&_x_tr_tl=fi&_x_tr_hl=fi&_x_tr_pto=wapp
The AI party in the MCU space started in 2024. The adoption of AI acceleration features is a big step in the development of microcontrollers, and in 2025 it is very likely that MCUs running lightweight AI models, along with their features and tools, will advance further.

AI Regulation Gets Serious in 2025 – Is Your Organization Ready?
While the challenges are significant, organizations have an opportunity to build scalable AI governance frameworks that ensure compliance while enabling responsible AI innovation.
https://www.securityweek.com/ai-regulation-gets-serious-in-2025-is-your-organization-ready/
Similar to the GDPR, the EU AI Act will take a phased approach to implementation. The first milestone arrives on February 2, 2025, when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Thereafter, from August 1, any new AI models based on GPAI standards must be fully compliant with the act. Also similar to the GDPR is the threat of huge fines for non-compliance – EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher.
While this requirement may appear manageable on the surface, many organizations are still in the early stages of defining and formalizing their AI usage policies.
Later phases of the EU AI Act, expected in late 2025 and into 2026, will introduce stricter requirements around prohibited and high-risk AI applications. For organizations, this will surface a significant governance challenge: maintaining visibility and control over AI assets.
Tracking the usage of standalone generative AI tools, such as ChatGPT or Claude, is relatively straightforward. However, the challenge intensifies when dealing with SaaS platforms that integrate AI functionalities on the backend. Analysts, including Gartner, refer to this as “embedded AI,” and its proliferation makes maintaining accurate AI asset inventories increasingly complex.
Where frameworks like the EU AI Act grow more complex is their focus on ‘high-risk’ use cases. Compliance will require organizations to move beyond merely identifying AI tools in use; they must also assess how these tools are used, what data is being shared, and what tasks the AI is performing. For instance, an employee using a generative AI tool to summarize sensitive internal documents introduces very different risks than someone using the same tool to draft marketing content.
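
To illustrate what maintaining that kind of visibility can mean in practice, here is a hypothetical AI asset inventory record; the fields are my own illustration and are not prescribed by the EU AI Act:

    from dataclasses import dataclass, field

    @dataclass
    class AIAssetRecord:
        name: str                                            # e.g. "ChatGPT" or "CRM embedded copilot"
        vendor: str
        deployment: str                                      # "standalone" or "embedded in SaaS"
        use_cases: list = field(default_factory=list)        # what tasks the tool performs
        data_shared: list = field(default_factory=list)      # e.g. "sensitive internal documents"
        risk_level: str = "unclassified"                     # to be mapped to AI Act risk categories
        owner: str = ""                                      # accountable team or person

    inventory = [
        AIAssetRecord("ChatGPT", "OpenAI", "standalone",
                      use_cases=["summarize internal documents"],
                      data_shared=["sensitive internal documents"],
                      risk_level="needs review", owner="Legal & Compliance"),
    ]

The same tool can appear in several records with different risk levels, which captures the point that how a tool is used matters as much as which tool it is.
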
For security and compliance leaders, the EU AI Act represents just one piece of a broader AI governance puzzle that will dominate 2025.
The next 12-18 months will require sustained focus and collaboration across security, compliance, and technology teams to stay ahead of these developments.

The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.
https://gpai.ai/about/#:~:text=The%20Global%20Partnership%20on%20Artificial,activities%20on%20AI%2Drelated%20priorities.

3,878 Comments

  1. Tomi Engdahl says:

    Top Spy Says Tech Corporations Are Closer to Running Entire World Than Governments
    Who would know better?
    https://futurism.com/future-society/mi6-tech-billionaires-government

  2. Tomi Engdahl says:

    LG TV users baffled by unremovable Microsoft Copilot installation — surprise forced update shows app pinned to the home screen
    News
    By Luke James published 3 days ago
    Users report Copilot appearing after a recent software update, with no option to uninstall.
    https://www.tomshardware.com/service-providers/tv-providers/lg-tv-update-adds-non-removable-microsoft-copilot-app-to-webos

  3. Tomi Engdahl says:

    Racks of AI chips are too damn heavy
    Old data centers physically cannot support rows and rows of GPUs, which is one reason for the massive AI data center buildout.
    https://www.theverge.com/ai-artificial-intelligence/844966/heavy-ai-data-center-buildout

  4. Tomi Engdahl says:

    Vast Number of Windows Users Refusing to Upgrade After Microsoft’s Embrace of AI Slop
    “Stop this nonsense. No one wants this.”
    https://futurism.com/artificial-intelligence/windows-users-refusing-upgrade-windows-11-ai

    Earlier this year, Microsoft officially yanked the cord on Windows 10, ending support for an operating system that had been superseded by Windows 11 four years earlier.

    But the tech giant’s controversial attempts to shoehorn AI into every aspect of the software appear to have turned off a staggering number of users from upgrading. While it’s to be expected at this point that not everybody will have jumped at the opportunity to update their machine’s operating system, the sheer scale of that refusal is staggering.

    As Forbes reports, a whopping 1 billion PCs are still running Windows 10 — despite half of them technically being eligible for an upgrade.

    During PC maker Dell’s November quarterly earnings call, the company’s COO, Jeff Clarke, admitted that “we have about 500 million of them capable of running Windows 11 that haven’t been upgraded,” referring to all PCs, and not just Dell machines.

    “Those are all rich opportunities to upgrade towards Windows 11 and modern technology,” he said. The remaining 500 million were not eligible for the upgrade.

    In other words, those who own a whopping third of the estimated 1.5 billion PCs worldwide are outright refusing to upgrade, indicating Microsoft is seriously struggling to woo them.

    But given the widespread backlash over Microsoft’s doubling down on AI features, there’s a good chance a vast number of Windows users are also balking at the idea of the company shoving those features down their throats.

    As Windows turns 40, Microsoft faces an AI backlash
    Microsoft wants to overhaul Windows into an agentic OS, but that’s easier said than done.
    https://www.theverge.com/tech/825022/microsoft-windows-40-year-anniversary-agentic-os-future

  5. Tomi Engdahl says:

    Rogan Reckoning
    Joe Rogan Keeps Playing AI-Generated Music for Guests, But He’s Speechless When One of Them Points Out That Podcasts Can Be AI-Generated Too
    Now wait a minute!
    https://futurism.com/artificial-intelligence/joe-rogan-speechless-ai-podcasts

  6. Tomi Engdahl says:

    Please Enjoy Laughing at the Prediction Markets, in Full Meltdown, After Time’s “Person of the Year” Reveal
    “This is actually so freaking stupid.”
    https://futurism.com/future-society/prediction-markets-meltdown-time-person-of-the-year

    Just as many had predicted, Time magazine once again took some liberties with its annual “Person of the Year” issue.

    Besides blocking users from reading its website with an AI chatbot, the magazine anointed the “architects of AI” as its most important visionaries of 2025, eschewing the definition of “person” yet again.

    The eyeroll-inducing announcement was met with plenty of incredulity, especially considering the astronomical amount of money being spent on building out data centers, their enormous carbon footprint, and a whole litany of other ethical conundrums that the embrace of generative AI has spawned.

  7. Tomi Engdahl says:

    Robot Walks for Three Days Straight, Hotswapping Its Battery Over and Over in New World Record
    “Accompanied by the first ray of dawn in the morning, I have reached the finish line of this hike.”
    https://futurism.com/robots-and-machines/robot-agibot-humanoid-walking

  8. Tomi Engdahl says:

    Why Google built its own VS Code fork in the first place
    https://www.howtogeek.com/why-google-built-its-own-vs-code-fork-in-the-first-place/

    Google forking VS Code was a very significant change, and there weren’t many signs that this was coming. When Google, a company that has championed and contributed to open source projects like Visual Studio Code, chooses to fork the application and build its own parallel app, it demands attention.

    Antigravity has become a very popular app, and it has poached me away from VS Code. Given that success, it is safe to say that the fork was a very smart move by the company.

    The ‘agent-first’ architecture required breaking the extension sandbox

    Google decided to fork Visual Studio Code because the standard extension API was too restrictive for an agent-first plan. Traditional extensions are basically passive assistants. They usually just wait for you to type something out before popping up to help you. Also, they operate inside these strict security sandboxes that severely limit their ability to make big changes across your whole codebase.

    When you’re using a standard setup, tools like GitHub Copilot are helpful, but they tend to live in the margins.

  9. Tomi Engdahl says:

    Gartner: Companies must immediately ban AI browsers for their employees
    https://dawn.fi/uutiset/2025/12/11/gartner-yritysten-pitaa-kieltaa-tekoalyselaimet-tyontekijoiltaan#google_vignette

    Research firm Gartner is urging companies to immediately and completely ban their employees from using so-called AI browsers.

    Gartner’s recommendation applies specifically to browsers built around AI agents, such as OpenAI’s Atlas browser and Perplexity’s browser, rather than browsers that merely include an AI assistant, such as Microsoft Edge.

    Agent-based browsers are browsers that can carry out entire tasks on behalf of their user: browsing the web, filling in web forms, and even making purchases online for the user.

    Gartner is concerned that these new browsers put user experience ahead of security. According to the firm, such fully automated workflows carry the risk that the AI can quietly do things on untrustworthy websites and share sensitive company data with entirely the wrong parties.

  10. Tomi Engdahl says:

    Even the man behind ChatGPT, OpenAI CEO Sam Altman, is worried about the ‘rate of change that’s happening in the world right now’ thanks to AI
    https://fortune.com/2025/12/09/openai-ceo-sam-altman-worried-about-ai-future-chatgpt-pros-cons-rate-of-change-future-of-work-uncertain/

    Just three years since ChatGPT launched to the world, it has upended industries, accelerated scientific discovery, and sparked visions in which diseases are cured and workweeks shrink. Yet the same technology fueling those promises is also creating a host of new anxieties—and no one feels that more acutely than the man who helped unleash it.

    OpenAI CEO Sam Altman has just revealed that there is a “long list of things” that haven’t been so great about ChatGPT’s rapid rise, starting with the speed at which it has reshaped the world. The very system that could eradicate illnesses, he said on The Tonight Show, can also be misused in ways society isn’t remotely prepared for.

    “One of the things that I’m worried about is just the rate of change that’s happening in the world right now,” Altman told Jimmy Fallon. “This is a three-year-old technology. No other technology has ever been adopted by the world this fast.”

  11. Tomi Engdahl says:

    The ‘no prompt’ rule makes ChatGPT give expert-level writing advice — here’s how it works
    Features
    By Amanda Caswell published December 9, 2025
    I get the answers I need without typing a single prompt
    https://www.tomsguide.com/ai/the-no-prompt-rule-makes-chatgpt-give-expert-level-writing-advice-heres-how-it-works

    As a journalist, a former scriptwriter and someone who loves to self-publish sci-fi books, I often use AI for writing — but not in the way you’d expect. In fact, many times, I don’t prompt it at all. Instead, I get everything I need to know with what I call my “zero prompt” prompt.

    If you’ve ever stared at a messy draft wondering what’s wrong with it, you’re not alone — but I use a trick that makes that moment a lot less painful. And it’s almost laughably simple.

    I stopped prompting.

    Instead of telling ChatGPT what I wanted with a prompt like, “give me feedback,” “analyze this,” “fix this paragraph,” “help me rewrite this scene” or really any other ask — I started uploading my draft without asking anything at all.
    And what happens next unlocks a secret editing mode nobody talks about. ChatGPT (or Gemini or Claude) instantly jumps in with exactly the kind of feedback I need. It tells me what’s working, what’s not, where the pacing drags, where the dialogue shines and even where the reader might be confused. What’s wild is that the chatbot does this completely on its own — because the document itself becomes the prompt.

  12. Tomi Engdahl says:

    Why AI agents are so good at coding
    opinion
    Dec 10, 2025
    Soon LLMs will write better code than any human, for several simple reasons. Developers should rejoice.
    https://www.infoworld.com/article/4101337/why-ai-agents-are-so-good-at-coding.html

    I’ve written about how coding is so over. AI is getting smarter every day, and it won’t be long before large language models (LLMs) write better code than any human.

    But why is coding the one thing that AI agents seem to excel at? The reasons are simple and straightforward.

    At their core, LLMs process text. They take in massive amounts of text, learn the patterns of that text, and then use all of that information to predict what the next word will be in a given sentence. These models take your question, parse it into text tokens, and then use the trillions (quadrillions?) of vectors they have learned to understand the question and give an answer, one word, or token, at a time. It seems wild, but it is literally that simple. An LLM produces its answer one word at a time.

    Doing all this ultimately comes down to just a huge amount of vector math—staggering amounts of calculations. Fortunately, GPUs are really good at vector math, and that is why AI companies have an insatiable appetite for GPUs and why Nvidia is the most valuable company in the world right now. It seems weird to me that the technology used to generate amazing video games is the same that produces amazing text answers to our questions.
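
    A purely illustrative toy (nothing like a real model) shows the one-token-at-a-time loop the article describes; a real LLM would score its whole vocabulary with vector math at each step:

        # Toy "model": for each token, a canned list of likely next tokens.
        toy_model = {
            "def": ["add"], "add": ["("], "(": ["a"], "a": [","], ",": ["b"],
            "b": [")"], ")": [":"], ":": ["return"], "return": ["a+b"],
        }

        def next_token(context):
            candidates = toy_model.get(context[-1], ["<end>"])
            return candidates[0]                 # greedy pick of the "most likely" token

        tokens = ["def"]
        while tokens[-1] != "<end>" and len(tokens) < 12:
            tokens.append(next_token(tokens))
        print(" ".join(tokens))                  # def add ( a , b ) : return a+b <end>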

    Code is text
    And of course, code is just words, right? In fact, that is one of the basic tenets of coding—it’s all just text. Git is designed specifically to store and manage text, and to understand the differences between two chunks of text. The tool we all work in, an integrated development environment (IDE), is really a glorified text editor with a bunch of bells and whistles attached. Coding is all about words.

    In addition to being words, those words are structured consistently and succinctly—much more so than the words we speak. Most text is messy, but all code by definition has patterns that are easier for an LLM to recognize than natural language. As a result, LLMs are naturally better at reading and writing code. LLMs can quite quickly and easily parse code, detect patterns, and reproduce those patterns on demand.

    Code is plentiful
    And there is an enormous amount of code out there. Just think of GitHub alone. A back-of-the-envelope calculation says there are around 100 billion lines of open-source code available for training AI. That’s a lot of code. A whole lot of code.

    And if you need an explanation of how code works, there are something like 20 million questions and even more answers on Stack Overflow for AI to learn from. There’s a reason that Stack Overflow is a shell of its former self—we all are asking AI for answers instead of our fellow developers.

    Code is verifiable
    In addition, code is easily verified. First, does it compile? That is always the big first test, and then we can check via testing if it actually does what we want. Unlike other domains, AI’s code output can be checked and verified fairly easily.

    If you choose to, you can even have your AI write unit and integration tests beforehand, further clarifying and defining what the AI should do. Then, tell your AI to write code that passes the tests. Eventually, AI will figure out that test-driven development is the best path to writing good code and executing on your wishes, and you won’t even have to ask it to do that.
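
    A minimal sketch of that "tests first, then generated code" loop (the slugify function and its test are hypothetical examples, not from the article):

        import re

        # 1. The test is written first and pins down the desired behavior.
        def test_slugify():
            assert slugify("Hello, World!") == "hello-world"
            assert slugify("  AI in 2025  ") == "ai-in-2025"

        # 2. The AI (or a human) then writes code until the test passes.
        def slugify(text):
            text = text.strip().lower()
            text = re.sub(r"[^a-z0-9]+", "-", text)   # collapse non-alphanumerics into dashes
            return text.strip("-")

        # 3. Running the test verifies the generated code automatically.
        test_slugify()
        print("tests passed")

    Because the test is executable, whatever the model produces can be checked mechanically, which is exactly the verifiability advantage described above.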

  13. Tomi Engdahl says:

    Cory Doctorow Says the AI Industry Is About to Collapse
    “So, you’re saying a third of the stock market is tied up in seven AI companies that have no way to become profitable and that this is a bubble that’s going to burst and take the whole economy with it?”
    https://futurism.com/future-society/cory-doctorow-ai-collapse

    When it comes to tech doomsaying, few are as cynical — or prescient — as sci-fi author and tech journalist Cory Doctorow.

    His far-sighted blend of tech criticism and class analysis has found Doctorow time and again at the bleeding edge of commentary on our techno-capitalist society. In many ways, his screeds on topics like tech broligarchs and enshittification broke the mold to allow room for the kind of tech-critical reporting being done by media projects like Tech Policy Press and 404 Media, or the podcasts “Tech Won’t Save Us,” and “Better Offline.”

    For better or worse, Doctorow’s insights have often been a much-needed mirror into our society’s complicated relationship with tech corporations.

    In his essay, Doctorow recounts a conversation he had with an undergraduate student after delivering a lecture on the AI bubble, in which he reiterated that AI investors are propping up the US economy.

    “So, you’re saying a third of the stock market is tied up in seven AI companies that have no way to become profitable and that this is a bubble that’s going to burst and take the whole economy with it?” the student asked fearfully.

    “Yes, that’s right,” the tech critic responded.

    “Okay, but what can we do about that?”

    Doctorow answered that the bubble is being propped up by tech mega–corporations who are now begging investors to come aboard, now that their growth potential is slowing to a halt.

    To court investors, the monopolists are selling a lie that AI can replace human workers — when in reality, AI experiments are failing at 95 percent of companies that attempt them.

    “AI cannot do your job, but an AI salesman can 100 percent convince your boss to fire you and replace you with an AI that can’t do your job,” Doctorow writes. “When the bubble bursts, the money-hemorrhaging ‘foundation models’ will be shut off and we’ll lose the AI that can’t do your job, and you will be long gone, retrained or retired or ‘discouraged’ and out of the labor market, and no one will do your job.”

    Given the fact that these investments are already locked in an intricate web of finance capital, Doctorow argues that the best — and only — thing to do is to “puncture the AI bubble as soon as possible, to halt this before it progresses any further and to head off the accumulation of social and economic debt.”

    Popping the bubble will mean taking out the “material basis” propping it up: the myth that large language models can do our jobs.

    “The most important thing about AI isn’t its technical capabilities or limitations,” Doctorow concludes. “The most important thing is the investor story and the ensuing mania that has teed up an economical catastrophe that will harm hundreds of millions or even billions of people. AI isn’t going to wake up, become superintelligent and turn you into paperclips — but rich people with AI investor psychosis are almost certainly going to make you much, much poorer.”

  14. Tomi Engdahl says:

    14 THINGS YOU SHOULD NEVER ASK CHATGPT
    https://www.slashgear.com/2042576/never-ask-chatgpt-these-things/

    Just a few short years after its release, ChatGPT has become the world’s de facto digital multi-tool for everything from a simple Google search to planning an event. Though I’m quite pessimistic about how useful ChatGPT (and its ilk) is, I can’t deny that it provides some utility. However, there is a laundry list of things that no one should be using ChatGPT (or any chatbot) for.

    Too many people treat ChatGPT like an oracle, not like a multi-modal LLM that is heavily prone to misinformation and hallucinations. They assume it knows everything and can do anything. The unsettling reasons why you should avoid using ChatGPT have been discussed practically to death. It’s said some pretty creepy things and has given a lot of people straight-up terrible advice on a number of subjects. Chatbots can provide utility, but that assumes you know what they shouldn’t be used for. Here are 14 things you should take to a real person, not to OpenAI.

  15. Tomi Engdahl says:

    Why AI coding agents aren’t production-ready: Brittle context windows, broken refactors, missing operational awareness
    https://venturebeat.com/ai/why-ai-coding-agents-arent-production-ready-brittle-context-windows-broken

  16. Tomi Engdahl says:

    Job applicants are cheating with AI so much that a game company made them draw
    By Justus Vento, 7.12.2025 20:00, AI
    A Japanese company had to resort to tough measures in job interviews.
    https://www.tivi.fi/uutiset/a/dbdf2698-bf1a-44d9-8921-853ccd9edcf4

    A Japanese game company has had to make job applicants draw during job interviews to prove that their applications and portfolios are not the product of AI, reports the Japanese publication Daily Shincho.

  17. Tomi Engdahl says:

    The AI hype isn’t delivering – companies are still standing at the starting line
    By Anne Lahnajärvi, 8.12.2025 06:30
    Companies won’t change if people don’t change, says the international consulting firm BearingPoint.
    https://www.tivi.fi/uutiset/a/c0fb99c9-5974-4bb9-86a5-2959b7f6aea2

    Business. “If you can’t measure impact, you don’t dare to scale. And if you don’t scale, you fall behind,”

    The AI hype continues, but most companies are still standing at the starting line. That is the view at the international consulting firm BearingPoint.

  18. Tomi Engdahl says:

    Linux First, Windows Later! Dell Launches Qualcomm NPU Laptop on Linux Before Windows
    Windows takes a backseat on Dell’s latest AI workstation as Linux gets the priority. Windows 11 version will be coming in 2026.
    https://itsfoss.com/news/dell-pro-max-16-plus/

  19. Tomi Engdahl says:

    ‘Godfather of AI’ says Bill Gates and Elon Musk are right about the future of work—but he predicts mass unemployment is on its way
    https://fortune.com/2025/12/04/godfather-of-ai-geoffrey-hinton-massive-unemployment-warning-thanks-to-big-tech-replacing-workers-with-ai-senator-bernie-sanders-bill-gates-elon-musk-predictions-probably-right/

    The long-term impact of artificial intelligence is one of the most hotly debated topics in Silicon Valley. Nvidia CEO Jensen Huang predicts that every job will be transformed—and likely lead to a four-day workweek. Other tech titans go even further: Bill Gates says humans may soon not be needed “for most things,” and Elon Musk believes most humans won’t have to work at all in “less than 20 years.”

  20. Tomi Engdahl says:

    OpenAI Is Suddenly in Major Trouble
    “They’re going to end up just like MySpace did.”
    https://futurism.com/artificial-intelligence/openai-is-suddenly-in-major-trouble

    The alarm bells are going off at OpenAI.

    What was once a healthy lead over its competition thanks to its blockbuster AI chatbot ChatGPT has turned into a razor-thin edge, motivating OpenAI CEO Sam Altman to declare a “code red.”

    The financial stakes are almost comical in their magnitude: The company is lighting billions of dollars on fire, with no end in sight; it’s committed to spending well over $1 trillion over the next several years while simultaneously losing a staggering sum each quarter.

    And revenues are lagging far behind, with the vast majority of ChatGPT users balking at the idea of paying for a subscription.

    Meanwhile, Google has made major strides, quickly catching up with OpenAI’s claimed 800 million or so weekly active ChatGPT users as of September. Worse yet, Google is far better positioned to turn generative AI into a viable business — all while minting a comfortable $30 billion in profit each quarter, as the Washington Post points out.

  21. Tomi Engdahl says:

    Man Realizes He Can Feed Poison Pills to Facebook AI Slop Page, Driving Its Followers Berserk
    “What kinda word salad is this?”
    https://futurism.com/artificial-intelligence/facebook-ai-slop-poison-pill

    AI bros love cribbing what humans make so they can churn out loads of meaningless slop. But one of those humans wasn’t going to take getting ripped off without fighting back.

    Scott Collette, a Hollywood screenwriter who runs the popular “Forgotten Los Angeles” account on Instagram, says he noticed that an AI Facebook account was stealing his history posts for the past six weeks and “slopping out new captions.”

    In one example, the AI slop page, dubbed “Historical Los Angeles USA,” shares a photo of what appears to be the horrific flood that swallowed the city nearly a century ago.

    The post’s caption, though, was an eyebrow-raiser: “A lake made of conservative tears (2025).”

    “This satirical caption reflects the intense political climate of the era, where online culture embraced humor, exaggeration, and meme style commentary to express frustration or celebration,” the description asserts, in an amazing display of AI’s ability to bullshit an answer about literally anything.

    “This ‘lake’ represents digital era emotional exhaustion, ideological clashes, and the dramatic style of commentary that defined the mid 2020s,” it added. “It captures a moment when humor felt like both protest and release.”

  22. Tomi Engdahl says:

    Researchers Just Found Something Extremely Alarming About AI’s Power Usage
    It’s even worse than we thought.
    https://futurism.com/future-society/ai-power-usage-text-to-video-generator

  23. Tomi Engdahl says:

    OpenAI boss Sam Altman declares ‘code red’ over ChatGPT
    ChatGPT has fallen behind Google Gemini in a range of benchmark tests
    https://www.independent.co.uk/tech/chatgpt-openai-sam-altman-code-red-b2876932.html

  24. Tomi Engdahl says:

    IBM CEO warns that ongoing trillion-dollar AI data center buildout is unsustainable — says there is ‘no way’ that infrastructure costs can turn a profit
    News
    By Luke James published December 3, 2025
    Krishna’s cost model challenges the economics behind multi-gigawatt AI campuses.
    https://www.tomshardware.com/tech-industry/ibm-ceo-warns-trillion-dollar-ai-boom-unsustainable-at-current-infrastructure-costs

  25. Tomi Engdahl says:

    Microsoft’s Attempts to Sell AI Agents Are Turning Into a Disaster
    Yikes.
    https://futurism.com/artificial-intelligence/microsoft-sell-ai-agents-disaster

  26. Tomi Engdahl says:

    A respected science journal pulled a tragicomic “research article”: full of AI-generated gibberish
    Numerous signs suggest that the article published in the science journal is an AI-generated mess.
    https://www.tivi.fi/uutiset/a/61fb383b-d919-42ab-a7de-ab6d0b6d0499

    A “research article” about autism has gone viral because of how embarrassing it is. The scientific quality of the article, published on November 19, has now been thoroughly ridiculed. Numerous signs suggest that the paper is pure AI-generated nonsense.

    Nature Scientific Reports belongs to the respected Nature family of journals, but it is not considered to be on the same level as Nature’s more famous publication series. Scientific Reports publishes thousands of articles a year. Now an article has made it into the journal that raises the question of whether it really went through peer review, or whether any human read it through carefully even once before publication.

    The article used machine learning to analyze patient records of people on the autism spectrum in order to refine diagnoses. The study’s first figure in particular has become a laughing stock. Its AI blunders, and those in other parts of the article, have been cataloged by the Australian IT expert, author and well-known cryptocurrency critic David Gerard on his blog:

    Nature’s latest AI-generated paper — with medical frymblal and Factor Fexcectorn
    https://pivot-to-ai.com/2025/11/28/natures-latest-ai-generated-paper-with-medical-frymblal-and-factor-fexcectorn/

  27. Tomi Engdahl says:

    7 ChatGPT Tricks to Automate Your Data Tasks
    This article explores how to transform ChatGPT from a chatbot into a powerful data assistant that streamlines the repetitive, the tedious, and the complex.
    https://www.kdnuggets.com/7-chatgpt-tricks-to-automate-your-data-tasks

  28. Tomi Engdahl says:

    Beyond math and coding: New RL framework helps train LLM agents for complex, real-world tasks
    https://venturebeat.com/ai/beyond-math-and-coding-new-rl-framework-helps-train-llm-agents-for-complex

  29. Tomi Engdahl says:

    https://fadr.com/blog/v5.0?gad_source=1&gad_campaignid=20957798289&gclid=Cj0KCQiA6NTJBhDEARIsAB7QHD2HP11vXR9-F_QPcG7X2PNlnkHFklGTjxvfZ3CiMnz-CjTr7v8fsC4aAu7AEALw_wcB

    The World’s Best AI Stems Are Free – v5.0
    Gabe

    Jan 31, 2023
    Fadr v5.0 is here! We’ve taken AI stems another leap forward with a brand new stemming model. Best of all, you can separate unlimited stems for free. Yep, you heard that right.

    Even Better Stems

  30. Tomi Engdahl says:

    Sam Altman Is Suddenly Terrified
    He just declared a “code red.”
    https://futurism.com/artificial-intelligence/sam-altman-code-red

  31. Tomi Engdahl says:

    Scientists Discover “Universal” Jailbreak for Nearly Every AI, and the Way It Works Will Hurt Your Brain
    It’s AI versus verse.
    https://futurism.com/artificial-intelligence/universal-jailbreak-ai-poems

