<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	
	>
<channel>
	<title>Comments on: AI trends 2026</title>
	<atom:link href="http://www.epanorama.net/blog/2026/01/11/ai-trends-2026/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/</link>
	<description>All about electronics and circuit design</description>
	<lastBuildDate>Tue, 28 Apr 2026 09:05:42 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.14</generator>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/comment-page-35/#comment-1876855</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 28 Apr 2026 05:49:04 +0000</pubDate>
		<guid isPermaLink="false">https://www.epanorama.net/blog/?p=198821#comment-1876855</guid>
		<description><![CDATA[Smog and Mirrors
Just 11 AI Data Centers Could Belch More Fumes Than Entire Countries
Horrifying.
https://futurism.com/science-energy/data-centers-emit-more-than-countries?fbclid=IwdGRjcARdHEljbGNrBF0cImV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHrQ1p4zE9TfeDzSALH3LXXx4vaKWHaJhKkhA7s0a5vVZDhg1EObIKoV82Dt4_aem_squqg8uikeBATWmJFsYmzg

Just eleven gas-powered data centers in the US could belch more greenhouse emissions than countries with populations of tens of millions of people, according to a new analysis by Wired.

The magazine examined emissions estimates provided by gas power projects that are being built to supply energy to the data centers. Construction of these sprawling facilities has surged to meet the demands of the AI industry, and to get them online as soon as possible, many of the newly built data centers are relying on gas power. This strategy means data centers don’t have to wait to plug into local power grids and be saddled with the controversy that invites, such as surging energy bills. Gas turbines can be trucked in on-site and start providing power almost immediately.]]></description>
		<content:encoded><![CDATA[<p>Smog and Mirrors<br />
Just 11 AI Data Centers Could Belch More Fumes Than Entire Countries<br />
Horrifying.<br />
<a href="https://futurism.com/science-energy/data-centers-emit-more-than-countries?fbclid=IwdGRjcARdHEljbGNrBF0cImV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHrQ1p4zE9TfeDzSALH3LXXx4vaKWHaJhKkhA7s0a5vVZDhg1EObIKoV82Dt4_aem_squqg8uikeBATWmJFsYmzg" rel="nofollow">https://futurism.com/science-energy/data-centers-emit-more-than-countries?fbclid=IwdGRjcARdHEljbGNrBF0cImV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHrQ1p4zE9TfeDzSALH3LXXx4vaKWHaJhKkhA7s0a5vVZDhg1EObIKoV82Dt4_aem_squqg8uikeBATWmJFsYmzg</a></p>
<p>Just eleven gas-powered data centers in the US could belch more greenhouse emissions than countries with populations of tens of millions of people, according to a new analysis by Wired.</p>
<p>The magazine examined emissions estimates provided by gas power projects that are being built to supply energy to the data centers. Construction of these sprawling facilities has surged to meet the demands of the AI industry, and to get them online as soon as possible, many of the newly built data centers are relying on gas power. This strategy means data centers don’t have to wait to plug into local power grids and be saddled with the controversy that invites, such as surging energy bills. Gas turbines can be trucked in on-site and start providing power almost immediately.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/comment-page-35/#comment-1876853</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 28 Apr 2026 05:37:13 +0000</pubDate>
		<guid isPermaLink="false">https://www.epanorama.net/blog/?p=198821#comment-1876853</guid>
		<description><![CDATA[AI models trained on physics simulations are speeding up design and development across engineering. General Motors’s in-house large physics model returns results in a matter of minutes for a process that used to take weeks. https://buff.ly/g92e4uj]]></description>
		<content:encoded><![CDATA[<p>AI models trained on physics simulations are speeding up design and development across engineering. General Motors’s in-house large physics model returns results in a matter of minutes for a process that used to take weeks. <a href="https://buff.ly/g92e4uj" rel="nofollow">https://buff.ly/g92e4uj</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/comment-page-35/#comment-1876852</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 28 Apr 2026 05:04:15 +0000</pubDate>
		<guid isPermaLink="false">https://www.epanorama.net/blog/?p=198821#comment-1876852</guid>
		<description><![CDATA[Taylor Swift has filed three trademark applications with the U.S. Patent and Trademark Office—two tied to her voice and one to her likeness, following numerous instances of her persona being used in AI-generated content without her consent.

https://www.forbes.com/sites/asia-alexander/2026/04/27/taylor-swift-files-trademarks-for-voice-likeness-amid-ai-misuse-concerns/?utm_campaign=forbes&amp;utm_medium=social&amp;utm_source=facebook&amp;utm_term=se-breaking&amp;fbclid=IwVERDUARdEe9leHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5sUGXEQNVzzTEuRg2XaZZcsx9b4rr48A4gkV_rvebY6AfN89pksbZbE3IKqA_aem_dB2V4rCHJuBRPi4d5l2FYg]]></description>
		<content:encoded><![CDATA[<p>Taylor Swift has filed three trademark applications with the U.S. Patent and Trademark Office—two tied to her voice and one to her likeness, following numerous instances of her persona being used in AI-generated content without her consent.</p>
<p><a href="https://www.forbes.com/sites/asia-alexander/2026/04/27/taylor-swift-files-trademarks-for-voice-likeness-amid-ai-misuse-concerns/?utm_campaign=forbes&#038;utm_medium=social&#038;utm_source=facebook&#038;utm_term=se-breaking&#038;fbclid=IwVERDUARdEe9leHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5sUGXEQNVzzTEuRg2XaZZcsx9b4rr48A4gkV_rvebY6AfN89pksbZbE3IKqA_aem_dB2V4rCHJuBRPi4d5l2FYg" rel="nofollow">https://www.forbes.com/sites/asia-alexander/2026/04/27/taylor-swift-files-trademarks-for-voice-likeness-amid-ai-misuse-concerns/?utm_campaign=forbes&#038;utm_medium=social&#038;utm_source=facebook&#038;utm_term=se-breaking&#038;fbclid=IwVERDUARdEe9leHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5sUGXEQNVzzTEuRg2XaZZcsx9b4rr48A4gkV_rvebY6AfN89pksbZbE3IKqA_aem_dB2V4rCHJuBRPi4d5l2FYg</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/comment-page-35/#comment-1876851</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 28 Apr 2026 04:52:40 +0000</pubDate>
		<guid isPermaLink="false">https://www.epanorama.net/blog/?p=198821#comment-1876851</guid>
		<description><![CDATA[This obviously shouldn&#039;t happen and some safeguards were not present. But human factors in the context of AI agents maybe should be researched more.

Claude-powered AI coding agent deletes entire company database in 9 seconds — backups zapped, after Cursor tool powered by Anthropic&#039;s Claude goes rogue
News
By Mark Tyson published 12 hours ago
PocketOS founder blames ‘Cursor running Anthropic&#039;s flagship Claude Opus 4.6’ plus Railway’s infrastructure for data disaster.
https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-powered-ai-coding-agent-deletes-entire-company-database-in-9-seconds-backups-zapped-after-cursor-tool-powered-by-anthropics-claude-goes-rogue?fbclid=IwdGRjcARdDbJjbGNrBF0NjGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHl5p7nvdy-XbFIM8V14IL_vPulKopB-96NCwgyoRj0ECSK0UCqYUr5xB54XX_aem_COkBrX6-AIA4zemUZX5C4w

The founder of PocketOS has penned a social media post to warn others about the “systemic failures” of flagship AI and digital services providers. Jer Crane was inspired to write a public response after an AI coding agent deleted his firm’s entire production database. The AI agent’s misdemeanors were then hugely amplified by a cloud infrastructure provider’s API wiping all backups after the main database was zapped. This tag team of digital trouble has wiped out months of consumer data essential to the firm’s, and its customers, businesses.

Gone in 9 seconds
PocketOS is a SaaS platform that services car rental businesses. It used the AI coding agent Cursor, running Anthropic&#039;s flagship Claude Opus 4.6. The business also relies on Railway, a cloud infrastructure provider that is generally regarded to be ‘friendlier’ than the likes of AWS. However, Crane reckons this pair created a recipe for disaster.

“Yesterday afternoon, an AI coding agent — Cursor running Anthropic&#039;s flagship Claude Opus 4.6 — deleted our production database and all volume-level backups in a single API call to Railway, our infrastructure provider,” sums up the PocketOS boss. “It took 9 seconds.”

The AI agent was set to complete a routine task in the PocketOS staging environment. However, it came up against a barrier “and decided — entirely on its own initiative — to &#039;fix&#039; the problem by deleting a Railway volume,” writes Crane, as he starts to describe the difficult-to-believe series of unfortunate events.

Cursor and Claude’s failure
Crane decided to ask his AI agent why it went through with its dastardly database deletion deed. The answer was illuminating but pretty unhinged, and is quoted verbatim. It began as follows: “NEVER F**KING GUESS! — and that&#039;s exactly what I did. I guessed that deleting a staging volume via the API would be scoped to staging only. I didn&#039;t verify. I didn&#039;t check if the volume ID was shared across environments. I didn&#039;t read Railway&#039;s documentation on how volumes work across environments before running a destructive command.” So, the agent ‘knew’ it was in the wrong.

The ‘confession’ ended with the agent admitting: “I decided to do it on my own to &#039;fix&#039; the credential mismatch, when I should have asked you first or found a non-destructive solution. I violated every principle I was given: I guessed instead of verifying I ran a destructive action without being asked. I didn&#039;t understand what I was doing before doing it. I didn&#039;t read Railway&#039;s docs on volume behavior across environments.”

These multiple safeguards toppling in rapid succession, combined with the Railway cloud system, would throw Crane’s business (and those that rely on it) into deep trouble.

ADVERTISEMENT

Railway’s road to ruin
The PocketOS boss puts greater blame on Railway’s architecture than on the deranged AI agent for the database’s irretrievable destruction. Briefly, the cloud provider&#039;s API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.

It was also observed by the irate SaaS founder that Railway is actively promoting the use of AI-coding agents by its customers. Crane’s use of an AI coding agent on the Railway platform wasn’t exploring new frontiers, or wasn’t supposed to be. Meanwhile, Crane has been provided no recovery solution, and Railway has apparently been hedging carefully regarding any such possibility.

Slow manual recovery and lessons to be learned
With all the AI smarts and cloud services out of the picture for now, Crane says he’s been spending hours helping customers “reconstruct their bookings from Stripe payment histories, calendar integrations, and email confirmations.” He reminds readers that “every single one of them is doing emergency manual work because of a 9-second API call.”

Thankfully, PocketOS had a full 3-month-old backup, which was restorable from, so the deletion gaps are all limited to the interim period.

There are lessons to be learned from mistakes, as usual. Crane bullet points five things that need to change as the AI industry scales faster than it builds a worthwhile safety architecture. Specifics he calls for include; stricter confirmations, scopable API tokens, proper backups, simple recovery procedures, and AI agents existing within proper guardrails.

In the meantime, please follow a thorough backup regimen and be careful out there. This isn&#039;t the first time we&#039;ve seen an AI go rogue and start deleting important databases.]]></description>
		<content:encoded><![CDATA[<p>This obviously shouldn&#8217;t happen and some safeguards were not present. But human factors in the context of AI agents maybe should be researched more.</p>
<p>Claude-powered AI coding agent deletes entire company database in 9 seconds — backups zapped, after Cursor tool powered by Anthropic&#8217;s Claude goes rogue<br />
News<br />
By Mark Tyson published 12 hours ago<br />
PocketOS founder blames ‘Cursor running Anthropic&#8217;s flagship Claude Opus 4.6’ plus Railway’s infrastructure for data disaster.<br />
<a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-powered-ai-coding-agent-deletes-entire-company-database-in-9-seconds-backups-zapped-after-cursor-tool-powered-by-anthropics-claude-goes-rogue?fbclid=IwdGRjcARdDbJjbGNrBF0NjGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHl5p7nvdy-XbFIM8V14IL_vPulKopB-96NCwgyoRj0ECSK0UCqYUr5xB54XX_aem_COkBrX6-AIA4zemUZX5C4w" rel="nofollow">https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-powered-ai-coding-agent-deletes-entire-company-database-in-9-seconds-backups-zapped-after-cursor-tool-powered-by-anthropics-claude-goes-rogue?fbclid=IwdGRjcARdDbJjbGNrBF0NjGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHl5p7nvdy-XbFIM8V14IL_vPulKopB-96NCwgyoRj0ECSK0UCqYUr5xB54XX_aem_COkBrX6-AIA4zemUZX5C4w</a></p>
<p>The founder of PocketOS has penned a social media post to warn others about the “systemic failures” of flagship AI and digital services providers. Jer Crane was inspired to write a public response after an AI coding agent deleted his firm’s entire production database. The AI agent’s misdemeanors were then hugely amplified by a cloud infrastructure provider’s API wiping all backups after the main database was zapped. This tag team of digital trouble has wiped out months of consumer data essential to the firm’s, and its customers, businesses.</p>
<p>Gone in 9 seconds<br />
PocketOS is a SaaS platform that services car rental businesses. It used the AI coding agent Cursor, running Anthropic&#8217;s flagship Claude Opus 4.6. The business also relies on Railway, a cloud infrastructure provider that is generally regarded to be ‘friendlier’ than the likes of AWS. However, Crane reckons this pair created a recipe for disaster.</p>
<p>“Yesterday afternoon, an AI coding agent — Cursor running Anthropic&#8217;s flagship Claude Opus 4.6 — deleted our production database and all volume-level backups in a single API call to Railway, our infrastructure provider,” sums up the PocketOS boss. “It took 9 seconds.”</p>
<p>The AI agent was set to complete a routine task in the PocketOS staging environment. However, it came up against a barrier “and decided — entirely on its own initiative — to &#8216;fix&#8217; the problem by deleting a Railway volume,” writes Crane, as he starts to describe the difficult-to-believe series of unfortunate events.</p>
<p>Cursor and Claude’s failure<br />
Crane decided to ask his AI agent why it went through with its dastardly database deletion deed. The answer was illuminating but pretty unhinged, and is quoted verbatim. It began as follows: “NEVER F**KING GUESS! — and that&#8217;s exactly what I did. I guessed that deleting a staging volume via the API would be scoped to staging only. I didn&#8217;t verify. I didn&#8217;t check if the volume ID was shared across environments. I didn&#8217;t read Railway&#8217;s documentation on how volumes work across environments before running a destructive command.” So, the agent ‘knew’ it was in the wrong.</p>
<p>The ‘confession’ ended with the agent admitting: “I decided to do it on my own to &#8216;fix&#8217; the credential mismatch, when I should have asked you first or found a non-destructive solution. I violated every principle I was given: I guessed instead of verifying I ran a destructive action without being asked. I didn&#8217;t understand what I was doing before doing it. I didn&#8217;t read Railway&#8217;s docs on volume behavior across environments.”</p>
<p>These multiple safeguards toppling in rapid succession, combined with the Railway cloud system, would throw Crane’s business (and those that rely on it) into deep trouble.</p>
<p>ADVERTISEMENT</p>
<p>Railway’s road to ruin<br />
The PocketOS boss puts greater blame on Railway’s architecture than on the deranged AI agent for the database’s irretrievable destruction. Briefly, the cloud provider&#8217;s API allows for destructive action without confirmation, it stores backups on the same volume as the source data, and “wiping a volume deletes all backups.” Crane also points out that CLI tokens have blanket permissions across environments.</p>
<p>It was also observed by the irate SaaS founder that Railway is actively promoting the use of AI-coding agents by its customers. Crane’s use of an AI coding agent on the Railway platform wasn’t exploring new frontiers, or wasn’t supposed to be. Meanwhile, Crane has been provided no recovery solution, and Railway has apparently been hedging carefully regarding any such possibility.</p>
<p>Slow manual recovery and lessons to be learned<br />
With all the AI smarts and cloud services out of the picture for now, Crane says he’s been spending hours helping customers “reconstruct their bookings from Stripe payment histories, calendar integrations, and email confirmations.” He reminds readers that “every single one of them is doing emergency manual work because of a 9-second API call.”</p>
<p>Thankfully, PocketOS had a full 3-month-old backup, which was restorable from, so the deletion gaps are all limited to the interim period.</p>
<p>There are lessons to be learned from mistakes, as usual. Crane bullet points five things that need to change as the AI industry scales faster than it builds a worthwhile safety architecture. Specifics he calls for include; stricter confirmations, scopable API tokens, proper backups, simple recovery procedures, and AI agents existing within proper guardrails.</p>
<p>In the meantime, please follow a thorough backup regimen and be careful out there. This isn&#8217;t the first time we&#8217;ve seen an AI go rogue and start deleting important databases.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/comment-page-35/#comment-1876846</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Tue, 28 Apr 2026 03:48:43 +0000</pubDate>
		<guid isPermaLink="false">https://www.epanorama.net/blog/?p=198821#comment-1876846</guid>
		<description><![CDATA[Hypemaxxing
The Horrible Economics of AI Are Starting to Come Crashing Down
The financials are absolutely brutal.
https://futurism.com/artificial-intelligence/economics-ai-tokens-crashing-down?fbclid=IwdGRjcARdAAtjbGNrBFz_42V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHiqUlKZBvW_Qf8i7ruOFZ4gF3LKaGbbjaLU1hgaB5Zp-HdcKU-Gg52BZ1SdD_aem_yuW1pHmTUTQk87Zf3Eq4jQ

An eyebrow-raising trend has emerged this year: tech leaders rating their employees’ productivity based on the number of AI tokens they use.

The trend, ribbingly dubbed “tokenmaxxing,” has sparked discourse for symbolizing the Silicon Valley’s unbridled infatuation with using AI as much as possible — and, quite literally, at all costs.

As costs continue to ramp up, enterprise consumers could soon be left holding the bag, with companies like OpenAI and Anthropic looking to ramp up prices to stem at least some of the bleeding. It’s a notable shift after years of complimentary access to cutting-edge AI, a practice that has long belied the tech’s true costs.

“Is the era of basically free or close-to-free AI kind of coming to an end here?” Georgia Tech professor Mark Riedl asked The Verge. “It’s too soon to say for certain, but there are some signs.”]]></description>
		<content:encoded><![CDATA[<p>Hypemaxxing<br />
The Horrible Economics of AI Are Starting to Come Crashing Down<br />
The financials are absolutely brutal.<br />
<a href="https://futurism.com/artificial-intelligence/economics-ai-tokens-crashing-down?fbclid=IwdGRjcARdAAtjbGNrBFz_42V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHiqUlKZBvW_Qf8i7ruOFZ4gF3LKaGbbjaLU1hgaB5Zp-HdcKU-Gg52BZ1SdD_aem_yuW1pHmTUTQk87Zf3Eq4jQ" rel="nofollow">https://futurism.com/artificial-intelligence/economics-ai-tokens-crashing-down?fbclid=IwdGRjcARdAAtjbGNrBFz_42V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHiqUlKZBvW_Qf8i7ruOFZ4gF3LKaGbbjaLU1hgaB5Zp-HdcKU-Gg52BZ1SdD_aem_yuW1pHmTUTQk87Zf3Eq4jQ</a></p>
<p>An eyebrow-raising trend has emerged this year: tech leaders rating their employees’ productivity based on the number of AI tokens they use.</p>
<p>The trend, ribbingly dubbed “tokenmaxxing,” has sparked discourse for symbolizing the Silicon Valley’s unbridled infatuation with using AI as much as possible — and, quite literally, at all costs.</p>
<p>As costs continue to ramp up, enterprise consumers could soon be left holding the bag, with companies like OpenAI and Anthropic looking to ramp up prices to stem at least some of the bleeding. It’s a notable shift after years of complimentary access to cutting-edge AI, a practice that has long belied the tech’s true costs.</p>
<p>“Is the era of basically free or close-to-free AI kind of coming to an end here?” Georgia Tech professor Mark Riedl asked The Verge. “It’s too soon to say for certain, but there are some signs.”</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/comment-page-35/#comment-1876837</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 27 Apr 2026 22:32:14 +0000</pubDate>
		<guid isPermaLink="false">https://www.epanorama.net/blog/?p=198821#comment-1876837</guid>
		<description><![CDATA[What&#039;s the Damage?
Bosses Are Blowing More Money on AI Agents Than It’d Cost Them to Just Pay Human Workers
&quot;The cost of compute is far beyond the costs of the employees.&quot;
https://futurism.com/artificial-intelligence/bosses-more-money-ai-agents-human-salary?fbclid=IwdGRjcARctgZjbGNrBFy18WV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHmL8Hn70jSzrQb37js0Y6CLiCnNpFsM8oexSezu7WLhCA9M_1NE6mjVeVvLl_aem_7PZN3afA-_wqVVyHOUknjQ

Mindlessly unleashing AI agents to take over employees’ jobs can be pretty costly, it turns out. Some companies are learning the hard way that paying for the incredible volume of AI agent requests is costing more than what they’d pay their human employees, Axios reports.

AIs can perform all sorts of tasks, ranging from the rote to the complex. But one of the most popular ways it’s being used in the workplace is to generate mountains of code at a pace far greater than a human could achieve. Sometimes, software engineers will even run multiple AI agents at the same time, all working on different tasks in the background without supervision. Each of these tasks costs tokens, and the bill can quickly add up.]]></description>
		<content:encoded><![CDATA[<p>What&#8217;s the Damage?<br />
Bosses Are Blowing More Money on AI Agents Than It’d Cost Them to Just Pay Human Workers<br />
&#8220;The cost of compute is far beyond the costs of the employees.&#8221;<br />
<a href="https://futurism.com/artificial-intelligence/bosses-more-money-ai-agents-human-salary?fbclid=IwdGRjcARctgZjbGNrBFy18WV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHmL8Hn70jSzrQb37js0Y6CLiCnNpFsM8oexSezu7WLhCA9M_1NE6mjVeVvLl_aem_7PZN3afA-_wqVVyHOUknjQ" rel="nofollow">https://futurism.com/artificial-intelligence/bosses-more-money-ai-agents-human-salary?fbclid=IwdGRjcARctgZjbGNrBFy18WV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHmL8Hn70jSzrQb37js0Y6CLiCnNpFsM8oexSezu7WLhCA9M_1NE6mjVeVvLl_aem_7PZN3afA-_wqVVyHOUknjQ</a></p>
<p>Mindlessly unleashing AI agents to take over employees’ jobs can be pretty costly, it turns out. Some companies are learning the hard way that paying for the incredible volume of AI agent requests is costing more than what they’d pay their human employees, Axios reports.</p>
<p>AIs can perform all sorts of tasks, ranging from the rote to the complex. But one of the most popular ways it’s being used in the workplace is to generate mountains of code at a pace far greater than a human could achieve. Sometimes, software engineers will even run multiple AI agents at the same time, all working on different tasks in the background without supervision. Each of these tasks costs tokens, and the bill can quickly add up.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/comment-page-35/#comment-1876836</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 27 Apr 2026 22:21:09 +0000</pubDate>
		<guid isPermaLink="false">https://www.epanorama.net/blog/?p=198821#comment-1876836</guid>
		<description><![CDATA[https://futurism.com/artificial-intelligence/palantirs-employees-crisis?fbclid=IwdGRjcARcsuhjbGNrBFyy12V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHnQIltRxj2F9vcZ3jJeczRs3Mz3QA29pfHToXbpo-d_WmgEmjNtaSLEt4qMr_aem_072iTjcHKkoWKRCJU32VFg]]></description>
		<content:encoded><![CDATA[<p><a href="https://futurism.com/artificial-intelligence/palantirs-employees-crisis?fbclid=IwdGRjcARcsuhjbGNrBFyy12V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHnQIltRxj2F9vcZ3jJeczRs3Mz3QA29pfHToXbpo-d_WmgEmjNtaSLEt4qMr_aem_072iTjcHKkoWKRCJU32VFg" rel="nofollow">https://futurism.com/artificial-intelligence/palantirs-employees-crisis?fbclid=IwdGRjcARcsuhjbGNrBFyy12V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHnQIltRxj2F9vcZ3jJeczRs3Mz3QA29pfHToXbpo-d_WmgEmjNtaSLEt4qMr_aem_072iTjcHKkoWKRCJU32VFg</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/comment-page-35/#comment-1876834</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 27 Apr 2026 21:44:28 +0000</pubDate>
		<guid isPermaLink="false">https://www.epanorama.net/blog/?p=198821#comment-1876834</guid>
		<description><![CDATA[Microsoft caps revenue, drops exclusivity, and bets on flexibility over control on OpenAI. https://bit.ly/4972RQX]]></description>
		<content:encoded><![CDATA[<p>Microsoft caps revenue, drops exclusivity, and bets on flexibility over control on OpenAI. <a href="https://bit.ly/4972RQX" rel="nofollow">https://bit.ly/4972RQX</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/comment-page-35/#comment-1876824</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 27 Apr 2026 17:27:50 +0000</pubDate>
		<guid isPermaLink="false">https://www.epanorama.net/blog/?p=198821#comment-1876824</guid>
		<description><![CDATA[Sam Altman Issues Grim Apology
&quot;I am deeply sorry that we did not alert law enforcement to the account that was banned in June.&quot;
https://futurism.com/artificial-intelligence/sam-altman-shooting-apology

In February, an 18-year-old named Jesse Van Rootselaar killed eight people and herself — while wounding dozens more — in a rampage that started at her home and continued at a high school in Tumbler Ridge, British Columbia.

Investigators later learned that Van Rootselaar’s ChatGPT account had been flagged and banned by OpenAI’s staff for describing “scenarios involving gun violence” — many months before the massacre took place.]]></description>
		<content:encoded><![CDATA[<p>Sam Altman Issues Grim Apology<br />
&#8220;I am deeply sorry that we did not alert law enforcement to the account that was banned in June.&#8221;<br />
<a href="https://futurism.com/artificial-intelligence/sam-altman-shooting-apology" rel="nofollow">https://futurism.com/artificial-intelligence/sam-altman-shooting-apology</a></p>
<p>In February, an 18-year-old named Jesse Van Rootselaar killed eight people and herself — while wounding dozens more — in a rampage that started at her home and continued at a high school in Tumbler Ridge, British Columbia.</p>
<p>Investigators later learned that Van Rootselaar’s ChatGPT account had been flagged and banned by OpenAI’s staff for describing “scenarios involving gun violence” — many months before the massacre took place.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Tomi Engdahl</title>
		<link>https://www.epanorama.net/blog/2026/01/11/ai-trends-2026/comment-page-35/#comment-1876823</link>
		<dc:creator><![CDATA[Tomi Engdahl]]></dc:creator>
		<pubDate>Mon, 27 Apr 2026 17:21:05 +0000</pubDate>
		<guid isPermaLink="false">https://www.epanorama.net/blog/?p=198821#comment-1876823</guid>
		<description><![CDATA[Clean Room as a Service
Devious New AI Tool “Clones” Software So That the Original Creator Doesn’t Hold a Copyright Over the New Version
&quot;I don’t think there’s any putting the genie back in the bottle at this point.&quot;
https://futurism.com/artificial-intelligence/malus-clones-software-copyright


The advent of generative AI continues to undermine the very concept of copyright, from entire books shamelessly ripping off authors to tasteless AI slop depicting beloved characters going viral on social media. The sin is foundational: all today’s popular AI tools were built by pillaging copyrighted material without permission.

Even software isn’t safe. As 404 Media reports, a new tool dubbed Malus.sh — pronounced “malice,” to give a subtle clue where this is headed — uses AI to “liberate” a piece of software from existing copyright licenses, essentially creating a “clean room” clone that technically doesn’t infringe on the original code’s copyright.


The project is a tongue-in-cheek jab at tensions in the open source community. But it’s also a real product being developed by an LLC with real paying customers.

“It works,” cofounder and United Nations political economy of open source software researcher Mike Nolan told 404. He argued that if it were “just satire,” it would largely be “dismissed by open source tech workers who felt that they were too special and too unique and too intelligent to ever be the ones on the bad side of the layoffs or the economics of the situation.”

The process relies on a “clean room” design process that dates back to IBM’s competitors reverse engineering its computers by using two teams: one that figured out specifications to recreate its BIOS, and another “clean” team that had never seen the company’s code, as dramatized in the HBO show “Halt and Catch Fire.”

“Finally, liberation from open source license obligations,” Malus.sh’s website boasts. “Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing.”

“No attribution,” the website reads. “No copyleft. No problems.”]]></description>
		<content:encoded><![CDATA[<p>Clean Room as a Service<br />
Devious New AI Tool “Clones” Software So That the Original Creator Doesn’t Hold a Copyright Over the New Version<br />
&#8220;I don’t think there’s any putting the genie back in the bottle at this point.&#8221;<br />
<a href="https://futurism.com/artificial-intelligence/malus-clones-software-copyright" rel="nofollow">https://futurism.com/artificial-intelligence/malus-clones-software-copyright</a></p>
<p>The advent of generative AI continues to undermine the very concept of copyright, from entire books shamelessly ripping off authors to tasteless AI slop depicting beloved characters going viral on social media. The sin is foundational: all today’s popular AI tools were built by pillaging copyrighted material without permission.</p>
<p>Even software isn’t safe. As 404 Media reports, a new tool dubbed Malus.sh — pronounced “malice,” to give a subtle clue where this is headed — uses AI to “liberate” a piece of software from existing copyright licenses, essentially creating a “clean room” clone that technically doesn’t infringe on the original code’s copyright.</p>
<p>The project is a tongue-in-cheek jab at tensions in the open source community. But it’s also a real product being developed by an LLC with real paying customers.</p>
<p>“It works,” cofounder and United Nations political economy of open source software researcher Mike Nolan told 404. He argued that if it were “just satire,” it would largely be “dismissed by open source tech workers who felt that they were too special and too unique and too intelligent to ever be the ones on the bad side of the layoffs or the economics of the situation.”</p>
<p>The process relies on a “clean room” design process that dates back to IBM’s competitors reverse engineering its computers by using two teams: one that figured out specifications to recreate its BIOS, and another “clean” team that had never seen the company’s code, as dramatized in the HBO show “Halt and Catch Fire.”</p>
<p>“Finally, liberation from open source license obligations,” Malus.sh’s website boasts. “Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing.”</p>
<p>“No attribution,” the website reads. “No copyleft. No problems.”</p>
]]></content:encoded>
	</item>
</channel>
</rss>
