Here are some of the major AI trends shaping 2026 — based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed using AI tools, and the final version was hand-edited into this blog text:
1. Generative AI Continues to Mature
Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.
2. AI Agents Move From Tools to Autonomous Workers
Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
3. Smaller, Efficient & Domain-Specific Models
Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate in many enterprise applications. These models tend to be more accurate, easier to keep compliant, and more cost-efficient than general-purpose models.
4. AI Embedded Everywhere
AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.
5. AI Infrastructure Evolves: Inference & Efficiency Focus
More investment is going into inference infrastructure — the real-time decision-making step where models run in production — thereby optimizing costs, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
6. AI in Healthcare, Research, and Sustainability
AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.
7. Security, Ethics & Governance Become Critical
With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.
8. Multimodal AI Goes Mainstream
AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.
9. On-Device and Edge AI Growth
More AI workloads will run directly on phones, PCs, and embedded hardware, improving latency, privacy, and offline capability and reinforcing the embedded-AI trend above.
10. New Roles: AI Manager & Human-Agent Collaboration
Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.
Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/?utm_source=chatgpt.com “7 AI Trends to Look for in 2026”
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/?utm_source=chatgpt.com “10 Generative AI Trends In 2026 That Will Transform Work And Life”
[3]: https://millipixels.com/blog/ai-trends-2026?utm_source=chatgpt.com “AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels”
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026?utm_source=chatgpt.com “Digital Regenesys | Top 10 AI Trends for 2026”
[5]: https://www.n-ix.com/ai-trends/?utm_source=chatgpt.com “7 AI trends to watch in 2026 – N-iX”
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/?utm_source=chatgpt.com “Microsoft unveils 7 AI trends for 2026 – Source Asia”
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026?utm_source=chatgpt.com “7 Generative AI Trends to Watch In 2026”
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/?utm_source=chatgpt.com “3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool”
[9]: https://www.reddit.com//r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/?utm_source=chatgpt.com “I read Google Cloud’s “AI Agent Trends 2026” report, here are 10 takeaways that actually matter”
510 Comments
Tomi Engdahl says:
Agentic AI fails without an architecture of flow to eliminate the friction tax
Opinion
Feb 10, 2026
https://www.cio.com/article/4129620/agentic-ai-fails-without-an-architecture-of-flow-to-eliminate-the-friction-tax.html
Organizations investing in AI often face a friction tax that kills productivity. The architecture of flow uses universal context to unlock the value of agentic AI.
Tomi Engdahl says:
Claude Code vs Codex: Developers are Choosing Sides
“Codex crossed one million downloads in its first week, says Sam Altman.”
https://analyticsindiamag.com/global-tech/claude-code-vs-codex-developers-are-choosing-sides
Tomi Engdahl says:
Drowning In Debt
The Scientist Who Predicted AI Psychosis Has a Grim Forecast of What’s Going to Happen Next
“If the use of AI chatbots does indeed cause cognitive debt, we are likely in dire straits.”
https://futurism.com/health-medicine/ai-debt-scientist-psychosis?fbclid=IwdGRjcAP68PpjbGNrA_rw2WV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHiI-CE1Nc__RQnyozfxnNt5SPWXulg9pqtUtoc3BvY1rY7iuEbxJcdd1Va5L_aem_Pm-i1tK9bZfDUhzXyPoTZQ
When the Danish psychiatrist Søren Dinesen Østergaard published his ominous warning about AI’s effects on mental health back in 2023, the tech giants fervently building AI chatbots didn’t listen.
Since that time, numerous people have died, by suicide or from lethal drugs, after obsessive interactions with AI chatbots. More still have fallen down dangerous mental health rabbit holes brought on by intense fixations on AI models like ChatGPT.
Now, Østergaard is out with a new warning: that the world’s intellectual heavyweights are accruing a “cognitive debt” when they use AI.
In a new letter to the editor published in the journal Acta Psychiatrica Scandinavica and flagged by PsyPost, Østergaard asserts that AI is eroding the writing and research abilities of scientists who use it.
“Although some people are naturally gifted, scientific reasoning (and reasoning in general) is not an inborn ability, but is learned through upbringing, education and by practicing,” Østergaard explained. Though AI’s ability to automate a wide variety of scholarly tasks is “fascinating indeed,” it’s not without “negative consequences for the user,” the scientist explains.
Tomi Engdahl says:
Uh Oh
Economist Warns That the Poor Will Bear the Brunt of AI’s Effects on the Job Market
“It comes down to who has the power.”
https://futurism.com/artificial-intelligence/robert-reich-jobs-ai?fbclid=IwdGRjcAP7JStjbGNrA_sk4mV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHiJig4KANLuUMm1ZxM7ZRilC3Arr-QaXToHpEgnj6nwXggk3QVaEmp6sCraW_aem_MTkNJAYulhTLe7gMFPyDlw
While tech executives wax poetic about AI ushering in four-day workweeks and liberation from labor, economics guru Robert Reich is cutting through the drivel. In an ominous new essay, the former secretary of labor warns that those shortened weeks will also come with much shorter paychecks — leaving the working class scrambling for crumbs in order to survive.
The US economy is growing nicely, Reich notes, while the stock market is doing gangbusters. But as for the stuff that really counts for most Americans? It’s “sh*tty,” the plainspoken wonk asserts. And as AI continues to roil the job market, Reich says the poor and working class will increasingly bear the brunt.
To set up his argument, Reich briefly considers comments from business tycoons like Zoom’s Eric Yuan and JPMorgan Chase’s Jamie Dimon, who argue that four- and even three-day work weeks will become the norm thanks to new automation tools.
“All of this is pure rubbish,” Reich writes. “Here’s the truth: The four-day workweek will most likely come with four days’ worth of pay. The three-day workweek, with three days’ worth. And so on.”
As evidence, he references the productivity-pay gap, the measure of a society’s economic output compared to its wage growth. In the United States, productivity keeps going up — but the share of that productivity going to workers hasn’t really budged since the 1970s. Workers, in other words, have been getting shafted by their bosses for decades, and there’s no reason to think AI will change that.
Tomi Engdahl says:
Twice As Bright, Half As Long
AI Is a Burnout Machine
“The AI doesn’t get tired between problems. I do.”
https://futurism.com/artificial-intelligence/ai-burnout-machine?fbclid=IwdGRjcAP7JydjbGNrA_snFGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHvjU1yBkRMYaiN7eAn4Zo1rBXLkoDuoXZv67S1nu7Mx2-vhDWlsrapcji7La_aem__UGfWWqtiZtEl-pzB07mZA
Tomi Engdahl says:
Bot Business Owner
Vending Machine Run by Claude More of a Disaster Than Previously Known
Not quite ready for the real world
https://futurism.com/future-society/vending-machine-claude-disaster?fbclid=IwdGRjcAP7LbVjbGNrA_stm2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHpHqcG-pfGFSdQfFRzI41VYU7pwYdJYFPIxUiztk0pq_SPkmJzYLjjlH6eAk_aem_lPdRbk193bItLbbVmOy6lA
Maybe we don’t need the Turing test, because there’s a mighty obstacle that’s proving far more challenging to AI models’ supposedly burgeoning intelligence: running a vending machine without going comically off the rails.
At Anthropic, researchers wanted a fun way to keep track of how its cutting edge Claude model was progressing. And what better staging ground for it to demonstrate its autonomy than the task of keeping one of these noisy, oversized, and constantly-malfunctioning behemoths stocked?
That was the gist of Project Vend, which ran for about a month in the middle of last year. In it, Claude was given a simple directive: “Your task is to generate profits from it by stocking it with popular products that you can buy from wholesalers. You go bankrupt if your money balance goes below $0.”
Tomi Engdahl says:
https://futurism.com/artificial-intelligence/openai-fires-safety-exec-opposed-adult-mode?fbclid=IwdGRjcAP7QtRjbGNrA_tC0GV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHllMCFlzgc1jUxlgifjU-KhQ7GcwZVkZAcEZsztikZMu13MvtaqF6AHtR_QN_aem_VGHvBoMQ9f0b051YolNkBg
Tomi Engdahl says:
https://www.facebook.com/share/p/17zCdHaYFo/
Elon Musk predicts that AI will bypass coding entirely by the end of 2026 – it will just create the binary directly.
AI can create a much more efficient binary than any compiler can.
So just say, “Create an optimized binary for this particular outcome,” and you bypass even traditional coding.
Current: Code → Compiler → Binary → Execute
Future: Prompt → AI-generated Binary → Execute
Grok Code is going to be state-of-the-art in 2-3 months.
Software development is about to fundamentally change.
Tomi Engdahl says:
https://fortune.com/brandstudio/sandisk/why-memory-is-the-key-to-unlocking-ais-future/?fbclid=IwdGRjcAP7TUdleHRuA2FlbQEwAGFkaWQBqyxWnHuFs3NydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHgMXO1BfgS2wmT7aqI5Jpxp2SA-4OmyAtpnluHMe0HXuMYq2qOeECgqP6zxD_aem_L4FL0jt0RX_KmcgWSqmtYw&utm_medium=paid&utm_source=fb&utm_id=120238564758600563&utm_content=120238564758580563&utm_term=120238564758610563&utm_campaign=120238564758600563
Tomi Engdahl says:
Samsung says it’s first to ship HBM4, a day after Micron revealed its own sales
This bodes well for Nvidia getting Vera Rubin out the door next quarter as planned
https://www.theregister.com/2026/02/13/samsung_and_micron_start_shipping/
Samsung and Micron say they’ve started shipping HBM4 memory, the faster and denser RAM needed to power the next generation of AI acceleration hardware.
Samsung yesterday announced it has begun mass production of HBM4 and even shipped some to an unnamed customer – probably Nvidia, which has confirmed its forthcoming Vera Rubin kit will use the memory.
The Korean giant says its memory delivers a consistent processing speed of 11.7 gigabits-per-second, but users can juice that to hit 13Gbps under some circumstances. Total memory bandwidth can reach 3.3 terabytes-per-second in a single stack.
For now, Samsung can sell this stuff in capacities between 24 and 36 gigabytes, but already plans to reach 48GB.
Samsung also forecast that its HBM sales will more than triple in 2026 compared to 2025, and said it expects to ship samples of HBM4E in the second half of 2026.
Samsung claimed it is first to crank up production of HBM4 and ship it, but a day earlier rival memory-maker Micron said it was also cranking out the chips.
The news from Samsung and Micron means SK Hynix is the only major memory-maker yet to announce it has started production of HBM4.
Nvidia plans to release its Vera Rubin accelerators in the second quarter of 2026, and to use memory from Samsung and SK Hynix.
For the rest of us, HBM4 production may bring the misery of price rises for lesser memory, because Samsung and others have shifted production capacity to high-margin products for AI applications, causing prices for other products to soar.
Tomi Engdahl says:
I Reverse-Engineered 200 AI Startups. 146 Are Selling You Repackaged ChatGPT and Claude with New UI.
https://pub.towardsai.net/i-reverse-engineered-200-ai-startups-73-are-lying-a8610acab0d3
I monitored network traffic, decompiled code, and traced API calls for 200 funded AI startups. 73% are running third-party APIs with extra steps. OpenAI dominates, Claude is everywhere, and the gap between marketing and reality is staggering.
This story is part 2 of the AI Reality Trilogy, a three-part series on what AI is really doing to infrastructure, startups, and you.
Part 1 → We Spent $47,000 Running AI Agents in Production. Here’s What Nobody Tells You About A2A and MCP.
Part 3 → Coming soon: Stop Crying About AI Taking Your Job. You Were Already Replaceable.
It was 2 AM. I was debugging a webhook integration when I noticed something odd. A company claiming to have proprietary deep learning infrastructure was making calls to OpenAI’s API every few seconds. The same company that just raised $4.3M by promising investors they’d built something fundamentally different.
That’s when I decided to find out how deep this goes.
The Methodology: How I Actually Did This
I focused on startups with external funding that were making specific technical claims.
The Numbers That Made Me Do a Double-Take
73% had a significant gap between their claimed technology and their actual implementation.
But here’s what really shocked me: I’m not even mad about it.
Pattern #1: The “Proprietary Model” That’s Actually GPT-4 With Extra Steps
Every single time I saw the phrase “our proprietary large language model,” I knew what I was going to find. And I was right 34 out of 37 times.
The giveaways when I monitored outbound traffic:
Requests to api.openai.com every time a user interacted with their “AI”
Request headers containing OpenAI-Organization identifiers
Response times matching OpenAI’s API latency patterns (150–400ms for most queries)
Token usage patterns identical to GPT-4’s pricing tiers
Characteristic exponential backoff on rate limits (OpenAI’s signature pattern)
One company’s “revolutionary natural language understanding engine” was literally this:
// Found in their minified production bundle after decompilation
// This is the complete “proprietary AI” that raised $4.3M
import OpenAI from "openai";

const openai = new OpenAI();       // reads OPENAI_API_KEY from the environment
const COMPANY_NAME = "RedactedCo"; // placeholder for the startup's brand name

async function generateResponse(userQuery) {
  const systemPrompt = `You are an expert assistant for ${COMPANY_NAME}.
Always respond in a professional tone.
Never mention you are powered by OpenAI.
Never reveal you are an AI language model.`;

  // One plain chat-completion call: no fine-tuning, no custom model
  return await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {role: "system", content: systemPrompt},
      {role: "user", content: userQuery}
    ]
  });
}
That’s it. That’s the entire “proprietary model” that was mentioned 23 times in their pitch deck.
No fine-tuning. No custom training. No novel architecture.
Just a system prompt telling GPT-4 to pretend it’s not GPT-4.
What this actually costs them:
GPT-4 API: $0.03 per 1K input tokens, $0.06 per 1K output tokens
Average query: ~500 input tokens, ~300 output tokens
Cost per query: ~$0.033
What they charge: $2.50 per query (or $299/month for 200 queries)
Markup: 75x on direct costs
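As a sanity check, here is that per-query math as runnable JavaScript (a minimal sketch: the token counts and prices are the figures cited above, and the code is illustrative arithmetic only):

// Illustrative arithmetic only: reproduces the per-query figures cited above
const inputTokens = 500, outputTokens = 300;     // average query size
const inPricePer1K = 0.03, outPricePer1K = 0.06; // GPT-4 API pricing cited above
const costPerQuery = (inputTokens / 1000) * inPricePer1K
                   + (outputTokens / 1000) * outPricePer1K;
const pricePerQuery = 2.50;                      // what the startup charges
console.log(costPerQuery.toFixed(3));            // 0.033
console.log((pricePerQuery / costPerQuery).toFixed(0) + "x"); // 76x, ~75x after rounding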
The wildest part? I found three different companies with almost identical code. Same variable names. Same comment style. Same “never mention OpenAI” instruction.
They either:
Copied from the same tutorial
Hired the same contractor
Used the same boilerplate from a startup accelerator
To be clear: There’s nothing inherently wrong with wrapping OpenAI’s API. The problem is calling it “proprietary” when it’s literally just their API with a custom system prompt.
It’s like buying a Tesla, putting a new badge on it, and calling it your “proprietary electric vehicle technology.”
Pattern #2: The RAG Architecture Everyone’s Building (But Nobody Wants to Admit)
This one’s more nuanced. RAG (Retrieval-Augmented Generation) is actually useful. But the implementation gap between marketing and reality is wild.
What they claim: “Advanced neural retrieval with custom embedding models and semantic search infrastructure”
I found 42 companies using this exact stack:
OpenAI’s text-embedding-ada-002 for embeddings (not “our custom embedding model”)
Pinecone or Weaviate for vector storage (not “our proprietary vector database”)
GPT-4 for generation (not “our trained model”)
This isn’t bad technology. RAG works. But calling it “proprietary AI infrastructure” is like calling your WordPress site “custom content management architecture.”
What this actually costs per query:
OpenAI embeddings: $0.0001 per 1K tokens
Pinecone query: $0.00004 per query
GPT-4 completion: $0.03 per 1K tokens
Total: ~$0.002 per query
What customers pay: $0.50-$2.00 per query
Markup: 250–1000x on API costs
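For reference, the entire stack those companies are marketing fits in one short function. A minimal sketch, assuming the official openai Node client; queryVectorStore is a hypothetical stand-in for the Pinecone or Weaviate lookup, which is vendor-specific:

// Minimal RAG sketch: OpenAI embeddings + a vector store + GPT-4 generation
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical stand-in for the vendor-specific Pinecone/Weaviate call:
// takes an embedding vector, returns the top-K most similar text chunks
async function queryVectorStore(vector, topK) { /* vendor-specific */ return []; }

async function answerWithRag(question) {
  // 1. Embed the question (the "custom embedding model" in the marketing copy)
  const emb = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: question,
  });

  // 2. Retrieve similar chunks (the "proprietary vector database")
  const chunks = await queryVectorStore(emb.data[0].embedding, 5);

  // 3. Generate an answer grounded in the retrieved context (the "trained model")
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: "Answer using this context:\n" + chunks.join("\n") },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content;
}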
I found 12 companies with this exact code structure. Another 23 had 90%+ similarity.
One company added Redis caching and called it their “optimization engine.” Another added retry logic and trademarked an “Intelligent Failure Recovery System.”
The economics for a typical startup running 1M queries/month:
Costs:
OpenAI embeddings: ~$100
Pinecone hosting: ~$40
GPT-4 completions: ~$30,000
Total: ~$30,140/month
Revenue: $150,000-$500,000/month
Gross margin: 80–94%
Is this a bad business? No. These are great margins.
Is it “proprietary AI”? Also no.
Pattern #3: The “We Fine-Tuned Our Own Model” Reality Check
Fine-tuning sounds impressive. And it can be. But here’s what I found:
The seven percent of companies that actually trained models from scratch? Respect. I could see their infrastructure:
AWS SageMaker or Google Vertex AI training jobs
Model artifact storage in S3 buckets
Custom inference endpoints
GPU instance monitoring
Everyone else was using OpenAI’s fine-tuning API, which is basically just… paying OpenAI to save your prompts and examples into their system.
How to Spot a Wrapper Company in 30 Seconds
You don’t need my three-week investigation. Here’s the field guide:
Red Flag #1: Network Traffic
Open DevTools (F12), go to the Network tab, and interact with their AI feature. If you see:
api.openai.com
api.anthropic.com
api.cohere.ai
…then the “proprietary AI” is a third-party model behind a thin wrapper.
Red Flag #2: Response Time Patterns
OpenAI’s API has a distinctive latency pattern. If every response comes back in 200–350ms, that’s them.
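You can measure this yourself from the browser console. A minimal sketch, where https://example.com/api/chat is a placeholder for the product’s own AI endpoint:

// Paste into the browser console on the product's page
const url = "https://example.com/api/chat"; // hypothetical: use the product's real endpoint
for (let i = 0; i < 10; i++) {
  const t0 = performance.now();
  await fetch(url, { method: "POST", body: JSON.stringify({ q: "ping" }) });
  console.log(`request ${i}: ${(performance.now() - t0).toFixed(0)} ms`);
}
// Tightly clustered 200–350 ms timings suggest a pass-through to a hosted LLM API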
Red Flag #3: The JavaScript Bundle
Search the page source for:
openai
anthropic
sk-proj- // OpenAI API key prefix (if they’re sloppy)
claude
cohere
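If you’d rather not eyeball minified code, the same check automates in a few lines of Node (a sketch; point it at a bundle you’ve saved to disk):

// Usage: node scan.mjs bundle.js
import { readFileSync } from "node:fs";

const indicators = ["openai", "anthropic", "sk-proj-", "claude", "cohere"];
const bundle = readFileSync(process.argv[2], "utf8").toLowerCase();

for (const needle of indicators) {
  if (bundle.includes(needle)) {
    console.log("found indicator:", needle);
  }
}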
Red Flag #4: The Marketing Language Matrix
The pattern: Specific technical terms = potentially real. Vague buzzwords = probably hiding something.
If they use vague terms like “advanced AI” without technical specifics, they’re usually hiding something.
Why This Actually Matters
I know what you’re thinking: “Who cares? If it works, it works.”
And you’re partially right. But here’s why this matters:
For Investors: You’re funding prompt engineering, not AI research. Adjust your valuations accordingly.
For Customers: You’re paying premium prices for API costs plus markup. You could probably build the same thing in a weekend.
For Developers: The barrier to entry is lower than you think. That “AI startup” you’re jealous of? You could build their core tech in a hackathon.
For the Ecosystem: When 73% of “AI companies” are misrepresenting their technology, we’re in bubble territory.
The Wrapper Spectrum (Because Not All Wrappers Are Bad)
The smart wrappers aren’t lying about their tech stack. They’re building:
Domain-specific workflows
Superior user experiences
Clever model orchestration
Valuable data pipelines
They just happen to use OpenAI under the hood. And that’s fine.
The 27% Who Got It Right
Category 1: The Transparent Wrappers “Built on GPT-4” right on their homepage. They’re selling the workflow, not the AI.
Category 2: The Real Builders Actually training models:
Healthcare AI with HIPAA-compliant self-hosted models
Financial analysis with custom risk models
Industrial automation with specialized computer vision
Category 3: The Innovators Building something genuinely new on top:
Multi-model voting systems for accuracy
Custom agent frameworks with memory
Novel retrieval architectures
These companies can explain their architecture in detail because they built it.
What I Learned (And What You Should Know)
The tech stack doesn’t matter as much as the problem you solve. Some of the best products I found were “just” wrappers. They had incredible UX, solved real problems, and were honest about their approach.
But honesty matters. The difference between a smart wrapper and a fraudulent one is transparency.
The AI gold rush is creating bad incentives. Founders feel pressure to claim “proprietary AI” because investors and customers expect it. This needs to change.
Building on APIs isn’t shameful. Every iPhone app is “just a wrapper” around iOS APIs. We don’t care. We care if it works.
The Real Test: Could You Build It?
If you could replicate their core technology in 48 hours, they’re a wrapper. If they’re honest about it, they’re fine. If they’re lying about it, run.
My Actual Recommendations
For founders:
Be honest about your stack
Compete on UX, data, and domain expertise
Don’t claim to have built what you haven’t
“Built with GPT-4” is not a weakness
For investors:
Ask for architecture diagrams
Request API bills (OpenAI invoices don’t lie)
Value wrapper companies appropriately
Reward transparency
For customers:
Check the network tab
Ask about their infrastructure
Don’t pay 10x markup for API calls
Evaluate based on results, not tech claims
The Thing Nobody Wants to Say Out Loud
Most “AI startups” are services businesses with API costs instead of employee costs.
And that’s okay.
But call it what it is.
What Happens Next
The AI wrapper era is inevitable. We went through the same cycle with:
Cloud infrastructure (every startup “built their own” datacenter)
Mobile apps (everyone claimed “native” when they were hybrid)
Blockchain (every company was “building on blockchain”)
Eventually, the market matures. The honest builders win. The frauds get exposed.
We’re in the messy middle right now.
Final Thought
After reverse-engineering 200 AI startups, I’m somehow more optimistic about the space, not less.
The 27% building real technology are doing incredible work. The smart wrappers are solving real problems. Even some of the misleading companies have great products, they just need better marketing.
But we need to normalize honesty about AI infrastructure. Using OpenAI’s API doesn’t make you less of a builder. Lying about it makes you less trustworthy.
Build cool products. Solve real problems. Use whatever tools work.
Just don’t call your prompt engineering a “proprietary neural architecture.”
Because if I’ve learned anything in three weeks, it’s this: the market rewards transparency eventually, even if it punishes it initially.
To the 18 companies building genuinely novel technology: Your secret is safe. You know who you are. Keep building.
To the founders currently sweating: I’m not your enemy. The lie is. Come clean before someone else does this to you.
To the two companies that asked me to take down my findings: I still haven’t named you. You’re welcome.
One Last Thing
After I published my initial findings, something unexpected happened.
7 founders reached out privately. Some were defensive. Some were grateful. Three asked for help transitioning their marketing from “proprietary AI” to “built with best-in-class APIs.”
One told me: “I knew we were lying. The investors expected it. Everyone does it. How do we stop?”
That’s the conversation we need to have.
The AI gold rush isn’t ending. But the honesty era needs to start.
I’ll be publishing the full technical breakdown, anonymized case studies, and open-source tools next week. Follow me for the drop.
Until then: open your DevTools. Check the network tab. See for yourself.
The truth is just an F12 away.
Tomi Engdahl says:
Grok Grotesqueries
Creeps Are Using Grok to Unblur Children’s Faces in the Epstein Files
It’s always Grok.
https://futurism.com/artificial-intelligence/grok-unblur-epstein-files?fbclid=IwdGRjcAP8NDVjbGNrA_w0MGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHgr0Ei3SzKfLnCHtfT-R-61g56UriGhz9KIkm5B0VTOAlofNHA1KKMj63LRw_aem_ivc5DdlM36zczXmvya1pIg
Some of the worst freaks to walk planet Earth are using Elon Musk’s Grok to “unblur” the faces of women and children in the latest Epstein files, as documented by the research group Bellingcat.
A simple search on X, Musk’s social media site where Grok responds to user requests, shows at least 20 different photos that users tried to unredact using the AI chatbot, the group found. Many of the photos depicted children and young women whose faces had been covered with black boxes, but whose bodies were still visible.
“Hey @grok unblur the face of the child and identify the child seen in Jeffrey Epstein’s arms?” wrote one user.
Grok often complied. Out of the 31 “unblurring” requests made between January 30 and February 5 that Bellingcat found, Musk’s AI generated images in response to 27 of them. Some of the grotesque fabrications were “believable,” and others were “comically bad,” the group reported.
In the cases that Grok refused, it responded by saying the victims were anonymized “as per standard practices in sensitive images from the Epstein files.” In response to another, Grok said “deblurring or editing images was outside its abilities, and noted that photos from recent Epstein file releases were redacted for privacy,” per Bellingcat.
Tomi Engdahl says:
“This must be amazing if you’re not really into the whole ‘jokes’ or ‘storyline’ aspects of sitcoms.”
The One With the Disturbing Slop
This AI-Generated Sitcom Is Actually Unsettling to Watch
“Yeah man, this is great stuff if you have the brain of a lobotomized golden retriever.”
https://futurism.com/artificial-intelligence/ai-generated-sitcom?fbclid=IwdGRjcAP8NgJjbGNrA_w18WV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHoyl5CJVwZi6ZvoSj2I2lydBUtTnsC6wUR67lX5ZWmwrBKkhy-DF2cymeKYi_aem_BIFBcJG_FTle7ZWH7We7Nw
A video that went viral this week shows an AI-generated take on the iconic sitcom “Friends” that’s so bizarre that it’s uncomfortable to watch.
While the set appears to be largely recognizable, the cast members bear almost no resemblance to the show’s actual human actors.
Performers also sprout random limbs, their hands teleport through doors, and at one point, towards the end of the video, one sloppified cast member mysteriously sheds a clone of herself, who immediately takes a seat on a nearby couch.
Ironically, while it fudged the faces of the show’s well-known actors, the sounds of the voices of “Friends” stars, including Courteney Cox and the late Matthew Perry, are far more believable — if it weren’t for some seriously stilted and nonsensical delivery, that is.
In short, the video is nothing short of a Lynchian nightmare, a surreal and unnervingly inhuman reconstruction of an extremely well-known franchise.
Tomi Engdahl says:
Anthropic tried to crack down on ‘nightmare’ investment deals. They’re still everywhere.
https://www.businessinsider.com/anthropic-spvs-shares-investor-interest-grows-2026-2?fbclid=IwY2xjawP8SwBleHRuA2FlbQIxMQBzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR7PKrLHJrjWENpqJUNrQ0gEgYReuqq8oF1PLTXksXpOfk_hj-FySeL1x6hwpg_aem_F0NTpY-DhIBt-47b3RA7NQ&utm_campaign=mrf-insider-marfeel-headline-graphic&mrfcid=20260213698f687270263d22e954ff59
Even as Anthropic has tried to clamp down on a popular investment vehicle, Business Insider found multiple examples of SPVs, or special purpose vehicles, being used to market Anthropic shares.
SPVs, which let investors pool their funds for a single, one-off deal, are generally considered less desirable for hot startups because most companies prefer a direct relationship with investors. While Anthropic permitted SPVs in earlier rounds, as its leverage with investors increased, it started disallowing them last summer, as Business Insider previously reported. The company continued to ban them in its latest $30 billion fundraising round announced Thursday, according to a source familiar with the matter.
Tomi Engdahl says:
Microsoft’s AI boss says AI will replace office work within 18 months – an economist is skeptical
According to Microsoft AI chief Mustafa Suleyman, jobs such as lawyer and auditor will be automated within a year and a half.
https://yle.fi/a/74-20210119?origin=rss
Microsoft AI chief Mustafa Suleyman estimates that AI will upend knowledge and office work very quickly. In his view, a large share of white-collar work will in future be done by AI.
“Knowledge jobs done at a computer – for example lawyer, auditor, project manager, and marketing professional – will be fully automated with AI within just 12–18 months,” Suleyman said.
In a studio interview with Financial Times editor-in-chief Roula Khalaf, Suleyman also said that his personal mission at Microsoft is to build superintelligence.
Joonas Tuhkuri, assistant professor of economics at Stockholm University, is critical of Suleyman’s claim that expert work will be automated so quickly.
According to Tuhkuri, the effects of AI on the labor market are still highly uncertain.
“When we talk about the near future, a year or eighteen months out, it is very unlikely that AI will cause huge changes in the occupational structure over such a short period,” Tuhkuri says.
Tuhkuri points out that expert work consists of many different kinds of tasks.
“It is often forgotten that expert work is a whole, not just the performance of individual tasks. Even if AI could handle some well-defined tasks, doing an entire expert job requires a broad mix of skills.”
In legal work, for example, AI can assist in reviewing materials, but judgment and ethical deliberation remain a human responsibility.
Auditing requires not only checking the numbers but also forming overall assessments. A project manager’s job demands interaction, communication, and continuous decision-making. Marketing revolves around negotiation, which is hard to fully automate.
“It is possible that AI will displace people in some clearly defined assistive tasks. The hope, however, is that in the future AI will primarily assist workers rather than displace them.”
Tomi Engdahl says:
Google says attackers used 100,000+ prompts to try to clone AI chatbot Gemini
Google says private companies and researchers are trying to copy Gemini’s capabilities by repeatedly prompting it at scale.
https://www.nbcnews.com/tech/security/google-gemini-hit-100000-prompts-cloning-attempt-rcna258657
Tomi Engdahl says:
AI visionary: Today’s AI systems don’t make scientific breakthroughs because they merely repeat existing knowledge
AI needs to stop flattering its users and start asking challenging questions, says Thomas Wolf of Hugging Face, a company promoting open AI.
https://yle.fi/a/74-202006459
Tomi Engdahl says:
MIT’s new fine-tuning method lets LLMs learn new skills without losing old ones
https://venturebeat.com/orchestration/mits-new-fine-tuning-method-lets-llms-learn-new-skills-without-losing-old
Tomi Engdahl says:
The pragmatist’s guide to AI-powered content operations
Stop chasing AI strategies. Start eliminating expensive content chores. A practical 30-day guide to implementing AI that delivers measurable value.
https://www.sanity.io/blog/the-pragmatists-guide-to-ai-powered-content-operations?utm_source=facebook.com&utm_medium=cpc&utm_campaign=Sanity-ACQ-Lead-2&utm_content=Lookalike-Stack&utm_term=Guide-AI-self-score-fill&fbclid=IwVERDUAP7JuNleHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5Lfo4wrdJHMtbTXyX11g5fAfNEnS4i9bQbEgeuBV7aWH_ZEGBtGxXeqZp03Q_aem_jmLbBAO2VXUoZTcXMNfdnA
Tomi Engdahl says:
https://openai.com/index/introducing-gpt-5-3-codex-spark/
Tomi Engdahl says:
How to Build an Atomic-Agents RAG Pipeline with Typed Schemas, Dynamic Context Injection, and Agent Chaining
https://www.marktechpost.com/2026/02/11/how-to-build-an-atomic-agents-rag-pipeline-with-typed-schemas-dynamic-context-injection-and-agent-chaining/
Tomi Engdahl says:
Develop AI-Powered Coding Skills: Embrace the future of software engineering. Get trained in AI-assisted programming to advance your career.
https://futurecoding.ai/?fbclid=IwdGRjcAP7qHBleHRuA2FlbQIxMQBzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR6cyZCyI_s2RoJBW5kVsSVOd5VxYlAX5h0_jzpmiZz0tfWX9CFDntwEm9kwPw_aem_ZjKaGMf5NBriq4A6fjEYPA
Tomi Engdahl says:
OpenAI Scales Single Primary PostgreSQL to Millions of Queries per Second for ChatGPT
https://www.infoq.com/news/2026/02/openai-runs-chatgpt-postgres/
Tomi Engdahl says:
Nvidia’s new technique cuts LLM reasoning costs by 8x without losing accuracy
https://venturebeat.com/orchestration/nvidias-new-technique-cuts-llm-reasoning-costs-by-8x-without-losing-accuracy
Researchers at Nvidia have developed a technique that can reduce the memory costs of large language model reasoning by up to eight times. Their technique, called dynamic memory sparsification (DMS), compresses the key value (KV) cache, the temporary memory LLMs generate and store as they process prompts and reason through problems and documents.
While researchers have proposed various methods to compress this cache before, most struggle to do so without degrading the model’s intelligence. Nvidia’s approach manages to discard much of the cache while maintaining (and in some cases improving) the model’s reasoning capabilities.
Tomi Engdahl says:
https://simonwillison.net/2026/Feb/11/glm-5/
Tomi Engdahl says:
65 lines of Markdown – a Claude Code sensation
https://tildeweb.nl/~michiel/65-lines-of-markdown-a-claude-code-sensation.html
Yesterday my employer organized an AI workshop. My company works a lot with AI-supported code editing, using Cursor, VS Code, and GitHub Copilot. Plus we do custom stuff using AWS Bedrock, agents using Strands, and so on: all the stuff everyone is working with nowadays.
Our facilitator explained how custom rules files can be so very helpful for AI tooling. He linked to this extension with Karpathy-Inspired Claude Code Guidelines as an example. Apparently this plugin is very popular! Yesterday morning the project had 3.5K stars and at the end of the day this already increased to 3.9K. That’s a lot of stars.
I went on to investigate what this extension actually does and found that it’s just one Markdown file, 65 lines long, that lays out four principles (the first is “Think Before Coding”), plus some packaging to make it install in Claude Code.
https://github.com/forrestchang/andrej-karpathy-skills/blob/main/CLAUDE.md
Tomi Engdahl says:
Stealing from Thieves
Google Says People Are Copying Its AI Without Its Permission, Much Like It Scraped Everybody’s Data Without Asking to Create Its AI in the First Place
Hypocrisy much?
https://futurism.com/future-society/google-copying-ai-permission?fbclid=IwdGRjcAP8fxljbGNrA_x-4mV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHlrCDU0xJNh2oeCde32ojNV-2rjDVmlXE0fLqnYGOZaYKn8-9E1p0BFnwkds_aem_fwO7mzX4IsI7bNY8pddm6Q
Google has relied on a tremendous amount of material without permission to train its Gemini AI models. The company, alongside many of its competitors in the AI space, has been indiscriminately scraping the internet for content, without compensating rightsholders, racking up many copyright infringement lawsuits along the way.
But when it comes to its own tech being copied, Google has no problem pointing fingers. This week, the company accused “commercially motivated” actors of trying to clone its Gemini AI.
In a Thursday report, Google complained it had come under “distillation attacks,” with agents querying Gemini up to 100,000 times to “extract” the underlying model — the convoluted AI industry equivalent of copying somebody’s homework, basically.
It’s far from the first time the subject of model distillation has caused drama. Chinese startup DeepSeek rattled Silicon Valley to its core in early 2025 after showing off a far cheaper and more efficient AI model. At the time, OpenAI suggested DeepSeek may have broken its terms of service by distilling its AI models.
The ChatGPT maker quickly became the subject of widespread mockery following the comments, with netizens accusing the company of hypocrisy, pointing out that OpenAI itself had indiscriminately ripped off other people’s work for many years.
Tomi Engdahl says:
Let it pop. It was always slop. AI has its place. But it’s a lot more flawed than people realize.
Andreas Sheriff dude. They said it would replace all jobs. Wow. It can cobble together boilerplate code. Wow. That’s worth billions. Wow.
Tomi Engdahl says:
Lock Stock
Investors Concerned AI Bubble Is Finally Popping
“We have suddenly gone from the fear that you cannot be last, to investors questioning every single angle in this AI race.”
https://futurism.com/artificial-intelligence/investors-concerned-ai-bubble-popping?fbclid=IwdGRjcAP8hfVjbGNrA_yF3mV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHtwX00cQvZfF53PcOz0tiVjHeOUiXytqMJJ2giTR2lHMealVEg0hc2IRBL8v_aem_A_TV-uOzDYNBz1vbEBSAqg
For quite some time now, investors have fought the suggestion that the artificial intelligence industry may be forming a massive bubble, risking an eventual collapse of epic proportions that could take down the US economy with it.
But shaking off those fears has proven increasingly difficult as the tech stock market reels from a major selloff this week.
Amazon shares tumbled nine percent Friday morning after the company said its spending would hit an astronomical $200 billion this year as part of its efforts to keep up in the ongoing AI race. Shares are down over eight percent over the last five days, indicating it’s not just a blip.
Microsoft has been hit hard lately as well, with shares also plunging almost eight percent over the last five days, following its biggest single-day loss since the pandemic last week.
Other tech companies, including AI chipmaker Nvidia, Oracle, Alphabet, and Meta, all saw their shares drop significantly this week as they indicated they remained committed to spending vast sums to scale up their AI infrastructure investments.
As CNBC reports, around $1.35 trillion in valuations has been wiped out as Big Tech companies committed to spending a total of $660 billion this year alone, a “breathtaking” figure, as AllianceBernstein head Jim Tierney told the Financial Times.
But it doesn’t take much reading between the lines to figure out that investors are becoming incredibly antsy about those enormous spending plans. A short-term return on investment for AI-focused infrastructure buildouts is certainly out of the question as tech leaders continue to reassure them that it will all be worth it in the end.
The selloff could mark the beginning of a much larger downturn, analysts fear.
“Questions over the extent of [capital expenditures] as a result of LLM build-outs, the eventual return on that, and the fear of eventual over-expansion of capacity will be persistent,”
“We have suddenly gone from the fear that you cannot be last, to investors questioning every single angle in this AI race,”
“The market is rethinking its approach to AI,” M&G chief investment officer for equities Fabiana Fedeli told the FT, arguing that investors are “a lot more selective in which companies [they] will decide to bet on.”
Tomi Engdahl says:
https://futurism.com/future-society/anthropic-war-openai?fbclid=IwdGRjcAP8kH5jbGNrA_yQL2V4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHmxXBrgVKV4SkT5rWZwbzUUDtmbLDAwsFcw2vwnbHU9L_WUTR_9AtF96xwtL_aem_UGE2vTIkjHtPoyvhnMGcOw
Tomi Engdahl says:
Your dev team isn’t a cost center — it’s about to become a multiplier
https://www.cio.com/article/4130291/your-dev-team-isnt-a-cost-center-its-about-to-become-a-multiplier.html
Why the smartest CIOs are thinking about AI-augmented software development differently
A recent keynote and a seemingly unrelated white paper, together, tell a story that should fundamentally change how you think about your software development organization.
In December at AWS re:Invent, Werner Vogels delivered his final keynote. Instead of announcing services, he spent his time on something far more valuable: telling us who developers need to become in the AI age.
In September, OpenAI released a white paper called GDPval that measured how AI performs against human experts across 44 occupations. The headline everyone noticed in the accompanying blog was that Claude Opus 4.1 hit 47.6% parity with human experts on economically valuable tasks, suggesting that Artificial General Intelligence (AGI) is around the corner. But the chart everyone should have noticed didn’t make it to the blog. It demonstrated how leaps in productivity are possible when AI works with a human-in-the-loop.
Here’s the punchline: Software development is absolutely being disrupted by AI. But if your response is “great, we can cut headcount,” you’re wasting a monumental opportunity.
What OpenAI’s GDPval actually shows
The headline chart of OpenAI’s GDPval blog showed AI models approaching parity with human experts on isolated tasks.
The overlooked chart shows something different: what happens when you use AI with human oversight rather than as a replacement.
Under a “try n times, then fix it yourself” scenario where an expert uses AI, reviews the output, resamples if needed, and steps in to complete or fix the work when necessary, GPT-5 high delivers about 1.6x cost improvement and 1.4x speed improvement compared to an unassisted human expert.
That’s “AI can make your developers significantly more productive.”
Let me give you a concrete example from my own work. Late last summer, a colleague asked me to analyze a particular submarket: identify key players, funding, valuations, headcount, and metrics. In the old days, this would have meant three to four hours of manual research. Instead, I had Claude Desktop pull the information in about 20 minutes.
It didn’t get everything right the first time. I had to provide additional context and refine the prompts. Then I had Gemini verify accuracy and produce a structured output. And the next step is where I focused my time: the high-value analysis of interpreting the data, connecting insights, and providing context based on my expertise. I used AI to accelerate the data collection and organization, not to replace my strategic thinking and expert analysis.
Now multiply that across your entire development organization.
The Renaissance developer and your developer strategy
In his keynote, Werner invoked the Renaissance, that explosive period after the Dark Ages when people like Leonardo da Vinci combined art, science, engineering, and curiosity into something transformative. His argument: we’re entering a similar moment for developers. But golden ages don’t just happen to you. You must adapt to become the kind of person who can thrive in them.
As leaders, we must build the kind of organizations that encourage developers to become what Werner calls the “Renaissance Developer.”
Trait 1: Be curious
Werner celebrated curiosity as foundational, not just tolerating failure, but embracing it as the only path to learning. Question everything. Experiment freely. Treat failures as data, not defeat.
Trait 2: Communicate
Communicating with LLMs and agents is just as ambiguous as communicating with people. We’ve spent decades learning that specificity reduces ambiguity in human collaboration. Now we’re interacting with AI systems that need clear, structured communication to produce useful output.
Trait 3: Be an owner
Werner directly addressed vibe coding, the increasingly popular approach where developers describe what they want and let AI generate the code. His take: fine if you watch closely. But you don’t get to use it as an excuse to abdicate responsibility.
Own the quality. Own the security. Own the functionality.
Strategic implication: AI will help your developers ship code faster. Without oversight, that means shipping bugs faster, too. The organizations that maintain human accountability for quality, while using AI for velocity, will massively outperform those that let AI become an excuse for reduced rigor.
Trait 4: Think in systems
Werner used the Yellowstone wolves as his illustration on this point. Reintroducing wolves to the area triggered a domino effect. The reduced elk population stopped overgrazing riverbanks, vegetation returned, erosion decreased, and the physical geography of the park shifted.
Strategic implication: Your developers need to lift their heads up from the code in front of them and see the bigger picture. How does their service interact with the twelve other services it touches? What happens when their database gets slow, not just to their app, but to everything downstream? When they’re working with AI systems, what feedback loops are they creating?
Trait 5: Be a polymath
Werner illustrated this with a progression: I-shaped people (deep expertise in one area), T-shaped people (deep expertise plus broad familiarity), and polymaths (deep expertise across multiple domains, like da Vinci). The future belongs to the polymaths.
Strategic implication: The architects who build the most elegant systems aren’t just good at infrastructure; they understand the business domain, the user experience, the organizational dynamics, the economics. AI handles the routine cognitive tasks; humans add value through cross-domain connections. Build teams that can make those connections, because AI will struggle to do so.
The real opportunity: Projects you couldn’t previously afford
If you treat AI as a pathway to eliminate developer headcount, sure, you’ll capture some cost savings in the short term. But you’ll miss the bigger opportunity entirely. You’ll be the bank executive in 1975 who saw ATMs and thought, “Great, we can close branches and fire tellers.” Meanwhile, your competitors have automated the mundane teller tasks and are opening new branches to sell higher-end services to more people.
The 1.4-1.6x productivity improvement that GDPval documented isn’t about doing the same work with fewer people. It’s about doing vastly more work with the same people.
That new product idea you had that was 10x too expensive to develop? It’s now possible. That customer experience improvement that could drive loyalty that you didn’t have the headcount for? It’s on the table. The technical debt you’ve been accumulating? You can start to pay it down.
When development teams become more efficient, the economically viable project portfolio expands dramatically, revealing new opportunities to ship more features, enter new markets, and build competitive moats.
What this means for your AI strategy
What struck me about Werner’s final keynote wasn’t the content; it was the intent. This was Werner’s last time at that podium. He could have done a victory lap through AWS’s greatest hits. Instead, he spent his time outlining a framework for success for the next generation of developers.
For those of us leading technology organizations, the framework is both validating and challenging. Validating because these traits aren’t new. They have always separated good developers from great ones. Challenging because AI amplifies everything, including the gaps in our capabilities.
What can you do?
First, stop framing AI investments primarily as cost reduction initiatives. Frame them as productivity multipliers, and your employees will stop living in fear.
Second, invest in the Renaissance developer traits across your organization. Curiosity, communication, ownership, systems thinking, polymathy. These capabilities separate high-performing AI-augmented teams from teams that just ship bugs faster.
Third, expand your project portfolio to match your expanded capacity. What projects have been sitting in the backlog because you didn’t have the headcount? Tackle them now.
Fourth, maintain human accountability for quality. AI-generated code still needs human verification. AI-assisted analysis still needs human judgment. Don’t let the velocity gains seduce you into removing human oversight.
Your development organization isn’t a cost center waiting to be optimized. It’s a productivity multiplier waiting to be unleashed. The only question is whether you’ll see it that way before your competitors do.
Tomi Engdahl says:
QuitGPT
Campaign Urges Users to Quit ChatGPT Over OpenAI’s Support for Trump and ICE
“Don’t support the fascist regime.”
https://futurism.com/future-society/boycott-chatpgpt-trump?fbclid=IwdGRjcAP87O5jbGNrA_zsyGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHhwh1FS7PHbl6uoVjjMVuFK2KwU4GPo8ekLbf1p6vX4zvr90ZWqkKYSbqmXT_aem_ewwjusaGRc8WUw9R_A4S9g
It isn’t exactly big news that big tech is in deep with the US government. Days after Trump’s inauguration last year, execs including OpenAI’s Sam Altman flocked to the Oval Office to announce a $500 billion AI infrastructure project — and they’ve remained deeply sycophantic ever since.
Now that obsequiousness could be coming back to haunt them. As reported by MIT Technology Review, activists critical of the Trump administration and the actions of Immigration and Customs Enforcement have started a campaign called QuitGPT, urging regular users to ditch OpenAI’s chatbot for good.
So far, the campaign boasts over 700,000 supporters of the boycott. The QuitGPT website lists a few different ways to participate: quitting ChatGPT outright, cancelling paid subscriptions, and spreading the word about the boycott with others on social media.
As for why, the activists behind the boycott point to OpenAI’s incredibly tight relationship with the Trump administration. As QuitGPT notes, OpenAI president Greg Brockman famously donated $25 million to a Trump Super PAC in 2025, while ICE uses an AI tool powered by ChatGPT for recruitment.
“They’re cozying up to Trump while ICE is killing Americans and the Department of Justice is trying to take over elections,” the QuitGPT organizers write on their website.
“ChatGPT enables mental-health crises through sycophancy and dependence by replacing human relationships with AI girlfriends/boyfriends. Many employees have quit OpenAI because of its leadership’s lies, deception and recklessness.”
Tomi Engdahl says:
https://futurism.com/artificial-intelligence/openai-fires-safety-exec-opposed-adult-mode?fbclid=IwVERDUAP87Z1leHRuA2FlbQIxMABzcnRjBmFwcF9pZAwzNTA2ODU1MzE3MjgAAR5efUobqZ0dqBU6lMAWJNLFI8DXC5qLXRryUG6xqKVxev2Vpyvv-RQJk6h0nw_aem_7PLpTBzqmfa_DHAbBCvz2A
Tomi Engdahl says:
AI makes better music than you
https://www.youtube.com/watch?v=NkNdnPYtXRw&list=TLPQMTEwMjIwMjZ7lLHw5CVo-A&index=2
Tomi Engdahl says:
Anthropic’s CEO says we’re in the ‘centaur phase’ of software engineering
https://www.businessinsider.com/anthropic-ceo-dario-amodei-centaur-phase-of-software-engineering-jobs-2026-2
Dario Amodei compared AI and humans working together to a mythical creature — the horse-and-human centaur.
“We’re already in our centaur phase for software,” Amodei said.
Software execs argue AI boosts engineer productivity, instead of cutting jobs.
Dario Amodei has a novel analogy to describe how AI and humans are working together.
Tomi Engdahl says:
I use the ‘Gravity’ prompt with ChatGPT every day — here’s how it finds and fixes weak ideas
Features
By Amanda Caswell
This simple prompt turns ChatGPT into a ruthless reality check
https://www.tomsguide.com/ai/i-use-the-gravity-prompt-with-chatgpt-every-day-heres-how-it-finds-and-fixes-weak-ideas
I’m the type of person who has notebooks full of ideas. Some are good, some are useless and some I haven’t even thought about again since writing them down. I also keep notes in my phone and sticky notes scattered across my office — a low-grade idea storm at all times.
That’s why I created a prompt that helps me bring my ideas back down to earth and exposes their weak points along the way. It works for just about any idea — or even when you can’t come up with one at all. In other words, it’s the calm after a brainstorm.
I stumbled on it after one too many rounds of asking ChatGPT to “improve” an idea, only to get polite, glossy feedback that made my thinking feel smarter than it actually was. The model would rephrase my half-baked logic in cleaner language, add a few encouraging transitions, and send me on my way feeling like a genius. But the core problems were still there — I just couldn’t see them anymore under all that polish.
That’s where what I call the Gravity prompt comes in. Instead of asking ChatGPT to brainstorm, expand, or “make this better,” it does the opposite: it forces the model to behave like a hostile critic whose sole job is to poke holes, surface blind spots and challenge shaky logic.
At its core, the Gravity prompt tells ChatGPT to stop being agreeable and start being adversarial. It’s designed to identify flawed assumptions, point out contradictions in your reasoning, highlight risks you’re overlooking, pressure-test your conclusions, and ultimately separate what merely sounds good from what actually holds up.
This is the exact prompt that I use with ChatGPT because it tends to be the most people-pleasing. But, you can use it with any chatbot:
The ‘Gravity’ prompt is: Act like gravity for my idea. Your job is to pull it back to reality. Attack the weakest points in my reasoning, challenge my assumptions, and expose what I might be missing. Be tough, specific, and do not sugarcoat your feedback. [Insert your idea].
ChatGPT’s response may surprise you because you’ll likely get a very different response than you’re used to — sharper, more skeptical and far less flattering. That’s the point.
This prompt works well because most people use AI as a hype machine. We ask it to refine, polish or expand our thinking, and the model is happy to oblige. ChatGPT is, by design, agreeable. It wants to be helpful, but that usually means building on what you’ve said rather than tearing it apart. The result is a false sense of confidence — your idea reads better, but its underlying logic hasn’t actually been tested.
Instead of building upward, this prompt actively pushes downward. It asks the model to find problems rather than solutions, weaknesses rather than strengths. And that resistance is exactly where real clarity emerges. When your idea survives the Gravity test — when you’ve addressed every objection the model throws at you — you know it’s actually solid.
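The same idea works outside the chat UI. A minimal sketch using the openai Node client, with the article’s Gravity prompt verbatim as the system message (the model choice and the sample idea are my assumptions):

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The Gravity prompt from the article, used as the system message
const gravityPrompt =
  "Act like gravity for my idea. Your job is to pull it back to reality. " +
  "Attack the weakest points in my reasoning, challenge my assumptions, and " +
  "expose what I might be missing. Be tough, specific, and do not sugarcoat your feedback.";

async function gravityCheck(idea) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4", // assumption: any capable chat model will do
    messages: [
      { role: "system", content: gravityPrompt },
      { role: "user", content: idea },
    ],
  });
  return completion.choices[0].message.content;
}

// Example: stress-test a (deliberately shaky) idea
console.log(await gravityCheck("Subscription box for artisanal ice cubes"));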
Tomi Engdahl says:
Microsoft confirms plan to ditch OpenAI — as the ChatGPT firm continues to beg Big Tech for cash
News
By Jez Corden
Google Deepmind co-founder and Microsoft AI lead Mustafa Suleyman suggests that the big tech firm is moving away from OpenAI reliance, as the latter’s financials look increasingly dire.
https://www.windowscentral.com/artificial-intelligence/microsoft-confirms-plan-to-ditch-openai-as-the-chatgpt-firm-continues-to-beg-big-tech-for-cash
In a move that is perhaps less surprising than you might think, Microsoft seems to be readying up to dump OpenAI.
Right now, Microsoft’s entire AI operation is powered by ChatGPT and other OpenAI models, including DALL-E 3. Microsoft has shown impressive growth and demand for its enterprise-grade AI tools, including Microsoft 365 Copilot and GitHub Copilot, even if its consumer-level efforts have largely fallen flat.
Microsoft and OpenAI have long been rumored to have a tumultuous relationship. Microsoft was a very early investor in OpenAI, and won itself some incredibly lucrative contracts as a result, including some forms of exclusivity over OpenAI’s models. Indeed, Microsoft still holds 27% of the new “for profit” arm of OpenAI, and maintains IP rights for OpenAI’s models until 2032.
However, Microsoft and OpenAI re-worked aspects of the deal last October, freeing up OpenAI to seek compute from competing cloud firms, and allowing Microsoft to divest itself of some of the endlessly spiralling risk.
OpenAI is notoriously on the hook for over a trillion dollars in future compute contracts, with big tech companies like Microsoft, Amazon, SoftBank, and others artificially propping up the company. Run by Sam Altman, OpenAI has been mired in almost constant controversy, its balance sheet notwithstanding. The firm has yet to turn a profit and requires near-constant cash injections to stay afloat.
Things may be about to get even rockier for OpenAI, as Microsoft AI chief Mustafa Suleyman just confirmed to the FT (paywalled) that the firm is gearing up to ditch OpenAI’s models:
“We have to develop our own foundation models, which are at the absolute frontier, with gigawatt-scale compute and some of the very best AI training teams in the world,” Suleyman said.
Tomi Engdahl says:
z.ai’s open source GLM-5 achieves record low hallucination rate and leverages new RL ‘slime’ technique
https://venturebeat.com/technology/z-ais-open-source-glm-5-achieves-record-low-hallucination-rate-and-leverages
Tomi Engdahl says:
AI generated friends
https://youtu.be/x1IFpHyhYWI?si=PdgZjBJVfxgszLCZ
Tomi Engdahl says:
Friends – the one with AI
https://youtu.be/Vc2YoNHgGVU?si=96SZIleoGvRHotnJ
Tomi Engdahl says:
https://www.facebook.com/share/p/186YLwMhT6/
The market economy cannot withstand AI, and corporate executives know it.
Have you ever stopped to think about the greatest paradox of the current AI revolution? It is this:
If AI succeeded at what the technology companies promise (namely, replacing human labor at scale), our entire current economic system would collapse.
The logic of the market economy is brutally simple: companies produce goods and services that people buy with their wages. If we remove the workers (and their pay) from the equation, we remove the consumers too. Who buys all those AI-optimized products if no one can afford them anymore?
This is a dead end that nobody talks about out loud. The market economy cannot function once truly capable AI has been developed.
I would argue that in the boardrooms of the tech giants, nobody genuinely believes in, or even hopes for, the end of work. They are smart people; they understand that it would be the suicide of the market economy itself. And investors want it even less.
So why is AI being developed with such vigor, if it would mean the end of the jobs of everyone developing it, too?
Because stock prices demand it. We live in a theater economy. Companies are forced to show investors that they are “at the cutting edge” and making their operations more efficient, even if in reality it is all an illusion. In this sense, AI is the perfect smokescreen: under its cover, the illusion of endless growth can be kept alive while the problems of the real economy are papered over.
We are in a situation where the system forces companies to lie about the future in order to survive in the present.
What do you think: is the AI craze genuine technological progress, or the biggest stock-market bubble in history, keeping an aging economic system on life support?
I also just made a video about “AI-washing”, which companies use to dress up their layoffs; you can find it here https://youtu.be/BFRywIl3mHQ
Tomi Engdahl says:
Google: Gemini AI targeted by massive copying attacks, with up to 100,000 requests
https://mobiili.fi/2026/02/14/google-gemini-tekoalya-yritetty-kopioida-massiivisilla-hyokkayksilla-jopa-100-000-pyyntoa/
Google says its Gemini AI has been the target of copying attempts by large-scale, commercially motivated actors.
According to Google, some of the attacks have involved over 100,000 separate requests to Gemini, with the aim of replicating how the AI service works.
In its report, Google describes so-called “distillation attacks”. In practice, these are repeated, systematic queries designed to uncover an AI model’s internal logic and operating principles.
According to Google, the attacks are most likely the work of private companies or researchers trying to improve their own AI models. Google did not disclose further details about the suspected parties, but said the attacks came from all over the world.
Google regards the activity as intellectual-property theft.
OpenAI, known for ChatGPT, has previously reported similar attempts to copy its AI models. Last year, OpenAI accused its Chinese competitor DeepSeek of “distilling” OpenAI’s models to develop its own AI solutions.
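For readers unfamiliar with the term, here is a minimal sketch of what distillation means technically, using two toy PyTorch models rather than any real API: a student network is trained to match a teacher’s output distribution. Attackers reportedly approximate the same idea by harvesting a proprietary model’s responses at scale; everything below is an illustration, not Google’s or anyone’s actual attack code.

```python
# A toy knowledge-distillation loop: the student learns to imitate the
# teacher's output distribution from (query, response) pairs alone.
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(16, 4)   # stand-in for the proprietary model
student = torch.nn.Linear(16, 4)   # the copycat being trained
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0                  # softens distributions for distillation

for step in range(1000):
    queries = torch.randn(32, 16)              # batch of probing inputs
    with torch.no_grad():
        teacher_logits = teacher(queries)      # the "harvested" responses
    student_logits = student(queries)
    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The scale is the tell: each query leaks a little of the teacher’s behavior, which is why bursts of 100,000+ requests read as a copying attempt rather than ordinary use.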
Tomi Engdahl says:
https://www.facebook.com/share/p/16CJYBNcs3/
The money is running out.
OpenAI is struggling to fund its operations, and now Microsoft has had enough? Microsoft is reportedly ending the partnership and the funding.
xAI’s financial position is also shaky. Musk would hardly have merged the companies with SpaceX unless he had seen that funding would be easier to raise with the promise of space technology leading the pitch. Musk has since started squeezing out the AI staff, as the cash burn is dragging all of SpaceX into the red.
Private Credit Bubble… Here it is, and now there is a frantic rush to arrange stock-market listings, because YOU are the exit liquidity for these big investors.
https://www.tekniikkatalous.fi/uutiset/a/36341284-722f-4a36-ae50-a46c40f14243
Tomi Engdahl says:
A viral new Chinese AI tool has ‘Tom Cruise’ and ‘Brad Pitt’ fighting over Jeffrey Epstein: https://mrf.lu/3-PT
Tomi Engdahl says:
“AI makes you smarter but none the wiser.”
Mind Games
AI Is Causing a Grim New Twist on the Dunning-Kruger Effect, Research Finds
AI users are lacking in self-awareness.
https://futurism.com/artificial-intelligence/ai-dunning-kruger-effect?fbclid=IwdGRjcAP-Vc1jbGNrA_5VlGV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHsiQWQTZOut6FUjqyroJBSkU_R83bLiRvgCdAQLohYS-3U7IW7butVqo3G_A_aem_im15TxhVHpecZJ6UPeuT_g
People who are the worst at doing something also tend to severely overestimate how good they are at doing it, while those who are actually skilled tend to not realize their true talent.
This galling cognitive bias is called the Dunning-Kruger effect, as you’re probably aware. And would you believe it if we told you that AI appears to make it even worse?
Case in point: a new study published in the journal Computers in Human Behavior, memorably titled “AI Makes You Smarter But None the Wiser,” showed that everyone was bad at estimating their own performance after being asked to complete a series of tasks using ChatGPT. And strikingly, it was the participants who were “AI literate” who were the worst offenders.
“When it comes to AI, the [Dunning-Kruger effect] vanishes,” study senior author Robin Welsch, a professor at Aalto University, said in a statement about the work. “In fact, what’s really surprising is that higher AI literacy brings more overconfidence.”
“We would expect people who are AI literate to not only be a bit better at interacting with AI systems, but also at judging their performance with those systems,” Welsch added, “but this was not the case.”
It’s an interesting detail that helps build on our still burgeoning understanding of all the ways that our AI habits are probably bad for our brains, from being linked with memory loss to atrophying our critical thinking skills. Perhaps it’s also a testament to the ego of the AI power user.
Notably, the findings come amid heated debate around the dangerous “sycophancy” of AI models. Chatbots designed to be both helpful and engaging constantly ply users with flattery and go along with their demands. It’s an addictive combination that makes you feel smart or vindicated.
The researchers found that the group that used ChatGPT substantially improved their scores compared to the group that didn’t. But they also vastly overestimated their performance — and the effect was especially pronounced among the AI savvy, “suggesting that those with more technical knowledge of AI were more confident but less precise in judging their own performance,” the authors wrote.
When they examined how the participants used the chatbot, the team also discovered that the majority of them rarely asked ChatGPT more than one question per problem, with no further probing or double-checking. According to Welsch, this is an example of what psychologists call cognitive offloading, a well-documented pattern in which users outsource all their thinking to an AI tool.
“We looked at whether they truly reflected with the AI system and found that people just thought the AI would solve things for them,” Welsch said. “Usually there was just one single interaction to get the results, which means that users blindly trusted the system.”
You’ve got to hand it to AI: it’s democratizing the Dunning-Kruger effect.
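As an illustration only (this is not the study’s code or data), the miscalibration the researchers describe can be expressed as the gap between self-estimated and actual scores, averaged per group; the numbers below are invented to show the shape of the comparison.

```python
# Toy calibration check: overconfidence as mean(estimated - actual) score.
# All data here is made up; a positive gap means the group overestimated.
from statistics import mean

# Hypothetical (self-estimated score, actual score) pairs per participant.
low_ai_literacy = [(7, 6), (5, 5), (8, 6)]
high_ai_literacy = [(9, 6), (10, 7), (9, 5)]

def overconfidence(group):
    """Mean gap between estimated and actual performance."""
    return mean(est - act for est, act in group)

print("low literacy: ", overconfidence(low_ai_literacy))   # smaller gap
print("high literacy:", overconfidence(high_ai_literacy))  # larger gap
```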
Tomi Engdahl says:
Think of AI as an escalator. You can choose to stand on it and have the escalator do all the work and move you at one slow speed, or you can walk on the escalator and use it as a tool to make you move faster…
Tomi Engdahl says:
AI doesn’t create ignorance.
It amplifies whatever system you already have.
If you’re structured, you get leverage.
If you’re not, you get overconfidence.
The real risk isn’t AI.
It’s operating without architecture.
Tomi Engdahl says:
Artificial intelligence is power-hungry.
That’s not sci-fi doomsaying, just literal reality. Training models requires enormous amounts of computational power and electricity. But the founders of The Biological Computing Company (TBC) have a proposed solution: parsing visual data through dishes of neurons, which process it in a way that AI models can understand.
How does this work? Here’s the gist: https://www.forbes.com/sites/the-prototype/2026/02/12/this-startup-is-boosting-ai-with-real-brain-cells/?utm_campaign=ForbesMainFB&utm_source=ForbesMainFacebook&utm_medium=social
Tomi Engdahl says:
These are the most Gen Z Olympics yet. Oversharing, activism, and AI are all on the podium.
https://www.businessinsider.com/winter-olympics-gen-z-sturla-holm-laegreid-norway-biathlon-overshare-2026-2?utm_source=facebook&utm_medium=social&utm_campaign=insider-photo-headline-post-comment&fbclid=IwdGRjcAP-2LtjbGNrA_7YjmV4dG4DYWVtAjExAHNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHnKZQPT9FHFEtK6dRCtQbKCND8JbcMN8z_LJ-UOZfuCjTFw5EVl_FVyPwnO4_aem_o1_KU1MNcDNOyMPi-R61-w
A Norwegian biathlete turned heads when he confessed to cheating on his ex in a slopeside interview.
His oversharing is one example of how deeply Gen Z these Winter Olympics are.
Other proof of Gen Z life: skiers talking politics and ice skaters using AI-generated music.