Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed using AI tools, and the final version was hand-edited into this blog text:
1. Generative AI Continues to Mature
Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.
2. AI Agents Move From Tools to Autonomous Workers
Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
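The agent loop described above can be sketched in a few lines: a planner (in practice an LLM call) repeatedly picks the next tool to invoke and observes the result until the goal is met. Everything below (plan_next_step, the TOOLS table) is an illustrative stand-in, not any specific product's API:

```python
def plan_next_step(goal, history):
    # Stand-in for an LLM call that returns (tool_name, argument),
    # or None once the goal is satisfied.
    if not history:
        return ("search", goal)
    if len(history) == 1:
        return ("summarize", history[-1][1])
    return None

# Hypothetical tools the agent is allowed to call.
TOOLS = {
    "search": lambda q: f"results for {q!r}",
    "summarize": lambda text: f"summary of {text!r}",
}

def run_agent(goal):
    # The agentic loop: plan, act, observe, repeat.
    history = []
    while (step := plan_next_step(goal, history)) is not None:
        tool, arg = step
        observation = TOOLS[tool](arg)  # act on the user's behalf
        history.append((tool, observation))
    return history
```

Real agent frameworks wrap this loop in guardrails: step limits, tool permissions, and human approval for sensitive actions.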
3. Smaller, Efficient & Domain-Specific Models
Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate many enterprise applications. These models can be more accurate, easier to keep compliant, and more cost-efficient than general-purpose models.
4. AI Embedded Everywhere
AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.
5. AI Infrastructure Evolves: Inference & Efficiency Focus
More investment is going into inference infrastructure — the production stage where trained models serve real-time requests — optimizing cost, latency, and scalability. Enterprises are also consolidating AI stacks for better governance and compliance.
6. AI in Healthcare, Research, and Sustainability
AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.
7. Security, Ethics & Governance Become Critical
With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.
8. Multimodal AI Goes Mainstream
AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.
9. On-Device and Edge AI Growth
More AI workloads will run directly on phones, PCs, and embedded hardware rather than in the cloud, improving latency, privacy, and offline availability.
10. New Roles: AI Manager & Human-Agent Collaboration
Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.
Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/ “7 AI Trends to Look for in 2026”
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/ “10 Generative AI Trends In 2026 That Will Transform Work And Life”
[3]: https://millipixels.com/blog/ai-trends-2026 “AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels”
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026 “Digital Regenesys | Top 10 AI Trends for 2026”
[5]: https://www.n-ix.com/ai-trends/ “7 AI trends to watch in 2026 – N-iX”
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/ “Microsoft unveils 7 AI trends for 2026 – Source Asia”
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026 “7 Generative AI Trends to Watch In 2026”
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/ “3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool”
[9]: https://www.reddit.com/r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/ “I read Google Cloud’s ‘AI Agent Trends 2026’ report, here are 10 takeaways that actually matter”
Tomi Engdahl says:
Open Multi-Agent
TypeScript framework for multi-agent orchestration. One runTeam() call from goal to result — the framework decomposes it into tasks, resolves dependencies, and runs agents in parallel.
https://github.com/JackChen-me/open-multi-agent/blob/main/README.md
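The goal → tasks → parallel execution flow the README describes can be illustrated generically. The sketch below is not Open Multi-Agent's actual API, just the underlying scheduling idea: tasks whose dependencies are all finished run together in parallel “waves”:

```python
from concurrent.futures import ThreadPoolExecutor

def run_tasks(tasks, deps):
    """tasks: {name: callable}; deps: {name: set of prerequisite names}."""
    done, results = set(), {}
    while len(done) < len(tasks):
        # Every task whose prerequisites have all completed can run now.
        ready = [t for t in tasks if t not in done and deps.get(t, set()) <= done]
        if not ready:
            raise ValueError("dependency cycle detected")
        # Run the whole wave in parallel.
        with ThreadPoolExecutor() as pool:
            for name, out in zip(ready, pool.map(lambda t: tasks[t](), ready)):
                results[name] = out
        done.update(ready)
    return results
```

In a multi-agent framework, each callable would be an agent invocation; the same wave-scheduling logic applies.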
Tomi Engdahl says:
Microsoft Agent Framework Version 1.0
https://devblogs.microsoft.com/agent-framework/microsoft-agent-framework-version-1-0/
Tomi Engdahl says:
https://hackaday.com/2026/03/29/training-a-transformer-with-1970s-era-technology/
Tomi Engdahl says:
I Built an AI Agent Team for Software Development and Tested on 5 Real Projects
I assigned agents to PM, SWE, QA, and on-call roles and used the setup across five different software projects.
https://alexeyondata.substack.com/p/i-built-an-ai-agent-team-for-software
Tomi Engdahl says:
Arcee’s new, open source Trinity-Large-Thinking is the rare, powerful U.S.-made AI model that enterprises can download and customize
https://venturebeat.com/technology/arcees-new-open-source-trinity-large-thinking-is-the-rare-powerful-u-s-made
Tomi Engdahl says:
How is GEO changing SEO? (article in Finnish)
https://www.hopkins.fi/artikkelit/geo-seo/
Tomi Engdahl says:
AI De-Tractors
AI-Powered Tractor Startup Burns Through a Quarter Billion Dollars, Fires All Employees in Epic Implosion
https://futurism.com/robots-and-machines/ai-tractor-startup-founders
Tomi Engdahl says:
https://thenewstack.io/persistent-ai-agents-compared/
Tomi Engdahl says:
Directing a Swarm of Agents for Fun and Profit
https://www.infoq.com/presentations/coding-agents/
Tomi Engdahl says:
The hidden technical debt of agentic engineering
Agents are easy to build but hard to run. At Port, we mapped seven blocks of hidden infrastructure debt with AI agents in enterprise systems.
https://thenewstack.io/hidden-agentic-technical-debt/
Tomi Engdahl says:
Microsoft launches 3 new AI models in direct shot at OpenAI and Google
https://venturebeat.com/technology/microsoft-launches-3-new-ai-models-in-direct-shot-at-openai-and-google
Tomi Engdahl says:
Softr launches AI-native platform to help nontechnical teams build business apps without code
https://venturebeat.com/technology/softr-launches-ai-native-platform-to-help-nontechnical-teams-build-business
Tomi Engdahl says:
Imagine if your Teams or Slack messages automatically turned into secure context for your AI agents — PromptQL built it
https://venturebeat.com/data/imagine-if-your-teams-or-slack-messages-automatically-turned-into-secure
Tomi Engdahl says:
Passing Gas
The Iran War Has Cut Off Supply of a Gas the AI Industry Desperately Needs
“The first victims are party balloons.”
https://futurism.com/artificial-intelligence/helium-ai-iran-war
Forget gas prices and fertilizer. One of the biggest casualties of the US war on Iran could be your favorite AI chatbot.
As the hare-brained conflict enters its fifth week, the tech industry is raising alarm about a growing shortage of helium, the odorless gas that makes birthday balloons lighter than air — and which, it turns out, is silently undergirding the AI boom.
Tomi Engdahl says:
I connected Claude Code to my home server through MCP, and now I manage my entire lab by talking to it
https://www.xda-developers.com/connected-claude-code-through-mcp-manage-entire-lab-by-talking/
Setting up a homelab server is straightforward. Install a base operating system, choose a Docker tool like Portainer, get the Compose file for a service, and deploy it. Managing the server and its services is where it gets tedious. Fixing an issue isn’t the tough part; finding the root cause takes more time. For a simple issue, I had to open Portainer to find the container, SSH to read the logs, then switch to Uptime Kuma to check whether it was actually down, and Beszel to cross-check the server usage. Even if I got these things right, I had to put the pieces together to understand what the actual problem was.
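The manual loop described here (check whether a container is actually down, then read its recent logs) is exactly what an MCP-connected agent automates. A minimal sketch of the same diagnostics using the Docker CLI, with a hypothetical `diagnose` helper and an injectable command runner so the logic can be tested without a Docker daemon:

```python
import subprocess

def run_cli(args):
    # Run a CLI command and return its stdout as text.
    return subprocess.run(args, capture_output=True, text=True).stdout

def diagnose(container, run=run_cli):
    # `run` is injectable so the logic works without Docker installed.
    state = run(["docker", "inspect", "-f", "{{.State.Status}}", container]).strip()
    logs = run(["docker", "logs", "--tail", "20", container])
    return {"container": container, "state": state, "recent_logs": logs}
```

An agent exposed to such tools over MCP can chain them: fetch state, pull logs, and summarize the likely root cause in one conversation turn.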
Tomi Engdahl says:
Agentic AI Patterns Reinforce Engineering Discipline
https://www.infoq.com/news/2026/03/agentic-engineering-patterns/
On a recent AI DevOps Podcast, Paul Duvall discussed how agentic AI patterns are reinforcing core engineering discipline as the capability of modern models increases. He also shared his repository of agentic AI engineering patterns, where he is documenting and evolving practices for AI assisted software development.
Duvall, author of Continuous Integration: Improving Software Quality and Reducing Risk, positioned his collection of patterns as an exploration of how established engineering practices are being adapted through hands-on use of agentic AI in client work. He emphasised grounding AI generated output in shared patterns, stating that “engineering practices are becoming even more relevant when you have AI generating code.”
Tomi Engdahl says:
“Whoever controls AI controls the world” – how does the geopolitical situation and other threat scenarios affect cybersecurity?
Cybersecurity is now a central factor in global power dynamics, as artificial intelligence, geopolitical tensions, and quantum computing are rapidly reshaping threats. Amid this, Finland – renowned for its preparedness and cross-sector crisis training – faces new challenges. How is the country responding?
https://www.dna.fi/dnabusiness/blogi/-/blogs/whoever-controls-ai-controls-the-world-how-does-the-geopolitical-situation-and-other-threat-scenarios-affect-cybersecurity
Tomi Engdahl says:
Meta’s new structured prompting technique makes LLMs significantly better at code review — boosting accuracy to 93% in some cases
https://venturebeat.com/orchestration/metas-new-structured-prompting-technique-makes-llms-significantly-better-at
Tomi Engdahl says:
https://thenewstack.io/cursor-self-hosted-coding-agents/
Tomi Engdahl says:
Crunching Numbers
Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns
“Chatbots seem to encourage, or at least play a role in, delusional spirals that people are experiencing.”
https://futurism.com/artificial-intelligence/study-chats-delusional-users-ai
Tomi Engdahl says:
https://towardsdatascience.com/how-to-make-claude-code-better-at-one-shotting-implementations/
Tomi Engdahl says:
Kubescape 4.0 Brings Runtime Security and AI Agent Scanning to Kubernetes
https://www.infoq.com/news/2026/03/kubescape-40/
Tomi Engdahl says:
Memory-makers’ shares are down. Some RAM prices have eased. Blaming Google is not a good idea
Chocolate Factory boffins have found a way to reduce AI’s memory use, but don’t assume that means less demand for DRAM
https://www.theregister.com/2026/03/31/google_turboquant_memory_market_impact/
Tomi Engdahl says:
Microsoft Is Drowning the Internet in AI Slop
A manifesto documenting the systematic flooding of the web with low-quality, synthesized, and unverified content
https://microslop.com/
Tomi Engdahl says:
Meet A-Evolve: The PyTorch Moment For Agentic AI Systems Replacing Manual Tuning With Automated State Mutation And Self-Correction
https://www.marktechpost.com/2026/03/29/meet-a-evolve-the-pytorch-moment-for-agentic-ai-systems-replacing-manual-tuning-with-automated-state-mutation-and-self-correction/
Tomi Engdahl says:
How to build an enterprise-grade MCP registry
Mar 30, 2026
MCP registries are emerging as the new integration catalog for AI agents. Building one for the enterprise requires semantic discovery, strong governance, and developer-friendly controls.
https://www.infoworld.com/article/4145014/how-to-build-an-enterprise-grade-mcp-registry.html
Tomi Engdahl says:
Why New Google-Agent May Be A Pivot Related To OpenClaw Trend
Why Shift of Googlers from Project Mariner to Gemini Agent may be related to the new Google-Agent crawler and the growing LAM competition.
https://www.searchenginejournal.com/why-new-google-agent-may-be-a-pivot-related-to-openclaw-trend/570764/
Tomi Engdahl says:
ChatGPT Data Leakage via a Hidden Outbound Channel in the Code Execution Runtime
https://research.checkpoint.com/2026/chatgpt-data-leakage-via-a-hidden-outbound-channel-in-the-code-execution-runtime/
Key Takeaways
Sensitive data shared in ChatGPT conversations could be silently exfiltrated without the user’s knowledge or approval.
Check Point Research discovered a hidden outbound communication path from ChatGPT’s isolated execution runtime to the public internet.
A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content.
A backdoored GPT could abuse the same weakness to obtain access to user data without the user’s awareness or consent.
The same hidden communication path could also be used to establish remote shell access inside the Linux runtime used for code execution.
Tomi Engdahl says:
https://www.lightreading.com/network-automation/towards-autonomous-networks-with-agentic-ai-and-real-time-simulation
Tomi Engdahl says:
Laser Chip Brings Multiplexing to AI Data Centers
Feeding more optical signals into one fiber will reduce latency
https://www.lightreading.com/security/at-t-ericsson-call-for-5g-network-security-rethink
Tomi Engdahl says:
Google Maps is transformed: Gemini AI turns the map into a conversational partner and brings 3D navigation (article in Finnish)
https://dawn.fi/uutiset/2026/03/12/google-maps-gemini-tekoaly-ask-maps-immersive-navigation
Tomi Engdahl says:
https://www.dna.fi/yrityksille/teknologiatrendit2026
Tomi Engdahl says:
Altman said he felt “super sad” about blowing up the $1 billion deal.
OpenAI CEO Sam Altman has finally dished on Disney’s reaction to his decision to kill the company’s AI video generator app, Sora — scuttling a billion dollar deal the two giants had planned.
Given the amount of money Disney was prepared to invest, and the suddenness of the decision, speculation abounded on the drama behind the scenes. But in an interview on the “Mostly Human” podcast — the first he’s given since the Sora news — Altman insists that emotions were cool.
When he broke the news to Disney CEO Josh D’Amaro, the first thing he told Altman was, “I get it,” Altman recalled.
“But it’s super sad always to disappoint a partner or users or a team, all of which are doing incredible work,” he said.
Both companies sound cagey about burning bridges. AI, bubble or not, is too buzzy a market to shut the door on one of the industry’s leading companies. And Disney’s cultural clout is too influential to ignore.
Altman, in the interview, left the door open to a future collab.
“I love Sora, I love generated videos, and I love our partnership with Disney, and we’re working hard with them to find a world where they can still do something amazing, and we can help with that,” Altman said. “But we need to concentrate our compute and our product capacity into these next generation of automated researchers and companies.”
Disney’s response to the Sora shutdown was lukewarm, underscoring its continued openness to AI tech.
Tomi Engdahl says:
https://jco.fi/fi/mika-on-llms-txt-ja-miten-se-parantaa-verkkosivujen-nakyvyytta-tekoalyhaussa/
Tomi Engdahl says:
Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage
https://techcrunch.com/2026/04/04/anthropic-says-claude-code-subscribers-will-need-to-pay-extra-for-openclaw-support/
Tomi Engdahl says:
Inside the Claude Code source
https://gist.github.com/Haseeb-Qureshi/d0dc36844c19d26303ce09b42e7188c1
Anthropic’s Claude Code CLI source code leaked onto GitHub recently. All of it. About 1,900 files and a lot of TypeScript.
I read through the key modules. What follows is a breakdown of the surprising parts: how the system actually works, where Anthropic made clever engineering choices, and where their approach diverges from OpenAI’s Codex in ways you wouldn’t guess from using either tool.
Tomi Engdahl says:
Anthropic announces free Claude update for Microsoft 365 users, details here
Anthropic has announced that you can now connect your Microsoft 365 data to Claude on all plans, including the free tier. This will allow Claude to access your emails, documents, and other files across Microsoft apps such as Outlook and OneDrive. Previously, this was restricted to Team or Enterprise users.
https://www.indiatoday.in/technology/news/story/anthropic-announces-free-claude-update-for-microsoft-365-users-details-here-2891472-2026-04-04
Tomi Engdahl says:
A turning point in the AI battle (article in Finnish)
In the competition between AI tools, Claude is overtaking ChatGPT. Will ChatGPT remain the people’s AI, asks HS Visio reporter Elina Lappalainen.
https://www.hs.fi/visio/art-2000011916902.html
Tomi Engdahl says:
Influencer Era
Groups Set Up to Shill AI and Data Centers Are Pouring Huge Sums of Money Into the Midterm Elections
“The cavalry is coming to back up the policymakers who stand with the president and will hold accountable the ones who don’t.”
https://futurism.com/future-society/ai-pacs-trump-lobbying
Artificial intelligence remains deeply unpopular with the American public. One poll found it’s even more reviled than ICE, which is no small feat given the mass protests that erupt whenever the agency’s goons march into another US city.
A few political action groups are hoping to turn that around. Going into the 2026 midterm elections, the Financial Times reports, newly-formed PACs with major tech industry backing are spending hundreds of millions of dollars to shape how voters think about AI regulation.
Some of the groups cast a wide net, like Leading the Future, a super PAC backed by Trump donors and AI barons like OpenAI co-founder Greg Brockman, Palantir co-founder Joe Lonsdale, and tech venture capital giant Andreessen Horowitz. Founded in August of 2025, Leading the Future has raised over $125 million to back pro-AI candidates who oppose state-level regulations, according to the FT.
Others, like the pro-regulation PAC Public First Action, serve as vehicles for individual AI companies to push their agendas. Backed solely by Anthropic, this group aims to raise $75 million to boost candidates who want to preserve individual states’ rights to regulate AI.
Mark Zuckerberg’s Meta also has its own pet super PAC, the American Technology Excellence Project, which aims to spend $65 million on state-level candidates who will “defend American tech leadership at home and abroad” — a fluffy way of saying “oppose AI regulation.”
This jockeying over states’ rights to regulate AI is the key question in the 2026 PAC wars.
Bankrolling that push is Innovation Council Action, a hawkish super PAC backed by Trump advisor and PayPal mafioso David Sacks and led by former Trump communications aide Taylor Budowich.
That PAC marks a major challenge to groups like Leading the Future, which Trump and his cabinet found to be insufficiently loyal.
“President Trump has made it clear, America will win the AI race against China, period,” Budowich told Fox. “He built the framework, he’s leading from the front, and this organization exists to make sure he doesn’t fight that battle alone. The cavalry is coming to back up the policymakers who stand with the president and will hold accountable the ones who don’t.”
Tomi Engdahl says:
(Artificial) intelligence ahoy, don’t fail me! (a play on the Finnish saying “Äly hoi, älä jätä!”)
Tomi Engdahl says:
Clock In
AI Expert Says It’s Time to Stop Freaking Out About AI Taking Our Jobs
That’s a relief.
https://futurism.com/artificial-intelligence/ai-jobs-automation-expert
Few fears have taken hold of the public imagination quite like the specter of AI-driven unemployment. A recent survey by the think tank Data for Progress found that a majority of US voters think AI is likely to increase unemployment rates, a scenario that often feels like it’s playing out in the horrid job market.
It makes a compelling narrative. Yet in a recent article published in Fortune, New York University cognitive scientist emeritus and prominent AI critic Gary Marcus makes a strong case in the opposite direction: that AI isn’t coming for anyone’s job anytime soon.
Plenty of the fear mongering, Marcus writes, comes down to good ol’ propaganda. The AI industry, for example, wants you to believe that artificial general intelligence — a still-theoretical form of AI with intellectual capabilities that rival or surpass that of humans — is either already here, or just around the corner.
As Marcus notes, this kind of AI remains firmly in the realm of science fiction, regardless of what tech executives would like anybody to believe. “[T]hey might be covering their bases in case that actually happens,” he writes, “but then again, maybe they just want you to drive up the valuations of their companies.”
The math on AI-driven unemployment likewise doesn’t add up, Marcus writes, including that by AI companies themselves. Take Anthropic, the company behind the AI chatbot Claude: its CEO Dario Amodei has made a spectacle of himself warning of a growing AI job apocalypse, even though his company’s own research department found “no systematic increase in unemployment for highly exposed workers since late 2022.”
What’s really happening, Marcus argues, is that corporations are AI-washing their investor reports to cash in on the hype. “In many cases AI may be serving as a fig leaf to cover layoffs that are actually driven by financial underperformance or earlier overhiring,” he explains.
Where mass layoffs are explicitly blamed on AI, they tend not to last, like at the finance tech company Klarna, which had to reverse course on its decision to automate customer service workers just 11 months after it declared them obsolete.
Unfortunately for anyone praying for a cyberpunk future, so far the AI apocalypse seems to have landed with a wet plop. Luckily, there are plenty of real life crises to keep us occupied until the singularity ever comes — if it does at all, that is.
Tomi Engdahl says:
Seeking Alpha
If You’re a Real Person Looking for a Job, the Flood of Fake AI Job Applications Will Make Your Blood Boil
“Within 12 hours of posting the role, we received more than 400 applications.”
https://futurism.com/artificial-intelligence/job-ai-applications-markup
And beneath the official jobs data is a growing accessibility crisis. More and more job seekers are finding themselves shut out of the labor market — not because there are no jobs to be had, but because torrents of AI slop are crowding them out of consideration.
“Within 12 hours of posting the role, we received more than 400 applications,” Losowsky explained. “At first, most of these candidates seemed to be genuine. However, as the person who had to read them all, I quickly saw some red flags, which were all clear indicators of inauthenticity.”
Those “red flags” included repeating contact information, broken or nonworking links to LinkedIn profiles, repetitive resume formatting, and non-residential mailing addresses.
In a response to prompts on the company’s application form, most followed a “near-identical four-sentence pattern with minor variations.” A number of applications included “ChatGPT says” in their answers, or included information that “almost perfectly matched our job description,” Losowsky writes.
“In the most extreme case, one person claimed they had built our website and Blacklight [web privacy] tool (they hadn’t),” the editor continues.
The publication has since found their engineer, but not without significant headaches. If you extrapolate this out to the rest of the job market, it’s no wonder job seekers are calling 2025 the year of the “Great Frustration.” Barring any major changes, 2026 could be even worse.
Tomi Engdahl says:
Away With Ye
Job Seekers Sue Company Scanning Their Résumés Using AI
“I think I deserve to know what’s being collected about me and shared with employers.”
https://futurism.com/artificial-intelligence/ai-labor-scanning-eightfold
Thanks to scores of competing AI systems clogging up online application portals, applying for a new job in 2026 can feel more like applying for a bank loan than seeking a job.
At least, that’s what a group of disgruntled job seekers is claiming in a lawsuit against an AI screening company called Eightfold AI. According to the New York Times, the plaintiffs allege that Eightfold’s employment screening software should be subjected to the Fair Credit Reporting Act — the regulations protecting information collected by consumer credit bureaus.
The reason, they say, can be found deep within Eightfold’s AI algorithm, which actively trawls LinkedIn to create a data set of “1 million job titles, 1 million skills, and the profiles of more than 1 billion people working in every job, profession, industry, and geography.”
Using an AI model trained on that data, plaintiffs say, Eightfold scores job applications on a scale of one to five, based on their skills, experience, and the hiring manager’s goals. In sum, their argument is that it’s not at all unlike the opaque rules used to govern consumer credit scores.
In the case of Eightfold, however, applicants have no way of knowing what their final score even is, let alone the steps the system took to come up with it. That creates a “black box”: a situation where the people subjected to an algorithmic decision can only see the system’s outcome, not the process that led to it. And if Eightfold’s AI starts making things up on the fly — an issue AI models are infamous for — the job seeker has no way of knowing.
There’s also the issue of data retention. With no way to take a peek under the hood, there’s no telling how much data from job applicants’ résumés Eightfold collects, or what the AI company and its clients are doing with it.
Kistler, who has decades of experience working in computer science, told the publication she’s kept a close score of every application she’s sent over the last year. Out of “thousands of jobs” she’s applied for, only 0.3 percent moved on to a follow-up or interview, she said.
It all underscores the sad state of the job market, which has become the stuff of dystopian nightmares thanks to AI hiring tools. Whether the lawsuit can gain enough momentum to challenge the massive legal grey area of AI hiring remains to be seen.
Tomi Engdahl says:
Dude, Where’s My Return?
Majority of CEOs Alarmed as AI Delivers No Financial Returns
They’re worried they’re not spending enough on AI
https://futurism.com/artificial-intelligence/ceos-ai-returns
Tomi Engdahl says:
Bot Streams
Man Pleads Guilty to Making $8 Million by Creating Music With AI and Using Bots to Drive Zillions of Fake Streams
“Although the songs and listeners were fake, the millions of dollars Smith stole was real.”
https://futurism.com/artificial-intelligence/man-pleads-guilty-music-ai-bot-streams
Tomi Engdahl says:
OpenAI calls for robot taxes, a public wealth fund, and a 4-day workweek to tackle AI disruption
https://www.businessinsider.com/openai-superintelligence-ai-upheaval-tax-shorter-workweek-public-wealth-fund-2026-4
OpenAI has released a list of policy ideas to combat AI disruption.
They include a public wealth fund and experimenting with a four-day workweek.
Fears are growing about a wave of AI-powered job losses as the technology continues to advance.
OpenAI has some big ideas about how to deal with AI disruption.
In a series of policy recommendations released on Monday, OpenAI said the rapid advance of AI would require far-reaching economic and political reforms, including a public wealth fund, taxes on automated labor, and a potential four-day workweek.
“We’re beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI. No one knows exactly how this transition will unfold. At OpenAI, we believe we should navigate it through a democratic process that gives people real power to shape the AI future they want,” the company wrote on Monday.
The company said the policy document offered a series of “initial ideas” to address the risk of “jobs and entire industries being disrupted” by the adoption of AI tools.
Among the core policy suggestions is a public wealth fund, which would see lawmakers and AI companies work together to invest in long-term assets linked to the AI boom, with returns distributed directly to citizens.
Another is that the government should encourage and incentivize employers to experiment with four-day workweeks with no loss in pay and offer “benefits bonuses” tied to productivity gains from new AI tools.
The policy document also suggests lawmakers modernize the tax system and shift the tax base to corporate income and capital gains, rather than relying on labor income and payroll taxes that could be hit by a wave of AI-powered job losses. It also recommends taxes related to automated labor.
OpenAI also called for the accelerated expansion of the US’s electricity grid, which is already feeling the strain from a wave of data center construction and energy demand for training ever more powerful AI models.
The growing popularity of new enterprise and coding tools from Anthropic and OpenAI has also hit the share price of major software companies in a so-called “SaaSpocalypse,” and AI has already been cited as part of the calculation behind major layoffs at Block and Atlassian.
It’s not the first time one of the companies riding the AI boom has called for a New Deal-like overhaul of the social contract in response to the technology they are racing to develop.
Anthropic CEO Dario Amodei wrote in 2024 that the advent of AI superintelligence would mean that the way the global economy is organized “will no longer make sense,” and speculated that measures beyond a “large Universal Basic Income” program could be required.
In May 2024, Altman suggested a new version he dubbed Universal Basic Compute, where people receive a share of AI computing power rather than cash, which they could use, sell, or donate.
OpenAI’s policy document also advocates for a robust Social Security and Medicaid safety net and suggests a range of additional temporary measures, including expanded unemployment benefits, that could automatically kick in when metrics tied to AI disruption reach a certain level.
In February, a report detailing a hypothetical scenario in which AI advances might lead to a market crash and a consumer-led recession sparked a major stock market selloff.
Tomi Engdahl says:
OpenAI calls for robot taxes, a public wealth fund, and a 4-day workweek to tackle AI disruption : https://mrf.lu/gqdX
Tomi Engdahl says:
https://www.facebook.com/share/p/1EnoJZj1DP/
Sometimes the shitposts just write themselves. Sometimes themselves just write the shitposts. And other times, the Suno writes the shitposts? Or just shit? Wait…where was I going with this prattling?
The music industry faces a terrifying reality: Suno generates 7 million songs daily, outpacing Spotify’s catalog in just weeks. For $10 a month, anyone with a laptop produces studio-quality, multi-stem tracks in under 30 seconds…Jesus.
Read more about it in the comments.
Tomi Engdahl says:
Janus Faced
Inside Sources Say Sam Altman Is a Sociopath
“He’s unbelievably persuasive. Like, Jedi mind tricks.”
https://futurism.com/artificial-intelligence/sources-sam-altman-sociopath
You don’t build a trillion dollar AI empire by being a saint.
In a sweeping new investigative piece from The New Yorker, numerous tech insiders paint a picture of OpenAI CEO Sam Altman as a relentless liar who wants everyone to like him while manipulating even the people closest to him to get what he wants. AI safety, in this slippery portrait of Altman, is merely a bargaining chip he dangles like a carrot to get concerned engineers — and anyone else worried about the tech’s far-reaching consequences — on board, before going back on his word.
Some of these insiders were strikingly blunt in their diagnoses: Altman was a literal “sociopath,” one OpenAI board member alleged.
“He’s unconstrained by truth,” they told The New Yorker. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”
The New Yorker piece characterizes Altman as more of a businessman than an engineer, leveraging an almost singular ability to get skeptics, be they engineers or the public, to believe that he holds the same priorities as them.
“He’s unbelievably persuasive. Like, Jedi mind tricks,” a tech executive who has worked with Altman told The New Yorker. “He’s just next level.”
One alleged victim of Altman’s double dealing is Anthropic CEO Dario Amodei, who used to work at OpenAI but left to found his own safety-focused AI company over differences with Altman.
In notes viewed by The New Yorker, Amodei wrote about negotiating a billion-dollar investment from Microsoft in 2019.