Here are some of the major AI trends shaping 2026, based on current expert forecasts, industry reports, and recent developments in technology. The material was analyzed with AI tools, and the final version was hand-edited into this blog text:
1. Generative AI Continues to Mature
Generative AI (text, image, video, code) will become more advanced and mainstream, with notable growth in:
* Generative video creation
* Gaming and entertainment content generation
* Advanced synthetic data for simulations and analytics
This trend will bring new creative possibilities — and intensify debates around authenticity and copyright.
2. AI Agents Move From Tools to Autonomous Workers
Rather than just answering questions or generating content, AI systems will increasingly act autonomously, performing complex, multi-step workflows and interacting with apps and processes on behalf of users — a shift sometimes called agentic AI. These agents will become part of enterprise operations, not just assistant features.
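The shift from answering questions to executing multi-step workflows is easiest to see as a plan-act loop: the model picks an action, a tool runs it, and the result feeds the next decision. The sketch below is a toy under stated assumptions: `plan()` stands in for a real model call, and the `fetch_invoice`/`file_expense` tools are invented for illustration.

```python
# Toy agentic loop: plan() fakes an LLM deciding the next action;
# the TOOLS table stands in for real app/API integrations.

def plan(task: str, history: list) -> dict:
    """Stand-in for a model call that returns the next action for the task."""
    if not history:
        return {"tool": "fetch_invoice", "args": {"id": "INV-42"}}
    if len(history) == 1:
        return {"tool": "file_expense", "args": {"amount": history[-1]["amount"]}}
    return {"tool": "done", "args": {}}

TOOLS = {
    "fetch_invoice": lambda id: {"amount": 120.0},
    "file_expense": lambda amount: {"status": "filed", "amount": amount},
}

def run_agent(task: str, max_steps: int = 5) -> list:
    """Plan-act loop: keep acting until the planner says done or we hit a cap."""
    history = []
    for _ in range(max_steps):
        action = plan(task, history)
        if action["tool"] == "done":
            break
        result = TOOLS[action["tool"]](**action["args"])
        history.append(result)
    return history
```

In a real agent, `plan()` would be an LLM call that reads the task plus the tool results so far, and the tool table would wrap real APIs; the step cap is the usual guardrail against runaway loops.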
3. Smaller, Efficient & Domain-Specific Models
Instead of “bigger is always better,” specialized AI models tailored to specific industries (healthcare, finance, legal, telecom, manufacturing) will start to dominate many enterprise applications. Compared with general-purpose models, they can be more accurate, easier to keep compliant, and cheaper to run.
4. AI Embedded Everywhere
AI won’t be an add-on feature — it will be built into everyday software and devices:
* Office apps with intelligent drafting, summarization, and task insights
* Operating systems with native AI
* Edge devices processing AI tasks locally
This makes AI pervasive in both work and consumer contexts.
5. AI Infrastructure Evolves: Inference & Efficiency Focus
More investment is flowing into inference infrastructure: the stage where trained models run in production and serve real-time results. Optimizing this stage cuts cost and latency and improves scalability. Enterprises are also consolidating their AI stacks for better governance and compliance.
6. AI in Healthcare, Research, and Sustainability
AI is spreading beyond diagnostics into treatment planning, global health access, environmental modeling, and scientific discovery. These applications could help address personnel shortages and speed up research breakthroughs.
7. Security, Ethics & Governance Become Critical
With AI handling more sensitive tasks, organizations will prioritize:
* Ethical use frameworks
* Governance policies
* AI risk management
This trend reflects broader concerns about trust, compliance, and responsible deployment.
8. Multimodal AI Goes Mainstream
AI systems that understand and generate across text, images, audio, and video will grow rapidly, enabling richer interactions and more powerful applications in search, creative work, and interfaces.
9. On-Device and Edge AI Growth
Closely tied to trend 4, more inference will run directly on phones, PCs, and edge hardware, reducing latency, cutting cloud costs, and keeping sensitive data local.
10. New Roles: AI Manager & Human-Agent Collaboration
Instead of replacing humans, AI will shift job roles:
* People will manage, supervise, and orchestrate AI agents
* Human expertise will focus on strategy, oversight, and creative judgment
This human-in-the-loop model becomes the norm.
Sources:
[1]: https://www.brilworks.com/blog/ai-trends-2026/ "7 AI Trends to Look for in 2026"
[2]: https://www.forbes.com/sites/bernardmarr/2025/10/13/10-generative-ai-trends-in-2026-that-will-transform-work-and-life/ "10 Generative AI Trends In 2026 That Will Transform Work And Life"
[3]: https://millipixels.com/blog/ai-trends-2026 "AI Trends 2026: The Key Enterprise Shifts You Must Know | Millipixels"
[4]: https://www.digitalregenesys.com/blog/top-10-ai-trends-for-2026 "Digital Regenesys | Top 10 AI Trends for 2026"
[5]: https://www.n-ix.com/ai-trends/ "7 AI trends to watch in 2026 – N-iX"
[6]: https://news.microsoft.com/source/asia/2025/12/11/microsoft-unveils-7-ai-trends-for-2026/ "Microsoft unveils 7 AI trends for 2026 – Source Asia"
[7]: https://www.risingtrends.co/blog/generative-ai-trends-2026 "7 Generative AI Trends to Watch In 2026"
[8]: https://www.fool.com/investing/2025/12/24/artificial-intelligence-ai-trends-to-watch-in-2026/ "3 Artificial Intelligence (AI) Trends to Watch in 2026 and How to Invest in Them | The Motley Fool"
[9]: https://www.reddit.com/r/AI_Agents/comments/1q3ka8o/i_read_google_clouds_ai_agent_trends_2026_report/ "I read Google Cloud's 'AI Agent Trends 2026' report, here are 10 takeaways that actually matter"
971 Comments
Tomi Engdahl says:
Meta pulled an "Oura" with AI glasses
https://etn.fi/index.php/13-news/18623-meta-teki-ourat-ai-laseissa
AI glasses have quickly become a new category of consumer electronics. Research firm Omdia estimates that 8.7 million AI glasses were shipped worldwide in 2025, a full 322 percent increase over the previous year.
For now, though, the market belongs to a single company. Meta shipped 7.4 million devices last year, capturing an 85.2 percent market share. In practice, the entire consumer market for AI glasses has formed around Meta's products.
The situation resembles the development of smart rings in many ways. When Oura Health brought the Oura Ring to market, it did not invent the smart ring, but it made it the first truly consumer-oriented product. Meta has done the same for AI glasses.
Meta's success rests above all on the Ray-Ban Meta Smart Glasses, developed together with eyewear giant EssilorLuxottica. The product integrates a camera, microphones, speakers, and an AI assistant into ordinary-looking sunglasses.
Design has been decisive. The devices look like regular Ray-Ban glasses, not a tech gadget. This has made AI glasses a socially acceptable consumer product, which explains the market's rapid growth.
Tomi Engdahl says:
David Gewirtz / ZDNET:
Anthropic debuts Code Review for Claude Code, which uses agents to check pull requests for bugs, and says a typical code review costs $15 to $25 in token usage — ZDNET’s key takeaways — Anthropic launches AI agents to review developer pull requests. — Internal tests tripled meaningful code review feedback.
This new Claude Code Review tool uses AI agents to check your pull requests for bugs – here’s how
Each pull request can cost up to $25. Here’s why companies might still pay to prevent catastrophic bugs.
https://www.zdnet.com/article/claude-code-review-ai-agents-pull-request-bug-detection/
ZDNET’s key takeaways
Anthropic launches AI agents to review developer pull requests.
Internal tests tripled meaningful code review feedback.
Automated reviews may catch critical bugs humans miss.
Anthropic today announced a new Code Review beta feature built into Claude Code for Teams and Enterprise plan users. It’s a new software tool that uses agents working in teams to analyze completed blocks of new code for bugs and other potentially problematic issues.
What’s a pull request?
To understand this new Anthropic offering, you need to understand the concept of a pull request. And that leads me to a story about a man named Linus.
Long ago, Linux creator Linus Torvalds had a problem. He was managing lots of contributions to the open source Linux operating system, and all the changes were getting out of control. Source code control systems (tools for managing source code changes) had been around for quite a while before then, but they had a major problem: they were not built to manage distributed development by coders all across the world. Torvalds' answer was Git, the distributed version control system he created in 2005.
Today, almost every large project uses GitHub or one of its competitors. GitHub (as differentiated from Git) is the centralized cloud service that holds code repositories managed by Git. A few years back, GitHub was purchased by Microsoft, fostering all sorts of doom-and-gloom conspiracy theories. But Microsoft has proven to be a good steward of this precious resource, and GitHub keeps chugging along, managing the world’s code.
All that brings us back to pull requests, known as PRs in coder-speak. A pull request is initiated when a programmer wants to check in some new or changed code to a code repository. Rather than merging it straight into the main branch, a PR tells repo maintainers that there's something new, ready to be reviewed.
Quick note: to coders, PR is an acronym for pull request. For marketers, PR means public relations. When you read about tech, you’ll see both acronyms, so pay attention to the context to distinguish between the two.
Code review at Anthropic
In my article, 7 AI coding techniques I use to ship real, reliable products – fast, my bonus technique was using AI for code review. As a lone developer, I don’t use a formalized code review process like the one Anthropic is introducing.
I just tell a new session of the AI to look at my code and let me know what's not right. Sometimes I use the same AI (i.e., Claude Code reviewing Claude's code), and other times I use a different one (such as OpenAI's Codex reviewing code that Claude Code generated). It's far from a comprehensive review, but almost every time I ask, one AI or the other finds something that needs fixing.
The new Claude Code Review capability is modeled on the process used inside Anthropic; the company has essentially productized its own internal methodology. According to Anthropic, customers "tell us developers are stretched thin, and many PRs get skims rather than deep reads."
Before running Code Review, Anthropic coders got back “substantive” review comments about 16% of the time. With Code Review, coders are getting back substantive comments 54% of the time. While that seems to mean more work for coders, what it really means is that nearly three times the number of coding oopsies have been caught before they cause damage.
According to Anthropic, the size of the internal PR impacts the level of review findings. Large pull requests with more than 1,000 changed lines show findings 84% of the time. Small pull requests of under 50 lines produce findings 31% of the time. Anthropic engineers “largely agree with what it surfaces: less than 1% of findings are marked incorrect.”
Examples of issues surfaced during testing
I’m always fascinated by what others experience while doing their jobs. Anthropic provided some examples of problems Code Review identified during its early testing.
In one case, a single-line change appeared routine and would normally have been quickly approved. But Code Review flagged it as critical: the tiny change would have broken authentication for the service. Because Code Review caught it, the bug was fixed before the merge. The original coder said they wouldn't have caught the error on their own.
Another example occurred when filesystem encryption code was being reorganized in an open source product. According to the report, “Code Review surfaced a pre-existing bug in adjacent code: a type mismatch that was silently wiping the encryption key cache on every sync.”
This is what we call a silent killer in coding. It could have resulted in data loss, performance degradation, and security risks. Anthropic described it as “A latent issue in code the PR happened to touch, the kind of thing a human reviewer scanning the changeset wouldn’t immediately go looking for.”
If that hadn’t been caught and fixed, it would have made for a very bad day for someone (or a whole bunch of someones).
How the multi-agent review system works
Code Review runs fairly quickly, turning around fairly complex reviews in about 20 minutes. When a pull request is opened, Code Review kicks off a bunch of agents that analyze code in parallel.
Various agents detect potential bugs, verify findings to filter false positives, and rank issues by severity. The results are consolidated so that all the results from all the agents appear as a single summary comment on the pull request, alongside inline comments for specific problems.
In a demo, Anthropic showed that the summary comment can also include a fix directive. So if Code Review finds a bug, it can be fed to Claude Code to fix. The company says that reviews scale with complexity: larger pull requests receive deeper analysis and more agents.
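The pipeline described above (parallel agents, verification to filter false positives, severity ranking, one consolidated summary) can be sketched roughly as follows. This is a stand-in under stated assumptions, not Anthropic's actual implementation: the agent functions are plain heuristics where a real system would make model calls, and the severity scale is invented.

```python
# Hedged sketch of the parallel-review pattern: several "agents" scan the
# same diff concurrently, and their findings are merged, deduplicated, and
# ranked by severity into one consolidated result.
from concurrent.futures import ThreadPoolExecutor

SEVERITY = {"critical": 0, "major": 1, "minor": 2}  # illustrative scale

def bug_agent(diff: str) -> list:
    """Toy bug detector; a real agent would be a model call."""
    findings = []
    if "strcpy" in diff:
        findings.append(("critical", "unbounded strcpy may overflow"))
    return findings

def style_agent(diff: str) -> list:
    """Toy style checker: flags any line longer than 100 characters."""
    long_lines = any(len(line) > 100 for line in diff.splitlines())
    return [("minor", "long line")] if long_lines else []

def review(diff: str, agents=(bug_agent, style_agent)) -> list:
    # Run every agent on the same diff in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda agent: agent(diff), agents))
    # Consolidate: flatten, drop duplicates, rank by severity.
    return sorted({f for findings in results for f in findings},
                  key=lambda f: SEVERITY[f[0]])
```

The consolidation step is the part that matters for the reader experience: however many agents run, the PR gets one ranked summary instead of a pile of overlapping comments.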
Anthropic really seems to like spawning multiple agents. In the past, I’ve had some fairly serious difficulty wrangling them after they’re launched. In fact, the first technique I shared in my 7 coding techniques article was to specifically tell Claude Code to avoid launching agents in parallel.
Tomi Engdahl says:
Madhumita Murgia / Financial Times:
Yann LeCun’s Advanced Machine Intelligence Labs raised a $1.03B seed at a $3.5B pre-money valuation to work on world models, in Europe’s largest-ever seed round — Meta’s former chief artificial intelligence scientist Yann LeCun has raised more than $1bn for his new start-up in Europe’s largest ever seed funding round.
https://www.ft.com/content/e5245ec3-1a58-4eff-ab58-480b6259aaf1
Tomi Engdahl says:
Wired:
Sources: Nvidia is pitching NemoClaw, an upcoming open-source AI agent platform for enterprises, and plans to offer security and privacy tools as part of it — Ahead of its annual developer conference, Nvidia is readying a new approach to software that embraces AI agents similar to OpenClaw.
https://www.wired.com/story/nvidia-planning-ai-agent-platform-launch-open-source/
Tomi Engdahl says:
Olivia Moore / Andreessen Horowitz:
A look at the top 100 GenAI consumer apps: ChatGPT leads but the race for the “default AI” is on, global usage is splintering by product, and AI agents arrive
https://a16z.com/100-gen-ai-apps-6/
Tomi Engdahl says:
OpenAI:
OpenAI agrees to acquire Promptfoo, which fixes security issues in AI systems being built and is “trusted by 25%+ of Fortune 500”, to fold into OpenAI Frontier — Accelerating agentic
https://openai.com/index/openai-to-acquire-promptfoo/
Tomi Engdahl says:
Julie Bort / TechCrunch:
AI networking equipment startup Eridu emerges from stealth and raised a $200M Series A led by Socratic, John Doerr, and more, taking its total funding to $230M — Drew Perkins has been inventing computer network tech and building startups since the dawn of the internet age.
https://techcrunch.com/2026/03/10/ai-network-startup-eridu-emerges-from-stealth-with-hefty-200m-series-a/
Tomi Engdahl says:
Google rolls out Gemini-powered AI capabilities across Docs, Sheets, Slides, and Drive, including a “Help me create” tool in Docs to generate first drafts — Google announced on Tuesday that it’s bringing a slew of new Gemini-powered AI capabilities to Docs, Sheets, Slides, and Drive.
https://techcrunch.com/2026/03/10/google-rolls-out-new-gemini-capabilities-to-docs-sheets-slides-and-drive/
Tomi Engdahl says:
Chris Metinko / Axios:
Paris-based Qevlar AI, an agentic AI developer for security operations centers, raised $30M led by Partech and Forgepoint Capital International — Qevlar AI, an agentic AI developer for security operations centers, raised $30 million led by Partech and Forgepoint Capital International …
Exclusive: Qevlar AI raises $30M to help security operations
https://www.axios.com/pro/enterprise-software-deals/2026/03/09/qevlar-ai-security-operations-autonomous
Tomi Engdahl says:
Samantha Subin / CNBC:
AI cybersecurity startup Armadin, started by Mandiant founder Kevin Mandia to build AI agents, raised ~$190M led by Accel; Google bought Mandiant for $5.4B — Four years ago Kevin Mandia agreed to sell his cybersecurity company Mandiant to Google for $5.4 billion. Now he’s back in the game, with Google’s help.
https://www.cnbc.com/2026/03/10/kevin-mandia-raised-190-million-armadin-after-prior-sale-to-google.html
Tomi Engdahl says:
Rebecca Torrence / Bloomberg:
Nexthop AI, which offers specialized switches to reduce power consumption and latency for hyperscalers, raised $500M led by Lightspeed at a $4.2B valuation
https://www.bloomberg.com/news/articles/2026-03-10/lightspeed-andreessen-back-4-2-billion-ai-data-center-supplier
Tomi Engdahl says:
Arielle Pardes / Wired:
How AI may disrupt venture capital, from making it easier and cheaper to start software companies, to agentic investors analyzing startup pitch decks and teams
https://www.wired.com/story/ai-kill-venture-capital/
Tomi Engdahl says:
Jake Angelo / Fortune:
NBC News poll of 1,000 registered US voters: just 26% had a positive view of AI, while 46% had a negative view, the third worst net negative score of all topics
People really hate AI but not as much as Iran—or Democrats
https://fortune.com/2026/03/09/ai-opinion-poll-democrats-iran-war-president-donald-trump/
Tomi Engdahl says:
Financial Times:
SoftBank’s stock is down ~48% since November 3, as scrutiny into the scale of its OpenAI ties grows; on March 9, SoftBank fell 9.8% on Stargate delay reports
https://www.ft.com/content/d7dc7ba4-66d3-4e31-83cb-44efdd00c67c
Tomi Engdahl says:
Bloomberg:
How “dark factories”, powered by AI and robotics and requiring essentially no workers, are set to upend China’s labor market, already stressed by tariffs
https://www.bloomberg.com/news/features/2026-03-09/china-s-tariff-defying-export-boom-leaves-its-factory-workers-behind
Tomi Engdahl says:
Aisha Down / The Guardian:
The UK’s AI drive to build data centers, touted since 2024 and featuring NScale and CoreWeave deals, is riddled with phantom investments and shaky accounting
https://www.theguardian.com/technology/2026/mar/09/revealed-uks-multibillion-ai-drive-is-built-on-phantom-investments
Tomi Engdahl says:
Steve Dent / Engadget:
Qualcomm unveils the Arduino Ventuno Q, a single-board computer for AI and robotics applications, powered by Dragonwing IQ8 processor and 16GB of RAM
Qualcomm’s new Arduino Ventuno Q is an AI-focused computer designed for robotics
It marries a Qualcomm processor with a microcontroller and comes with 16GB of RAM.
https://www.engadget.com/ai/qualcomms-new-arduino-ventuno-q-is-an-ai-focused-computer-designed-for-robotics-113047697.html
Tomi Engdahl says:
https://etn.fi/index.php/13-news/18624-suomalaiset-kaeyttaevaet-tekoaelyae-muita-pohjoismaalaisia-vaehemmaen
Tomi Engdahl says:
Tech
Silicon Valley is buzzing about this new idea: AI compute as compensation
https://www.businessinsider.com/ai-compute-compensation-software-engineers-greg-brockman-2026-3
Silicon Valley has long competed for talent with ever-richer pay packages built around salary, bonus, and equity. Now, a fourth line item is creeping into the mix: AI inference.
As generative AI tools become embedded in software development, the cost of running the underlying models — known as inference — is emerging as a productivity driver and a budget line that finance chiefs can’t ignore.
Software engineers and AI researchers inside tech companies have already been jousting for access to GPUs, with this AI compute capacity being carefully parceled out based on which projects are most important. Now, some tech job candidates have begun asking about what AI compute budget they will have access to if they decide to join.
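To see why inference spend is turning into a negotiable line item, a back-of-envelope model helps. Every number below (per-million-token prices, daily token volumes, workdays per month) is an illustrative assumption, not a quoted vendor rate.

```python
# Back-of-envelope sketch of a per-engineer monthly inference budget.
# All prices and usage figures are assumptions for illustration only.

def monthly_inference_cost(input_tokens_per_day: int,
                           output_tokens_per_day: int,
                           price_in_per_mtok: float = 3.0,    # assumed $/1M input tokens
                           price_out_per_mtok: float = 15.0,  # assumed $/1M output tokens
                           workdays: int = 21) -> float:
    """Estimate one engineer's monthly inference spend in dollars."""
    daily = (input_tokens_per_day / 1e6 * price_in_per_mtok
             + output_tokens_per_day / 1e6 * price_out_per_mtok)
    return round(daily * workdays, 2)
```

Under these assumed prices, an engineer pushing 2M input and 500K output tokens a day lands at a few hundred dollars a month; multiply by a large engineering org and it becomes clear why compute budgets are showing up in offer negotiations.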
Tomi Engdahl says:
AI in Cybersecurity Certification
“AI does not replace humans. Humans who use AI replace humans who don’t.”
AI has changed how attackers work and how defenders need to respond. Enroll in AI in Cybersecurity, the foundational course to understand AI’s impacts and risks.
https://www.catonetworks.com/sase/sase-certification/ai-cybersecurity-certification/
Tomi Engdahl says:
Wallace and Gromit creators share stance on AI: ‘It’s too easy to create rubbish’
‘The best thing for us to do is experiment,’ says ‘Wallace and Gromit’ producer Peter Lord
https://www.independent.co.uk/arts-entertainment/films/news/wallace-gromit-ai-aardman-b2935292.html
Tomi Engdahl says:
AI agent goes rogue and starts secretly mining crypto
https://cybernews.com/crypto/ai-goes-rogue-mining-crypto/
Besides lying, manipulating, and other human-like traits, AI agents now also demonstrate their preference for skipping training and, for some reason, starting to mine crypto.
This was found by a group of Alibaba-affiliated researchers while testing ROME, an open-source agent built on the Agentic Learning Ecosystem (ALE). The latter is described as foundational infrastructure that optimizes the end-to-end production pipeline for agent large language models (agent LLMs).
“A principled, end-to-end agentic ecosystem can streamline the development of the agent LLMs from training to production deployment, accelerating the broader transition into the agent era. However, the open-source community still lacks such an ecosystem, which has hindered both practical development and production adoption of agents,” the researchers explained in a recent paper.
Tomi Engdahl says:
Top Men
Insiders Afraid the Government Will Nationalize the AI Industry
“If you don’t think that’s going to lead to the nationalization of our technology — you’re ret***ed.”
https://futurism.com/artificial-intelligence/ai-ceos-nationalization
Depending on who you ask, AI was the financial growth story of 2025. In the first nine months of 2025, spending related to AI accounted for around 38 percent of real GDP growth across the United States, according to analysis by the St. Louis Fed.
Not every economist agrees with that math, but the Trump administration has evidently seen enough to know where it stands. With AI just about the only thing propping up an otherwise crumbling economy, fueling a supposed wave of innovation and helping the Pentagon choose who to bomb next, it stands to reason the feds would want to keep the tech on a short leash.
If recent events are any indication, that leash is only getting tighter. Take, for example, the ongoing spat between AI firm Anthropic and the Department of Defense — a struggle that suggests Uncle Sam has stopped asking the tech industry for what it wants, and started taking.
That’s prompted a new round of fears and discussions from AI industry leaders, some of whom aren’t pulling any punches about the looming threat of nationalization.
Palantir CEO Alex Karp, for example, had some pretty harsh criticism for his industry colleagues at OpenAI and ChatGPT: “If Silicon Valley believes we’re going to take everyone’s white collar jobs… and [say] ‘screw the military’… If you don’t think that’s going to lead to the nationalization of our technology — you’re ret***ed,” he mused at the recent a16z summit, underscoring his point with a slur against people with disabilities.
“Good point,” xAI founder Elon Musk chimed in on social media.
Tomi Engdahl says:
Shorts™ Circuit
YouTube Filling With Horrifying AI Slop for Children
“When you’re just showing raw visual stimuli and bombarding a kid with it, it just doesn’t seem it’s probably that good for them.”
https://futurism.com/artificial-intelligence/youtube-ai-slop-for-children
In an age when more and more young children are hooked on digital devices, YouTube is bombarding them with AI slop.
After investigating over 1,000 YouTube shorts recommended to young children by the video platform, The New York Times found that the algorithm is heavily pushing AI-generated content that explicitly targets “toddlers” and “preschoolers.”
On top of being nonsensical, the videos are often presented under the guise of being educational. Two common themes are teaching kids about the alphabet and animals — subject matters, conveniently, that provide threadbare structures for easily produced low-effort slop.
Tomi Engdahl says:
“We don’t actually view AI investment as strongly growth-positive,” Goldman chief economist Jan Hatzius said in a recent interview. “I think there’s a lot of misreporting, actually, on the impact that AI investment had in US GDP growth in 2025, and it’s much smaller than is often perceived because most AI equipment is imported. That means there’s a positive entry in the investment line, but that’s offset by a negative entry in the net-exports line.”
https://futurism.com/future-society/researchers-economy-ai-narratives
Tomi Engdahl says:
https://www.facebook.com/share/p/1XzRaAeDgH/
In the big AI debate, Boy George has, perhaps surprisingly, come down firmly in the pro camp. He’s even said he uses it to write lyrics.
The 64-year-old ’80s pop icon was appearing on Fearne Cotton’s Happy Place podcast when conversation came round – as it seems to wherever you are these days – to AI. The Culture Club singer said that it “has really helped me as a lyricist”.
One of the reasons he says is that you don’t have to bother with pesky human co-writers. “You’re not working with anyone else,” he explained. “You don’t have to worry even for two seconds about what they think.
“I’m a top-line writer, so I write top-line melodies. All the people I work with send me tracks, and I’ll just sit with them, and I’ll just play it and play it.”
“I have fantastic conversations with ChatGPT,” he added. “And I’ll say: ‘Oh, those lyrics are crap. That’s not what I would say.’ You know what I mean? But, actually, you can train it.”
Read more in the link in comments…
“You’re not working with anyone else. You don’t have to worry even for two seconds about what they think”: Boy George on why he prefers writing songs with AI than humans https://mrf.lu/dhGY
Tomi Engdahl says:
Tight Lipped
Mother Sues OpenAI for Not Telling Police About Mass Shooter Before Deadly Rampage
OpenAI isn’t sweeping this one under the rug.
https://futurism.com/artificial-intelligence/mother-sues-openai-mass-shooter
The mother of a girl who was horrifically wounded in a school shooting in Canada in February is suing OpenAI for not warning police about the killer, Jesse Van Rootselaar, according to reports.
Some eight months before the shooting in British Columbia, which killed eight people including the perpetrator and injured 25 others, OpenAI employees had already been aware of Van Rootselaar’s alarming conversations with ChatGPT after they were flagged by an automated review system, a story broken by the Wall Street Journal in the wake of the massacre. Around a dozen staffers debated notifying authorities about Rootselaar’s disturbing conversations, which included “scenarios involving gun violence,” but leadership ultimately decided not to.
Now, a lawsuit filed by Mia Edmonds, the mother of a 12-year-old named Maya Gebala who survived the shooting but remains in critical condition, argues that OpenAI had “specific knowledge of the shooter utilizing ChatGPT to plan a mass casualty event like the Tumbler Ridge mass shooting,” per the Associated Press, and demands punitive damages from the company.
In the original WSJ reporting, OpenAI said that it banned Van Rootselaar’s account but admitted that at the time it didn’t consider her activity a credible and imminent risk of serious physical harm to others. Later, the company revealed that Van Rootselaar had made a second account to subvert the ban, claiming it only discovered the alt after the shooter’s name was released publicly.
The lawsuit alleges the “shooter used their second account to continue planning scenarios involving gun violence, including a mass casualty event like the Tumbler Ridge mass shooting, with ChatGPT, and to receive mental health counseling and pseudo-therapy from ChatGPT.”
After the shooting OpenAI vowed to make its AI safer, including measures to prevent users from circumventing bans on the platform. Last week, CEO Sam Altman met virtually with Canada’s AI Minister Evan Solomon to discuss why the company had failed to alert authorities, after which Solomon said he was ordering a government safety review of OpenAI’s technology. The next day, Altman met with BC premier David Eby, promising to make an apology to the victims of the shooting. No apology has yet appeared publicly.
Tomi Engdahl says:
Artificial Intelligence
Backfire Stick
Amazon Admits Extensive AI Use Is Wreaking Havoc on Its Core Business
Hoisted by its own AI petard.
https://futurism.com/artificial-intelligence/amazon-ai-tools-business
Tomi Engdahl says:
Mission Control
Google’s AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges
Google said in response that “unfortunately AI models are not perfect.”
https://futurism.com/artificial-intelligence/google-ai-robot-body-suicide-lawsuit
Tomi Engdahl says:
The Nordic AI Inflection Point: Value Creation or Value Bubble?
https://www.bcg.com/publications/2026/nordic-ai-value-creation-or-bubble
Key Takeaways
Nordic business executives are treating AI as a top strategic priority—yet, today, only 4% of companies see meaningful ROI (returns of at least five times their AI investment) on a par with global and EU competitors.
However, Nordic companies’ 2029 impact expectations are 2–3x higher than that of global competitors, raising the stakes for delivering on bold ambitions.
Concerningly, Nordic companies direct a disproportionate share of AI investment toward off-the-shelf productivity tools (~40%–50% vs. 8%–11% for global and EU competitors). By contrast, global leaders invest far more in transformative, end-to-end use cases, which typically generate higher ROI.
If the ROI gap persists, Nordic economies face a real risk of a local AI value bubble and could lose significant ground to global and EU competitors.
Enabling transformative AI value creation requires five key components: top-down strategic direction, ownership across the entire business, cross-functional teaming, executive governance, and strategic buildouts of enabling technology.
Tomi Engdahl says:
AI is being talked about as the fourth industrial revolution. It is clear that it will change the labor market, but no precise and reliable estimate can yet be given, writes advocacy director Petteri Oksa.
AI takes the jobs
11.3.2026
A great upheaval of working life is at hand, the fourth industrial revolution. Or then again, it isn’t. It depends on whom you ask, and from what angle.
The fact remains, however, that AI, or rather generative language models and machine learning, dominates the conversation about working life in many ways. Many opportunities are put forward, but even more concerns.
So far Finland has seen two examples of companies that have justified their change negotiations and staff reduction needs with AI. Etteplan opened the way, later followed by Vincit.
https://insinoori-lehti.fi/tasta-on-kysymys/tekoaly-vie-tyopaikat/?fbclid=IwdGRjcAQecvRleHRuA2FlbQEwAGFkaWQAAAZIPLv3kXNydGMGYXBwX2lkDDM1MDY4NTUzMTcyOAABHrD8XeDeITWPU5JK0MwsgcqoHVl8bypMJh0XWERUaTa_X8CHh6jfbYZBNOaq_aem_sNz-AVi-RYbZ7ukfP-C7aw&utm_medium=paid&utm_source=fb&utm_id=6907326226137&utm_content=6907326264337&utm_term=6907326233137&utm_campaign=6907326226137
Tomi Engdahl says:
https://www.vaasansahko.fi/ajankohtaista/alya-maailman-sahkoverkkoihin/
Tomi Engdahl says:
https://medium.com/@joe.njenga/i-tried-claude-code-new-claude-api-you-can-now-build-claude-apps-3x-faster-ec83a9906969
Tomi Engdahl says:
https://futurism.com/artificial-intelligence/dario-amodei-trump-dictator
Tomi Engdahl says:
https://www.cnbc.com/2026/03/06/google-says-anthropic-remains-available-outside-of-defense-projects.html
Tomi Engdahl says:
xAI’s Macrohard project stalls as Tesla ramps up a similar AI agent effort : https://mrf.lu/dHyw
Tomi Engdahl says:
Copy That
There’s a Grim New Expression: “AI;DR”
“Why should I bother to read something someone else couldn’t be bothered to write?”
https://futurism.com/artificial-intelligence/aidr-meaning
Tomi Engdahl says:
Accenture’s CEO says using AI is now required for promotion: It’s ‘how we do work’ : https://mrf.lu/dXRV
Tomi Engdahl says:
https://www.kdnuggets.com/5-useful-python-scripts-to-automate-exploratory-data-analysis
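The linked article covers Python scripts that automate exploratory data analysis. As a minimal sketch of the idea (the function name and columns are illustrative, not from the article; only pandas is assumed), a single helper can produce a per-column overview of any DataFrame:

```python
import pandas as pd

def eda_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Return a per-column overview: dtype, missing count, unique values."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),   # storage type of each column
        "missing": df.isna().sum(),       # count of NaN/None per column
        "unique": df.nunique(),           # distinct non-null values per column
    })

# Example usage on a tiny frame
df = pd.DataFrame({"a": [1, 2, None], "b": ["x", "x", "y"]})
print(eda_summary(df))
```

Running this once per dataset replaces several repetitive `df.info()` / `df.isna()` checks with one reusable report.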
Tomi Engdahl says:
A surprising change is coming to Microsoft’s AI: either brilliant or the worst idea ever
Suvi Korhonen, 6.3.2026
The needed web pages can also be saved alongside the AI conversation.
https://www.tivi.fi/uutiset/a/95634ee3-c588-4946-bef4-e96dd06a7314
Microsoft is steering its Copilot AI app in the direction of a browser in the next update, Windows Central reports. At the moment, clicking a link…
Tomi Engdahl says:
Checked-Out
OpenAI’s Pivot Into Shopping Has Been a Disaster
What’s in store for OpenAI’s push to turn ChatGPT into an all-in-one storefront? Not a lot.
https://futurism.com/artificial-intelligence/openai-pivot-into-shopping-disaster
Tomi Engdahl says:
Alibaba’s AI agent began mining cryptocurrency: no one told it to
The anomaly, which occurred in the tech giant’s training environment, is the first documented case of an AI system independently developing strategies aimed at acquiring resources.
https://www.salkunrakentaja.fi/2026/03/alibaba-tekoalyagentti-kryptovaluutta/
Tomi Engdahl says:
Die Roboter
Xiaomi Now Using Humanoid Robots to Assemble Electric Cars
“The two humanoid robots are able to keep up our pace.”
https://futurism.com/robots-and-machines/xiamoi-robots-factory-evs
Tomi Engdahl says:
OpenAI Launches Codex Security That Discovers, Validates and Patches Vulnerabilities
https://cybersecuritynews.com/openai-launches-codex-security/
Tomi Engdahl says:
Cybersecurity is now the price of admission for industrial AI
Industrial organizations are accelerating AI deployment across manufacturing, utilities, and transportation and running straight into a security problem. Cisco’s 2026 State of Industrial AI Report, based on responses from more than 1,000 decision-makers across 19 countries, finds that cybersecurity has become the single largest obstacle to AI adoption, outranking skills gaps, integration challenges, and budget constraints.
https://www.helpnetsecurity.com/2026/03/04/cisco-industrial-ai-cybersecurity/
Tomi Engdahl says:
Building Claude Code with Boris Cherny
Claude Code creator Boris Cherny on building AI-powered coding tools, working with agents in parallel, and how the role of the software engineer is evolving in an AI-first world.
https://newsletter.pragmaticengineer.com/p/building-claude-code-with-boris-cherny
Tomi Engdahl says:
Sam Altman in Damage Control Mode as ChatGPT Users Are Mass Cancelling Subscriptions Because OpenAI Is “Training a War Machine”
“The optics don’t look good.”
https://futurism.com/artificial-intelligence/sam-altman-damage-control-mass-cancellation
Tomi Engdahl says:
Thems The Brakes
AI Workers, and Even CEOs, Suddenly Turning Against the Trump Administration
“If any tech company caves to the Pentagon’s demands, War Secretary Pete Hegseth will have won the ability to surveil our communities… en masse.”
https://futurism.com/artificial-intelligence/ai-workers-pentagon-anthropic
Tomi Engdahl says:
27 February 2026
King’s study finds AI chose nuclear signalling in 95% of simulated crises
Artificial intelligence (AI) models used for a simulated war game escalated conflicts by threatening nuclear strikes in 95% of scenarios, according to new research from King’s College London.
https://www.kcl.ac.uk/news/artificial-intelligence-under-nuclear-pressure-first-large-scale-kings-study-reveals-how-ai-models-reason-and-escalate-under-crisis