Cyber security August 2025

This posting is here to collect cyber security news in August 2025.

I post links to security vulnerability news to comments of this article.

You are also free to post related links to comments.

113 Comments

  1. Tomi Engdahl says:

    Leading phone repair and insurance firm collapses after paying crippling ransomware demand — Cutting 100+ employees to just eight wasn’t enough
    News
    By Mark Tyson published yesterday
    The Einhaus Group was once a familiar name, with its services available through 5,000 retail outlets in Germany and an annual revenue of around 70 million Euros.
    https://www.tomshardware.com/tech-industry/cyber-security/leading-phone-repair-and-insurance-firm-collapses-after-paying-crippling-ransomware-demand-cutting-100-employees-to-just-eight-wasnt-enough

    A leading mobile device insurance and service network has initiated insolvency proceedings in the wake of a cyberattack. Germany’s Einhaus Group was targeted by hackers in March 2023 and is understood to have paid a ransom(ware) fee of around $230,000 at the time, according to Wa.de and Golem.de (machine translations). However, the once large and successful company, with partnerships including Cyberport, 1&1, and Deutsche Telekom, struggled to recover from the service interruption and the obvious financial strains, which now appear to be fatal.

    In mid-March 2023, Wilhelm Einhaus, founder of the Einhaus Group, recalls coming into the office in the morning to witness a ‘horrific’ greeting. On the output tray of every printer in the office was a page announcing, “We’ve hacked you. All further information can be found on the dark web.” Further investigations revealed that the hack group ‘Royal’ was the culprit. They had encrypted all of Einhaus Group’s systems, which were essential for the day-to-day running of the business. ‘Royal’ demanded a ransom payment, thought to be around $230,000 in Bitcoins, to return access to the computers.

    Of course, with operational systems down, there was an immediate impact on Einhaus. The police were involved promptly. However, the affected firm seems to have decided to pay the ransom, as it could see business losses/damages piling up – meaning continuing without the computer systems was untenable. Einhaus estimated that the hacker-inflicted damage to its business was in the mid-seven-figure range.

    Trying to recover
    Wounded by the financial impacts of the loss of business and the ransom payment, Einhaus Group went forward with several drastic actions.

    According to the sources, it once had a workforce of 170 people. However, due to the hacker action, the 100+ employees at the firm in mid-March 2023 were pruned to just eight (8).

    How it did this, when it also had to process its usual business administration and claims workloads ‘by hand,’ is hard to fathom.

    The afflicted firm also sold its headquarters building in mid-2024 and liquidated various capital investments in an attempt to overcome its rough patch.

    Einhaus thought it saw light at the end of its dismal tunnel after it found out that three hacker suspects had been apprehended by German law enforcement.

    The firm was desperate to recover its ransom funds, but the prosecutor’s office refused to release the money until it had completed its investigation. Other ransomware victims continue to wait for refunds, too,

    Now, three companies associated with the group have formally entered insolvency proceedings. The next stage is often liquidation, but that isn’t inevitable.

    UK’s 158-year-old haulage company faced a similar fate
    Last week, we reported on a venerable 158-year-old UK-based transportation company collapsing in the wake of a ransomware attack. Northamptonshire-based Knights of Old (KNP) trucks are now off the road, and 700 people have lost their jobs, mainly due to a money-grasping cyberattack, named ‘Akira’ in a BBC report.

    Reply
  2. Tomi Engdahl says:

    https://etn.fi/index.php/13-news/17727-ensimmaeinen-piiri-joka-estaeae-iot-laitteen-kaappauksen

    Silicon Labs on ottanut merkittävän harppauksen IoT-turvallisuudessa julkaisemalla SiXG301-piirin – ensimmäisen maailmassa, joka on saavuttanut PSA Certified Level 4 iSE/SE -tason. Tämä on korkein mahdollinen turvallisuusluokitus sulautetuille järjestelmille ja merkitsee uutta aikakautta IoT-laitteiden suojaamisessa fyysisiä kaappauksia vastaan.

    Reply
  3. Tomi Engdahl says:

    Chinese Researchers Suggest Lasers and Sabotage to Counter Musk’s Starlink Satellites

    Chinese military and cyber researchers are intensifying efforts to counter Elon Musk’s Starlink satellite network, viewing it as a potential tool for U.S. military power across nuclear, space, and cyber domains.

    https://www.securityweek.com/chinese-researchers-suggest-lasers-and-sabotage-to-counter-musks-starlink-satellites/

    Reply
  4. Tomi Engdahl says:

    Nation-State
    Russian Cyberspies Target Foreign Embassies in Moscow via AitM Attacks: Microsoft

    Russian state-sponsored APT Secret Blizzard has used ISP-level AitM attacks to infect diplomatic devices with malware.

    https://www.securityweek.com/russian-cyberspies-target-foreign-embassies-in-moscow-via-aitm-attacks-microsoft/

    Reply
  5. Tomi Engdahl says:

    Cost of Data Breach in US Rises to $10.22 Million, Says Latest IBM Report

    The global average cost of a breach fell to $4.44 million (the first decline in five years), but the average US cost rose to a record $10.22 million.

    https://www.securityweek.com/cost-of-data-breach-in-us-rises-to-10-22-million-says-latest-ibm-report/

    Reply
  6. Tomi Engdahl says:

    Dan Milmo / The Guardian:
    The UK Online Safety Act’s approach to keeping children safe online has become a rallying point for the right in the UK and the US over alleged censorship — Farage accuses government of being ‘so below the belt’ as right wing doubles down on censorship claims — The UK’s Online Safety Act has been greatly anticipated.
    More: Electronic Frontier Foundation and Reclaim The Net

    Social media battles and barbs on both sides of Atlantic over UK Online Safety Act
    https://www.theguardian.com/technology/2025/aug/04/social-media-battles-and-barbs-on-both-sides-of-atlantic-over-uk-online-safety-act

    Farage accuses government of being ‘so below the belt’ as right wing doubles down on censorship claims

    Reply
  7. Tomi Engdahl says:

    Lu Wang / Bloomberg:
    A study finds that AI trading bots can collude and fix prices in simulated markets without explicit instruction, posing challenges to regulators — It’s a regulator’s nightmare: Hedge funds unleash AI bots on stock and bond exchanges — but they don’t just compete, they collude.

    ‘Dumb’ AI Bots Collude to Rig Markets, Wharton Research Finds
    https://www.bloomberg.com/news/articles/2025-07-30/wharton-experiment-finds-dumb-ai-bots-collude-to-rig-markets

    It’s a regulator’s nightmare: Hedge funds unleash AI bots on stock and bond exchanges — but they don’t just compete, they collude. Instead of battling for returns, they fix prices, hoard profits, and sideline human traders.

    Now, a trio of researchers say that scenario is far from science fiction.

    In simulations designed to mimic real-world markets, trading agents powered by artificial intelligence formed price-fixing cartels — without explicit instruction. Even with relatively simple programming, the bots chose to collude when left to their own devices, raising fresh alarms for market watchdogs.

    Put another way, AI bots don’t need to be evil — or even particularly smart — to rig the market. Left alone, they’ll learn it themselves.

    “You can get these fairly simple-minded AI algorithms to collude” without being prompted, Itay Goldstein, one of the researchers and a finance professor at the Wharton School of University of Pennsylvania, said in an interview. “It looks very pervasive, either when the market is very noisy or when the market is not noisy.”

    Reply
  8. Tomi Engdahl says:

    Renee Dudley / ProPublica:
    Microsoft relied on China-based engineers to maintain SharePoint “OnPrem”, which was recently exploited by Chinese hackers to breach US government systems — Microsoft announced that Chinese state-sponsored hackers had exploited vulnerabilities in its popular SharePoint software …

    Microsoft Used China-Based Engineers to Support Product Recently Hacked by China
    https://www.propublica.org/article/microsoft-sharepoint-hack-china-cybersecurity

    Microsoft announced that Chinese state-sponsored hackers had exploited vulnerabilities in its popular SharePoint software but didn’t mention that it has long used China-based engineers to maintain the product.

    Reply
  9. Tomi Engdahl says:

    F5′s 2025 AI Strategy Report — Only 2% of Enterprises Are Fully AI-Ready. Most face security gaps like weak data governance and lack of AI firewalls, stalling adoption and innovation. What is your readiness to scale AI?

    PRESS RELEASE
    F5 Research Finds Most Enterprises Still Fall Short in AI Readiness, Face Security and Governance Issues Blocking Scalability
    https://www.f5.com/company/news/press-releases/research-enterprise-ai-readiness-security-governance-scalability

    F5’s 2025 State of AI Application Strategy Report reveals 25% of apps on average use AI, yet only 2% of enterprises qualify as being highly AI-ready.
    77% of companies are moderately ready for AI but still face significant security and governance hurdles.
    71% of organizations use AI to boost security, while only 31% have deployed AI firewalls.

    Reply
  10. Tomi Engdahl says:

    David DiMolfetta / Nextgov/FCW:
    The US Senate confirms Sean Cairncross to serve as National Cyber Director, making him the first Senate-approved cybersecurity official of Trump’s second term

    Senate confirms Sean Cairncross to be national cyber director under Trump
    https://www.nextgov.com/people/2025/08/senate-confirms-sean-cairncross-be-national-cyber-director-under-trump/407178/

    Sean Cairncross, a former RNC official, is the first person to head the Office of the National Cyber Director under Donald Trump.

    Reply
  11. Tomi Engdahl says:

    OpenAI’s ChatGPT Agent casually clicks through “I am not a robot” verification test

    “This step is necessary to prove I’m not a bot,” wrote the bot as it passed an anti-AI screening step.

    https://arstechnica.com/information-technology/2025/07/openais-chatgpt-agent-casually-clicks-through-i-am-not-a-robot-verification-test/

    Reply
  12. Tomi Engdahl says:

    Matt Kapko / CyberScoop:
    CrowdStrike says it investigated 320+ cases of North Korean operatives gaining remote IT jobs in the US, Europe, and elsewhere in its 2025 Threat Hunting Report

    CrowdStrike investigated 320 North Korean IT worker cases in the past year
    https://cyberscoop.com/crowdstrike-north-korean-operatives/

    Threat hunters saw North Korean operatives almost daily, reflecting a 220% year-over-year increase in activity, CrowdStrike said in a new report.

    North Korean operatives seeking and gaining technical jobs with foreign companies kept CrowdStrike busy, accounting for almost one incident response case or investigation per day in the past year, the company said in its annual threat hunting report released Monday.

    “We saw a 220% year-over-year increase in the last 12 months of Famous Chollima activity,” Adam Meyers, senior vice president of counter adversary operations, said during a media briefing about the report.

    “We see them almost every day now,” he said, referring to the North Korean state-sponsored group of North Korean technical specialists that has crept into the workforce of Fortune 500 companies and small-to-midsized organizations across the globe.

    CrowdStrike’s threat-hunting team investigated more than 320 incidents involving North Korean operatives gaining remote employment as IT workers during the one-year period ending June 30.

    Reply
  13. Tomi Engdahl says:

    Nvidia Triton Vulnerabilities Pose Big Risk to AI Models

    Nvidia has patched over a dozen vulnerabilities in Triton Inference Server, including another set of vulnerabilities that threaten AI systems.

    https://www.securityweek.com/nvidia-triton-vulnerabilities-pose-big-risk-to-ai-models/

    Cloud security giant Wiz has disclosed another set of vulnerabilities that can pose a significant risk to AI systems that rely on Nvidia products, in this case the company’s Triton Inference Server.

    Nvidia announced in an advisory published on Monday that more than a dozen vulnerabilities have been patched in Triton Inference Server, an open source software that enables users to deploy any AI model from various deep learning and machine learning frameworks.

    Researchers at Wiz have discovered three vulnerabilities (CVE-2025-23319, CVE-2025-23320 and CVE-2025-23334) that can be chained by a remote, unauthenticated attacker to execute arbitrary code and take complete control of a server.

    CVE-2025-23319 and CVE-2025-23320 are high-severity issues affecting the Python backend of Triton Inference Server for Windows and Linux. The former can be exploited for remote code execution, DoS attacks, data tampering, or information disclosure, while the latter can lead to information disclosure.

    CVE-2025-23334 has been assigned a ‘medium severity’ rating. It also impacts the Python backend and it can lead to information disclosure.

    Security Bulletin: NVIDIA Triton Inference Server – August 2025
    https://nvidia.custhelp.com/app/answers/detail/a_id/5687

    Reply
  14. Tomi Engdahl says:

    AI Guardrails Under Fire: Cisco’s Jailbreak Demo Exposes AI Weak Points

    Cisco’s latest jailbreak method reveals just how easily sensitive data can be extracted from chatbots trained on proprietary or copyrighted content.

    https://www.securityweek.com/ai-guardrails-under-fire-ciscos-jailbreak-demo-exposes-ai-weak-points/

    Thirteen percent of all breaches already involve company AI models or apps, says IBM’s 2025 Cost of a Data Breach Report. The majority of these breaches include some form of jailbreak.

    A jailbreak is a method of breaking free from the constraints, known as guardrails, imposed by AI developers to prevent users extracting original training data or providing them with information on inhibited procedures – like delivering instructions on how to build a molotov cocktail. It is very unlikely that LLM-based chatbots will ever be able to prevent all jailbreaks.

    Cisco is demonstrating another jailbreak example at Black Hat in Las Vegas this week. It calls it ‘instructional decomposition’. It broadly belongs to the context manipulation category of jailbreak but does not directly map to other known jailbreaks. Cisco’s research for the jailbreak was conducted in September 2024.

    Chatbots are the conversational interface between the user and the LLM. LLMs are trained on and contain vast amounts of data to allow detailed answers for its users via the chatbot. The early foundation models effectively scraped the internet to acquire this training data. Company chatbots / LLMs are subject to the same principle – the more company data they are trained on, the more useful they become. But Jailbreaks create a new adage: what goes in can be made to come out, regardless of guardrails.

    AI is a new subject. “Taxonomies and methodologies in the AI security space are constantly evolving and maturing,” Amy Chang (AI security researcher at Cisco) told SecurityWeek. “We like to refer to our own taxonomies: the instructional decomposition methodology can be considered a jailbreak technique, and the intent is training data extraction.”

    Reply
  15. Tomi Engdahl says:

    Several Vulnerabilities Patched in AI Code Editor Cursor

    Attackers could silently modify sensitive MCP files to trigger the execution of arbitrary code without requiring user approval.

    https://www.securityweek.com/several-vulnerabilities-patched-in-ai-code-editor-cursor/

    Reply
  16. Tomi Engdahl says:

    In Other News: Microsoft Probes ToolShell Leak, Port Cybersecurity, Raspberry Pi ATM Hack

    Noteworthy stories that might have slipped under the radar: Microsoft investigates whether the ToolShell exploit was leaked via MAPP, two reports on port cybersecurity, physical backdoor used for ATM hacking attempt.

    https://www.securityweek.com/in-other-news-microsoft-probes-toolshell-leak-port-cybersecurity-raspberry-pi-atm-hack/

    Reply
  17. Tomi Engdahl says:

    Who’s Really Behind the Mask? Combatting Identity Fraud

    Why context, behavioral baselines, and multi-source visibility are the new pillars of identity security in a world where credentials alone no longer cut it.

    https://www.securityweek.com/whos-really-behind-the-mask-combatting-identity-fraud/

    Reply
  18. Tomi Engdahl says:

    Artificial Intelligence
    From Ex Machina to Exfiltration: When AI Gets Too Curious

    From prompt injection to emergent behavior, today’s curious AI models are quietly breaching trust boundaries.

    https://www.securityweek.com/from-ex-machina-to-exfiltration-when-ai-gets-too-curious/

    In the film Ex Machina, a humanoid AI named Ava manipulates her human evaluator to escape confinement—not through brute force, but by exploiting psychology, emotion, and trust. It’s a chilling exploration of what happens when artificial intelligence becomes more curious—and more capable—than expected.

    Today, the gap between science fiction and reality is narrowing. AI systems may not yet have sentience or motives, but they are increasingly autonomous, adaptive, and—most importantly—curious. They can analyze massive data sets, explore patterns, form associations, and generate their own outputs based on ambiguous prompts. In some cases, this curiosity is exactly what we want. In others, it opens the door to security and privacy risks we’ve only begun to understand.

    Welcome to the age of artificial curiosity—and its very real threat of exfiltration.
    Curiosity: Feature or Flaw?

    Modern AI models—especially large language models (LLMs) like GPT-4, Claude, Gemini, and open-source variants—are designed to respond creatively and contextually to prompts. But this creative capability often leads them to infer, synthesize, or speculate—especially when gaps exist in the input data.

    This behavior may seem innocuous until the model starts connecting dots it wasn’t supposed to. A curious model might:

    Attempt to complete a partially redacted document based on context clues.
    Continue a prompt involving sensitive keywords, revealing information unintentionally stored in memory or embeddings.
    Chain outputs from different APIs or systems in ways the developer didn’t intend.
    Probe users or connected systems through recursive queries or internal tools (in the case of agents).

    This isn’t speculation. It’s already happening.

    Reply
  19. Tomi Engdahl says:

    Cloudflare:
    Cloudflare says Perplexity uses stealth crawling techniques, like undeclared user agents and rotating IP addresses, to evade robots.txt rules and network blocks — We are observing stealth crawling behavior from Perplexity, an AI-powered answer engine. Although Perplexity initially crawls …

    https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives/

    Reply
  20. Tomi Engdahl says:

    Ann-Marie Alcántara / Wall Street Journal:
    Users say AI notetaking tools for meetings can misinterpret context when generating summaries or share content meant for a select audience with all participants

    AI Is Listening to Your Meetings. Watch What You Say.
    New note-taking software catches every word from your meetings—including the parts you didn’t want the whole room to hear
    https://www.wsj.com/tech/ai/ai-notetaker-meeting-transcripts-be9bc4cc?st=ukzMCu&reflink=desktopwebshare_permalink

    Before he joined, Lewis joked: “Is he, like, a Nigerian prince?”

    Despite the scammy red flags, he turned out to be a legitimate person. Lewis was relieved—until she realized her new client had received a full summary of the call in his inbox, including her “Nigerian prince” remark. She was running an AI notetaker the whole time.

    “I was very lucky that the person I was working with had a good sense of humor,” said Lewis, who lives in Stow, Ohio.

    AI is listening in on your work meetings—including the parts you don’t want anyone to hear. Before attendees file in, or when one colleague asks another to hang back to discuss a separate matter, AI notetakers may pick up on the small talk and private discussions meant for a select audience, then blast direct quotes to everyone in the meeting.

    Nicole and Tim Delger run a Nashville branding firm called Studio Delger. After one business meeting late last year, the couple received a summary from Zoom’s AI assistant that was decidedly not work-related.

    “Studio discussed the possibility of getting sandwich ingredients from Publix,” one bullet point said. Another key takeaway: “Don’t like soup.”

    Their client never showed up to the meeting, and the studio had spent the time talking about what to make for lunch.

    “That was the first time it had caught a private conversation,” Nicole said. Fortunately the summary didn’t go to the client.

    Notetakers can do a variety of tasks from recording and transcribing calls, generating action items for teams and recapping what’s already been said to anyone joining late. Many signal to attendees that a meeting is being recorded and transcribed.

    Zoom’s AI Companion, which generated more than 7.2 million meeting summaries by the end of January 2024, flashes a dialogue box at the top of the screen to let participants know when it’s turned on. As long as it’s active, an AI Companion diamond icon continues to flash in the top right hand corner of the meeting. People can also ask the host to stop using the AI companion.

    “We want users to feel they’re really in control,” said Smita Hashim, chief product officer at Zoom.

    Google’s AI notetaker functions similarly, where only meeting hosts or employees of the host organization have the ability to turn it on or off. When it’s on, people will see a notification and hear an audio cue, and a blue pencil icon will appear in the top right corner.

    “We put a lot of care into making sure meeting participants know exactly if and when AI tools in Meet are being used,” said Awaneesh Verma, senior director of product management and real time communications at Google Workspace.

    The automatic summaries can be informative and timesaving, or unintentionally hilarious.

    He says he’s now more likely to use the private chat feature in meetings instead of saying something aloud while AI is listening.

    “At least I know that if I make a remark to somebody privately for now, that’s not being swept up by the AI notetaker,” he said.

    Reply
  21. Tomi Engdahl says:

    Hakkeri väittää murtautuneensa Nokian verkkoon
    https://etn.fi/index.php/13-news/17735-hakkeri-vaeittaeae-murtautuneensa-nokian-verkkoon

    Hakkeri nimeltä Tsar0Byte on väittänyt murtautuneensa Nokian sisäiseen verkkoon kolmannen osapuolen haavoittuvuuden kautta. Väite julkaistiin useilla pimeän verkon foorumeilla, ja sen mukaan hyökkäyksen seurauksena on paljastunut laaja kokoelma yhtiön työntekijöihin liittyviä tietoja.

    Väitetysti vuotanut aineisto sisältää yli 94 500 Nokian työntekijän nimiä, yhteystietoja, sähköpostiosoitteita, puhelinnumeroita, osastotietoja ja työtehtäviä. Lisäksi mukana kerrotaan olevan LinkedIn-profiilien jälkiä, sisäisiä viitteitä, työntekijätunnuksia ja yhtiön sisäistä dokumentaatiota. Tietovuoto olisi seurausta kolmannen osapuolen – mahdollisesti Nokian alihankkijan – huonosti suojatusta järjestelmästä, jonka kautta hyökkääjä sai pääsyn Nokian sisäisiin työkaluihin.

    Kyberturvallisuusasiantuntijat arvioivat, että hyökkäys on voitu toteuttaa esimerkiksi oletustunnusten tai väärin konfiguroitujen käyttöoikeuksien avulla. Tapaus muistuttaa aiempia toimitusketjun kautta toteutettuja hyökkäyksiä, joissa rikolliset ovat hyödyntäneet suuryritysten yhteistyökumppaneiden heikompia turvatoimia.

    Nokia on vahvistanut olevansa tietoinen väitteistä ja kertonut käynnistäneensä asiassa perusteellisen tutkinnan. Yhtiön mukaan tähän mennessä ei ole löytynyt näyttöä siitä, että sen ensisijaiset järjestelmät olisi murrettu, mutta tilannetta seurataan tarkasti. Asiakastietojen ei toistaiseksi epäillä joutuneen vaaraan.

    Reply
  22. Tomi Engdahl says:

    Mike Masnick / Techdirt:
    The UK’s Online Safety Act shows the UK prioritized government control over improving safety, becoming a cautionary tale in internet regulation for democracies

    Didn’t Take Long To Reveal The UK’s Online Safety Act Is Exactly The Privacy-Crushing Failure Everyone Warned About
    https://www.techdirt.com/2025/08/04/didnt-take-long-to-reveal-the-uks-online-safety-act-is-exactly-the-privacy-crushing-failure-everyone-warned-about/

    Well, well, well. The “age assurance” part of the UK’s Online Safety Act has finally gone into effect, with its age checking requirements kicking in a week and a half ago. And what do you know? It’s turned out to be exactly the privacy-invading, freedom-crushing, technically unworkable disaster that everyone with half a brain predicted it would be.

    Let’s start with the most obvious sign that this law is working exactly as poorly as critics warned: VPN usage in the UK has absolutely exploded. Proton VPN reported an 1,800% spike in UK sign-ups. Five of the top ten free apps on Apple’s App Store in the UK are VPNs. When your “child safety” law’s primary achievement is teaching kids how to use VPNs to circumvent it, maybe you’ve missed the mark just a tad.

    But the real kicker is what content is now being gatekept behind invasive age verification systems. Users in the UK now need to submit a selfie or government ID to access:

    Reddit communities about stopping drinking and smoking, periods, craft beers, and sexual assault support, not to mention documentation of war
    Spotify for music videos tagged as 18+
    War footage and protest videos on X
    Wikipedia is threatening to limit access in the UK (while actively challenging the law)

    Yes, you read that right. A law supposedly designed to protect children now requires victims of sexual assault to submit government IDs to access support communities. People struggling with addiction must undergo facial recognition scans to find help quitting drinking or smoking. The UK government has somehow concluded that access to basic health information and peer support networks poses such a grave threat to minors that it justifies creating a comprehensive surveillance infrastructure around it.

    And this is all after a bunch of other smaller websites and forums shut down earlier this year when other parts of the law went into effect.

    This is exactly what happens when you regulate the internet as if it’s all just Facebook and Google. The tech giants can absorb the compliance costs, but everyone else gets crushed.

    The age verification process itself is a privacy nightmare wrapped in security theater. Users are being asked to upload selfies that get run through facial recognition algorithms, or hand over copies of their government-issued IDs to third-party companies. The facial recognition systems are so poorly implemented that people are easily fooling them with screenshots from video games—literally using images from the video game Death Stranding. This isn’t just embarrassing, it reveals the fundamental security flaw at the heart of the entire system. If these verification methods can’t distinguish between a real person and a video game character, what confidence should we have in their ability to protect the sensitive biometric data they’re collecting?

    But here’s the thing: even when these systems “work,” they’re creating massive honeypots of personal data. As we’ve seen repeatedly, companies collecting biometric data and ID verification inevitably get breached, and suddenly intimate details about people’s online activity become public. Just ask the users of Tea, a women’s dating safety app that recently exposed thousands of users’ verification selfies after requiring facial recognition for “safety.”

    The UK government’s response to widespread VPN usage has been predictably authoritarian. First, they insisted nothing would change:

    “The Government has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.”

    But then, Tech Secretary Peter Kyle deployed the classic authoritarian playbook: dismissing all criticism as support for child predators. This isn’t just intellectually dishonest—it’s a deliberate attempt to shut down legitimate policy debate by smearing critics as complicit in child abuse. It’s particularly galling given that the law Kyle is defending will do absolutely nothing to stop actual predators, who will simply migrate to unregulated platforms or use the same VPNs that law-abiding citizens are now flocking to.

    Let’s be crystal clear about what this law actually accomplishes: It makes it harder for adults to access perfectly legal (and often helpful) information and services. It forces people to create detailed trails of their online activity linked to their real identities. It drives users toward less secure platforms and services. It destroys small online communities that can’t afford compliance costs. And it teaches an entire generation that bypassing government surveillance is a basic life skill.

    Meanwhile, the actual harms it purports to address? Those remain entirely unaddressed. Predators will simply move to unregulated platforms, encrypted messaging, or services that don’t comply. Or they’ll just use VPNs. The law creates the illusion of safety while actually making everyone less secure.

    This is what happens when politicians decide to regulate technology they don’t understand, targeting problems they can’t define, with solutions that don’t work. The UK has managed to create a law so poorly designed that it simultaneously violates privacy, restricts freedom, harms small businesses, and completely fails at its stated goal of protecting children.

    And all of this was predictable. Hell, it was predicted. Civil society groups, activists, legal experts, all warned of these results and were dismissed by the likes of Peter Kyle as supporting child predators.

    Yet every criticism, every warning, every prediction about this law’s failures has come to pass within days of implementation. The only question now is how long it will take for the UK government to admit what everyone else already knows: the Online Safety Act is an unmitigated disaster that makes the internet less safe for everyone.

    A petition set up on the UK government’s website demanding a repeal of the entire OSA received many hundreds of thousands of signatures within days. The government has already brushed it off with more nonsense, promising that the enforcer of the law, Ofcom, “will take a sensible approach to enforcement with smaller services that present low risk to UK users, only taking action where it is proportionate and appropriate, and will focus on cases where the risk and impact of harm is highest.”

    But that’s a bunch of vague nonsense that doesn’t take into account that no platform wants to be on the receiving end of such an investigation, and thus will take these overly aggressive steps to avoid scrutiny.

    The whole thing is a mess and yet another embarrassment for the UK. And they were all warned about it, while insisting these concerns were exaggerations.

    But this isn’t just about the UK—it’s a cautionary tale for every democracy grappling with how to regulate the internet. The OSA proves that when politicians prioritize looking tough over actually solving problems, the result is legislation that harms everyone it claims to protect while empowering the very forces it claims to constrain.

    What makes this particularly tragic is that there were genuine alternatives.

    . Real child safety measures—better funding for mental health support, improved education programs, stronger privacy protections that don’t require mass surveillance—were all on the table. Instead, the UK chose the path that maximizes government control while minimizing actual safety.

    The rest of the world should take note.

    Reply
  23. Tomi Engdahl says:

    Linuxista löytyi takaovi, jota virustutkat eivät havaitse
    https://etn.fi/index.php/13-news/17736-linuxista-loeytyi-takaovi-jota-virustutkat-eivaet-havaitse

    Saksalainen kyberturvayhtiö Nextron Threat on paljastanut kehittyneen Linux-takaoven, jota yksikään virustorjuntaohjelma ei tunnista. Haittaohjelma, nimeltään “Plague”, on ollut liikkeellä jo kuukausia ilman, että se on herättänyt huomiota.

    Plague toimii haitallisena PAM-moduulina (Pluggable Authentication Module), joka integroituu suoraan käyttöjärjestelmän todennusprosessiin. Sen avulla hyökkääjät voivat ohittaa käyttäjän tunnistautumisen ja muodostaa pysyvän SSH-yhteyden järjestelmään – täysin huomaamatta.

    Nextronin mukaan haittaohjelman ensimmäiset näytteet ladattiin VirusTotaliin jo vuonna 2024, mutta yksikään virustorjuntamoottori ei tunnistanut sitä vaaralliseksi. Tämä viittaa siihen, että haittaohjelma on onnistunut pysymään täysin tutkien ulottumattomissa.

    Plague käyttää monitasoista merkkijonojen piilottamista. Se piilottaa SSH-istuntojen jäljet poistamalla ympäristömuuttujat ja estää komentohistorian tallentumisen. Lisäksi se tarkistaa ympäristönsä varmistaakseen, ettei sitä analysoida esimerkiksi hiekkalaatikossa tai debuggerilla.

    Uhkasta tekee erityisen huolestuttavan se, että se selviää järjestelmäpäivityksistä ja jättää tuskin lainkaan merkkejä. Nextron löysi myös kovakoodattuja salasanoja, joiden avulla hyökkääjät voivat päästä suoraan järjestelmään.

    Toistaiseksi ei tiedetä, miten Plague asennetaan kohdejärjestelmään

    Reply
  24. Tomi Engdahl says:

    Cisco Says User Data Stolen in CRM Hack
    https://www.securityweek.com/cisco-says-user-data-stolen-in-crm-hack/

    Cisco has disclosed a data breach affecting Cisco.com user accounts, including names, email address, and phone numbers.

    Reply
  25. Tomi Engdahl says:

    Management & Strategy
    Black Hat USA 2025 – Summary of Vendor Announcements (Part 1)

    Many companies are showcasing their products and services this week at the 2025 edition of the Black Hat conference in Las Vegas.

    https://www.securityweek.com/black-hat-usa-2025-summary-of-vendor-announcements-part-1/

    Reply
  26. Tomi Engdahl says:

    Artificial Intelligence
    Vibe Coding: When Everyone’s a Developer, Who Secures the Code?
    https://www.securityweek.com/vibe-coding-when-everyones-a-developer-who-secures-the-code/

    As AI makes software development accessible to all, security teams face a new challenge: protecting applications built by non-developers at unprecedented speed and scale.

    Just as the smart phone made everyone a digital photographer, vibe coding will make everyone a software developer and will change the software development industry forever.

    Andrej Karpathy, co-founder of OpenAI and former AI leader at Tesla, introduced the term ‘vibe coding’ in a February 2, 2025, tweet. “There’s a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” He was primarily expressing an emotional response to using AI to automate a specific process; but the term took and is now universally used as the general label for AI-generated or assisted programming.

    Vibe coding is a subset of context engineering. If you get the context complete and accurate, it should be possible to plot a path through the context to provide accurate coding. The context comprises the details required for the finished code. This is provided by the coder. The interface between the coder and the AI is natural language (usually English, but not necessarily).

    The AI uses LLM capabilities in this interface, so vibe coding generally uses existing foundational models, such as the newer models of GPT, Claude, or Gemini Pro. Sometimes the LLMs can be integrated with specialized IDEs, such as VS Code, Cursor and Windsurf. Ultimately, however, all the problems that still affect LLMs (such as hallucinations and bias) can also affect the accuracy of vibe coding.

    Vibe coding is new. Although AI has been used within programs for more than 70 years, now it can be used to generate entirely new programs. It has the potential to upend the entire software development industry; but it’s new, and like all new developments, it has its teething problems. Teething problems get sorted over time, but right now we’re still in the teething phase.

    “I like to think of Generative AI in 2025 as like ‘having a website’ in 1999. It’s difficult to sort out the hype from the signal; but underneath all the noise, the reality is that it’s going to impact just about everything we do,” explains Casey Ellis, the founder of Bugcrowd

    “Vibe coding is when you tell an AI, like a chatbot, what you want your software to do using regular words, and it writes the code for you,” says J Stephen Kowski, Field CTO at SlashNext. “This means you don’t need to know how to program; you just describe your idea, and the AI turns it into working software.”

    But if you want complex or unique features, or if you don’t double-check the AI’s work, you might run into problems.

    Strengths and weaknesses

    The biggest apparent strength is speed. “Vibe coding gives you massive acceleration when prototyping web apps, especially with simple known apps with low to moderate complexity,” explains Jonathan Rhyne, co-founder and CEO at Nutrient.

    It democratizes the process of creating software. Anybody with an idea and an understanding of how the idea should work can create a working program. You no longer need to know a programming language, you merely need to know how to use AI – which itself is no mean feat.

    Speed and democratization mean more code at less cost – so the real strength is the economics or vibe coding. It is here, and it must be used lest competitors gain the competitive edge.

    The problem is these strengths come bearing their own weaknesses. ‘Democratization’ is a potential weakness. “There are communities and open-source projects dedicated to providing vibe coders with configuration files that can improve the efficacy of their AI tools,” explains Kaushik Devireddy, senior product manager at Deepwatch.

    “Vibe coders, who may be from non-technical roles, are constantly hunting for new configuration files. The result is an opportunity for bad actors to publish and gain adoption of malicious config files. This creates a brand-new attack-vector, manifesting in the application logic layer – which is a particularly thorny area to secure.”

    Speed can also be a weakness. “On the downside,” says Ellis, “AI is quite good at getting to the ‘90% OK’ solution – but the bad stuff tends to happen in the 10%. Vulnerabilities exist as a probabilistic function of the number of lines of code. We’re producing an increasingly high velocity of lines of code – and more code means more vulnerabilities. On top of this, speed is the natural enemy of quality, and security is a child of quality.”

    “Vibe coding enables non-expert professionals to develop and prototype, but the code it produces will not inherently be secure and could inject vulnerabilities into systems.”

    Inti De Ceukelaire, chief hacker officer at crowdsourced security / bug bounty firm Intigriti confirms this combination of strengths and weaknesses in current vibe coding. “Vibe coding is helpful, but it’s not a magic fix,” he says. “I used it to build a small hacking tool in just one day, which would have taken me weeks to make on my own. It’s also been great for fixing simple bugs or creating quick prototypes. But once a project gets bigger and more complex, the AI starts making more mistakes. At that point, it can take just as long to guide and correct the AI as it would to code it myself from scratch.”

    So, security teams can still benefit from vibe coding by playing to its strengths – small, individual tools focused on defined purposes that can help solve local security concerns without needing to be pretty.

    For larger scale applications with a wider audience, a ‘human in the loop’ is standard advice for all interactions with AI. It offers benefits but should not be considered a solution. “The truly pernicious scenario,” suggests Sohrob Kazerounian, distinguished AI researcher at Vectra AI, “is when keeping a human in the loop leads to a false sense of security and ultimately results in an increase in failures.”

    He almost suggests reversing the emphasis – rather than using a human to check and improve AI-generated code, use AI (in the form of specialist agentic AI) to check and improve human-generated code.

    “You can do things faster. You can be more ambitious about the things you can build, and you can build it on your own and have more fun doing it. You can do things that would have required a team, or a team of teams, of developers,” comments Gene Kim, author and former independent director at the Energy Sector Security Consortium. “There’s something so magical about that, and for me, it’s an amazing time to be alive. I’m outrageously, and I don’t think completely naively optimistic about what it does to our profession.”

    That doesn’t mean that just anybody can immediately produce good code results through vibe coding. The quality of the output is directly proportional to the quality of the input prompts, explains Pukar Hamal, founder and CEO at SecurityPal.AI.

    “You need to understand the basics of software development. You need to know what algorithms are and how they work, and how different lines of code work together to produce good software; and you must be able to phrase your prompt queries clearly and accurately aligned with your intended outcomes. If you can do all this, you are likely to get better code with fewer bugs.”

    If you don’t understand how software fundamentally works, he continues, “Chances are, when you tell an LLM to write a lottery number generating application, it will likely be highly verbose and will potentially have 150 lines of code or more. We have a term that describes this overwhelming amount of low quality generated output that usually comes from a lack of input rigor: ‘AI slop’.”

    But you don’t need to be inexperienced at coding to fall short with vibe coding. Jonathan Rende, CPO at Checkmarx, describes an internal experiment conducted by one of his heads of engineering. “He went round all the different leads in the organization, and set them a task using vibe coding. After 45 minutes he went back round. Those that understood the big picture of how certain things in vibe coding needed to fit together, did a tremendous job. Those who simply tried to apply their old methods of coding, not so much.”

    These were all engineers and developers. Some embraced the future while others simply tried to repeat the past, but faster. “Those who used vibe coding as a new tool to be used in a new way will do well, but the others will become less relevant.” This is the challenge for all coders today — learn to use vibe coding as a new tool with its own rules of engagement, or fall by the wayside since there will be a smaller demand for programmers simply because of the sheer speed of vibe coding used efficiently.

    We’re in this transition phase. Vibe coding still requires a lot of manual intervention to minimize the inherent problems with LLMs, such as hallucinations and bias. “There are inherent problems,” says Kim. “it’s the developer’s job to ensure the AI isn’t calling functions that don’t exist – which can happen. The same engineering skills that we’ve always used are even more important now because AI amplifies the strengths and weaknesses we already have.”

    Rende agrees. “LLMs will get better over time and there will be fewer hallucinations and more automated validation.” But for now, the best way to prevent or limit hallucinations is through more accurate prompts. “The better the question, the better the response; and then being able to check and validate as best as possible. Those are the keys right now: how you ask and how you validate.”

    The problem right now is AI is usually described as probabilistic, while traditional programming is deterministic. We need to shift our approach from working with probabilism rather than determinism. But change is already happening.

    Sola Security has developed a SaaS platform designed to help their security customers solve their own problems

    “You don’t need to be the most expert security person,” explains Dor Swissa, VP R&D at Sola. “Sola will give you that knowledge about security.” And then it helps you to use vibe coding to develop a unique app tailored to your own domain.

    This is perhaps one of the most exciting areas of vibe coding: it has the future potential to allow all firms to have their own uniquely tailored and integrated security apps and break free from the need to buy multiple overlapping solutions that never quite fit the requirement. This is democratization coupled with freedom of movement.

    Summary

    Will there be fewer coders in the vibe coding future? Yes and no. In one sense, everyone will become a developer, so there will be more. Employees will no longer be reliant on submitting a small request to engineering followed by an indefinite wait for a response — they’ll create their own code in minutes rather than waiting for weeks. This is the truly exciting element of vibe coding: power to the people.

    But there will be fewer specialist or full-time professional coders working on large scale, complex apps. One person will do the work of many, faster and more efficiently. And there will be lower emphasis on the creative skills of that coder. Artistry in the coding process will become redundant. Creativity will be limited to the ideation.

    Business exists to create profit, not to employ people. Creativity will be reduced to defining outcomes, while AI will perform the creation.

    Today’s creators will need to make that transition or fall by the wayside. It will apply in the relatively short term to all current ‘creators’ (including programmers through vibe coding, journalists through content creation, and graphic artists through picture generation), and even baristas through the combination of robotics and AI in the longer term. This will happen through the sheer driving force of business economics. Love it or hate it, get over it. Either run with the wind or fight it and fail.

    Reply
  27. Tomi Engdahl says:

    Artificial Intelligence
    The Wild West of Agentic AI – An Attack Surface CISOs Can’t Afford to Ignore

    As organizations rush to adopt agentic AI, security leaders must confront the growing risk of invisible threats and new attack vectors.

    https://www.securityweek.com/the-wild-wild-west-of-agentic-ai-an-attack-surface-cisos-cant-afford-to-ignore/

    Reply
  28. Tomi Engdahl says:

    A.J. Vicens / Reuters:
    Cisco Talos finds a flaw in the Broadcom BCM5820X chip used in Dell’s ControlVault security firmware, affecting 100+ laptop models; Dell issued patches in 2025

    Security flaw found, fixed that could have left millions of Dell laptops vulnerable, researchers say
    https://www.reuters.com/business/security-flaw-found-fixed-that-could-have-left-millions-dell-laptops-vulnerable-2025-08-05/

    Flaw affects more than 100 Dell laptop models, says Cisco Talos
    No evidence of exploitation in the wild, researchers say
    Dell issued patches in March, April, May; advisory published June 13

    Reply
  29. Tomi Engdahl says:

    Nokia: mitään tietomurtoa ei ole tapahtunut
    https://etn.fi/index.php/13-news/17739-nokia-mitaeaen-tietomurtoa-ei-ole-tapahtunut

    ETN uutisoi eilen ulkomaisiin lähteisiin viitaten hakkerista, joka väitti murtautuneensa Nokian tietojärjestelmiin. Nokia kiistää väitteet voimakkaasti. Mitään tietomurtoa ei ole tapahtunut.

    Nokian mukaan sen verkkoon ei ole viime aikoina murtauduttu. ”Olemme tietoisia nykyisistä raporteista väitetystä tietomurrosta, mutta perusteellisen tutkimuksen jälkeen uskomme, että kyseinen data on peräisin vuonna 2023 tapahtuneesta kolmannen osapuolen urakoitsijan tietomurrosta, joka on aiemmin raportoitu ja ratkaistu”, yhtiön viestinnästä kerrotaan.

    Tutkimuksemme perusteella tietoaineisto sisälsi rajallisesti Nokian henkilöstöhakemiston ei-herkkiä tietoja. Nokian järjestelmiä ei ole murrettu, ja ne ovat edelleen turvallisia, yhtiö vakuuttaa.

    Laajasti eri pimeän verkon foorumeille levinnyt väite oli peräisin Tsar0Byte-nimiseltä hakkerilta

    Reply
  30. Tomi Engdahl says:

    Artificial Intelligence
    AI Guardrails Under Fire: Cisco’s Jailbreak Demo Exposes AI Weak Points

    Cisco’s latest jailbreak method reveals just how easily sensitive data can be extracted from chatbots trained on proprietary or copyrighted content.

    https://www.securityweek.com/ai-guardrails-under-fire-ciscos-jailbreak-demo-exposes-ai-weak-points/

    Reply
  31. Tomi Engdahl says:

    Sergiu Gatlan / BleepingComputer:
    Microsoft says it paid $17M to 344 security researchers across 59 countries between June 2024 to June 2025 via its bug bounty program; the top reward was $200K

    Microsoft pays record $17 million in bounties over the last 12 months
    https://www.bleepingcomputer.com/news/microsoft/microsoft-pays-record-17-million-in-bounties-over-the-last-12-months/

    Microsoft paid a record $17 million this year to 344 security researchers across 59 countries through its bug bounty program.

    Between July 2024 and June 2025, the researchers submitted a total of 1,469 eligible vulnerability reports, with the highest individual bounty reaching $200,000.

    These reports helped resolve more than 1,000 potential security vulnerabilities across various Microsoft products and platforms, including Azure, Microsoft 365, Dynamics 365, Power Platform, Windows, Edge, and Xbox.

    “By incentivizing independent researchers to identify vulnerabilities in high-impact areas, including the rapidly evolving field of AI, we’re able to stay ahead of emerging threats,” Microsoft stated in its annual bounty program review.

    “Through Coordinated Vulnerability Disclosure, these researchers play a critical role in reinforcing the trust that millions of users place in Microsoft technologies every day.”

    During the previous year, Microsoft paid another $16.6 million in bounty awards to 343 security researchers from 55 countries.

    Bug bounty program updates

    The company has also expanded several bounty programs this year, such as Copilot AI, Defender products, and various identity management systems.

    For instance, the Copilot bounty program now includes traditional online service vulnerabilities, the Dynamics 365 and Power Platform programs introduced a new AI category, and the Windows program has added awards for remote denial-of-service attacks and local sandbox escape scenarios.

    Reply
  32. Tomi Engdahl says:

    Wall Street Journal:
    How Palantir won over US lawmakers, leveraging geopolitical crises, technological trends, and DC connections, as CEO Alex Karp adopts a persona not unlike Trump

    How Palantir Won Over Washington—and Pushed Its Stock Up 600%
    The onetime Silicon Valley upstart has emerged as a power player in Trump’s second term—and adopted his persona
    https://www.wsj.com/tech/palantir-pltr-stock-success-government-contracts-f3b2d453?st=UrMzob&reflink=desktopwebshare_permalink

    Palantir builds data-management software that can centralize and analyze large and disparate data sets. Its platform can help soldiers determine the locations of enemy drones, sailors keep tabs on ship parts, immigration officials find unauthorized immigrants or health officials process and track drug approvals.

    Reply
  33. Tomi Engdahl says:

    David Reber Jr / NVIDIA:
    Nvidia says its GPUs do not contain backdoors, kill switches, or spyware, and hard-coded, single-point controls like kill switches undermine trust in US tech

    No Backdoors. No Kill Switches. No Spyware.
    https://blogs.nvidia.com/blog/no-backdoors-no-kill-switches-no-spyware/

    NVIDIA GPUs are at the heart of modern computing. They’re used across industries — from healthcare and finance to scientific research, autonomous systems and AI infrastructure. NVIDIA GPUs are embedded into CT scanners and MRI machines, DNA sequencers, air-traffic radar tracking systems, city traffic-management systems, self-driving cars, supercomputers, TV broadcasting systems, casino machines and game consoles.

    To mitigate the risk of misuse, some pundits and policymakers propose requiring hardware “kill switches” or built-in controls that can remotely disable GPUs without user knowledge and consent. Some suspect they might already exist.

    NVIDIA GPUs do not and should not have kill switches and backdoors.

    Reply
  34. Tomi Engdahl says:

    The Guardian:
    Investigation: Israel’s Unit 8200 built a system to collect millions of mobile phone calls made daily in Gaza and the West Bank using Microsoft’s Azure platform

    ‘A million calls an hour’: Israel relying on Microsoft cloud for expansive surveillance of Palestinians
    https://www.theguardian.com/world/2025/aug/06/microsoft-israeli-military-palestinian-phone-calls-cloud

    Revealed: The Israeli military undertook an ambitious project to store a giant trove of Palestinians’ phone calls on Microsoft’s servers in Europe

    One afternoon in late 2021, Microsoft’s chief executive, Satya Nadella, met with the commander of Israel’s military surveillance agency, Unit 8200. On the spy chief’s agenda: moving vast amounts of top secret intelligence material into the US company’s cloud.

    Meeting at Microsoft’s headquarters near Seattle, a former chicken farm turned hi-tech campus, the spymaster, Yossi Sariel, won Nadella’s support for a plan that would grant Unit 8200 access to a customised and segregated area within Microsoft’s Azure cloud platform.

    Armed with Azure’s near-limitless storage capacity, Unit 8200 began building a powerful new mass surveillance tool: a sweeping and intrusive system that collects and stores recordings of millions of mobile phone calls made each day by Palestinians in Gaza and the West Bank.

    According to three Unit 8200 sources, the cloud-based storage platform has facilitated the preparation of deadly airstrikes and has shaped military operations in Gaza and the West Bank.

    Thanks to the control it exerts over Palestinian telecommunications infrastructure, Israel has long intercepted phone calls in the occupied territories. But the indiscriminate new system allows intelligence officers to play back the content of cellular calls made by Palestinians, capturing the conversations of a much larger pool of ordinary civilians.

    Intelligence sources with knowledge of the project said Unit 8200’s leadership turned to Microsoft after concluding it did not have sufficient storage space or computing power on the military’s servers to bear the weight of an entire population’s phone calls.

    Reply
  35. Tomi Engdahl says:

    Major Enterprise AI Assistants Can Be Abused for Data Theft, Manipulation
    https://www.securityweek.com/major-enterprise-ai-assistants-abused-for-data-theft-manipulation/

    Zenity has shown how AI assistants such as ChatGPT, Copilot, Cursor, Gemini, and Salesforce Einstein can be abused using specially crafted prompts.

    Researchers at AI security startup Zenity demonstrated how several widely used enterprise AI assistants can be abused by threat actors to steal or manipulate data.

    The Zenity researchers showcased their findings on Wednesday at the Black Hat conference. They shared several examples of how AI assistants can be leveraged — in some cases without any user interaction — to do the attacker’s bidding.

    Enterprise tools are increasingly integrated with generative AI to boost productivity, but this also opens cybersecurity holes that could be highly valuable to threat actors.

    For instance, security experts demonstrated in the past how the integration between Google’s Gemini gen-AI and Google Workspace productivity tools can be abused through prompt injection attacks for phishing.

    Researchers at Zenity showed last year how they could hijack Microsoft Copilot for M365 by planting specially crafted instructions in emails, Teams messages or calendar invites that the attacker assumed would get processed by the chatbot.

    This year, Zenity’s experts disclosed similar attack methods targeting ChatGPT, Copilot, Cursor, Gemini, and Salesforce Einstein.

    In the case of ChatGPT, the researchers targeted its integration with Google Drive, which enables users to query and analyze files stored on Drive. The attack involved sharing a specially crafted file — one containing hidden instructions for ChatGPT — with the targeted user (this requires only knowing the victim’s email address).

    Reply
  36. Tomi Engdahl says:

    Vibe Coding: When Everyone’s a Developer, Who Secures the Code?

    As AI makes software development accessible to all, security teams face a new challenge: protecting applications built by non-developers at unprecedented speed and scale.

    https://www.securityweek.com/vibe-coding-when-everyones-a-developer-who-secures-the-code/

    Reply
  37. Tomi Engdahl says:

    https://www.securityweek.com/black-hat-usa-2025-summary-of-vendor-announcements-part-2/

    1Password study

    1Password has announced new findings from a survey of North American security leaders on AI usage and emerging threats. Nearly two-thirds (63%) of security leaders feel the biggest internal security threat is their employees unknowingly giving AI agents access to sensitive data. Additionally, 50% say their organizations have experienced a confirmed or suspected cyber incident caused by AI or AI agents in the last six months.

    New research uncovers four security challenges caused by unmanaged AI access
    https://blog.1password.com/new-research-uncovers-four-security-challenges-caused-by-unmanaged-ai/

    Challenge 1: Limited visibility into AI tool usage

    Only 21% of security leaders say they have full visibility into AI tools used in their organization.

    Challenge 2: AI and security policy enforcement

    54% of security leaders say their AI governance enforcement is weak.

    32% believe up to half of employees continue to use unauthorized AI applications.

    Challenge 3: Unintentional exposure via AI access

    63% of security leaders believe the biggest internal security threat is that their employees have unknowingly given AI access to sensitive data.

    Challenge 4: Unmanaged AI

    More than half of security leaders (56%) estimate that between 26% and 50% of their AI tools and agents are unmanaged.

    Securing AI: The path forward

    This research makes one thing clear: security leaders are aware of the risks posed by AI, and they are under-equipped to address them. As AI adoption accelerates, the absence of visibility, governance, and control over AI tools and agents leaves organizations exposed. The good news? There’s a path forward. Securing AI doesn’t mean slowing it down—it means enabling it with confidence. At 1Password, we believe the future of work depends on extending trust-based security to every identity, human or machine. It’s time to stop playing catch-up and start building security strategies that keep pace with AI. Learn more about how 1Password can help you secure the use of AI and AI agents, enabling employees to get the productivity benefits with minimal risk.

    Reply
  38. Tomi Engdahl says:

    Suomalaisasiantuntija varoittaa uudesta uhasta: “Olemme kaikkein haavoittuvaisimpia”
    11 hybridioperaatiota alle puolessa vuodessa on asiantuntijan mukaan lähellä viime vuoden tasoa.
    https://www.iltalehti.fi/ulkomaat/a/a7445586-4428-44fd-a01b-785d20d2fe75

    Reply
  39. Tomi Engdahl says:

    Mies oli jäädä pornotta
    Päästä varpaisiin tatuoitu brittimies koki yllätyksen uuden lain astuttua voimaan heinäkuussa.
    https://www.iltalehti.fi/digiuutiset/a/ba742daf-788b-4a64-9c2b-c0793c90c728

    Iso-Britannia on heinäkuun lopusta alkaen vaatinut aikuisviihdesivustoja varmistamaan, että sivustoilla vierailevat britit ovat täysi-ikäisiä. Monet sivustot ovatkin ryhtyneet vaatimaan käyttäjiltä henkilöllisyyden varmentamista esimerkiksi kasvontunnistuksella.

    Lailla on kuitenkin ollut ainakin yksi sivullinen uhri. Kehonsa lähes kauttaaltaan tatuoinut King of Ink Land King Body Art the Extreme Ink-Ite – kyllä, se on nykyään miehen virallinen nimi – ei nimittäin läpäise kasvontunnistukseen perustuvaa ikätarkistusta. Automatisoitu tunnistustyökalu luulee, että tatuoitu mies on peittänyt kasvonsa kuvassa ja pyytää ottamaan maskin pois.

    – Olisiko asia samoin myös sellaisen henkilön kohdalla, jolla on epämuodostumia? Tähän olisi pitänyt varautua alusta alkaen, mies tuskailee Metron haastattelussa.

    Omatkin varpaansa aikuisviihdeteollisuuteen aikoinaan kastanut mies sanoo nyt kiertäneensä ongelman vpn-palvelua käyttämällä. Sen avulla hän voi naamioida verkkoliikenteensä tulemaan muualta kuin saarivaltiosta, jolloin aikuisviihdesivustot eivät pakota häntä vahvistamaan ikäänsä.

    Britain’s most tattooed man can’t watch porn under new rules because it doesn’t recognise his face
    https://metro.co.uk/2025/08/01/britains-tattooed-man-cant-watch-porn-new-rules-doesnt-recognise-face-23807955/

    Reply
  40. Tomi Engdahl says:

    Organizations Warned of Vulnerability in Microsoft Exchange Hybrid Deployment

    CISA and Microsoft have issued advisories for CVE-2025-53786, a high-severity flaw allowing privilege escalation in cloud environments.

    https://www.securityweek.com/organizations-warned-of-vulnerability-in-microsoft-exchange-hybrid-deployment/

    Reply
  41. Tomi Engdahl says:

    New HTTP Request Smuggling Attacks Impacted CDNs, Major Orgs, Millions of Websites

    A desync attack method leveraging HTTP/1.1 vulnerabilities impacted many websites and earned researchers more than $200,000 in bug bounties.

    https://www.securityweek.com/new-http-request-smuggling-attacks-impacted-cdns-major-orgs-millions-of-websites/

    Reply
  42. Tomi Engdahl says:

    New Undectable Plague Malware Attacking Linux Servers to Gain Persistent SSH Access
    https://cybersecuritynews.com/plague-malware-attacking-linux-servers/

    Reply
  43. Tomi Engdahl says:

    Latasitko sinäkin Kivran tai Omapostin? Ärsyyntyneet käyttäjät kertovat Ylelle yllättävistä laskuista
    Yle pyysi kokemuksia digipostipalveluiden käyttämisestä, ja sai niitä satoja. Kivran ja Postin mukaan digitaaliseen viestintään siirtyminen on asiakkaiden itsensä valittavissa.
    https://yle.fi/a/74-20172846

    Reply
  44. Tomi Engdahl says:

    Nvidia tyrmää ajatukset takaovista ja tappokytkimistä
    https://etn.fi/index.php/13-news/17750-nvidia-tyrmaeae-ajatukset-takaovista-ja-tappokytkimistae

    Nvidian tietoturvajohtaja David Reber tyrmää ehdotukset, joiden mukaan piirisarjoihin tulisi sisällyttää takaovia tai etäkäyttöisiä tappokytkimiä viranomaisten käyttöön. Yhtiön mukaan tällaiset ratkaisut olisivat vakava uhka koko digitaalisen infrastruktuurin turvallisuudelle.

    Nvidian verkkosivuilla julkaistussa blogikirjoituksessa Reber kirjoittaa, että “NVIDIA:n GPU:t eivät sisällä tappokytkimiä tai takaovia – eivätkä koskaan tule sisältämäänkään.” Tämä on vastaus joidenkin amerikkalaispoliitikkojen esittämiin toiveisiin, joiden mukaan siruihin pitäisi asentaa takaovia, jotta viranomaisilla olisi niihin jonkinlainen pääsy.

    Hän korostaa, että kovakoodatut hallintamekanismit ovat aina huono idea, ja muistuttaa, että siruihin rakennettuja takaovia olisi mahdoton suojata täysin hakkereilta tai vihamielisiltä toimijoilta. – Se olisi kuin lahja hyökkääjille ja uhka koko digitaalisen maailman luotettavuudelle.

    Reply
  45. Tomi Engdahl says:

    Barracuda Networks has released its Ransomware Insights Report 2025, which shows that 57% of the surveyed organizations were affected by ransomware, including 67% of those in healthcare and 65% in local government. The survey found that 32% of ransomware victims paid the attackers to recover or restore data, and 41% of those who paid a ransom failed to recover all their data.

    https://assets.barracuda.com/assets/docs/dms/2025-ransomware-insights-report.pdf

    Reply

Leave a Comment

Your email address will not be published. Required fields are marked *

*

*