The Politics of Security

Well-known security guru Bruce Schneier has an interesting blog posting titled The Politics of Security in a Democracy. It argues that terrorism causes fear, and that we overreact to that fear. Our brains aren’t very good at probability and risk analysis: we think rare risks are more common than they are, and we fear them more than probability indicates we should. Our leaders are just as prone to this overreaction as we are. But aside from basic psychology, there are other reasons why it’s smart politics to exaggerate terrorist threats, and security threats in general. Neatly summarized. Great essay.


  1. Tomi Engdahl says:

    culture Hacking July 2013
    Silent War

    On the hidden battlefields of history’s first known cyber-war, the casualties are piling up. In the U.S., many banks have been hit, and the telecommunications industry seriously damaged, likely in retaliation for several major attacks on Iran. Washington and Tehran are ramping up their cyber-arsenals, built on a black-market digital arms bazaar, enmeshing such high-tech giants as Microsoft, Google, and Apple. With the help of highly placed government and private-sector sources, Michael Joseph Gross describes the outbreak of the conflict, its escalation, and its startling paradox: that America’s bid to stop nuclear proliferation may have unleashed a greater threat.

  2. Tomi says:

    Secret to Prism program: Even bigger data seizure

    In the months and early years after 9/11, FBI agents began showing up at Microsoft Corp. more frequently than before, armed with court orders demanding information on customers.

    The agents wanted email archives, account information, practically everything, and quickly. Engineers compiled the data, sometimes by hand, and delivered it to the government.

    Often there was no easy way to tell if the information belonged to foreigners or Americans. So much data was changing hands that one former Microsoft employee recalls that the engineers were anxious about whether the company should cooperate.

    Inside Microsoft, some called it “Hoovering” — not after the vacuum cleaner, but after J. Edgar Hoover, the first FBI director, who gathered dirt on countless Americans.

    The revelation of Prism this month by the Washington Post and Guardian newspapers has touched off the latest round in a decade-long debate over what limits to impose on government eavesdropping, which the Obama administration says is essential to keep the nation safe.

    Whether by clever choice or coincidence, Prism appears to do what its name suggests.

    The fact that it is productive is not surprising; documents show it is one of the major sources for what ends up in the president’s daily briefing. Prism makes sense of the cacophony of the Internet’s raw feed. It provides the government with names, addresses, conversation histories and entire archives of email inboxes.

    Deep in the oceans, hundreds of cables carry much of the world’s phone and Internet traffic. Since at least the early 1970s, the NSA has been tapping foreign cables. It doesn’t need permission. That’s its job.

    But Internet data doesn’t care about borders. Send an email from Pakistan to Afghanistan and it might pass through a mail server in the United States, the same computer that handles messages to and from Americans. The NSA is prohibited from spying on Americans or anyone inside the United States. That’s the FBI’s job and it requires a warrant.

    Tapping into those cables allows the NSA access to monitor emails, telephone calls, video chats, websites, bank transactions and more.

    “You have to assume everything is being collected,” said Bruce Schneier

    The New York Times disclosed the existence of this effort in 2005.

    Unlike the recent debate over Prism, however, there were no visual aids, no easy-to-follow charts explaining

    The Bush administration shut down its warrantless wiretapping program in 2007 but endorsed a new law, the Protect America Act, which allowed the wiretapping to continue with changes: The NSA generally would have to explain its techniques and targets to a secret court in Washington, but individual warrants would not be required.

    The Protect America Act gave birth to a top-secret NSA program, officially called US-98XN.

    It was known as Prism. Though many details are still unknown, it worked like this:

    Every year, the attorney general and the director of national intelligence spell out in a classified document how the government plans to gather intelligence on foreigners overseas.

    By law, the certification can be broad. The government isn’t required to identify specific targets or places.

    A federal judge, in a secret order, approves the plan.

    With that, the government can issue “directives” to Internet companies to turn over information.

    With Prism, the government gets a user’s entire email inbox. Every email, including contacts with American citizens, becomes government property.

    “You can’t have 100 percent security and also then have 100 percent privacy and zero inconvenience,” the president said.

    Obama’s administration, echoing his predecessor’s, credited the surveillance with disrupting several terrorist attacks.

  3. Tomi says:

    Source: Obama Considering Releasing NSA Court Order

    NPR has learned that the Obama administration, under pressure to lift a cloak of secrecy, is considering whether to declassify a court order that gives the National Security Agency the power to gather phone call record information on millions of Americans.

    The document, known as a “primary order,” complements a shorter Foreign Intelligence Surveillance Court document leaked to The Guardian newspaper. That document revealed the U.S. government had been asking Verizon Business Network Services Inc. to turn over, on a daily basis, phone call records for its subscribers, for 90 days.

  4. Tomi Engdahl says:

    NSA’s Role In Terror Cases Concealed From Defense Lawyers

    “‘Confidentiality is critical to national security.’ So wrote the Justice Department in concealing the NSA’s role in two wiretap cases.”

    New York attorney Joshua Dratel: ‘National security is about keeping illegal conduct concealed from the American public until you’re forced to justify it because someone ratted you out.’

  5. Tomi Engdahl says:

    Justice Department Fought to Conceal NSA’s Role in Terror Case From Defense Lawyers

    When a senior FBI official told Congress today about the role the NSA’s secret surveillance apparatus played in a San Diego terror financing case, nobody was more surprised to hear it than the defense attorney who fought a long and futile court battle to get exactly the same information while defending the case in court.

    “His lawyers — who all have security clearances — we can’t learn about it until it’s to the government’s tactical advantage politically to disclose it,” says New York attorney Joshua Dratel. “National security is about keeping illegal conduct concealed from the American public until you’re forced to justify it because someone ratted you out.”

    “Indeed, to the Government’s knowledge, no court has ever suppressed FISA- obtained or -derived information, or held an adversarial hearing on motions to disclose or to suppress,”

  6. Tomi Engdahl says:

    Critics: collection of data does not prevent terrorism – “too Sci-Fi”

    The recently revealed U.S. and British spying on calls and network traffic is not an effective way to prevent terrorism, critics argue.

    Mass data collection makes it easy to learn a lot about many people, but recognizing who poses a danger is difficult, says former U.S. FBI Special Agent Mike German, now a representative of the civil-rights organization ACLU.

    Massive collection of data on innocent people will not tell you how dangerous people behave, German believes.

    According to him, the problem in chasing terrorists has not been too little data, but primarily a failure to extract the necessary information from the data through analysis.

    The masses of data collected by snooping only make the haystack bigger in the needle-in-a-haystack problem, says IT author Wendy Grossman.

    Timothy Edgar, a former White House civil-liberties official, however, defends part of the data collection.

    Even Edgar has not entirely approved of the NSA’s spying activities. According to him, the matter should have been made public.

    The IT industry’s buzzword “big data” also seems to have charmed the authorities.

    According to technology and information security consultant Ashkan Soltani, analytics is not guaranteed to work as well as it is hyped.

    “The trouble is that people claim to be able to predict the future, if only they have enough data.”


  8. Tomi says:

    The Shortest Internet Censorship Debate Ever

    “When a politician starts talking about defending the innocence of children, there’s bound to be a great policy initiative ahead. That’s how British PM David Cameron introduced the British porn block.”

    “Polish Prime Minister and the Minister of Administration and Digitization denounced any such ideas: ‘We shall not block access to legal content regardless of whether or not it appeases us aesthetically or ethically.’”

  9. Tomi says:

    More Encryption Is Not the Solution

    Cryptography as privacy works only if both ends work at it in good faith

    The recent exposure of the dragnet-style surveillance of Internet traffic has provoked a number of responses that are variations of the general formula, “More encryption is the solution.” This is not the case. In fact, more encryption will probably only make the privacy crisis worse than it already is.

    Inconvenient Fact #1 about Privacy
    Politics Trumps Cryptography

    Nation-states have police forces with guns. Cryptographers and the IETF (Internet Engineering Task Force) do not.

    Inconvenient Fact #2 about Privacy
    Not Everybody Has a Right to Privacy

    The privacy of some strata of the population has been restricted. In many nation-states, for example, prisoners are allowed private communication only with their designated lawyers; all other communications must be monitored by a prison guard.

    Many employees sign away most of their rights to privacy while “on the clock,”

    Inconvenient Fact #3 about Privacy
    Encryption Will Be Broken, If Need Be

    If a nation-state decides that somebody should not have privacy, then it will use whatever means available to prevent that privacy.

    With expenditures of this scale, there are a whole host of things one could buy to weaken encryption. I would contact popular cloud and “whatever-as-a-service” providers and make them an offer they couldn’t refuse: on all HTTPS connections out of the country, the symmetric key cannot be random; it must come from a dictionary of 100 million random-looking keys that I provide. The key from the other side? Slip that in there somewhere, and I can find it (encrypted in a Set-Cookie header?).

    In the long run, nobody is going to notice that the symmetric keys are not random

    Major operating-system vendors could be told to collect the keys to encrypted partitions as part of their “automatic update communication,” and nobody would notice
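
The arithmetic behind this dictionary attack is worth making explicit: 100 million keys is only about 2^27 possibilities, against 2^128 for a truly random 128-bit key, so an eavesdropper who holds the dictionary can simply try every entry. A scaled-down sketch of the idea (the seed, tag construction, and all names here are invented for illustration, not from the essay):

```python
import hashlib
import secrets

DICT_SIZE = 100_000  # scaled down from the essay's 100 million

# The colluding provider derives its "random-looking" key dictionary
# deterministically from a seed shared with the eavesdropper.
key_dictionary = [
    hashlib.sha256(b"shared-seed" + i.to_bytes(4, "big")).digest()
    for i in range(DICT_SIZE)
]

# Each "random" session key is really just a dictionary entry.
session_key = key_dictionary[secrets.randbelow(DICT_SIZE)]

# The eavesdropper observes some keyed value on the wire
# (modeled here as a MAC-like tag over a handshake transcript)...
tag = hashlib.sha256(b"observed-handshake" + session_key).digest()

# ...and recovers the session key with at most DICT_SIZE trial hashes.
recovered = next(
    k for k in key_dictionary
    if hashlib.sha256(b"observed-handshake" + k).digest() == tag
)
assert recovered == session_key
```

A hundred thousand trial hashes take milliseconds; even the essay's full 100 million is trivial next to the 2^128 work a genuinely random key would require. And, as the essay notes, each individual key still looks random to anyone who does not hold the dictionary, so in the long run nobody notices.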

    Politics, Not Encryption, Is the Answer

    As long as politics trumps encryption, fighting the battle for privacy with encryption is a losing proposition.

  10. Tomi Engdahl says:

    Our Newfound Fear of Risk

    We’re afraid of risk. It’s a normal part of life, but we’re increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren’t free. They cost money, of course, but they cost other things as well. They often don’t provide the security they advertise, and — paradoxically — they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.

    the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.

    There’s a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia.

    Our notions of risk are not absolute; they’re based more on how far they are from whatever we think of as “normal.” So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.

    Some of this fear results from imperfect risk perception. We’re bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are — and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

    Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn’t approve the SWAT team for the one subpoena delivery that resulted in an officer being shot.

    We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There’s a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they’re arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved — medicine is the big one, but other sciences as well — we become better at mitigating and recovering from those sorts of risks.

    Security measures combat a very different sort of risk: a risk stemming from another person.

    When you implement measures to mitigate the effects of the random risks of the world, you’re safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you’d expect — and you also get more side effects, because we all adapt.

    We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society.

  11. Tomi Engdahl says:

    Survey: Almost 90 percent of Internet users have taken steps to avoid surveillance

    A majority of U.S. Internet users polled in a recent survey report taking steps to remove or mask their digital footprints online, according to a report from the Pew Research Center’s Internet Project and Carnegie Mellon University.

    While 86 percent of the Internet users polled said they made some attempt to hide what they do online, more than half of the Web users also said they have taken steps to avoid observation by organizations, specific people or the government, according to the survey.

    People use a variety of measures to decrease their online visibility, the study showed. The most popular one is clearing cookie and browser history, which 64 percent of Internet users polled said they did. Forty-one percent said they deleted or edited something they had posted in the past and 41 percent said they disabled or turned off their browsers’ use of cookies, Pew said.

    Beyond general measures taken to go online more or less anonymously, the majority of Internet users polled (55 percent) have tried to avoid observation by specific people or groups. “Hackers, criminals and advertisers are at the top of the list of groups people wish to avoid,” Pew said.

    But a minority of Web users said they tried to hide their online activities from certain friends, people from their past, family members or partners, as well as their employers, coworkers, supervisors, companies, people that might want payment for downloaded files and, to a lesser extent, the government (5 percent) and law enforcement (4 percent).

    Discovering that many Internet users have tried to conceal their identity or their communications from others was the biggest surprise to the research team, they said in a news release. Not only hackers, but almost everyone has taken some action to avoid surveillance and despite their knowing that anonymity is virtually impossible, most Internet users think they should be able to avoid surveillance online, they said.

    Most U.S. citizens would like to be anonymous and untracked online, at least every once in a while, but many think it is not possible to be completely anonymous online, Pew said.

    A majority of Web users polled, 66 percent, said they think current privacy laws are not good enough to provide reasonable protections for people’s privacy on their online activities.

    “Interestingly, there are not noteworthy differences in answers to this question associated with political or partisan points of view.”

  12. Tomi Engdahl says:

    Bruce Schneier: NSA Spying Is Making Us Less Safe

    The security researcher Bruce Schneier, who is now helping the Guardian newspaper review Snowden documents, suggests that more revelations are on the way.

    The NSA mission is national security. How is the snooping really affecting the average person?

    The NSA’s actions are making us all less safe. They’re not just spying on the bad guys, they’re deliberately weakening Internet security for everyone—including the good guys.

    There have been many allusions to NSA efforts to put back doors in consumer products and software. What’s the reality?

    The reality is that we don’t know how pervasive this is; we just know that it happens. I have heard several stories from people and am working to get them published. The way it seems to go, it’s never an explicit request from the NSA. It’s more of a joking thing: “So, are you going to give us a back door?” If you act amenable, then the conversation progresses. If you don’t, it’s completely deniable. It’s like going out on a date. Sex might never be explicitly mentioned, but you know it’s on the table.

    Great. So you’ve recently suggested five tips for how people can make it much harder, if not impossible, to get snooped on. These include using various encryption technologies and location-obscuring methods. Is that the solution?

    My five tips suck. They are not things the average person can use. One of them is to use PGP [a data-encryption program]

    Basically, the average user is screwed. You can’t say “Don’t use Google”—that’s a useless piece of advice. Or “Don’t use Facebook,”

    The Internet has become essential to our lives, and it has been subverted into a gigantic surveillance platform. The solutions have to be political. The best advice for the average person is to agitate for political change.

  13. Tomi Engdahl says:

    Former NSA Honcho Calls Corporate IT Security “Appalling”

    “Former NSA technology boss Prescott Winter has a word for the kind of security he sees even at large, technologically sophisticated companies: Appalling. Companies large enough to afford good security remain vulnerable to hackers, malware and criminals because they tend to throw technological solutions at potential areas of risk rather than focusing on specific and immediate threats, Winter said during his keynote speech Oct. 1 at the Splunk Worldwide User’s Conference in Las Vegas.”

    “In my experience, it’s much more rare to find a company that knows about security than to find one that doesn’t.”
    “Most of them don’t. Sometimes the companies that do know just consider it a risk of doing business, easier to pay when things go wrong than to try to secure it. An example of this is credit card companies. Bruce Schneier points out that he would never trust a credit card online because of the security holes, except they promise to reimburse him when things go wrong.”
    “You got that right. Security is hard. Security is expensive. Security does not improve profits (as long as they continue to be lucky). The company that spends money on security while their competitors are not, will lose out. Therefore, who needs it? There’s no sense of living dangerously without some really spectacular examples…”

  14. Tomi Engdahl says:

    Former NSA Honcho Calls Enterprise Security ‘Appalling’

    IT security people focus on infrastructure, not prevention; decent security means identifying and countering actual threats in real time.

    Former NSA technology boss Prescott Winter has a word for the kind of security he sees even at large, technologically sophisticated companies: Appalling.

    Companies large enough to afford good security remain vulnerable to hackers, malware and criminals because they tend to throw technological solutions at potential areas of risk rather than focusing on specific and immediate threats, Winter said during his keynote speech Oct. 1 at the Splunk Worldwide User’s Conference in Las Vegas.

    “As we look at the situation in the security arena… we see an awful lot of big companies – Fortune 100-level companies – with, to be perfectly candid, appalling security. They have fundamentally no idea what they’re doing,” Winter said, according to an Oct. 2 story in U.K. tech-news site Computing.

    Digital security threats to large companies are becoming far more common and far more serious, Winter said.

    The most effective approach is to match IT security to a company’s lines of business and most valuable assets, not simply reinforce security built to match a network or system topology. Making good rules for security isn’t enough, either: They have to be enforced. “You’ve got to audit and make sure that people are following the rules. Minor mistakes lead to vulnerabilities,” he said.

    Even figuring out what to protect requires the same kind of big-data analysis many companies use to identify new markets or develop new products, but that few actually employ to identify their own most valuable assets – both physical and intellectual property – and define how those assets contribute to key strategic business goals, Winter said.

    But it’s not enough to do that analysis and protect those potential targets once in a while; it has to be done regularly, almost continually, using information that is close to real time rather than archived. “Big data is the thing that makes the risk management approach work. It’s being able to see enough of your enterprise with enough information that you can actually understand what’s going on,” he said.
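
The near-real-time, baseline-driven monitoring Winter describes can be sketched in miniature. Everything below is a hypothetical illustration (the event format, account names, window size, and threshold are mine, not from the talk): keep a per-account baseline of access counts over past windows and flag accounts that suddenly deviate from their own history.

```python
from collections import defaultdict

def detect_anomalies(events, window=100, factor=5.0):
    """events: iterable of (account, asset) tuples in arrival order.
    Yields (account, count, baseline) when an account's access count in
    the current window exceeds `factor` times its historical average."""
    history = defaultdict(list)   # account -> counts in past windows
    current = defaultdict(int)    # account -> count in the current window
    for i, (account, _asset) in enumerate(events, 1):
        current[account] += 1
        if i % window == 0:       # window boundary: compare, then roll over
            for acct, count in current.items():
                past = history[acct]
                if past:
                    baseline = sum(past) / len(past)
                    if count > factor * baseline:
                        yield acct, count, baseline
                past.append(count)
            current = defaultdict(int)

# Usage: a normally quiet account that suddenly bulk-reads assets is flagged.
normal = [("alice", "doc")] * 2 + [("bob", "doc")] * 98   # one quiet window
burst  = [("alice", "doc")] * 60 + [("bob", "doc")] * 40  # alice goes rogue
alerts = list(detect_anomalies(normal * 3 + burst, window=100))
# alerts -> [("alice", 60, 2.0)]: 60 accesses against a baseline of 2/window
```

Real deployments stream events from log infrastructure (the talk's context was Splunk) and use far richer baselines, but the design point is the same one Winter makes: the comparison must run continually against near-real-time data, not against archives.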

  15. Tomi Engdahl says:

    A Court Order is an Insider Attack

    Commentators on the Lavabit case, including the judge himself, have criticized Lavabit for designing its system in a way that resisted court-ordered access to user data. They ask: If court orders are legitimate, why should we allow engineers to design services that protect users against court-ordered access?

    The answer is simple but subtle: There are good reasons to protect against insider attacks, and a court order is an insider attack.

    To see why, consider two companies, which we’ll call Lavabit and Guavabit. At Lavabit, an employee, on receiving a court order, copies user data and gives it to an outside party—in this case, the government. Meanwhile, over at Guavabit, an employee, on receiving a bribe or extortion threat from a drug cartel, copies user data and gives it to an outside party

    From a purely technological standpoint, these two scenarios are exactly the same: an employee copies user data and gives it to an outside party. Only two things are different: the employee’s motivation, and the destination of the data after it leaves the company. Neither of these differences is visible to the company’s technology—it can’t read the employee’s mind to learn the motivation, and it can’t tell where the data will go once it has been extracted from the company’s system. Technical measures that prevent one access scenario will unavoidably prevent the other one.

    Insider attacks are a big problem. You might have read about a recent insider attack against the NSA by Edward Snowden. Similar but less spectacular attacks happen all the time

    In the end, what led to Lavabit’s shutdown was not that the company’s technology was too resistant to insider attacks, but that it wasn’t resistant. The government got an order that would have required Lavabit to execute the ultimate insider attack, essentially giving the government a master key to unlock the data of any Lavabit user at any time. Rather than do this, Lavabit chose to shut down.

    Had Lavabit had in place measures to prevent disclosure of its master key, it would have been unable to comply with the ultimate court order—and it would have also been safe against a rogue employee turning over its master key to bad actors.
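
One concrete shape such a measure could take (a sketch of my own, not Lavabit's actual design) is splitting the master key across several custodians with n-of-n XOR secret sharing, so that no single insider, whether coerced by a court order or bribed by a cartel, can produce the key alone:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(master: bytes, n_shares: int) -> list:
    """n-of-n XOR secret sharing: every share is required to rebuild the key."""
    shares = [secrets.token_bytes(len(master)) for _ in range(n_shares - 1)]
    last = master
    for s in shares:              # last share = master XOR all random shares
        last = xor_bytes(last, s)
    return shares + [last]

def combine(shares: list) -> bytes:
    out = bytes(len(shares[0]))   # all-zero start value
    for s in shares:
        out = xor_bytes(out, s)
    return out

master = secrets.token_bytes(32)  # e.g. a 256-bit master key
shares = split_key(master, 3)     # held by three separate custodians
assert combine(shares) == master

# Any strict subset of shares is statistically indistinguishable from random
# noise, so an order (or bribe) served on one custodian yields nothing usable.
```

Threshold schemes such as Shamir's secret sharing generalize this to k-of-n, so that losing one custodian does not destroy the key; the trade-off the essay identifies remains: the same measure that blocks the rogue employee also blocks the court order.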

  16. Tomi Engdahl says:

    November 04, 2013, 06:00 am
    NSA chief likely to lose cyber war powers

    Senior military officials are leaning toward removing the National Security Agency director’s authority over U.S. Cyber Command, according to a former high-ranking administration official familiar with internal discussions.

    Keith Alexander, a four-star general who leads both the NSA and Cyber Command, plans to step down in the spring.

    No formal decision has been made yet, but the Pentagon has already drawn up a list of possible civilian candidates for the next NSA director, the former official told The Hill. A separate military officer would head up Cyber Command, a team of military hackers that trains for offensive cyberattacks and protects U.S. computer systems.

    “Some things are better to have two centers of power,” said Jason Healey of the Atlantic Council. “If you have just one, it’s more efficient, but you end up making dumb decisions.”

    He argued the government would never, for example, put one general in charge of gathering intelligence in China, commanding covert forces against China and setting policy toward China.

    “We’ve now created a center of power that we would never allow in any other area,” Healey said. “And it certainly shouldn’t be allowed in something so critical to our future and national security as the Internet and cyberspace.”

  17. Tomi Engdahl says:

    Kaspersky: A national strategy does not guarantee security

    International cybercrime must be handled through agreements between states.

    This is the only way for countries to protect their most sensitive data from hackers, Russian Kaspersky Lab chairman and CEO Eugene Kaspersky said on Thursday in Canberra.

    Kaspersky considers cyber attacks extremely dangerous.

    “Cyber attacks erode international confidence. Domestic players will be tempted to resort to two systems, one public and one reserved for businesses and public sector operators,” Kaspersky said.

    In his opinion, a two-network system is a dangerous option.

    “Businesses and public sector organizations would initially be satisfied, but will the budgets and human resources be sufficient in the end to maintain these double systems?” Kaspersky asks.

    “It is an unhappy fact that the internet has no borders. Cyber attacks therefore spread easily from country to country. It is known that there are hot spots of cyber attacks in the world. Nevertheless, everyone uses similar systems, which in the end facilitates the underhand work of international cybercriminals.”

    “Because cyberspace works the same way in all countries, individual states cannot hide behind national security strategies. Data security requires co-operation between states and governments,” Kaspersky says.

    According to him, people in national security organizations today are at a loss, even scared.

    “They just do not know how to cope alone. Moreover, organizations that use cyber weapons for attack do not realize that such attacks work like a boomerang: the attacks return to the thrower’s hand.”


  18. Tomi Engdahl says:

    Cyber espionage ‘extremely dangerous’ for international trust: Kaspersky

    Summary: Individual national strategies for ‘cyber resilience’ have no place on the borderless internet.

    “If nations don’t trust each other in cyberspace, the next step is to separate it [into] two networks. One public network, and one enterprise and government. It’s an obvious step, and I’m not the first man to talk about that,” he said.

    “I’m afraid it’s a very bad option … governments and enterprises will be happier, because they have a secure, unhackable network. Good news? No. First of all, there will be much less investment in the public segment. Governments and enterprises leaving the public space means that the budget’s running away. Second, do you have enough engineers to build an Australian national network?”

    Kaspersky called for more education for network engineers and security specialists several times during his speech.

    He also reinforced his oft-repeated message that attacks against critical infrastructure have the potential to cause collateral damage, as systems other than the intended targets can become infected, and that once a cyber weapon has been deployed, it can easily be reverse-engineered and used by others.

    “Unfortunately, the internet doesn’t have borders, and the attacks on very different systems somewhere far, far away from you in the very ‘hot’ areas of this world — maybe in the Middle East, or somewhere in Pakistan or India, or in Latin America, it doesn’t matter — they have the very same computer systems, they have the very same operating systems, the very same hardware,” he said.

    “Unfortunately, it is very possible for other nations, which are not in the conflict, will be victims of the cyber attacks on the critical infrastructure.”

    “Departments which are responsible for national security, for national defence, they’re scared to death. They don’t know what to do,” Kaspersky said.

    “Departments which are responsible for offence, they see it as opportunity. They don’t understand that in cyberspace, everything you do, it’s a boomerang. It will get back to you.”

  19. Tomi says:

    Paradise Lost: Paranoia Has Undermined US Democracy

    While far from a dictatorship, the United States has employed a number of paranoid tactics that delegitimize its democracy. This phenomenon is on display in the fictional TV series “Homeland,” which depicts hysterical CIA agents in a hysterical country.

    Political paranoia requires an enemy, or at least the concept of an enemy.

    After the Soviet Union collapsed in 1991, the United States experienced a relatively relaxed decade, until hijacked jetliners crashed into the World Trade Center and destroyed parts of the Pentagon on Sept. 11, 2001.

    While far from all democracies are paranoid, virtually all dictatorships are. For dictators, paranoia helps shape and preserve their autocratic systems. Autocrats need an enemy — always an internal enemy and sometimes an external one, too — to legitimize violence and coercion, and to generate allegiance.

    The United States cannot be compared with Nazi Germany or with China. Unfortunately, however, a paranoid democracy tends to use tools that are beneath a democracy, the tools of a dictatorship, and they include as much surveillance as possible.

    Information is the most valuable thing in a paranoid world. Those who feel threatened want to know as much as possible about potential threats, so as to be able to control their fears and prepare preventive attacks.

    Now the intelligence services have developed a giant information procurement machine, which is also useful in industrial espionage. To ensure that nothing escapes their notice, they violate the privacy of millions and millions of people and alienate allied nations and their politicians.

    Another form of paranoid information procurement is torture, used by American intelligence agencies to gain information about terrorists.

    While paranoia legitimizes a dictatorship, it can achieve the opposite effect in a democracy. The United States is no longer a model of liberal democracy. That much has been made clear in light of mass surveillance, torture, the extralegal detention camp at Guantanamo and an isolationist ideology.

  20. Tomi says:

    The Case for a Compulsory Bug Bounty

    Security experts have long opined that one way to make software more secure is to hold software makers liable for vulnerabilities in their products. This idea is often dismissed as unrealistic and one that would stifle innovation in an industry that has been a major driver of commercial growth and productivity over the years. But a new study released this week presents perhaps the clearest economic case yet for compelling companies to pay for information about security vulnerabilities in their products.

    Earlier this month, I published a piece called How Many Zero-Days Hit You Today, which examined a study by vulnerability researcher Stefan Frei about the bustling market for “zero-day” flaws — security holes in software that not even the makers of those products know about. These vulnerabilities — particularly zero-days found in widely-used software like Flash and Java — are extremely valuable because attackers can use them to slip past security defenses unnoticed.

    Frei’s analysis conservatively estimated that private companies which purchase software vulnerabilities for use by nation states and other practitioners of cyber espionage provide access to at least 85 zero-day exploits on any given day of the year. That estimate doesn’t even consider the number of zero-day bugs that may be sold or traded each day in the cybercrime underground.

    At the end of that post, I asked readers whether it was possible and/or desirable to create a truly global, independent bug bounty program that would help level the playing field in favor of the defenders and independent security researchers. Frei’s latest paper outlines one possible answer.

    Frei proposes creating a multi-tiered, “international vulnerability purchase program” (IVPP), in which the major software vendors would be induced to purchase all of the available and known vulnerabilities at prices well above what even the black market is willing to pay for them. But more on that in a bit.

    “Because the IVPP would be handling highly sensitive information, checks and balances are critical,” the two wrote. “They would make it difficult for any party to circumvent the published policy of vulnerability handling. A multi-tiered structure prevents any part of the organization, or legal entity within which it is operating, from monopolizing the process or the information being analyzed. Governments could still share vulnerabilities with their agencies, but they would no longer have exclusive access to this information and for extended periods of time.”

    Frei’s elaborate system is well thought-out, but it glosses over the most important catalyst: The need for government intervention. While indeed an increasing number of software and Internet companies have begun offering bug bounties (Google and Mozilla have for some time, and Microsoft began offering a limited bounty earlier this year), few of them pay anywhere near what private vulnerability brokers can offer, and would be unlikely to up the ante much absent a legal requirement to do so.

    “The amount we’re losing from malicious hacking is a lot less than what we gain from the free and open nature of Internet,” Graham said. “And that includes the ability of companies to quickly evolve their products because they don’t have to second-guess every decision just so they can make things more secure.”

    “Commercial software is a tiny part of the whole vulnerability problem,” Graham said.

    Graham acknowledged that the mere threat of governments imposing some kind of requirement is often enough to induce businesses and entire industries to self-regulate and take affirmative steps to avoid getting tangled in more bureaucratic red tape.

  21. Tomi Engdahl says:

    An online Magna Carta: Berners-Lee calls for bill of rights for web
    Exclusive: web’s inventor warns neutrality under sustained attack from governments and corporations

    The inventor of the world wide web believes an online “Magna Carta” is needed to protect and enshrine the independence of the medium he created and the rights of its users worldwide.

    Sir Tim Berners-Lee told the Guardian the web had come under increasing attack from governments and corporate influence and that new rules were needed to protect the “open, neutral” system.

    Berners-Lee’s Magna Carta plan is to be taken up as part of an initiative called “the web we want”, which calls on people to generate a digital bill of rights in each country – a statement of principles he hopes will be supported by public institutions, government officials and corporations.

    Principles of privacy, free speech and responsible anonymity would be explored in the Magna Carta scheme. “These issues have crept up on us,” Berners-Lee said.

    The web constitution proposal should also examine the impact of copyright laws and the cultural-societal issues around the ethics of technology.

    “But we need our lawyers and our politicians to understand programming, to understand what can be done with a computer. We also need to revisit a lot of legal structure, copyright law – the laws that put people in jail which have been largely set up to protect the movie producers … “

  22. Tomi Engdahl says:

    Study Finds US Is an Oligarchy, Not a Democracy

    “Researchers from Princeton University and Northwestern University have concluded, after extensive analysis of 1,779 policy issues, that the U.S. is in fact an oligarchy and not a democracy.”

  23. Tomi Engdahl says:

    John Kerry Claims US Is On The ‘Right Side Of History’ When It Comes To Online Freedom And Transparency
    from the might-still-making-right,-despite-technological-developments dept

    Now, with the NSA’s programs exposed, along with this administration’s quest to punish whistleblowers and maintain the opacity left behind by the Bush administration, there’s no approaching the high ground. But that didn’t stop John Kerry

    “[L]et me be clear – as in the physical space, cyber security cannot come at the expense of cyber privacy. And we all know this is a difficult challenge.”

    First off, almost every “cyber security” bill has pushed for security at the expense of privacy. CISPA has done this twice. The new CISPA, being presented by the Senate, does the same thing.

    Second, the reforms set up by the administration are hardly “concrete and meaningful.” They’re shallow and limited and do very little to walk back the expansive readings of outdated laws

    This administration has prosecuted more whistleblowers — the people who “hold their government to standards” — than all other administrations combined. And this administration isn’t done yet.

    A very chilling statement, one that suggests the Freedom Online Coalition needs to side with the US government if it wishes to “wind up on the right side of history.”

  24. Tomi Engdahl says:

    Life sentences for serious cyberattacks are proposed in Queen’s speech
    Any cyberattackers who cause ‘loss of life, serious injury or damage to national security’ could face a full life sentence

    The UK government has said it wants to hand out life sentences to anyone found guilty of a cyberattack that has a catastrophic effect, under plans announced in the Queen’s speech.

    Any hackers that manage to carry out “cyberattacks which result in loss of life, serious illness or injury or serious damage to national security, or a significant risk thereof” would face the full life sentence, according to the serious crime bill proposed in Wednesday’s Queen’s speech.

    As well as targeting cyberterrorists, the new offence in the proposed update to the Computer Misuse Act 1990 would also hand harsher sentences to those hackers carrying out industrial espionage, believed to be a growing menace affecting UK business.

    The law would have a maximum sentence of 14 years for attacks that create “a significant risk of severe economic or environmental damage or social disruption”. Currently, the section of the CMA covering such an offence carries a 10-year sentence.

    Jim Killock, executive director of the Open Rights Group, said the bill would be difficult to justify, given current laws already carry punishments for those who carry out significant acts of terrorism, whether via computers or other means.

    “If a supposed cyberterrorist endangers life or property, there are existing laws that can be used to prosecute them,” Killock said.

    The government has also not addressed complaints over the application of current computer crime law, which some in the security industry claim actually makes the internet less safe.

    This is because certain kinds of research could be deemed illegal. Experts known as penetration testers, who look for weaknesses in internet infrastructure, often carry out similar actions to real cybercriminals in their attempts to improve the security of the web, such as scanning for vulnerabilities.

    But such research is punishable under British law, even if it is carried out for altruistic ends, leaving potential weaknesses unresolved, critics of the CMA said.
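    For context on what “scanning for vulnerabilities” means at its most basic level: a port scan is often little more than a timed TCP connection attempt. A minimal sketch in Python (the host and port are placeholders; probing systems you are not authorised to test is exactly the kind of act critics say the CMA risks criminalising even when done in good faith):

    ```python
    import socket

    def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            # create_connection resolves the address and attempts a TCP handshake.
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            # Connection refused, timed out, or host unreachable.
            return False
    ```

    A penetration tester would run checks like this across a range of ports to map what services a target exposes, which is why the same action reads as either security research or attack preparation depending on authorisation.
    
    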

    “It’s good to see government trying to be proactive to put specific law enforcement tools in place before they’re needed, but they should be careful to not accidentally criminalise good faith efforts,”

  25. Tomi Engdahl says:

    Are Digital Retailers Focusing Their Security in the Wrong Place?
    Digital retailers spend the lion’s share of their IT security budget on network security, but most experts say they’d be better off focusing elsewhere.

    High-profile data breaches have plagued retail this year — Target, Neiman Marcus, Michael’s and other U.S. retailers have seen headlines about their woes splashed across both digital and print media.

    “SQL injection is a likely component of retailer attacks,” says Larry Ponemon, founder and chairman of Ponemon Institute. “SQL injection has been around for ages, and some of these vulnerabilities are not because of lacking tools.”

    Sixty-five percent of the organizations represented in the study had experienced a SQL injection attack in the past 12 months that had successfully evaded their perimeter defenses, and 49 percent of respondents said the SQL injection threat facing their company is significant.
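    The injection mechanism itself is easy to demonstrate. A minimal sketch using Python’s stdlib sqlite3 (the users table and card column are hypothetical, purely for illustration): concatenating attacker input into the query string lets the input rewrite the WHERE clause, while a parameterized query treats the same input as plain data.

    ```python
    import sqlite3

    # In-memory demo database with two fake rows.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', '4111-xxxx')")
    conn.execute("INSERT INTO users VALUES ('bob', '5500-xxxx')")

    attacker_input = "' OR '1'='1"

    # Vulnerable: string concatenation lets the input become part of the SQL.
    unsafe_sql = "SELECT card FROM users WHERE name = '" + attacker_input + "'"
    leaked = conn.execute(unsafe_sql).fetchall()  # matches every row

    # Safe: the ? placeholder binds the input as data, never as SQL syntax.
    safe = conn.execute(
        "SELECT card FROM users WHERE name = ?", (attacker_input,)
    ).fetchall()  # matches no row, since no user has that literal name
    ```

    This is why the experts quoted above point at database-layer defenses: the flaw lives in how the application builds queries, not in the network perimeter.
    
    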

    The majority of these experts — 65 percent — believe the best way to defend against SQL injection attacks and avoid mega data breaches like the one suffered by Target is through continuous monitoring of the database network, followed by advanced database activity monitoring (56 percent) and database encryption (49 percent). And yet, when asked how the IT security budget is allocated in their organizations, these experts said the lion’s share (40 percent) is allocated to network security, 23 percent is allocated to Web server security and only 19 percent is allocated to database security.

    Ponemon notes that this misalignment in the allocation of security budget may be a result of old-think in the security profession.

    “Older security professionals have done most of their training around network security and the perimeter,” Ponemon says. “That’s what they know.”

    “We have always been concerned about the perimeter,” Durbin says. “It’s an easier message for the board or the risk management committee to understand. Increasingly, we are seeing the question being asked around cybersecurity: ‘How protected are we?’ The easy answer is that our perimeter is secure.”

    “The pursuit of 100 percent security is just folly,” Durbin says. “It’s a fool’s goal. You have to assume that even though you’re doing your best, you’re going to be breached at some point in time. That is not a palatable message to deliver to the board.”

    And that often leads security professionals to focus on initiatives that appeal to the board rather than on efforts to mitigate the damage when breaches do occur.


  27. Tomi Engdahl says:

    Snowden, Dotcom, throw bombs into NZ election campaign

    Edward Snowden and Kim Dotcom have joined hands and waded into New Zealand politics ahead of the nation’s forthcoming election, by alleging prime minister John Key has told fibs about his government’s involvement with the NSA’s nasties.

    Snowden has released a new missive in which he claims that the many tools with which he worked at the NSA well and truly covered New Zealand.

  28. Tomi Engdahl says:

    Greenwald Advises Market-Based Solution To Mass Surveillance

    In his latest Intercept piece Glenn Greenwald considers the recent defeat of the Senate’s USA Freedom Act. He remarks that governments “don’t walk around trying to figure out how to limit their own power.” Instead of appealing to an allegedly irrelevant Congress Greenwald advocates utilizing the power of consumer demand to address the failings of cyber security.

    Congress Is Irrelevant on Mass Surveillance. Here’s What Matters Instead.

    The boredom of this spectacle was simply due to the fact that this has been seen so many times before—in fact, every time in the post-9/11 era that the U.S. Congress pretends publicly to debate some kind of foreign policy or civil liberties bill. Just enough members stand up to scream “9/11” and “terrorism” over and over until the bill vesting new powers is passed or the bill protecting civil liberties is defeated.

    So watching last night’s Senate debate was like watching a repeat of some hideously shallow TV show. The only new aspect was that the aging Al Qaeda villain has been rather ruthlessly replaced by the show’s producers with the younger, sleeker ISIS model.

    There is a real question about whether the defeat of this bill is good, bad, or irrelevant.

    All of that illustrates what is, to me, the most important point from all of this: the last place one should look to impose limits on the powers of the U.S. government is . . . the U.S. government. Governments don’t walk around trying to figure out how to limit their own power, and that’s particularly true of empires.

    The entire system in D.C. is designed at its core to prevent real reform.

    Ever since the Snowden reporting began and public opinion (in both the U.S. and globally) began radically changing, the White House’s strategy has been obvious. It’s vintage Obama: Enact something that is called “reform”—so that he can give a pretty speech telling the world that he heard and responded to their concerns—but that in actuality changes almost nothing, thus strengthening the very system he can pretend he “changed.” That’s the same tactic as Silicon Valley, which also supported this bill: Be able to point to something called “reform” so they can trick hundreds of millions of current and future users around the world into believing that their communications are now safe if they use Facebook, Google, Skype and the rest.

    But it has been clear from the start that U.S. legislation is not going to impose meaningful limitations on the NSA’s powers of mass surveillance, at least not fundamentally. Those limitations are going to come from—are now coming from —very different places:

    1) Individuals refusing to use internet services that compromise their privacy.
    2) Other countries taking action against U.S. hegemony over the internet.
    3) U.S. court proceedings. A U.S. federal judge already ruled that the NSA’s domestic bulk collection program likely violates the 4th Amendment
    4) Greater individual demand for, and use of, encryption.

  29. Tomi Engdahl says:

    How Congress Secretly Just Legitimized Questionable NSA Mass Surveillance Tool
    from the just-slipped-it-right-in dept
    Fri, Dec 12th 2014

    We recently noted that, despite it passing overwhelmingly, Congress quietly deleted a key bit of NSA reform that would have blocked the agency from using backdoors for surveillance. But this week something even more nefarious happened, and it likely would have gone almost entirely unnoticed if Rep. Justin Amash’s staffers hadn’t caught the details of a new provision quietly slipped into the Intelligence Authorization Act, which effectively “legitimized” the way the NSA conducts most of its mass surveillance.

    For a while now, we’ve discussed executive order 12333, signed by President Ronald Reagan, which more or less gives the NSA unchecked authority to tap into any computer system not in the US.

    The NSA’s surveillance is almost entirely done under this authority, which has no Congressional oversight. All those other programs we’ve been arguing about — Section 215 of the Patriot Act or Section 702 of the FISA Amendments Act — are really nothing more than ways to backfill the data the NSA has been unable to access under 12333.

    Yet, what Amash and his staffers found is that a last minute change by the Senate Intelligence Committee to the bill effectively incorporated key parts of EO 12333 into law, allowing for “the acquisition, retention, and dissemination” of “nonpublic communications.”

    This seems particularly nefarious. In trying to claim that they’re putting a limit on this activity (which is already happening), they can claim that they’re not really expanding the power of the NSA and the surveillance state. But, by putting it in law, rather than just having it in an executive order, they’re effectively legitimizing the practice, and making it much harder to roll back.

    And they did it all quietly without any debate.

  30. Tomi Engdahl says:

    The US Needs To Stop Pretending The Sony Hack Is Anything Less Than An Act Of War

    The most devastating cyberattack ever on a US-based company wasn’t an act of war, according to established guidelines of cyberwarfare.

    NATO’s Tallinn Manual defines an act of cyberwar that permits a military response as “a cyber operation, whether offensive or defensive, that is reasonably expected to cause injury or death to persons or damage or destruction to objects.”

    The world after the Sony Pictures hack may require a new perspective.

    Dave Aitel, a former NSA research scientist and CEO of the cybersecurity firm Immunity, argues that while the attack “doesn’t meet the threshold for a response by our military,” it should still be viewed as an act of war.

    “We need to change the way we think about cyberattacks,” Aitel told Business Insider in an email. “In many cases, these aren’t ‘crimes’ — they’re acts of war. A non-kinetic attack (i.e., destructive malware, destructive computer network attack) that causes just as much damage as a kinetic attack (i.e., a missile or bomb) should be viewed at the same level of urgency and need for US government/military response.”

    Nevertheless, one proactive move the US should consider, according to Aitel, is “declaring certain cyberattacks terrorist acts and the groups behind them terrorists,” which would “set in motion a wider range of legal authority, US government/military resources, and international options.”

    One way to bolster US cyber defenses, according to Aitel, would be for the government to provide companies “with the option to have their web hosting and security provided by the federal government itself.”

    And even though turning over the “IT keys” to the government would be an unpopular idea — especially after the revelations by Edward Snowden — Aitel calls it “the most effective model the cybersecurity industry would have to protect against state-sponsored attacks like the one that hit Sony or the millions of cyber-espionage attacks that occur yearly against other key US entities.”

    That’s because a critical attack on a US-based company would be treated, legally and politically, as an attack on the US itself.

  31. Tomi Engdahl says:

    Robert Graham / Errata Security:
    Obama’s proposed laws against hacking will negatively impact cybersecurity professionals, create a cyber police state

    Obama’s War on Hackers
    By Robert Graham

    In next week’s State of the Union address, President Obama will propose new laws against hacking that could make either retweeting or clicking on the above link illegal. The new laws make it a felony to intentionally access unauthorized information even if it’s been posted to a public website. The new laws make it a felony to traffic in information like passwords, where “trafficking” includes posting a link.

    You might assume that things would never become that bad, but it’s already happening even with the current laws.

    Even if you don’t do any of this, you can still be guilty if you hang around with people who do. Obama proposes upgrading hacking to a “racketeering” offense, which means you can be guilty of being a hacker by simply acting like a hacker (without otherwise committing a specific crime). Hanging out in an IRC chat room giving advice to people now makes you a member of a “criminal enterprise”, allowing the FBI to sweep in and confiscate all your assets without charging you with a crime. If you innocently clicked on the link above, and think you can defend yourself in court, prosecutors can still use the 20-year sentence of a racketeering charge in order to force you to plea bargain down to a 1-year sentence for hacking. (Civil libertarians hate the police-state nature of racketeering laws.)

    Obama’s proposals come from a feeling in Washington D.C. that more needs to be done about hacking in response to massive data breaches of the last couple years. But they are blunt political solutions which reflect no technical understanding of the problem.

    Internet innovation happens by trying things first then asking for permission later. Obama’s law will change that.

    The most important innovators this law would affect are the cybersecurity professionals that protect the Internet. If you cared about things such as “national security” and “cyberterrorism”, then this should be your biggest fear. Because of our knowledge, we do innocent things that look to outsiders like “hacking”. Protecting computers often means attacking them. The more you crack down on hackers, the more of a chilling effect you create in our profession. This creates an open-door for nation-state hackers and the real cybercriminals.

  32. Tomi Engdahl says:

    What David Cameron just proposed would endanger every Briton and destroy the IT industry – Boing Boing

    David Cameron: I’m off to the US to get my bro Barack to ban crypto – report
    Plans to pressure President for tighter surveillance controls, sources say

    UK Prime Minister David Cameron is hoping to gain the support of US President Barack Obama in his campaign-year crusade to outlaw encrypted communications his spies can’t break, sources claim.

  33. Tomi Engdahl says:

    Don’t use Charlie Hebdo to justify Big Brother data-slurp – Data protection MEP
    Plans to monitor all flight passengers should be ditched

    The European Parliament’s data protection supremo says calls from national leaders to monitor all airline passengers are “playing into terrorists’ hands”.

    German MEP Jan Philipp Albrecht, who heads the Parliament’s overhaul of EU data protection laws, described the plans for mass storage of PNR (passenger name record) data as Orwellian.

    “EU home affairs ministers are demanding Big Brother measures entailing blanket data retention without justification,” he said. “This approach is a distraction from the actual measures needed to deal with security and terrorist threats and provides a false sense of security for citizens, at the expense of their civil liberties.”

  34. Tomi Engdahl says:

    We have no self-control: America’s most powerful men explain why they’re scared of email
    Hillary Clinton email-gate gives Senators Luddite, Graham and McCain enough rope

    Two of the most powerful men in the United States have revealed they don’t use email – because they’re scared of what they might say.

    “I don’t email. You can have every email I’ve ever sent. I’ve never sent one,” Senator Lindsey Graham told NBC’s Meet the Press yesterday. Graham’s statement follows a similar admission by Senator John McCain late last week who confirmed he also doesn’t email, telling MSNBC: “I’d rather use the phone, I’d rather use tweets.”

    Even more bizarre is the reason both Senators give for not using email: they lack the necessary self-control not to say something stupid.

    Graham told a confused Bloomberg News: “I’ve tried not to have a system where I can just say the first dumb thing that comes to my mind. I’ve always been concerned. I can get texts, and I call you back, if I want.”

    McCain meanwhile said this: “I’m afraid that if I was emailing, given my solid, always calm temperament that I might email something that I might regret. You could send out an email that you would regret later on and would be maybe taken out of context.”

    But while the original Luddites took to smashing up the machinery of the 19th century, Graham and McCain are happy to do something much more dangerous: allow internet technologies to be abused by the government agencies they are supposed to be overseeing.

    The solution to vast intrusions into privacy, in the senior lawmakers’ eyes, is seemingly not to protect citizens from those carrying out surveillance but to simply opt out of using technology altogether.

    And that is far more disturbing than the use of personal email by a former secretary of state.

  35. Tomi Engdahl says:

    Technology should be used to create social mobility – not to spy on citizens

    NSA and GCHQ mass surveillance is more about disrupting political opposition than catching terrorists

    Why spy? That’s the several-million pound question, in the wake of the Snowden revelations. Why would the US continue to wiretap its entire population, given that the only “terrorism” they caught with it was a single attempt to send a small amount of money to Al Shabab?

    One obvious answer is: because they can. Spying is cheap, and cheaper every day. Many people have compared NSA/GCHQ mass spying to the surveillance programme of East Germany’s notorious Stasi, but the differences between the NSA and the Stasi are more interesting than the similarities.

    The most important difference is size. The Stasi employed one snitch for every 50 or 60 people it watched. We can’t be sure of the size of the entire Five Eyes global surveillance workforce, but there are only about 1.4 million Americans with Top Secret clearance, and many of them don’t work at or for the NSA, which means that the number is smaller than that (the other Five Eyes states have much smaller workforces than the US). This million-ish person workforce keeps six or seven billion people under surveillance – a ratio approaching 1:10,000. What’s more, the US has only (“only”!) quadrupled its surveillance budget since the end of the Cold War: tooling up to give the spies their toys wasn’t all that expensive, compared to the number of lives that gear lets them pry into.

    IT has been responsible for a 2-3 order-of-magnitude productivity gain in surveillance efficiency. The Stasi used an army to surveil a nation; the NSA uses a battalion to surveil a planet.
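    The excerpt’s “battalion versus army” claim can be checked on the back of an envelope; a quick sketch using the rough figures quoted above (one Stasi watcher per roughly 55 citizens, about one million cleared Five Eyes staff, about seven billion people under watch — the article’s estimates, not precise data):

    ```python
    import math

    # Stasi: one snitch per 50-60 people watched; take 55 as a midpoint.
    stasi_ratio = 1 / 55

    # Five Eyes: ~1 million cleared staff vs ~7 billion people.
    five_eyes_ratio = 1_000_000 / 7_000_000_000

    # How many more people each modern watcher covers, relative to the Stasi.
    efficiency_gain = stasi_ratio / five_eyes_ratio
    orders_of_magnitude = math.log10(efficiency_gain)
    ```

    With these inputs the gain works out to a factor of roughly 130, just above two orders of magnitude — consistent with the 2-3 range the article asserts, with the upper end reached if the true watcher headcount is smaller than a million.
    
    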

    Spying, especially domestic spying, is an aspect of what the Santa Fe Institute economist Samuel Bowles calls guard labour: work that is done to stabilise property relationships, especially the property belonging to the rich.

  36. Tomi Engdahl says:

    Turns Out Feds Actually Tracked Most International Calls For Nearly A Decade Before 9/11 — Didn’t Stop The Attack

    One of the big arguments trotted out repeatedly by surveillance state defenders concerning the NSA’s Section 215 program to collect records on all phone calls is that such a thing “would have prevented 9/11” if it had been in place at the time. Here’s former FBI boss Robert Mueller making just that argument right after the initial Snowden leaks. Here’s Dianne Feinstein making the argument that if we had that phone tracking program before September 11th, we could have stopped the attacks. And here’s former NSA top lawyer and still top NSA supporter Stewart Baker arguing that the program is necessary because the lack of such a program failed to stop 9/11.

    Except, it turns out, the feds did have just such a program prior to 9/11 — run by the DEA.

    “The now-discontinued operation, carried out by the DEA’s intelligence arm, was the government’s first known effort to gather data on Americans in bulk, sweeping up records of telephone calls made by millions of U.S. citizens regardless of whether they were suspected of a crime. It was a model for the massive phone surveillance system the NSA launched to identify terrorists after the Sept. 11 attacks. That dragnet drew sharp criticism that the government had intruded too deeply into Americans’ privacy after former NSA contractor Edward Snowden leaked it to the news media two years ago.”

  37. Tomi Engdahl says:

    Sam Machkovech / Ars Technica:
    Presidential candidate Rand Paul promises to end warrantless searches of phone, computer records; campaign site sells $15 “NSA spy cam blocker” sticker

    Rand Paul sells “NSA spy cam blocker” as presidential bid fundraiser
    Bid announcement video taken off YouTube due to copyright claim over a song.

  38. Tomi Engdahl says:

    Will Obama’s cybersecurity executive order make a difference?

    Will Obama’s Cybersecurity Executive Order Make a Difference?

    We continue to live in a world that is exciting with new, easy-to-use technology that allows all of us to be more effective and efficient in our business and personal lives. Yet the ease of use of this technology also puts all of us at risk.

    President Obama and many in government and the private sector realize there is so much more that all of us could and should do to ensure we can be confident that our most sensitive personal data is safe. It is our right and we need to take action against the cyber adversaries that wish to do us harm.

    The EO and the legislation previously passed by Congress are a great start. But in order for the actions taken to increase information sharing among the public and private sectors to really be effective, additional legislation is necessary. We need to see liability relief along with codified roles and responsibilities for the public and private sector regarding information sharing. In addition, the President has called for a national breach process and updated criminal laws to support today’s security needs and the future environment. We support that. With this approach, information sharing can, in fact, truly become actionable and allow the good guys to operate inside the bad guy’s decision cycle.

  39. Tomi Engdahl says:

    Natasha Lomas / TechCrunch:
    In the aftermath of the Paris attacks, intelligence agencies scapegoat encryption to mask the failures of mass surveillance

    Encryption Is Being Scapegoated To Mask The Failures Of Mass Surveillance

    Well that took no time at all. Intelligence agencies rolled right into the horror and fury in the immediate wake of the latest co-ordinated terror attacks in the French capital on Friday, to launch their latest co-ordinated assault on strong encryption — and on the tech companies creating secure comms services — seeking to scapegoat end-to-end encryption as the enabling layer for extremists to perpetrate mass murder.

    There’s no doubt they were waiting for just such an ‘opportune moment’ to redouble their attacks on encryption after recent attempts to lobby for encryption-perforating legislation foundered. (A strategy confirmed by a leaked email sent by the intelligence community’s top lawyer, Robert S. Litt, this August — and subsequently obtained by the Washington Post — in which he anticipated that a “very hostile legislative environment… could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement”. Et voila Paris… )

    Speaking to CBS News over the weekend in the immediate aftermath of the Paris attacks, former CIA deputy director Michael Morell said: “I think this is going to open an entire new debate about security versus privacy.”

    Elsewhere the fast-flowing attacks on encrypted tech services have come without a byline — from unnamed European and American officials who say they are “not authorized to speak publicly”. Yet are happy to speak publicly, anonymously.

    The NYT published an article on Sunday alleging that attackers had used “encryption technology” to communicate — citing “European officials who had been briefed on the investigation but were not authorized to speak publicly”. (The paper subsequently pulled the article from its website, as noted by InsideSources, although it can still be read via the Internet Archive.)

    The irony of government/intelligence agency sources briefing against encryption on condition of anonymity as they seek to undermine the public’s right to privacy would be darkly comic if it weren’t quite so brazen.

    Here’s what one such unidentified British intelligence source told Politico: “As members of the general public get preoccupied that the government is spying on them, they have adopted these applications and terrorists have found them tailor-made for their own use.”

    “Seeking to outlaw technology tools that are used by the vast majority of people to protect the substance of law-abiding lives is not just bad politics, it’s dangerous policy.”

    In the same Politico article, an identified source — J.M. Berger, the co-author of a book about ISIS — makes a far more credible claim: “Terrorists use technology improvisationally.”

    Of course they do. The co-founder of secure messaging app Telegram, Pavel Durov, made much the same point earlier this fall when asked directly by TechCrunch about ISIS using his app to communicate. “Ultimately the ISIS will always find a way to communicate within themselves. And if any means of communication turns out to be not secure for them, then they switch to another one,” Durov argued. “I still think we’re doing the right thing — protecting our users privacy.”

    Bottom line: banning encryption or enforcing tech companies to backdoor communications services has zero chance of being effective at stopping terrorists finding ways to communicate securely. They can and will route around such attempts to infiltrate their comms, as others have detailed at length.

    Here’s a recap: terrorists can use encryption tools that are freely distributed from countries where your anti-encryption laws have no jurisdiction. Terrorists can (and do) build their own securely encrypted communication tools. Terrorists can switch to newer (or older) technologies to circumvent enforcement laws or enforced perforations. They can use plain old obfuscation to code their communications within noisy digital platforms like the Playstation 4 network, folding their chatter into general background digital noise (of which there is no shortage). And terrorists can meet in person, using a network of trusted couriers to facilitate these meetings, as Al Qaeda — the terrorist group that perpetrated the highly sophisticated 9/11 attacks at a time when smartphones were far less common and there was no ready supply of easy-to-use end-to-end encrypted messaging apps — is known to have done.
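    The point that anyone “can build their own securely encrypted communication tools” is easy to demonstrate. As a purely illustrative sketch (not anything described in the article), here is a one-time pad written with nothing but the Python standard library — a scheme that is information-theoretically unbreakable provided the key is truly random, as long as the message, kept secret, and never reused. No export control or backdoor mandate can stop a dozen lines like these from being written anywhere in the world:

    ```python
    import secrets

    def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
        """One-time pad: XOR the message with a fresh random key.

        Returns (key, ciphertext). The key must be as long as the
        message, exchanged out of band, and used exactly once.
        """
        key = secrets.token_bytes(len(plaintext))
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
        return key, ciphertext

    def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
        """XOR with the same key recovers the original message."""
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    key, ct = otp_encrypt(b"meet at dawn")
    assert otp_decrypt(key, ct) == b"meet at dawn"
    ```

    The impracticality of one-time pads for everyday use (key distribution) is exactly why apps like Telegram exist — but the sketch shows why legislating against the mathematics itself is futile.
    
    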

    Point is, technology is not a two-lane highway that can be regulated with a couple of neat roadblocks — whatever many politicians appear to think. All such roadblocks will do is catch the law-abiding citizens who rely on digital highways to conduct more and more aspects of their daily lives. And make those law-abiding citizens less safe in multiple ways.

  40. Tomi Engdahl says:

    Surveillance after Paris

    There’s little evidence that “mass surveillance” catches potential terrorists, but it does risk catching innocents. More conventional police methods are more effective against terrorism.

    The Friday 13 November events in Paris were horrendous – bloody and heartless attacks without warning on innocent civilians enjoying a warm November evening, in restaurants and bars, at a concert and a soccer game. A few days before, I had been walking through neighbouring districts, taking in the sights and sounds of a great city.

    Anyone who knows about security and surveillance could guess what would happen. Security authorities would want more powers and governments would gauge how far they could go. Of course, it’s understandable that any government in that position will wish to reassure the population with visible signs of security. But governments do tend to rush headlong into new counter-terrorism measures after a major attack like that in Paris. Sometimes they are desperate to show that they are doing something.

    Unfortunately, French responses thus far follow a familiar pattern… So border checks were increased with ramped-up databases and reinforced surveillance.

    Events like these are also viewed as ideal opportunities to make policy changes. As Naomi Klein shows in Shock Doctrine, some governments have for many years exploited crises to push through controversial laws or policies.

    Learning from the past

    While tightened security may seem like a good plan, changing the rules and demanding greater powers for security and intelligence services in the wake of attacks may not be wise. Sober judgment, not knee-jerk responses, is called for. After the Charlie Hebdo attacks earlier in 2015, the French introduced new laws allowing warrantless searches, requiring ISPs to collect communications metadata, and trimming oversight of the agencies. This time – in November – some borders were temporarily closed and a 3-month state of emergency was declared.

    We saw it, classically, in the US after 9/11, but also in the UK after 7/7.

    Does mass surveillance work?

    Does allowing intelligence agencies to collect more data – what Snowden and others call “mass surveillance” – or increasing powers of arrest and detention, and removing checks and balances, really improve things? The evidence that these produce better ways of tracking terrorists is hard to find. More is not necessarily better when it comes to data. In 2013, the White House claimed that more than 50 terror plots had been uncovered by the NSA’s methods. But in 2014 two independent reviews showed that in 255 terrorism cases investigated by the NSA, only four were the result of using phone metadata and none of the four prevented attacks.

    Edward Snowden is clear about this. Reflecting on the Charlie Hebdo attacks in Paris, he pointed out that these occurred despite the mass surveillance programs introduced there in 2013. Observing that French law was now one of the most intrusive and expansive laws in Europe, he commented that it still didn’t stop the attack. They’re simply “burying people under too much data,” he said.

    After the November attacks in Paris, American agencies were quick to blame Snowden – along with internet encryption and Silicon Valley privacy policies.

    The New York Times argued that in fact Paris was a case of failure to act on information the authorities already had. As for encryption, the Paris attackers used none.

    By and large, conventional police work, based on targeted surveillance of suspects, is what produces results.

    In case after case, we have seen that intelligence agencies knew about those who committed atrocities but failed to – as they say – connect the dots.

    Collateral damage

    At the same time, indiscriminate surveillance creates new risks: innocent “suspects”; the chilling effects of everyone being tracked and checked; and the denial of democracy – which, ironically, is a victory for terrorists. Terrorism arises, it seems, from groups that despise diversity and seek national, political or religious homogeneity.

    Security-driven surveillance today is very enamoured of big data ‘solutions’ – seen especially in the application of new analytics to seeking out suspects.

    While there may well be appropriate ways of using the so-called ‘data deluge’ created by internet and particularly social media use, the current trend is towards prediction and preemption.

    Fears and futures

    After 9/11, I argued that one of the worst outcomes of the various responses to terrorism is the fomenting of fear. Without for a moment discounting the appalling suffering and loss associated with the Paris attacks – or any others – it must be said that some responses to such atrocities are also highly dangerous. At the far end of fear-mongering is the proposal from US presidential contender Donald Trump to establish a database of American Muslims. If he were not so popular this could be discounted as fascist fanaticism.

    But the trouble with many surveillance responses is that they do so well what marks surveillance today – a process of social sorting that classifies populations in order to treat different groups differently. Thus what is done requires utmost care. Categories have consequences.

    When security agencies make their case for more data and more sophisticated analytics, they often make it sound as if these were neutral technologies.

    Making a difference

    Snowden insists – and proves it by his own example – that any and all can help to make a difference. These are not problems that can be solved overnight by some hastily concocted laws or a furious rush to foreclose freedoms. Indeed, these exacerbate our situation. Surveillance today touches us all and we all need to take action, however small, to change things for the better.

  41. Tomi Engdahl says:

    Pavel Alpeyev / Bloomberg Business:
    Samsung says customer privacy is “extremely important” and backdoors would undermine trust, but stops short of openly supporting Apple, won’t file amicus brief — Samsung Echoes Apple’s Arguments on Importance of User Privacy — The world’s largest smartphone vendor also opposes backdoors

    Samsung Echoes Apple’s Arguments on Importance of User Privacy

    New York Times:
    Several tech execs were initially worried about supporting Apple in FBI case because of its potential to backfire, concerns over public perception — Apple Gets Tech Industry Backing in iPhone Dispute, Despite Misgivings — It is a remarkable moment for the technology industry …

    Apple Gets Tech Industry Backing in iPhone Dispute, Despite Misgivings

  42. Tomi Engdahl says:

    FBI is asking courts to legalize crypto backdoors because Congress won’t
    The most lawmakers have done is float a bill to create a “commission” to study the issue.

    James Comey, the FBI director, told a House panel on Tuesday that the so-called “Going Dark” problem is “grave, growing, and extremely complex.” (PDF)

    His prepared testimony to the House Judiciary Committee is not surprising. There’s been a chorus of government actors singing that same song for years. But what we didn’t hear was the bureau director ask Congress for legislation authorizing encryption backdoors. That’s because there’s no congressional support—which underscores why the Obama administration is now invoking an obscure 1789 law in federal courthouses, asking judges to do what Congress has declined to do.

    “If I didn’t do that, I oughta be fired,” Comey told the panel during his live testimony. The panel’s hearing, “Encryption Tightrope: Balancing Americans’ Security and Privacy,” was largely dedicated to the FBI’s legal battle with Apple. Referring to the dozens of pending court cases over iPhone passcode locks, he said the bureau would not be in court if it could bypass them itself: “We wouldn’t be litigating if we could.”

