
April 2, 2015

[shorenstein][liveblog] Juliette Kayyem on communicating about security

Juliette Kayyem, a former Boston Globe columnist, a commentator, Homeland Security advisor to Gov. Deval Patrick, and a former candidate for governor of Massachusetts, is giving a Shorenstein Center talk about how to talk with the public about security issues.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

Juliette oversaw the local Homeland Security response to the Marathon Bombing and had participated in the security planning. After the bombing, she became a CNN commentator on terrorism. She stresses her personal connection to the event: “It’s my home.” After she left the Obama administration, the Boston Globe asked her to be a columnist. She did not see herself as a professional writer. In her twice-weekly columns she tried to show how global events affect Boston locally. She took on topics she didn’t feel comfortable with, which she attributes to “woman insecurity.” [Do I need to mention that she is insanely qualified?]

In her column and on CNN her rules are: 1. Bring it home. 2. Don’t create strawmen. 3. Tell it to them as if you’re sitting with them at the kitchen table. She learned that last lesson from her security experience. Journalists and security experts have a common goal of engaging the public in ownership of something that matters to them and their children. “The security apparatus is to blame” for the failure to engage the public. “Stuff happens.” There’s no such thing as perfect security.

The security apparatus created with the media “a total lose-lose situation.” Two lessons about communications:

1. She got an email from her cousin on the 10th anniversary of 9/11. “Juliette, I am a little nervous now. Can you help?” Her daughter was heading to NYC, and had heard rumors about a planned attack. “Would you send your kids?” She wanted Juliette to just talk to her without jargon or defensiveness.

2. Juliette was director of the BP Oil Spill team, overseeing 70 people. There were two narratives, she says. First, the narrative we all heard. Second, we saved an ocean. That second narrative was not a foregone conclusion: “Much of our slowness at the start was due to our fear that the well would explode.” [Yikes!] The administration failed to “bring it home,” i.e., make it understandable and relatable.

“Never again!” about disasters is not possible. It’s delusional. E.g., we focused on never again letting 19 terrorists on planes, but we were hit by Katrina. Also, we tended to spend money on things — e.g., tanks — far more readily than on training, support, etc. Worse, the govt said “Never again!” but failed to involve the public. Worse still, it makes a narrative that says “Only 20 people died instead of 200” very difficult to sell.

Here’s some of what she’s learned:

First, there are black swans — freakish events that cannot be predicted or stopped. But we should be able to learn lessons from them.

Second, you have to define success. During the BP Spill, the President should have said early on that oil will hit the shore so when some did, it didn’t look like failure. We should not define success or failure as binary.

Third, we need resilient, layered defenses and redundancy. We as a nation thankfully are getting away from “Never again!” to “Stuff happens.” The question is how these layered defenses are being built. And not just for terrorism but for pandemics, climate change…

Fourth, public engagement is an operational requirement. E.g., Occupy Sandy did great work, but it was reported in a binary way as a failure of FEMA.

Fifth, we need to tell these stories as you would tell your best friend at the kitchen table. There’s no such thing as no risk. Stuff happens. There are things we can do to prepare ourselves at home.

Q & A

Q: [alex jones] Are you speaking for yourself or are you reporting on the lessons learned by the security establishment?

A: The apparatus is headed in this direction. The response agencies are better than the intelligence apparatus in this regard. It matters to have a separate director of resiliency. We can’t stop everything. Politically it’s incredibly hard. Obama has tried talking about resiliency. It goes better with governors and mayors. We’re starting to see political leadership saying that there’s a limit to what they can do for us. The public needs not to be asses, e.g., surfing during Sandy, so the public safety apparatus can be used to help people who really need it. The agencies need to acknowledge their own limits and errors. “Are you safe? No, of course not. What world do you live in? We can make you a little safer, but …”

Q: [alex] If there’s a series of bombings at malls, what will happen?

A: We can’t prevent everything. We’re in a world of whack-a-mole. Part of the grip you saw during the manhunt in Boston was laid out in a series of prior decisions. Why did people in Boston feel “We got this”? That’s because of decisions that were made, planned out. The police immediately moved people off the street and began the process of family unification, which is really important. Also, the public health apparatus kicked in. Six hundred emergency patients, and not one of them died. [I.e., if you made it to emergency care, you didn’t die.] You have to prepare for the disasters that will happen.

Q: [alex] At the Marathon the emergency apparatus was there already. But longer term, what would a mall bombing do to the economy?

A: It comes down to how political leadership communicates. And it’s important to prepare people so they’re not surprised.

Q: Where do you see some of the vulnerabilities? Are there plans underway?

A: The Boston Globe today has a story that the failure of the T during the snowstorms was not inevitable. We have an infinite number of vulnerabilities because we have infinite soft targets. We have them because we chose to make them soft, which is a totally reasonable choice. E.g., no security gates in the MBTA. Terrorism is a threat, but in my lifetime climate change will change the way we live in ways we’re not addressing. It’s about zoning, planning, getting people to live in particular ways. I’ve advocated changing how we compensate those who are harmed by disasters. They used to be rare and random. Not any more. We keep bailing out people who build on shorelines and are flooded out. We shouldn’t pay for the same behavior but should pay for altered behavior, e.g., building a sea wall.

Q: ?

A: Security apparatuses are inherently conservative. We can’t have systems that have single points of failure. Also, there’s something to be said for giving closure to families that have suffered in disasters. Also, why can’t black boxes beam their info to someone on the ground?

Q: [nick sinai] People in OSTP in the White House worked on disaster relief, etc. From your point of view, what was working and what wasn’t?

A: It’s important to engage people, not for feel-good reasons but to help relieve the burden on the official apparatus. FEMA has only 3,700 employees. It’s a coordination agency. The sharing economy is very exciting. E.g., AirBnB is helpful about housing in an emergency. Could Uber move first responders to centers? Also, using social media to communicate info. FEMA is doing a good job with this.

Q: [alex] JournalistResource.org was helpful during the BP crisis.

Q: Does Boston have the capacity to hold the Olympics? There’s no security in the transportation system.

A: I’m the senior security advisor to the Boston Olympics committee. Security in a complex system is about risk reduction but also about being welcoming. You can’t have an unwelcoming Olympics. The Olympics are one of the last forums on the globe in which people come together and don’t fight. Four major pieces of security for Boston: 1. Intelligence. The feds will run that. 2. Response. If something bad happens, can we minimize the harm? 3. Cyber attacks. London suffered 27K cyber attacks during the 2 wks of the Olympics. 4. It sucks to come into this country if you’re not an American. Can we have a safe and secure immigration system? And we’ll increase the security for the transportation system without creating a police state. BTW, it’s looking good for Boston getting the nod, although a growing majority of Bostonians are against it. If the populace favors it, it’s ours to lose. (She favors a referendum.)

Q: How successful were those 27,000 cyber attacks in London? And what about our infrastructure?

A: Cyber defenses for the Olympics were strong. Our infrastructure is at risk. We’re going to have to make a major commitment to, e.g., putting our wires underground. But we seem unwilling to make the investment.

Q: How about the Massachusetts infrastructure?

A: It’s not in great shape. We have to prioritize. Everyone has an equal voice but not all bridges are equal.


Categories: liveblog Tagged with: liveblog • security • shorenstein Date: April 2nd, 2015 dw


March 3, 2015

[liveblog] David Sanger on cybersecurity. And Netanyahu

David Sanger of the NY Times is giving a Shorenstein Center lunchtime talk about covering security.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

David begins by honoring Alex Jones, the retiring head of the Shorenstein Center with whom he worked at the Times.

David tells us that he wrote his news analysis of the Netanyahu speech to Congress last night, before the talk, because people now wake up and expect to read about it. His article says that a semantic difference has turned into a strategic chasm: we’ve gone from preventing Iran from having the capability of building a weapon to preventing Iran from building a weapon. Pres. Obama dodged this question when David asked him about it in 2010. If the Iran deal goes through, says David, it will be the biggest diplomatic step since Nixon went to China.

About six years ago, David had just come back from writing The Inheritance, which disclosed that GW Bush had engaged in the first computer attacks on Iran. He came back to the newsroom saying that we need to start thinking about the strategic uses of cyber as a weapon, beyond worrying about kids in a basement hacking into your bank account. This was an uphill struggle because it’s extremely difficult to get editors to think about a nontraditional form of warfare. Drones we understand: it’s an unmanned aircraft with familiar consequences when it goes wrong. We all understand nuclear weapons because we saw Hiroshima. Cyber is much harder to get people to understand. To make matters worse, there are so many different kinds of cyber attacks.

When you think about cyber you have to think about three elements, he says. 1. Cyber for espionage, by states or by thieves. 2. Cyber for economic advantage, on the cusp between business and govt. E.g., the Chinese steal IP via operations run out of the Chinese Army. The US thinks that’s out of bounds but the Chinese think “What’s more important to our national interest than our economy? Of course we’ll steal IP!” 3. Cyber for political coercion, e.g. Stuxnet. This tech is spreading faster than ever, and it’s not just in the hands of states. We have, as yet, no concept of how we’re going to control this. We now claim Iran was behind cyberattacks on Las Vegas casinos. And, of course, the Sony hack. [He recounts the story.] “This was not a little drive-by attack.”

He says he would have predicted that if we got into a cyber war with another country, it would be an attack on the grid or some such, not an attempt to stop the release of a “terrible” commercial movie. “We’re in a new era of somewhat constant conflict.” Only now is the govt starting to think about how this affects how we interact with other companies. Also, it’s widened the divide Snowden has opened between Silicon Valley and the govt. Post-Snowden, companies are racing to show that they’re not going to cooperate with the US govt for fear that it will kill their ability to sell overseas. E.g., iPhone software throws away the keys that would have enabled Apple to turn over your decrypted data if the FBI comes along with a warrant. The head of the FBI has objected to this for fear that we’re entering a new era in which we cannot get data needed to keep us secure.

The govt itself can’t decide how to deal with the secrecy around its own development of cyber weapons. The Administration won’t talk about our offensive capabilities, even though we’re spending billions on this. “We can’t have a conversation about how to control them until you admit that you have them and describe the circumstances under which you might use them.”

Q&A

Q: [alex jones] Laypeople assume that there are no secrets and no privacy any more. True?

A: By and large. There’s no system that can’t be defeated. (Hillary Clinton must have come to be so suspicious of the State Dept. email system that she decided to entrust it to gmail.) There’s no guaranteed system. We’d have to completely redesign the Internet to make it secure.

Q: [alex] What’s the state of forensics in this situation?

A: It’s not a sure thing. All govts and law enforcement agencies are putting a lot of money into cyber forensics. In the nuclear age, you could see where the missiles were coming from. Cybercrime is more like terrorism: you don’t know who’s responsible. It’s easy to route a cyberattack through many computers to mask where it’s coming from. When the NYT was hacked by the Chinese govt, the last hop came from a university in the South. It would not have been wise to assume that that little university was actually the source.

The best way to make forensics work is to have implants in foreign computing systems that are like little radar stations. This is what the NSA spends a lot of its time doing. You can use the same implant for espionage, to explore the computer, or to launch an attack. The US govt is very sensitive about our questions about implants. E.g., suppose the NSA tells the president that they’ve seen a major attack massing. The president has to decide about reacting proactively. If you cyber-attack a foreign computer, it looks like you struck first. In the Sony case, the President blamed North Korea but the intelligence agencies wouldn’t let him say what the evidence was. Eventually they let out a little info and we ran a story on the inserts in NK. An agency head called and officially complained about this info being published but said more personally that releasing the fact that the govt can track attacks back to the source has probably helped the cause of cybersecurity.

Q: Are there stories that you’re not prepared to publish yet?

A: We’ve held some stuff back. E.g., we were wondering how we attacked Iranian computers that were disconnected from the Net (“air gap”). If you can insert some tech onto the motherboard before the product has been shipped you can get access to it. A Snowden document shows the packaging of computers going to Syria being intercepted, opened, and modified. Der Spiegel showed that this would enable you to control an off-line computer from 7 miles away. I withheld that from the book, and a year or two later all that info was in the Snowden docs.

Q: [nick sinai] Why haven’t the attacks on the White House and State Dept. been a bigger story?

A: Because they were mainly on the unclassified side. We think it was a Russian attack, but we don’t know if it was state-sponsored.

Q: How does the Times make tradeoffs between security and openness?

A: I’m not sure we get it right. We have a set of standards. If it would threaten a life or an imminent military or intelligence operation we’re likely not to publish it. Every case is individual. An editor I know says that in every case he’s withheld info, he’s sorry that he did. “I don’t blame the government” for this, says David. They’re working hard to prevent an attack, and along comes a newspaper article, and a program they’ve been working on for years blows up. On the other hand, we can’t debate the use of this tech until we know what it can do. As James Clapper said recently, maybe we’re not headed toward a cyber Pearl Harbor but toward a corrosive series of attacks, institution by institution.

Q: At what point do cyberattacks turn into cyberwarfare?

A: “Cyberwarfare” is often an overstated term. It implies that it might turn into a real-world war, and usually they don’t. Newspapers have to decide which ones to cover, because if you tried to cover them all, that’s all you’d cover. So the threshold keeps going up. It’s got to be more than stealing money or standard espionage.

Q: Will companies have to create cyber militias? And how will that affect your coverage?

A: Most companies don’t like to report cyber attacks because it drives down their stock market valuation. There’s a proposed law that would require a company to report cyber attacks within a month. The federal govt wants cybersecurity to come from private companies. E.g., JP Morgan spends half a billion dollars on cyber security. But there are some state-sponsored attacks that no private company could protect itself against.

Q: How does US compare with our enemies? And in 30 yrs how will we remember Snowden?

A: The usual ranking puts US on top, the British, the Israelis. The Chinese are very good; their method seems to be: attack everyone and see what you get. The Russians are stealthier. The Iranians and North Koreans are further down the list. A year ago if you’d told me that the NKs would have done something as sophisticated as the Sony attack, I would have said you’re crazy.

I have no problem believing both that Snowden violated every oath he took and multiple laws, and that the debate started by the docs he released is a healthy one to have. E.g., Obama had authorized the re-upping of the collection of metadata. After Snowden, the burden has been put on private companies, none of which have taken it up. Also, Obama didn’t know we were listening in on Angela Merkel. Now all those programs are being reviewed. I think that’s a healthy kind of tradeoff.

Q: What enduring damage has Snowden done?

A: The damage ranges from the immediate to the enduring. Immediately, there were lots of intelligence programs that had to be redone. I don’t see any real damage outside of a 5-year frame.

Q: Might there be a deal that lets Snowden come home?

A: A year ago there was interest in this in order to find out what Snowden knows. But now the intelligence services feel they have a handle on this.

Q: Netanyahu speech?

A: Politically he probably did a little more damage to his cause than good. Some Dems feel coerced. On the substance of it, I think he made the best case you can make for the two biggest weaknesses in the deal: 1. It doesn’t dismantle very much equipment, so when the deal’s term is over, they’ll be up and running. 2. We’re taking a bet that the Iranian govt will be much easier to deal with in 10-15 yrs, and we have no idea if that’s true. But Netanyahu has not put forward a strategy that does not take you down the road to military confrontation.


Categories: journalism, liveblog, peace, politics Tagged with: cybersecurity • iran • liveblog • security • shorenstein • war Date: March 3rd, 2015 dw


November 17, 2013

Noam Chomsky, security, and equivocal information

Noam Chomsky and Barton Gellman were interviewed at the Engaging Big Data conference put on by MIT’s Senseable City Lab on Nov. 15. When Prof. Chomsky was asked what we can do about government surveillance, he reiterated his earlier call for us to understand the NSA surveillance scandal within an historical context that shows that governments always use technology for their own worst purposes. According to my liveblogging (= inaccurate, paraphrased) notes, Prof. Chomsky said:

Governments have been doing this for a century, using the best technology they had. I’m sure Gen. Alexander believes what he’s saying, but if you interviewed the Stasi, they would have said the same thing. Russian archives show that these monstrous thugs were talking very passionately to one another about defending democracy in Eastern Europe from the fascist threat coming from the West. Forty years ago, RAND released Japanese docs about the invasion of China, showing that the Japanese had heavenly intentions. They believed everything they were saying. I believe this is universal. We’d probably find it for Genghis Khan as well. I have yet to find any system of power that thought it was doing the wrong thing. They justify what they’re doing for the noblest of objectives, and they believe it. The CEOs of corporations as well. People find ways of justifying things. That’s why you should be extremely cautious when you hear an appeal to security. It literally carries no information, even in the technical sense: it’s completely predictable and thus carries no info. I don’t doubt that the US security folks believe it, but it is without meaning. The Nazis had their own internal justifications. [Emphasis added, of course.]

I was glad that Barton Gellman — hardly an NSA apologist — called Prof. Chomsky on his lumping of the NSA with the Stasi, for there is simply no comparison between the freedom we have in the US and the thuggish repression omnipresent in East Germany. But I was still bothered, albeit by a much smaller point. I have no serious quarrel with Prof. Chomsky’s points that government incursions on rights are nothing new, and that governments generally (always?) believe they are acting for the best of purposes. I am a little bit hung-up, however, on his equivocating on “information.”

Prof. Chomsky is of course right in his implied definition of information. (He is Noam Chomsky, after all, and knows a little more about the topic than I do.) Modern information is often described as a measure of surprise. A string of 100 alternating ones and zeroes conveys less information than a string of 100 bits that are less predictable, for if you can predict with certainty what the next bit will be, then you don’t learn anything from that bit; it carries no information. Information theory lets us quantify how much information is conveyed by streams of varying predictability.
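A toy calculation makes the technical sense of “information” concrete. This is my own illustration, not anything from the interview: it estimates the conditional entropy of each new bit given the previous one from bigram frequencies, a deliberately crude stand-in for the real information-theoretic measures, but enough to show that a perfectly predictable string carries zero surprise per bit while an unpredictable one carries close to the maximum.

```python
import random
from collections import Counter
from math import log2

def conditional_entropy(bits: str) -> float:
    """Estimate H(next bit | previous bit) from bigram frequencies:
    a rough measure of how surprising each new bit is."""
    pairs = Counter(zip(bits, bits[1:]))   # counts of (previous, next) pairs
    prev = Counter(bits[:-1])              # counts of the conditioning bit
    n = len(bits) - 1
    h = 0.0
    for (a, b), count in pairs.items():
        p_pair = count / n          # P(prev = a, next = b)
        p_cond = count / prev[a]    # P(next = b | prev = a)
        h -= p_pair * log2(p_cond)
    return h

predictable = "01" * 50  # 100 perfectly alternating bits
random.seed(0)
unpredictable = "".join(random.choice("01") for _ in range(100))

print(conditional_entropy(predictable))    # 0.0: the next bit is never a surprise
print(conditional_entropy(unpredictable))  # close to 1 bit of surprise per bit
```

The alternating string scores exactly zero because, once you’ve seen one bit, the next is certain; the coin-flip string scores near the one-bit maximum because each new bit tells you something you couldn’t have predicted.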

So, when U.S. security folks say they are spying on us for our own security, are they saying literally nothing? Is that claim without meaning? Only in the technical sense of information. It is, in fact, quite meaningful, even if quite predictable, in the ordinary sense of the term “information.”

First, Prof. Chomsky’s point that governments do bad things while thinking they’re doing good is an important reminder to examine our own assumptions. Even the bad guys think they’re the good guys.

Second, I disagree with Prof. Chomsky’s generalization that governments always justify surveillance in the name of security. For example, governments sometimes record traffic (including the movement of identifiable cars through toll stations) with the justification that the information will be used to ease congestion. Tracking the position of mobile phones has been justified as necessary for providing swift EMT responses. Governments require us to fill out detailed reports on our personal finances every year on the grounds that they need to tax us fairly. Our government hires a fleet of people every ten years to visit us where we live in order to compile a census. These are all forms of surveillance, but in none of these cases is security given as the justification. And if you want to say that these other forms don’t count, I suspect it’s because it’s not surveillance done in the name of security…which is my point.

Third, governments rarely cite security as the justification without specifying what the population is being secured against; as Prof. Chomsky agrees, that’s an inherent part of the fear-mongering required to get us to accept being spied upon. So governments proclaim over and over what threatens our security: Spies in our midst? Civil unrest? Traitorous classes of people? Illegal aliens? Muggers and murderers? Terrorists? Thus, the security claim isn’t made on its own. It’s made with specific threats in mind, which makes the claim less predictable — and thus more informational — than Prof. Chomsky says.

So, I disagree with Prof. Chomsky’s argument that a government that justifies spying on the grounds of security is literally saying something without meaning. Even if it were entirely predictable that governments will always respond “Because security” when asked to justify surveillance — and my second point disputes that — we wouldn’t treat the response as meaningless but as requiring a follow-up question. And even if the government just kept repeating the word “Security” in response to all our questions, that very act would carry meaning as well, like a doctor who won’t tell you what a shot is for beyond saying “It’s to keep you healthy.” The lack of meaning in the Information Theory sense doesn’t carry into the realm in which people and their public officials engage in discourse.

Here’s an analogy. Prof. Chomsky’s argument is saying, “When a government justifies creating medical programs for health, what they’re saying is meaningless. They always say that! The Nazis said the same thing when they were sterilizing ‘inferiors,’ and Medieval physicians engaged in barbarous [barber-ous, actually – heyo!] practices in the name of health.” Such reasoning would rule out a discussion of whether current government-sponsored medical programs actually promote health. But that is just the sort of conversation we need to have now about the NSA.

Prof. Chomsky’s repeated appeals to history in this interview cover up exactly what we need to be discussing. Yes, both the NSA and the Stasi claimed security as their justification for spying. But far from that claim being meaningless, it calls for a careful analysis of the claim: the nature and severity of the risk, the most effective tactics to ameliorate that threat, the consequences of those tactics on broader rights and goods — all considerations that comparisons to the Stasi and Genghis Khan obscure. History counts, but not as a way to write off security considerations as meaningless by invoking a technical definition of “information.”


Categories: infohistory, policy Tagged with: information • nsa • policy • security Date: November 17th, 2013 dw


September 5, 2013

Pew Internet survey on Net privacy: Most of us have done something about it

Pew Internet has a new study out that shows that most of us have done something to maintain our privacy (or at least the illusion of it) on the Net. Here’s the summary from the report’s home page:

A new survey finds that most internet users would like to be anonymous online, but many think it is not possible to be completely anonymous online. Some of the key findings:

  • 86% of internet users have taken steps online to remove or mask their digital footprints, ranging from clearing cookies to encrypting their email.

  • 55% of internet users have taken steps to avoid observation by specific people, organizations, or the government.

The representative survey of 792 internet users also finds that notable numbers of internet users say they have experienced problems because others stole their personal information or otherwise took advantage of their visibility online. Specifically:

  • 21% of internet users have had an email or social networking account compromised or taken over by someone else without permission.

  • 12% have been stalked or harassed online.

  • 11% have had important personal information stolen such as their Social Security Number, credit card, or bank account information.

  • 6% have been the victim of an online scam and lost money.

  • 6% have had their reputation damaged because of something that happened online.

  • 4% have been led into physical danger because of something that happened online.

You can read the whole thing online or download the pdf, for free. Thank you, Pew Internet!


Categories: misc Tagged with: anonymity • pew • privacy • security Date: September 5th, 2013 dw


December 11, 2011

If credit card companies cared about security…

1. When there’s a security issue, they wouldn’t robocall people and ask them to provide personal information. They would robocall people and ask them to call the number on the back of their cards.

2. They would put people’s photographs on their credit cards. Citi used to offer that as a free option, but apparently has discontinued the practice.


Categories: business Tagged with: credit cards • security Date: December 11th, 2011 dw


April 7, 2011

Citicard does its best to train us in horrible security practices

Citibank continues to train its customers to use terrible security processes.

This morning I got a call from a robot that claimed to be from Citibank. When I refused to type in my zip code, and then waited for two minutes of repeated requests to do so, it transferred me to a human who wanted me to give him my name, undoubtedly to be followed by a request for my password. Thus does Citibank train its users to divulge personal information to anyone with an automated phone dialer.

This is the same outfit that no longer offers to put a thumbnail photo of you on your credit card, which is a pretty good way to foil card-grabbing bastards. It also used to embed an image of your signature on the front of the card. Again, a cheap and effective prophylactic measure that it no longer offers.

This is also the same outfit that is very happy to sell us monthly services — $10/month last time I looked — that inform us when Citibank has failed to protect us from identity theft.


Categories: business Tagged with: citibank • identity theft • security Date: April 7th, 2011 dw


June 19, 2009

FlyClear: Cutting in line so the terrorists won’t win

At the Reagan Airport (would I be jumping the gun to start calling it the Obama Airport already?), Clear has a little square of space right before the security inspection stations. For $200/year, you can skip the long lines and go for the exceedingly short line to Clear. There the uniformed employees will compare some of your body parts (iris and fingerprints) with the information on the Clear card you present. Once you’re through, you can go straight to the Conveyor of Transparencies where you rejoin the hoi polloi so that the TSA can make sure your shoes aren’t on fire.

What I don’t get is why Clear has to give you an extra special biometric scan. Why can’t they just do what the TSA folks do: Look at your drivers license, look at you, and wave you on through? All I can figure is that Clear’s market research showed that people would be more willing to pay to cut in line — which is what Clear is really about — if there’s a pretense that it enhances security.

As far as whether all the fancy-shmancy biometrics — heck, my face is the only biometric I need! — actually increases security, if I were an evil do-er, I’d just bribe a Clear airport employee. They don’t go through security clearances the way TSA folks do, at least according to the Clear employee I asked.

[Tags: airports security tsa terrorism line_cutters ]


June 22, 2009: Clear just went out of business.


Categories: misc Tagged with: airports • line_cutters • marketing • misc • security • terrorism • tsa Date: June 19th, 2009 dw

8 Comments »

June 16, 2009

Google: Make security the default (Now with Iranian tweets)

Chris Soghoian has posted an open letter to Google, asking it to make encryption the default. This is in line with the talk he gave recently at the Berkman Center.

[Update later that day: Two hours after releasing the letter, Google agreed to try setting encryption as the default for a subset of users, as a trial. If it works out, they’ll consider expanding it.]

Also, Jonathan Zittrain has posted about why the Iranians have problems blocking Twitter. [Tags: google security iran twitter ]


Categories: Uncategorized Tagged with: digital rights • google • iran • security • twitter Date: June 16th, 2009 dw

Be the first to comment »

May 26, 2009

[berkman] Chris Soghoian on privacy in the cloud

Chris Soghoian is giving a Berkman lunchtime talk called: “Caught in the Cloud: Privacy, Encryption, and Government Back Doors in the Web 2.0 Era,” based on a paper he’s just written. In the interest of time, he’s not going to talk about the “miscreants in government” today.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

 

Pew says that “over 69% of Americans use webmail services, store data online, or otherwise use software programs such as word processing applications whose functionality is in the cloud.” Chris’ question: Why have cloud providers failed to provide adequate security for their customers? (“Cloud computing” = users’ data is stored on a company server and the app is delivered through a browser.)

He says that providers are moving to the cloud because they don’t have to worry about piracy. Plus they can lock out troublesome users or countries. It lets them protect patented algorithms. They can do targeted advertising. And they can provide instant updates. Users get cheap/free software, automatic revision control, easy collaboration, and worldwide accessibility. Chris refers to “cloud creep”: the increasing use of cloud computing, its installation on new PCs, etc. Vivek Kundra switched 38,000 DC employees over to Google Docs before he became Federal CIO. “It’s clear he’s Google-crazy.” Many people may not even know they’ve shifted to the cloud. Many cloud apps now provide offline access as well. HTML 5 (Firefox 3.5) provides offline access without even requiring synchronizers such as Google Gears.

 

Chris says that using a single browser to access every sort of site — from safe to dangerous — is bad practice. Single-site browsers avoid that. E.g., Mozilla Prism keeps its site in its own space. With Prism, you have an icon on your desktop for, e.g., Google Docs. It opens in a browser that can’t go anywhere else; it doesn’t look like a cloud app. “It’s a really cool technology.” Chris uses it for online banking, etc.

 

Conclusion of Part 1 of Chris’ talk: Cloud services are being used increasingly, and users don’t always know it.

 

Part 2

 

We use encryption routinely. SSL/TLS is used by banks, e-commerce, etc. But the cloud providers don’t use SSL for much other than the login screen. Your documents, your spreadsheets, etc., can easily be packet-sniffed. Your authentication cookies can be intercepted. That lets someone log in, modify or delete your data, or pretend to be you. “This is a big deal.” (The “Cookie Monster” tool lets you hijack authentication cookies. AIMJECT lets you intercept IM sessions; you can even interject your own messages.)
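To make the cookie-interception risk concrete, here’s a minimal sketch (mine, not Chris’s) using Python’s standard library. A session cookie set without the Secure attribute gets sent over plain HTTP, where a packet sniffer can grab and replay it; one flagged Secure travels only over encrypted connections:

```python
from http.cookies import SimpleCookie

# A session cookie as a server might set it. Without the "Secure"
# attribute, the browser also sends it over plain HTTP, where anyone
# sniffing the network can read it and replay it to impersonate you.
insecure = SimpleCookie()
insecure["session"] = "abc123"

# The same cookie hardened: "Secure" restricts it to SSL connections,
# and "HttpOnly" keeps page scripts from reading it.
secure = SimpleCookie()
secure["session"] = "abc123"
secure["session"]["secure"] = True
secure["session"]["httponly"] = True

print(insecure.output())  # no Secure flag: interceptable in transit
print(secure.output())    # sent only over encrypted connections
```

The hijacking tools Chris mentions work precisely because the big webmail providers were setting the first kind of cookie, not the second.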

This problem has been known since August 2007, and all the main cloud providers were notified. It took Google a year to release a fix, and even so it hasn’t been turned on by default. Facebook, Yahoo mail, Microsoft, etc. don’t even offer SSL. Google says it doesn’t turn it on by default because it can slow down your computer, because it has to decrypt your data. But Google does require you to use it for Google Health, because the law requires it. To get SSL for Gmail, you have to go five levels down into the settings.
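Since the providers won’t flip the switch, users have to force encrypted connections themselves — digging through settings, bookmarking the https:// address, or running browser scripts. As a hypothetical sketch of that idea (not any provider’s actual mechanism), here’s a tiny Python helper that rewrites a plain-http URL to its https equivalent:

```python
from urllib.parse import urlsplit, urlunsplit

def force_https(url: str) -> str:
    """Rewrite an http:// URL to https://; leave other URLs untouched.
    (Only helps, of course, if the site actually accepts SSL.)"""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(force_https("http://mail.google.com/mail/"))
# -> https://mail.google.com/mail/
```

As the Q&A below notes, this trick is useless against a site that refuses SSL connections altogether.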

So, why doesn’t Google provide SSL by default? Because it takes “vastly more processing power,” and thus is very expensive for Google. SSL isn’t a big deal when done on your computer (the client computer), but for cloud computing, it would all fall on Google’s shoulders. “If 100% of Google’s customers opt to use SSL, it sees no new profits, but higher costs.” “And Google is one of the better ones.” The only better one, in Chris’ view, is Adobe, which turns it on by default for its online image editing service. [Here’s a page that tells you how to turn on SSL for a Google Accounts account.]

Chris thinks that cloud computing security may be a type of “shrouded attribute,” i.e. an attribute that isn’t considered when making a buying decision. But, Chris says, defaults matter. E.g., if employers opt employees into a 401(k), no one opts out, but if you leave it to employees to opt in, fewer than half do. Facebook, for example, seems to blame the user for not turning privacy features on. “Users should be given safe services by default.”

Part 3: Fixing it

Chris draws analogies to seatbelts and tobacco legislation. He recommends that we go down the cigarette pathway first: Raise public awareness so that people demand mandatory warnings for insecure apps. E.g., “WARNING: Email messages that you write can be read, intercepted or stolen. Click here to turn on protection…” [Chris’ version was better. Couldn’t type fast enough.]

Or, if necessary, we could pass regulations mandating SSL. The FTC could rule that companies that claim their services are safe are lying.

Q: [me] How much crime does this enable?
Q: How about OpenID?
A: The issue of authentication cookies is the same.

Q: Should we have a star rating system?
A: Maybe.

Q: The lack of data about the crime is a problem for getting people to act. Maybe you should look at the effect on children: web sites aimed at children, under-18-year-olds using Facebook…
A: Good idea! Although Google’s terms of service don’t allow people under 18 to use any of their services.

Q: People also feel there’s safety in numbers.

Q: How much more processing power would SSL require from Google?
A: Google custom-builds its servers. Adding a new feature would require crypto co-processor cards. I don’t think they have those. They’d have to deploy them.

Q: There are GreaseMonkey scripts that require FB to use SSL. Worthwhile?
A: FB won’t accept SSL connections.

Q: Google Chrome’s incognito mode? Does it help with anything?
A: It helps with porn. That cleans up your history, but it doesn’t encrypt traffic.

Q: The vast majority of people where I live don’t lock their house doors. And [says someone else] people don’t lock their mailboxes even though they contain confidential docs.
A: Do you walk around with your ATM PIN on your forehead? Your bank uses SSL because it’s legally responsible for electronic break-ins, whereas Google isn’t.
A: The risk is small if you’re using a wired ethernet connection or a protected wifi connection.

Q: With seatbelts and smoking, your life’s at risk. For Gmail, the risk seems different. There aren’t data, screaming victims, etc. It makes the demand for regulation harder to stimulate.
A: The analogy doesn’t work 100%. But I think the disanalogy works in my favor: It’s hard to have a cigarette that doesn’t harm you, but it’s easy to have a secure SSL connection.

Q: Shouldn’t business care about this?
A: Yes, CIO’s can make that decision and turn on encryption for the entire org. Consumers have to be their own CIOs.

 


[from the IRC] Maybe the government wants Google to be insecure to enable snooping.
A: Allow me to put on my tin foil hat. Last year the head of DNI said that the gov’t collects vast amounts of traffic. We don’t know how they’re doing it, which networks they’re collecting data from. If Google and AT&T, etc., turned on SSL by default, the gov’t’s job would be much harder. Google has other reasons to keep SSL off, but it works out to the gov’t’s benefit.

Q: Does Adobe’s online word processor, Buzzword, offer SSL for its docs?
A: Don’t know. [It does] [Tags: security identity_theft google ssl ]
A: Don’t know. [It does] [Tags: security identity_theft google ssl ]


Categories: misc Tagged with: digital rights • everythingIsMiscellaneous • google • security • ssl • tech Date: May 26th, 2009 dw

3 Comments »

May 11, 2009

Smart and secure grids and militaries

The Wired.com piece I wrote about Robin Chase prompted Andrew Bochman to send me an email. Andy is an MIT and DC energy tech guy (and, it turns out, a neighbor) who writes two blogs: The Smart Grid Security Blog and the DoD Energy Blog. Neither of these topics would make it into my extended profile under “Interests,” but I found myself sucked into them (confirming my rule of thumb that everything is interesting if looked at in sufficient detail). So many things in the world to care about!

[Tags: ecology power military security experts ]


Categories: Uncategorized Tagged with: ecology • egov • expertise • experts • military • power • security Date: May 11th, 2009 dw

2 Comments »



Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
TL;DR: Share this post freely, but attribute it to me (name (David Weinberger) and link to it), and don't use it commercially without my permission.

Joho the Blog uses WordPress blogging software.
Thank you, WordPress!