[berkman] Chris Conley on Surveillance and Transparency

Chris Conley, a Berkman Fellow working on the Open Net Initiative, is giving a lunchtime talk on “Digital Surveillance and Transparency.” [Note: I am live-blogging, hence typing quickly, missing things — missing many things today, actually — getting things wrong, etc. The session will be available in full at Media Berkman.]

The Surveillance Project looks for evidence of surveillance. But lots of surveillers don’t talk about what they do, so the project looks at tools and technologies, at infrastructure, and at the legal and political constraints. And it looks at the implications for privacy, civil rights, etc.

A security consultant, Ed Giorgio, said, “Privacy and security are a zero-sum game.” But this isn’t necessarily true, says Chris. Disclosure can make surveillance more effective. For example, letting people know they’re being watched may make them behave more the way you want.

Chris goes through the parameters of the question. The effect of transparency depends on what you’re trying to do with the surveillance. E.g., Facebook’s Beacon ad program watches what you’re doing, without much transparency, to increase the accuracy of ads; Phorm watches which sites you visit to the same end. Surveillance for security purposes aims at preventing actions and may well want to be non-transparent. There’s also the audience to consider: the targets of the surveillance, affected third parties (e.g., victims of botnet infections), and other interested parties. It is, he shows, an equation with lots of variables.

Chris walks through some examples. E.g., if you monitor file sharing, announcing that you’re detecting 5% might have an effect. Or, you might announce that you were detecting all files available via BitTorrent. Or all those who are uploading. Each of these might have a different effect. Does announcing a surveillance program deter terrorists? Perhaps not, and announcing it might enable terrorists to counter the surveillance.
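
(A toy illustration, mine rather than the talk’s: a rational sharer keeps sharing only while the expected penalty stays below the perceived benefit, so which detection rate you announce changes the calculation. All the numbers below are invented.)

    # Toy deterrence model (not from the talk); all numbers are invented.
    def deterred(detection_rate, penalty, benefit):
        """True if the announced detection rate makes sharing irrational."""
        return detection_rate * penalty >= benefit

    for rate in (0.05, 0.50, 1.00):  # "we catch 5%" / "half" / "everyone"
        print(f"{rate:.0%} detection -> deterred: "
              f"{deterred(rate, penalty=500.0, benefit=50.0)}")
    # 5% -> False, 50% -> True, 100% -> True: which rate you announce,
    # not just the fact of monitoring, drives the modeled deterrent effect.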

What’s the difference with digital surveillance, Chris asks. You can collect more, from more places, of more types. The legal constraints are often very unclear. The mechanisms are rapidly changing. Private entities are becoming involved. E.g., OnStar was collecting conversations in cars for policing purposes.

The goal of the project is to argue that surveillance needs oversight and public discussion of its goals and of how those goals can be most narrowly met. Chris ends by pointing at Zimbabwe’s recent law that requires ISPs to wiretap their users. Even though the wiretapping may not actually be happening, this “transparency” can “be a tool to suppress expression on the cheap.”

Q: In the US, are there laws beyond wiretapping, child porn, and financial data retention, that have caused private companies to alter their data retention processes?
A: There are no data retention laws in the US.

(ethanz) The gap between what may be possible in surveillance and what people perceive to be possible is pretty vast. In the Middle East, among activists, it’s believed that the entire Net passes through seven servers in DC and that every communication is monitored. This rumor has attained the status of fact in the developing world. The panopticon effect is orders of magnitude more powerful than what these systems are capable of doing. People will not stop believing this.

Q: How well do the counter-digital-surveillance techniques work?
A: Unclear. If you’re identified as a target in a technologically sophisticated country, there’s very little you can do online to counter it.
Ethan: In one country, they were listening in through parabolic mics a few doors down. There’s nothing you can do about it in a sufficiently motivated environment.
Chris: The best way to keep yourself unidentified is obfuscation. Talk about your topic when in World of Warcraft.

Q: Do people use steganography?
Roger: It’s a myth that it can’t be detected. You can detect non-random low-order bits in graphics. (A sketch of the standard test follows this exchange.)
Ethan: And if they communicate through Tor, you’re flagging (in many countries) that you’re up to no good.
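
(The sketch below is mine, not Roger’s: the classic chi-square “pairs of values” test of Westfeld and Pfitzmann, which flags images whose pixel-value pairs have been equalized the way LSB embedding of random-looking data tends to equalize them. The Pillow/SciPy usage and the file name are illustrative assumptions.)

    # Chi-square "pairs of values" LSB-stego test (Westfeld/Pfitzmann style).
    # Illustrative sketch, not from the talk; requires Pillow and SciPy.
    from collections import Counter

    from PIL import Image
    from scipy.stats import chi2

    def chi_square_lsb(path):
        """Return a p-value; values near 1 suggest LSB embedding.

        Embedding random-looking bits in the LSBs tends to equalize the
        counts of each pixel-value pair (2k, 2k+1); the closer the observed
        histogram is to those equalized pairs, the higher the p-value.
        """
        hist = Counter(Image.open(path).convert("L").getdata())
        stat, dof = 0.0, 0
        for k in range(128):  # pairs (0,1), (2,3), ..., (254,255)
            even, odd = hist.get(2 * k, 0), hist.get(2 * k + 1, 0)
            expected = (even + odd) / 2  # what full embedding would produce
            if expected > 0:
                stat += (even - expected) ** 2 / expected
                dof += 1
        return chi2.sf(stat, dof - 1)

    # Usage (hypothetical file): print(chi_square_lsb("suspect.png"))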

Ethan: I’d like information so people can make better risk assessments. How good are the surveillers? Are they as good as the “tin hats” think? I doubt it, but it would be good to know. E.g., people in Zimbabwe are dropping off political humor lists for fear they’re being watched. People overestimate the ability of governments to watch us.

Gene: Let me sum up: To stop terrorists we’d also stop activists. We have a false sense of security but also a false chilling effect.
Chris: Yes, from the point of surveillance, terrorists and activists are both people trying to hide their communication.
Gene: If from a policy/legal standpoint there’s no difference…
Chris: In a repressive regime, there’s no difference…
Ethan: It’s a difference between behavioral and content analysis. If we were capable of doing the sort of content analysis that most people think we’re capable of doing, people wouldn’t be scared off from (e.g.) participating in Koranic online discussions to argue against suicide bombing.
