Joho the Blog: January 2011

January 26, 2011

Shuttle XS35 bios 1.09

I may come back to expand on this post, but for now: If you are looking for the 1.09 bios of the Shuttle XS35 so that you can get the !@#$-ing wifi to work, I’ve placed a putative copy here. There’s an explanation of how to install it here. Please note that I am still in the process of trying to get the bios installed, so I cannot vouch for the integrity of the bios or of the instructions. Proceed at your own risk.

(If it’s not clear from context: The Shuttle mini-computer claims to come with wifi, but it needs a bios update to v1.09 to get it to work, and the only version on the Shuttle site is 1.08. There are a few copies of 1.09 strewn about the Web, but mainly at shady “free updates” and “free bios” sites that send you on a self-circling clickfest that may or may not have any exit. So I found a copy – which may or may not work – and have posted it.)

Later that night: It worked. The wifi is on. But it really shouldn’t be that hard, and Shuttle ought to ship it with a working bios, or at least give us the updated one and the instructions we need.

McLuhan in his own voice

As a gift on the centenary of Marshall McLuhan’s birth, a site has gone up with videos of him explaining his famous sayings. Some of them still have me scratching my head, but other clips are just, well, startling. For example, this description of the future of books is from 1966.

January 25, 2011

[berkman] Distributed Denial of Service Attacks against Human Rights Sites

Hal Roberts, Ethan Zuckerman [twitter:ethanz], and Jillian York [twitter:jilliancyork] are doing a Berkman lunchtime talk on Distributed Denial of Service [DDoS] Attacks against Human Rights Sites, reporting on a paper they’ve posted.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

A denial of service attack consumes the resources of the target machine so that the machine is not able to respond, Hal says. It is an old problem: there was a CERT Advisory about an IP spoofing attack in 1996. A distributed denial of service (DDoS) attack uses lots of machines to attack the host, typically via botnets (armies of infected machines). Hal gives an example in which infected machines check Twitter once a minute looking for encoded commands to do nefarious tasks. Gambling sites have often been targets, in part because they are reluctant to report attacks; they’ve also been known to attack each other. In one case, this resulted in the Net going down for 9 hours for most of China. Hal points out that botnets are not the only way DDoS attacks are carried out. In addition, there have been political uses. Botnets have been used to spy as well as to bring down sites.

One monitor (Arbor Networks) notes 5-1500 DDoS attacks per day, globally. Hal thinks this number is too low, in part because there are many small attacks.

An application attack “crashes the box.” E.g., a slowloris attack holds connections open by sending requests very slowly, using up the host’s pool of available TCP connections. App attacks can be clever. E.g., simply reloading a homepage draws upon cached data, but doing searches on random words can be much more effective.
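
To make that cache point concrete, here is a minimal sketch of my own (not from the talk); the function name, request counts, and cache size are all invented for illustration. Repeated requests for the same page are absorbed by the cache, while each random query forces the expensive backend work to run again.

```python
# Illustrative sketch: homepage reloads hit the cache; random-word searches don't.
import functools
import random
import string

BACKEND_CALLS = 0  # counts how often the "expensive" path actually runs

@functools.lru_cache(maxsize=1024)
def render(page_or_query: str) -> str:
    """Stand-in for an expensive page render or database search."""
    global BACKEND_CALLS
    BACKEND_CALLS += 1
    return f"results for {page_or_query}"

def random_word(n: int = 8) -> str:
    return "".join(random.choice(string.ascii_lowercase) for _ in range(n))

# 10,000 homepage reloads: the cache absorbs all but the first.
for _ in range(10_000):
    render("/")
print("backend calls after homepage reloads:", BACKEND_CALLS)   # 1

# 10,000 searches on random words: nearly every request reaches the backend.
for _ in range(10_000):
    render("/search?q=" + random_word())
print("backend calls after random searches:", BACKEND_CALLS)    # ~10,001
```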

A network attack “clogs the pipe.” It floods the target with as much traffic as it can. This often takes down all the sites hosted by the ISP, not just the target site. The powerful network attacks are almost all “amplification” attacks: a small request (sent with the target’s spoofed address) elicits a massive response, so a little data in yields a massive amount of data back at the target.
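
The arithmetic behind amplification is simple. The byte sizes and bandwidth figures below are hypothetical, not from the talk; they just show how a modest amount of attacker bandwidth becomes roughly the 1 Gb/s scale mentioned later as enough to take down a small ISP.

```python
# Rough, made-up numbers to illustrate amplification; real factors vary by protocol.
request_bytes = 64        # small query sent with the victim's spoofed source address
response_bytes = 3_200    # much larger reply, delivered to the victim
amplification = response_bytes / request_bytes
print(f"amplification factor: {amplification:.0f}x")                       # 50x

attacker_bandwidth_mbps = 20  # bandwidth the attacker actually controls
flood_at_victim_mbps = attacker_bandwidth_mbps * amplification
print(f"traffic arriving at the victim: {flood_at_victim_mbps:.0f} Mb/s")  # ~1 Gb/s
```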

To defend against DDoS, you can optimize and harden your server; you can build in extra capacity; you can create a system that adds more resources as required; you can do packet filtering or rate limiting; you can scrub the attacking packets by “outsourcing” them to highly experienced sysadmins who look for signs in the packets that distinguish good from bad; if flooded, you can do source mitigation, asking the routers routing the flood to you to block the packets; or you can have your ISP dynamically reroute the packets. But each of these techniques either doesn’t work well enough or costs too much.
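
Of the defenses listed, rate limiting is the easiest to picture in code. Here is a minimal token-bucket sketch of my own, assuming per-IP limiting at the application layer; the class name and limits are arbitrary, and real deployments do this in routers, load balancers, or web-server modules rather than in application code.

```python
# Minimal per-IP token-bucket rate limiter (illustrative only).
# Each client earns `rate` tokens per second up to `burst`; a request that
# finds no token available is refused (or could be queued or challenged).
import time
from collections import defaultdict

class TokenBucketLimiter:
    def __init__(self, rate: float = 5.0, burst: float = 10.0):
        self.rate = rate                              # tokens added per second
        self.burst = burst                            # maximum bucket size
        self.tokens = defaultdict(lambda: burst)      # current tokens per client
        self.last = defaultdict(time.monotonic)       # last time each client was seen

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens[client_ip] = min(self.burst, self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1.0:
            self.tokens[client_ip] -= 1.0
            return True
        return False

limiter = TokenBucketLimiter(rate=5, burst=10)
# 15 rapid requests from one address: roughly the 10-request burst gets through.
print([limiter.allow("203.0.113.7") for _ in range(15)].count(True))
```

This also illustrates why the speakers say such techniques fall short: a botnet spreads its requests across thousands of addresses, so each bot can stay comfortably under any per-IP limit.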

The study by Hal, Ethan, Jillian, et al., asked a few key questions about how this affects human rights sites: How prevalent are DDoS attacks? What types are used? What’s the impact? How can sites defend against them? To answer these, they aggregated media reports, surveyed human rights and media organizations, interviewed respondents, and hosted a meeting at Harvard. They learned:

  • Attacks are common

  • Sites on the edge of the Net, such as indie media, are particularly vulnerable

  • It’s not just DDoS attacks

  • There are some good answers for application attacks, but fewer for network attacks

  • Network attacks may provoke a move to the core

  • It helps to connect local geeks with core sysadmins

In their media research, they found lots of attacks, but not a strong correlation between the attacks and the politics of the attacked sites. The data are hampered, however, by the difficulty of gathering the info. Not all sites know they’ve been DDoS’ed. And the study had to use large boolean queries to try to find coverage in the media.

Even though there are many attacks, the core (Tier 1 providers, plus their direct customers) does well against DDoS attacks. Those Tier 1 sysadmins work closely together. But, as you get out further from the center — a customer of a customer of a customer of a Tier 1 operator — people have little recourse. “Being at the edge in terms of DDoS is a really bad thing,” says Ethan. The core has dedicated staff and a ton of bandwidth. They typically respond to a DDoS within an hour, and probably within 15 mins. So, if you’re Google, it’s not that much of a problem for you.

But if you’re a small human rights site, it’s much harder to defend yourself. E.g., Viet Tan has been attacked repeatedly, probably by the Vietnamese government. Worse, they’re not just being DDoS’ed. 72% of those who said they’ve been DDoS’ed are filtered by their governments. 62% have experienced DDoS attacks. 39% have had an intrusion. 32% have been defaced. Viet Tan was being attacked not just by a botnet, but by Vietnamese people around the world who had downloaded a keyboard driver that logged keystrokes and could issue attacks. The people attacking them were the people they were trying to reach. “It’s an incredibly sophisticated way of doing things,” says Ethan.

Arbor Networks says 45% of attacks are flood-based, and 26% are application-based. Hal et al. sent Arbor the list of attacks their research had uncovered, but Arbor had known of only a small percentage of them, which is some small evidence that DDoS attacks are under-reported.

Of the sites that experienced a DDoS attack last year, 56% had their sites shut down by their ISP, while 36% report that their ISPs successfully defended them. E.g., there was an attack on the Burmese dissident site irrawaddy.org. This knocked out not just that site, but all of Thailand. Thailand has its own national ISP, which is Tier 2 or 3; a 1 Gb/s attack will take down an ISP of that size. Irrawaddy moved ISPs, got hit with a 4 Gb/s attack, and could not afford to pay for the additional bandwidth.

Hal points to the consolidation of content into fewer and fewer ASNs. In 2007, thousands of ASNs contributed 50% of content. In 2009, 150 ASNs contributed 50% of all Net traffic. This may be in part due to the rise of high-def video (coming through a few providers), but there are also fewer sites on the long tail providing content (e.g., using Gmail instead of your own mail server, blogging on a cloud service, etc.). Small sites, not in the core, are at risk.

Should you build dedicated hosting services for human rights sites? That puts all your most at-risk sites in one pool. How do you figure the risk and thus the price? One free host for human rights sites does it for free because they’re a research group and want to watch the DDoS attacks.

The paper Hal et al. wrote suggests that human rights sites move into the cloud. E.g., Google’s Blogger offers world-class DDoS protection. But this would mean escaping the DDoS attackers only to come under the control of proprietary companies that might decide to shut them down. E.g., WikiLeaks moved onto Amazon’s cloud services, and then Amazon caved to Joe Lieberman and shut WikiLeaks down. The right lesson is that whenever you let someone else host your content, you are subject to intermediary censorship. It is an Internet architecture problem. We can respond to it architecturally — e.g., serve off of peer-to-peer networks — or form a consumer movement to demand non-censorship by hosts.

(The attacks by Anonymous were successful mainly against marketing sites. They don’t work against large sites.)

Recommendations:

  • Plan ahead

  • Minimize dynamic pages

  • Have robust monitoring, mirroring, and failover

  • Strongly consider hosting on Blogger or something similar

  • Do not use the cheapest hosting provider or DNS registrar

Bigger-picture recommendations: In the most successful communities, there are identifiable, embedded technical experts who can get on the phone with well-connected sysadmins at the core. Many of these core entities — Yahoo, Google, etc. — want to help but don’t know how. In the meantime, more sites will move to cloud hosting, which means there’s a need for a policy and public-pressure approach to ensure private companies do the right thing.

Q: Shaming as a technique?
A: We need to do this. But it doesn’t work if you’re, say, a large social media service with 500M users. Human rights orgs are a tiny percentage of their users. The services tend to make the decisions that are easy for them, and they’re not very transparent. (Tunisia may turn out to be a turning point for Facebook, in part because FB was under attack there, and because it was heavily used by Tunisians.)

Q: Public hosting by the government for human rights groups?
A: Three worries. 1. It’s hard to imagine the intermediary censorship being less aggressive than from commercial companies. 2. It’d be a honeypot for attacks. 3. I’m not sure the US govt has the best geeks. Also, there’s a scaling problem. Akamai carries 2TB/sec of legit traffic, so it can absorb an attack. The US would have to create a service that can handle 200gb/sec, which would be very expensive.

Q: What sort of tech expertise do you need to mount an attack?
A: The malware market is highly diversified and commodified. Almost all the botnets are mercenary. Some are hosted by countries that, in exchange, ask that the botnets be turned on enemies now and then.

Q: Denial of payment?
A: We have a case in the study called “denial of service by bureaucracy.” E.g., a domain name was hijacked, and it took 6 wks to resolve. A denial of service attack doesn’t have to attack the server software.

Q: Can botnets be reverse engineered?
A: Yes. Arbor Networks listens to the traffic to and from infected computers.
A: You either have to shift the responsibility to the PCs, or put it on the ISP. Some say it’s crazy that ISPs do nothing about subscribers whose computers are running continuously, etc.

[Fabulous presentation: Amazing compression of difficult material into a 1.5 hour totally understandable package. Go to the Berkman site to get the webcast when it’s ready.]

January 24, 2011

Grimmelmann on search neutrality

James Grimmelmann, whose writing on the Google Books settlement I’ve found helpful, has written an article about the incoherence of the concept of “search neutrality” — “the idea that search engines should be legally required to exercise some form of even-handed treatment of the websites they rank.” (He blogs about it here.) He finds eight different possible meanings of the term, and doesn’t think any of them hold up.

Me neither. Relevancy is not an objective criterion. And too much transparency allows spammers to game the system. I would like to be assured that companies aren’t paying search engine companies to have their results ranked higher (unless the results are clearly marked as pay-for-position, which Google does but not clearly enough).

January 22, 2011

Berkman Buzz

The weekly Berkman Buzz, compiled by Rebekah Heacock:

  • Stuart Shieber wonders if open-access fees disenfranchise authors with fewer financial resources link

  • Dan Gillmor discusses changes in Google’s leadership link

  • Herdict explores how unrest in North Africa is affecting online censorship link

  • Ethan Zuckerman will be publishing his first book link

  • The OpenNet Initiative reports on the Federal Communications Commission’s new proposal on net neutrality link

  • Weekly Global Voices: “DR of Congo: Discreet Commemorations of the 50th Anniversary of Patrice Lumumba’s Assassination” link

January 21, 2011

Two of the Internet’s parents explain its origins and future

Scott Bradner and Steve Crocker are two of the tech geniuses who built the Internet in an open way and built it to be open. Now they have both published instructive columns recounting the thinking behind the Net that has been responsible for its success. I highly recommend both.

From Scott’s:

The IETF has interpreted the “End to End” paper to basically say that the network should not be application aware. Unless told otherwise by an application, the network should treat all Internet traffic the same.

…this design philosophy has led the IETF to create technologies that can be deployed without having to get permission from network operators or having to modify the networks.

…Last year I was worried about what rules regulators and politicians were going to impose on the Internet. This year, my pessimism is focused at a lower level in the protocol stack: I’m worried about what kind of network the network operators will provide for the IETF to build on, for me and you to use, and for tomorrow’s enterprises to depend on.

From Steve’s:

…we always tried to design each new protocol to be both useful in its own right and a building block available to others. We did not think of protocols as finished products, and we deliberately exposed the internal architecture to make it easy for others to gain a foothold. This was the antithesis of the attitude of the old telephone networks, which actively discouraged any additions or uses they had not sanctioned.

As we rebuild our economy, I do hope we keep in mind the value of openness, especially in industries that have rarely had it. Whether it’s in health care reform or energy innovation, the largest payoffs will come not from what the stimulus package pays for directly, but from the huge vistas we open up for others to explore.

Open access continues to catch on

Science Magazine reports on a study sponsored by the EU that found that 89% of the 50,000 researchers surveyed think open access is good for their field. On the other hand, the reporter, Gretchen Vogel, points out that while 53% said they had published at least one open access article, only 10% of papers are published in open access journals. What’s holding them back from doing more open access publishing? About 40% said it was because there wasn’t enough funding to cover the publication fees, and 30% said there weren’t high-quality open access journals in their field.

The data and analysis are supposed to become available this week at The SOAP Project. Unfortunately, the Science Magazine article covering the report is only available to members of the AAAS or to those willing to pay $15 for 24 hours of access. (Hat tip to Andrew “Yes he is my brother” Weinberger.)

January 20, 2011

If you laid out all the shelves in Harvard’s libraries…

Mainly because I wanted to futz around with the Google Maps API, I’ve created a mashup that pretends to lay out all the shelves in Harvard’s 73 libraries on a map.

[Screen capture of the map. Click through to go to the page.]

You can choose your starting point — it defaults to Widener Library at Harvard — and choose whether you’d like to see a line of markers or concentric circles. It then pretends to map the shelves according to how many books there are in each subject.

Here’s where the pretending comes in. First, I have assumed that each book in the 12,000,000 volume collection is one inch thick. Second, I have used the Dewey Decimal system’s ten subject areas, even though Harvard doesn’t use Dewey. Third, I used an almost entirely arbitrary method to figure out how many books are in each subject: I did keyword subject searches. Sometimes, when the totals seemed way too low, I added in searches on sub-headings in Dewey. At the end, the total was probably closer to 4 million, which means my methodology was 300% unreliable. (Note: Math was never my strong suit.)
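
For what it’s worth, the arithmetic behind the mashup is simple. Here is a hedged sketch of the computation; only the one-inch-per-book assumption, the ten Dewey classes, and the 12,000,000-volume total come from the post, while the per-subject counts below are invented placeholders.

```python
# Sketch of the shelf-length arithmetic behind the mashup (illustrative numbers).
INCHES_PER_BOOK = 1          # assumption from the post: every volume is one inch thick
INCHES_PER_MILE = 63_360

books_by_dewey_class = {     # hypothetical counts; plug in real ones if you have them
    "000 Computer science & general works": 400_000,
    "100 Philosophy & psychology":          300_000,
    "200 Religion":                         250_000,
    "300 Social sciences":                  900_000,
    "400 Language":                         200_000,
    "500 Science":                          600_000,
    "600 Technology":                       550_000,
    "700 Arts & recreation":                450_000,
    "800 Literature":                       700_000,
    "900 History & geography":              650_000,
}

for subject, count in books_by_dewey_class.items():
    miles = count * INCHES_PER_BOOK / INCHES_PER_MILE
    print(f"{subject}: {miles:.1f} miles of shelf")

total_books = sum(books_by_dewey_class.values())
print(f"total: {total_books:,} books ≈ {total_books / INCHES_PER_MILE:.0f} miles of shelving")
# For the full 12,000,000-volume collection the post mentions, that would be
# 12,000,000 inches, or roughly 189 miles of shelving.
```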

So, the actual data is bogus. For me, learning how to use the API was the real point. If you happen to have actual data for your local library, you can download the page and just plug them into the array at the beginning of the page. (All the code is in the .html file.)

What English sounds like

Here’s a video of Italians speaking gibberish that is supposed to — and does — sound like English:

I got there from a comment from Elmo Gearloose (possibly a pseudonym) to a BoingBoing posting of Rush Limbaugh imitating what Hu Jintao sounds like to him — which strikes me not so much as racism as boastful ignorance. If it’s outrageous, it’s because it’s so inhospitable and unwelcoming.

January 19, 2011

Your TV shoots back

As a diploma project, David Arenou has prototyped an augmented-reality game that turns your living room into a scene from a first-person shooter. Here’s his description:

Before beginning, the user has to set action markers and hiding places with personal furniture. It can be a chair, an armchair, an overturned coffee table: whatever wanted. After calibration, the player sits behind one of his hideouts and the game can start. The position of the body will have a direct impact on the avatar that it embodies.

When the player hides, he becomes invisible for his virtual enemies. When he uncovers himself, he can attack but becomes vulnerable to enemy bullets. Following a shooting phase, the game forces the player to change hiding place or to touch one of the markers, in order to get to a new sequence…

On his site you can see a video of the actual prototype in which he places the computer-readable markers, and crouches behind his furniture in between firing off shots at his television. Here’s his concept video:

The future of gaming, the innocent-sounding origins of the apocalypse, or both?
