I just received Google’s Oculus Rift emulator. Given that it’s made of cardboard, it’s all kinds of awesome.
Google Cardboard is a poke in Facebook’s eye. FB bought Oculus Rift, the virtual reality headset, for $2B. Oculus hasn’t yet shipped a product, but its prototypes are mind-melting. My wife and I tried one last year at an Israeli educational tech lab, and we literally had to have people’s hands on our shoulders so we wouldn’t get so disoriented that we’d swoon. The lab had us on a virtual roller coaster, with the ability to turn our heads to look around. It didn’t matter that it was an early, low-resolution prototype. Swoon.
Oculus is rumored to be priced at around $350 when it ships, and they will sell tons at that price. Basically, anyone who tries one will be a customer or will wish s/he had the money to be a customer. Will it be confined to game players? Not a chance on earth.
So, in the midst of all this justifiable hype about the Oculus Rift, Google announced Cardboard: detailed plans for how to cut out and assemble a holder for your mobile phone that positions it in front of your eyes. The Cardboard software divides the screen in two and creates a parallaxed view so you think you’re seeing in 3D. It uses your mobile phone’s motion sensors to track the movement of your head as you survey your synthetic domain.
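The core trick is simple enough to sketch in a few lines. Here’s a minimal illustration of the idea, not Google’s actual code: take the head yaw reported by the phone’s motion sensors and offset two virtual cameras by half the interpupillary distance, one view per half of the screen. The function name and the `ipd` default are my own assumptions.

```python
import math

def stereo_cameras(yaw_deg, ipd=0.064):
    """Toy version of Cardboard's split-screen trick: from the head's
    yaw (as reported by the phone's motion sensors), offset a left and
    a right virtual camera by half the interpupillary distance (ipd,
    in meters) along the head's horizontal axis. Rendering the scene
    once per camera, side by side, produces the parallax the brain
    reads as depth."""
    yaw = math.radians(yaw_deg)
    # The head's "rightward" direction in world space (roll ignored).
    right = (math.cos(yaw), 0.0, -math.sin(yaw))
    half = ipd / 2.0
    left_eye = tuple(-c * half for c in right)
    right_eye = tuple(c * half for c in right)
    return left_eye, right_eye
```

Looking straight ahead (yaw 0), the eyes sit 3.2 cm to either side of center; turn your head and the offset rotates with it, which is all the head tracking a demo needs.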
I took a look at the plans for building the holder and gave up. For $15 I instead ordered one from Unofficial Cardboard.
When it arrived this morning, I took it out of its shipping container (made out of cardboard, of course), slipped in my HTC mobile phone, clicked on the Google Cardboard software, chose a demo, and was literally — in the virtual sense — flying over the earth in any direction I looked, watching a cartoon set in a forest that I was in, or choosing YouTube music videos by turning to look at them on a circular wall.
Obviously I’m sold on the concept. But I’m also sold on the pure cheekiness of Google’s replicating the core functionality of the Oculus Rift using existing technology plus a piece of cardboard.
(And, yeah, I’m a little proud of the headline.)
Tagged with: facebook
Date: December 27th, 2014 dw
Here’s a four-minute video from July 13, 2003, of Zack Rosen describing the social networking tool he and his group were building for the Howard Dean campaign. DeanSpace let Dean supporters connect with one another on topics, form groups, and organize action. This was before Facebook, remember.
This comes from Lisa Rein’s archive. I’m sorry to say that I’ve lost touch with Lisa, so I hope she’s ok with my uploading this to YouTube. The talk itself was part of iLaw 2003, an event put on every couple of years or so by the Berkman Center and Harvard Law.
(I think that’s Aaron Swartz sitting in front.)
The Web was social before it had social networking software. It just hadn’t yet evolved a pervasive layer of software specifically designed to help us be social.
In 2003 it was becoming clear that we needed — and were getting — a new class of application, unsurprisingly called “social software.” But what sort of sociality were we looking for? What sort could such software bestow?
That was the theme of Clay Shirky’s 2003 keynote at the ETech conference, the most important gathering of Web developers of its time. Clay gave a brilliant talk, “A Group Is Its Own Worst Enemy,” in which he pointed to an important dynamic of online groups. I replied to him at the same conference (“The Unspoken of Groups”). This was a year before Facebook launched. The two talks, especially Clay’s, serve as reminders of what the Internet looked like before social networks.
Here’s what for me was the take-away from these two talks:
The Web was designed to connect pages. People, being people, quickly created ways for groups to form. But there was no infrastructure for connecting those groups, and your participation in one group did nothing to connect you to your participation in another group. By 2003 it was becoming obvious (well, to people like Clay) that while the Internet made it insanely easy to form a group, we needed help — built into the software, but based on non-technological understanding of human sociality — sustaining groups, especially now that everything was scaling beyond imagination.
So this was a moment when groups were increasingly important to the Web, but they were failing to scale in two directions: (1) a social group that gets too big loses the intimacy that gives it its value; and (2) there was a proliferation of groups, but each group was essentially disconnected from every other group.
Social software was the topic of the day because it tried to address the first problem by providing better tools. But not much was addressing the second problem, for that is truly an infrastructural issue. Tim Berners-Lee’s invention of the Web let the global aggregation of online documents scale by creating an open protocol for linking them. Mark Zuckerberg addressed the issue of groups scaling by creating a private company, with deep consequences for how we are together online.
Clay’s 2003 analysis of the situation is awesome. What he (and I, of course) did not predict was that a single company would achieve the position of de facto social infrastructure.
When Clay gave his talk, “social software” was all the rage, as he acknowledges in his very first line. He defines it uncontroversially as “software that supports group interaction.” The fact that social software needed a definition already tells you something about the state of the Net back then. As Clay said, the idea of social software was “rather radical” because “Prior to the Internet, the last technology that had any real effect on the way people sat down and talked together was the table,” and even the Internet so far was not doing a great job supporting sociality at the group level.
He points out that designers of social software are always surprised by what people do with their software, but thinks there are some patterns worth attending to. So he divides his talk into three parts: (1) pre-Internet research that explains why groups tend to become their own worst enemy; (2) the “revolution in social software” that makes this worth thinking about; and (3) “about a half dozen things…that I think are core to any software that supports larger, long-lived groups.”
Part 1 uses the research of W.R. Bion from his 1961 book, Experiences in Groups, which leads him, and Clay, to conclude that because groups have a tendency to sandbag “their sophisticated goals with…basic urges,” groups need explicit formulations of acceptable behaviors. “Constitutions are a necessary component of large, long-lived, heterogeneous groups.”
Part 2 asks: if this has been going on for a long time, why is it so important now? “I can’t tell you precisely why, but observationally there is a revolution in social software going on. The number of people writing tools to support or enhance group collaboration or communication is astonishing.”
The Web was getting very, very big by 2003, and Clay says that “we blew past” the “interesting scale of small groups.” Conversation doesn’t scale.
“We’ve gotten weblogs and wikis, and I think, even more importantly, we’re getting platform stuff. We’re getting RSS. We’re getting shared Flash objects. We’re getting ways to quickly build on top of some infrastructure we can take for granted, that lets us try new things very rapidly.”
Why did it take so long to get weblogs? The tech was ready from the day we had Mosaic, Clay says. “I don’t know. It just takes a while for people to get used to these ideas.” But now (2003) we’re well into the fully social web. [The social nature of the Web was also a main theme of The Cluetrain Manifesto in 2000.]
What did this look like in 2003, beyond blogs and wikis? Clay gives an extended, anecdotal example. He was on a conference call with Joi Ito, Peter Kaminski, and a few others. Without planning to, the group started using various modalities simultaneously. Someone opened a chat window, and “the interrupt logic” got moved there. Pete opened a wiki and posted its URL into the chat. The conversation proceeded along several technological and social forms simultaneously. Of course this is completely unremarkable now. But that’s the point. It was unusual enough that Clay had to carefully describe it to a room full of the world’s leading web developers. It was a portent of the future:
This is a broadband conference call, but it isn’t a giant thing. It’s just three little pieces of software laid next to each other and held together with a little bit of social glue. This is an incredibly powerful pattern. It’s different from: Let’s take the Lotus juggernaut and add a web front-end.
Most important, he says, access is becoming ubiquitous. Not uniformly, of course. But it’s a pattern. (Clay’s book Here Comes Everybody expands on this.)
In Part 3, he asks: “‘What is required to make a large, long-lived online group successful?’ and I think I can now answer with some confidence: ‘It depends.’ I’m hoping to flesh that answer out a little bit in the next ten years.” He suggests we look for the pieces of social software that work, given that “The normal experience of social software is failure.” He suggests that if you’re designing social software, you should accept three things:
- You can’t separate the social from the technical.
- Groups need a core that watches out for the well-being of the group itself.
- That core “has rights that trump individual rights in some situations.” (In this section, Clay refers to Wikipedia as “the Wikipedia.” Old timer!)
Then there are four things social software creators ought to design for:
- Provide for persistent identities so that reputations can accrue. These identities can of course be pseudonyms.
- Provide a way for members’ good work to be recognized.
- Put in some barriers to participation so that the interactions become high-value.
- As the site’s scale increases, enable forking, clustering, useful fragmentation.
Clay ends the talk by reminding us that: “The users are there for one another. They may be there on hardware and software paid for by you, but the users are there for one another.”
This is what “social software” looked like in 2003 before online sociality was largely captured by a single entity. It is also what brilliance sounds like.
I gave an informal talk later at that same conference. I spoke extemporaneously and then wrote up what I should have said. My overall point was that one reason we keep making the mistake that Clay points to is that groups rely so heavily on unspoken norms. Making those norms explicit, as in a group constitution, can actually do violence to the group — not knife fights among the members, but damage to the groupiness of the group.
I said that I had two premises: (1) groups are really, really important to the Net; and (2) “The Net is really bad at supporting groups.”
It’s great for letting groups form, but there are no services built in for helping groups succeed. There’s no agreed-upon structure for representing groups. And if groups are so important, why can’t I even see what groups I’m in? I have no idea what they all are, much less can I manage my participation in them. Each of the groups I’m in is treated as separate from every other.
I used Friendster as my example “because it’s new and appealing.” (Friendster was an early social networking site, kids. It’s now a gaming site.) Friendster suffers from having to ask us to make explicit the implicit stuff that actually matters to friendships, including writing a profile describing yourself and having to accept or reject a “friend me” request. “I’m not suggesting that Friendster made a poor design decision. I’m suggesting that there is no good design decision to be made here.” Making things explicit often does violence to them.
That helps explain why we keep making the mistake Clay points to. Writing a constitution requires a group to make explicit decisions that often break the group apart. Worse, I suggest, groups can’t really write a constitution “until they’ve already entangled themselves in thick, messy, ambiguous, open-ended relationships,” for “without that thicket of tangles, the group doesn’t know itself well enough to write a constitution.”
I suggest that there’s hope in social software if it is considered to be emergent, rather than relying on users making explicit decisions about their sociality. I suggested two ways it can be considered emergent: “First, it enables social groups to emerge. It goes not from implicit to explicit, but from potential to actual.” Second, social software should enable “the social network’s shape to emerge,” rather than requiring upfront (or, worse, top-down) provisioning of groups. I suggest a platform view, much like Clay’s.
I, too, ask why social software was a buzzword in 2003. In part because the consultants needed a new topic, and in part because entrepreneurs needed a new field. But perhaps more important (I suggested), recent experience had taught us to trust that we could engage in bottom-up sociality without vandals ripping it all apart. This came on the heels of companies realizing that the first-generation top-down social software (e.g., Lotus Notes) was stifling as much sociality and creativity as it was enabling. But our experience with blogs and wikis over the prior few years had been very encouraging:
Five years ago, it was obvious beyond question that groups need to be pre-structured if the team is to “hit the ground running.” Now, we have learned — perhaps — that many groups organize themselves best by letting the right structure emerge over time.
I end on a larger, vaguer, and wrong-er point: “Could we at last be turning from the great lie of the Age of Computers, that the world is binary?” Could we be coming to accept that the “world is ambiguous, with every thought, perception and feeling just a surface of an unspoken depth?”
I have an op-ed/column up at CNN about the Facebook experiment. [The next day: The op-ed led to 4 mins on the Jake Tapper show. Oh what the heck. Here’s the video.]
All I’ll say here is how struck I am again (as always) about the need to leave out most of everything when writing goes from web-shaped to rectangular.
Just as a quick example, I’m not convinced that the Facebook experiment was as egregious as the headlines would have us believe. But I made a conscious decision not to address that point in my column because I wanted to make a more general point. The rectangle for an op-ed is only so big.
Before I wrote the column, I’d observed, and lightly participated in, some amazing discussion threads among people who bring many different sorts of expertise to the party. Disagreements that were not just civil but highly constructive. Evidence based on research and experience. Civic concern. Emotional connections. Just amazing.
I learned so much from those discussions. What I produced in my op-ed is so impoverished compared to the richness in that tangle of linked differences. That’s where the real knowledge lives.
I was checking Facebook yesterday afternoon, as I do regularly every six months or so. It greeted me with a list of friend requests. One was from the daughter of a colleague. So I accepted on the grounds that it was unexpected but kind of cute that she would ask.
Only after I clicked did I realize that the list was not of requests but of suggestions for people I might want to friend. So, now the daughter of a colleague has received a friend request from a 61-year-old man she’s never heard of, and I’m probably going to end up on the No Fly list.
The happy resolution: I contacted my colleague to let him know, and he took it as an opportunity to have a conversation with his daughter about how to handle friend requests from people she doesn’t know, especially pervy-looking old men.
Categories: social media
Tagged with: facebook
Date: July 2nd, 2012 dw
I’m on a panel about “What’s Next in Social Media?” at the National Archives tonight, moderated by Alex Howard, the Government 2.0 Correspondent for O’Reilly Media, with fellow panelists Sarah Bernard, Deputy Director, White House Office of Digital Strategy, and Pamela S. Wright, Chief Digital Access Strategist at the National Archives. It’s at 7pm, with a “social media fair” beginning at 5:30pm.
I don’t know if we’re going to be asked to give brief opening statements. I suspect not. But if so, I’m thinking of talking about the context, because I don’t know what social media will be:
1. The Internet began as an open “address space” that enabled networks to be created within it. So, we got the Web, which networked pages. We got social networks, which networked people. We are well on our way to networking data, through the Semantic Web and Linked Open Data. We are getting an Internet of Things. The DPLA will, I hope, help create a network of cultural objects.
2. The Internet and the Web have always been social, but the rise of networks particularly tuned to social needs is of vast importance because the social determines all the rest. Indeed, the Internet is a medium only because we are in fact that through which messages pass. We pass them along because they matter to us, and we stake a bit of ourselves on them. We are the medium.
3. Of all of the major and transformative networks that have emerged, only the social networks are closed and owned. I don’t know how or if we will get open social networks, but it is a danger that as of now we do not have them.
Jon Mitchell at ReadWriteWeb reports on a ten-minute talk Chris Poole (founder of 4chan and Canvas) gave at Web 2.0. Chris argues that Facebook and Google are getting identity wrong. “Identity is prismatic.”
Being confined to a single identity on the Web is like a wiki accepting only a single final draft, only far more tragic.
Tagged with: chris poole
Date: October 18th, 2011 dw
Edward Vielmetti asked on Google Plus “What is Google+ for?” I thought Peter Kaminski‘s response was particularly insightful. (Quoted in full with Pete’s permission.)
The purpose of Google+ is to keep you within the Google web (as opposed to having you outside anybody’s web, or in someone else’s web). Where “web” used to mean the spidered collection of documents and files available via HTTP, but has grown to mean your Digital Life.
Google’s business is to mediate as much of your Digital Life as it can — similar to the way Microsoft’s business in the old days was to mediate as much of your Digital Office as it could (back in the day when Digital Life and Digital Office were nearly equivalent). The monetization model is completely different, of course; but the more of your Digital Life Google can mediate, the more they can monetize, and the more sticky the whole suite is. Google wants to be as ubiquitous as Microsoft used to feel.
(Google and Microsoft have also had altruistic goals of making the world a better place while running their business, but of course that means they have to be successful at business to be successful in their altruistic goals.)
Google has been pretty good at understanding how far Digital Life will reach into Real Life. Want to find out where you are physically and where you’re going? There’s a Google (Maps) for that. Want to watch millions of channels of video? There’s a Google (YouTube) for that. Want to talk to your friends, family and business associates on the phone? There’s a Google (Android, Voice) for that. Etc.
It took them a while to figure out that “socializing with friends” was a big part of regular folks’ Real Life, and then it’s taken them a while to figure out how to make a Google for that. But it looks to me like they got it right with Plus.
Bonus look at the other players in the game:
Apple: understands the idea of a Digital Life, but hampered by its long-term view that Digital Life would be built around digital assets (documents, apps, media), instead of Real Life.
Facebook: has a huge head start on mediating your Digital Life, because it’s built on socializing, which is a big part of regular folks’ Real Life. May or may not figure out there are other parts to it.
Microsoft: mediated most people’s Digital Life for a long time. Parts of it understand that there’s more to Digital Life than Digital Office. But they may die by milking their old cash cow (Innovator’s Dilemma) before succeeding in the new game.
Yahoo: accidentally, subconsciously, understood Digital Life early on. Couldn’t wake up and realize it consciously, gave away the race.
Categories: social media
Tagged with: facebook
• google plus
• social nets
Date: July 8th, 2011 dw
Time Magazine’s choice of Person of the Year is meaningless as data, but meaningful as metadata. Picking one person as the most influential in a year is almost always just silly. No one takes it seriously except as a signifier of broader cultural currents.
This year it’s Mark Zuckerberg. That seems to me to be one of the many reasonable choices Time could have made. But I have two meta-comments.
1. I’m glad that Time took MZ over Julian Assange. Facebook is truly influential and important. WikiLeaks’ importance is primarily symbolic, and it has been given that symbolic importance mainly by forces that want to use it as justification for killing what they don’t like about the Internet — its openness, its bottom-uppity character, its distrust of extrinsic controls…in other words, all that makes it the Internet.
2. The contrast the Time article draws between MZ and the portrait of him in The Social Network (a movie I did not care for) will, I hope, hurt the movie’s chances at the Oscars. It makes vandalism of Wikipedia’s biographies of living people look bush league.
(Lev Grossman’s cover story about MZ for Time is well worth reading.)
Tagged with: facebook
• mark zuckerberg
Date: December 15th, 2010 dw
Paul Ohm (law prof at U of Colorado Law School — here’s a paper of his) moderates a panel among those with lots of data. Panelists: Jessica Staddon (research scientist, Google), Thomas Lento (Facebook), Arvind Narayanan (post-doc, Stanford), and Dan Levin (grad student, U of Mich).
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
Dan Levin asks what Big Data could look like in the context of law. He shows a citation network for a Supreme Court decision. “The common law is a network,” he says. He shows a movie of the citation network of the first thirty years of the Supreme Court. Fascinating. Marbury remains an edge node for a long time. In 1818, the net of internal references blooms explosively. “We could have a legalistic genome project,” he says. [Watch the video here.]
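The “common law as network” idea is easy to play with: treat each decision as a node and each citation as a directed edge back to an earlier decision, then ask which decisions accumulate the most in-links. This is my own toy sketch, not Dan Levin’s dataset or code; the graph below is illustrative only.

```python
# Toy citation network: each decision maps to the list of earlier
# decisions it cites. The entries here are illustrative only.
citations = {
    "Marbury v. Madison (1803)": [],
    "Fletcher v. Peck (1810)": ["Marbury v. Madison (1803)"],
    "McCulloch v. Maryland (1819)": ["Marbury v. Madison (1803)",
                                     "Fletcher v. Peck (1810)"],
}

def in_degree(graph):
    """Count how often each decision is cited by later ones
    (its in-degree) -- a crude measure of doctrinal importance."""
    counts = {case: 0 for case in graph}
    for cited_cases in graph.values():
        for cited in cited_cases:
            counts[cited] += 1
    return counts
```

Run over real court records (say, the PACER corpus), the same in-degree count is one crude way to watch a decision move from the network’s edge toward its center over time.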
What will we be able to do with big data?
Thomas Lento (Facebook): Google flu tracking. Predicting via search terms.
Jessica Staddon (Google): Flu tracking works pretty well. We’ll see more personalization to deliver more relevant info. Maybe even tailor privacy and security settings.
Dan: If someone comes to you as a lawyer and asks if she has a case, you’ll do a better job deciding if you can algorithmically scour the PACER database of court records. We are heading for a legal informatics revolution.
Thomas: Imagine someone could tell you everything about yourself, and cross ref you with other people, say you’re like those people, and broadcast it to the world. There’d be a high potential for abuse. That’s something to worry about. Further, as data gets bigger, the granularity and accuracy of predictions gets better. E.g., we were able to beat the polls by doing sentiment analysis of msgs on Facebook that mention Obama or McCain. If I know who your friends are and what they like, I don’t actually have to know that much about you to predict what sort of ads to show you. As the computational power gets to the point where anyone can run these processes, it’ll be a big challenge…
Jessica: Companies have a heck of a lot to lose if they abuse privacy.
Helen Nissenbaum: The harm isn’t always to the individual. It can be harm to the democratic system. It’s not about the harm of getting targeted ads. It’s about the institutions that can be harmed. Could someone explain to me why to get the benefits of something like the Flu Trends you have to be targeted down to the individual level?
Jessica: We don’t always need the raw data for doing many types of trend analysis. We need the raw data for lots of other things.
Arvind: There are misaligned incentives everywhere. For the companies, it’s collect data first and ask questions later; you never know what you’ll need.
Thomas: It’s hard to understand the costs and benefits at the individual level. We’re all looking to build the next great iteration or the next great product. The benefits of collecting all that data are not clearly defined. The cost to the user is unclear, especially down the line.
Jessica: Yes, we don’t really understand the incentives when it comes to privacy. We don’t know if giving users more control over privacy will actually cost us data.
Arvind describes some of his work on re-identification, i.e., taking anonymized data and de-anonymizing it. (Arvind worked on the deanonymizing of Netflix records.) Aggregation is a much better way of doing things, although we have to be careful about it.
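Arvind’s aggregation point can be made concrete: instead of releasing raw records, release only per-group counts, and suppress any group smaller than some threshold k, so no count points at just a few individuals. This is my own minimal illustration of the general idea, not the panelists’ method; the function name and the k=5 default are assumptions.

```python
def aggregate_with_threshold(records, key, k=5):
    """Release per-group counts only for groups with at least k
    records, suppressing smaller cells. A crude sketch of the
    'aggregate, don't release raw data' idea: small groups are the
    ones most easily re-identified, so their counts are withheld."""
    counts = {}
    for rec in records:
        group = rec[key]
        counts[group] = counts.get(group, 0) + 1
    return {g: n for g, n in counts.items() if n >= k}
```

A group of two people simply vanishes from the output; the trade-off, as the panel notes, is that some analyses genuinely need the raw data this approach throws away.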
Q: In other fields, we hear about distributed innovation. Does big data require companies to centralize it? And how about giving users more visibility into the data they’ve contributed — e.g., Judith Donath’s data mirrors? Can we give more access to individuals without compromising privacy?
Thomas: You can do that already at FB and Google. You can see what your data looks like to an outside person. But it’s very hard to make those controls understandable. There are capital expenditures to be able to do big data processing. So, it’ll be hard for individuals, although distributed processing might work.
Paul: Help us understand how to balance the costs and benefits? And how about the effect on innovation? E.g., I’m sorry that Netflix canceled round 2 of its contest because of the re-identification issue Arvind brought to light.
Arvind: No silver bullets. It can help to have a middleman, which helps with the misaligned incentives. This would be its own business: a platform that enables the analysis of data in a privacy-enabled environment. Data comes in one side. Analysis is done in the middle. There’s auditing and review.
Paul: Will the market do this?
Jessica: We should be thinking about systems like that, but also about the impact of giving the user more controls and transparency.
Paul: Big Data promises vague benefits — we’ll build something spectacular — but that’s a lot to ask for the privacy costs.
Paul: How much has the IRB (institutional review board) internalized the dangers of Big Data and privacy?
Dan: I’d like to see more transparency. I’d like to know what the process is.
Arvind: The IRB is not always well suited to the concerns of computer scientists. Maybe the current monolithic structure is not the best way.
Paul: What mode of solution of privacy concerns gives you the most hope? Law? Self-regulation? Consent? What?
Jessica: The one getting the least attention is the data itself. At the root of a lot of privacy problems is the need to detect anomalies. Large data sets help with this detection. We should put more effort into turning the data around to use it for privacy protection.
Paul: Is there an incentive in the corporate environment?
Jessica: Google has taken some small steps in this direction. E.g., Google’s “Got the wrong Bob” tool for Gmail warns you if you seem to have included the wrong person in a multi-recipient email. [It’s a useful tool. I send more email to the Annie I work with than to the Annie I’m married to, so my autocomplete keeps wanting to send the Annie I work with information about my family. Got the wrong Bob catches those errors.]
Dan: It’s hard to come up with general solutions. The solutions tend to be highly specific.
Arvind: Consent. People think it doesn’t work, but we could reboot it. M. Ryan Calo at Stanford is working on “visceral notice,” rather than burying consent at the end of a long legal notice.
Thomas: Half of our users have used privacy controls, despite what people think. Yes, our controls could be simpler, but we’ve been working on it. We also need to educate people.
Q: FB keeps shifting the defaults more toward disclosure, so users have to go in and set them back.
Thomas: There were a couple of privacy migrations. It’s painful to transition users, and we let them adjust privacy controls. There is a continuum between the value of the service and privacy: with total privacy it would have no value. It also wouldn’t work if everything were open: people will share more if they feel they control who sees it. We think we’ve stabilized it and are working on simplification and education.
Paul: I’d pick a different metaphor: The birds flying south in a “privacy migration”…
Thomas: In FB, you have to manage all these pieces of content that are floating around; you can’t just put them in your “house” for them to be private. We’ve made mistakes but have worked on correcting them. It’s a struggle of a mode of control over info and privacy that is still very new.
Categories: too big to know
Tagged with: 2b2k
Date: November 30th, 2010 dw