Here’s the start of a piece I posted at Medium about one thing we might do with a gigabit connection.
It’s 2017 and this year’s riot is in San Diego. It involves pandas, profit-driven zoo executives, and a Weight Watchers sponsorship. Doesn’t matter. People are massing in the streets and it’s heading toward a confrontation.
You first hear about this on Twitter. The embedded link takes you to FlyEye, a site that is unrelated to whatever sites and companies own trademarks like it in 2014. (Stand down, lawyers! This is all made up!)
Thankfully, San Diego in 2017 provides gigabit connectivity. In fact, the entire nation has gigabit, thanks to a personal appearance by Jesus H. Christ in the Comcast headquarters in late 2015.
At the FlyEye site you scan a huge video wall that shows you a feed from every person out in the streets who is sporting a meshed GoPro or Google Glass wearable video camera. Thousands of them. All 4K, of course.
Read the rest here.
Tagged with: gigabit
Date: October 16th, 2014 dw
Despite the claims of some — and unfortunately some of these some run the companies that provide the US with Internet access — there are n reasons why we need truly high-speed, high-capacity Internet access, where n = everything we haven’t invented yet.
If we had truly high-speed, high-capacity Internet access, protesters in Ferguson might have each worn a GoPro video camera, or even just all pressed “Record” on their smartphones, and those of us not in Ferguson could have dialed among them to see what’s happening. In fact, it’s pretty likely someone would have written an app that treats co-located video streams as a single source to be made sense of, giving us fish-eye, fly-eye perspectives anywhere we want to focus: a panopticon for social good.
Clive Thompson is talking about the quest to build a new Net without its flaws.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
Problems: Governments can shut down the Net. Corporations have a lot of control over copyright, causing the corporations that deliver the Net to throttle it. And Mother Nature can shut it down, e.g. Sandy. Is it possible to build another Internet? He’s been talking with people about this, mainly with people building mesh networks.
But can you make a mesh big enough to cover the planet? Not now. But what are the biggest ones now? (1) Guifi.net in Spain started about 10 years ago. 19,509 nodes, 43 web servers, etc. They don’t view themselves as building a new Net but extending the old one. (2) Athens Municipal Wireless covers about 50% of Greece and hosts internal versions of Google, Yahoo, etc. Way faster than the normal Net. (3) Quintana Libre serves a rural town of fewer than 500 people, with speeds up to 20 Mbps. (4) Red Hook’s mesh provides local services that don’t show up off the mesh.
For mesh to work, the local community has to be really invested in it.
A guy in Australia has tech — Serval — that lets you make mobile-to-mobile calls to your normal phone number but without cell towers.
So, can you build another Net? You can do inter-continental hops: Australia to Slovenia exists. People are talking about buying decommissioned satellites “but you’d need a really big Kickstarter for that.”
So can you build a new one? Yes, and no, and no, and maybe. Reliability and scalability are open questions. Mesh software is still way too geeky. And there’s a bootstrapping conundrum: people want global connections, not just local ones.
Q: [me] But mesh currently connects outside of the mesh to the Internet via the existing Internet backbones. Is there any hope for a pure mesh Internet?
A: Not yet.
Q: Suppose we scaled back our expectations and started with mesh just for SMS, for example?
A: Yes, and that’d be great for activists who want to connect.
A: The examples I gave are all different. Quintana just wants access to the big Internet, but in Athens much of what they want is access to their own local stuff.
Tagged with: mesh
• net neutrality
Date: January 18th, 2013 dw
Lots of good stuff as VP Gore answers questions mainly about climate change.
But there’s also this from him:
Our national information infrastructure is no longer competitive. We need to invest in more bandwidth, easier access, and the rapid transition of our democratic institutions to the internet. And we need to protect the freedom of the internet against corporate control by legacy businesses that see it as a threat, and against the obscene invasions of privacy and threats to security from government and corporations alike. Please think about this: almost every time there has been a choice between privacy/security on the one hand and convenience on the other, the mass of folks have chosen convenience. I for one believe the “stalker economy” on the internet is undemocratic and anti-American. Are folks at the gag point on this yet? Thanks, btw, to the Reddit community for fighting off SOPA and PIPA. Keep your powder dry; more big struggles ahead.
F@#$ing Florida :(
Google commissioned the compilation of
an international dataset of retail broadband Internet connectivity prices. The result was an international dataset of 3,655 fixed and mobile broadband retail price observations, with fixed broadband pricing data for 93 countries and mobile broadband pricing data for 106 countries. The dataset can be used to make international comparisons and evaluate the efficacy of particular public policies—e.g., direct regulation and oversight of Internet peering and termination charges—on consumer prices.
The links are here. WARNING: a knowledgeable friend of mine says that he has already found numerous errors in the data, so use them with caution.
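Given the caveat about errors, it’s worth sanity-checking the data yourself before comparing countries. Here is a minimal sketch of the kind of comparison the dataset enables, using pandas. The column names ("country", "service", "monthly_price_usd") and the sample rows are my invention; the real dataset’s schema will differ.

```python
# Hypothetical sketch: ranking countries by median fixed-broadband price.
# Column names and sample values are assumptions, not the real dataset.
import pandas as pd

data = pd.DataFrame({
    "country": ["US", "US", "KR", "KR", "SE"],
    "service": ["fixed", "mobile", "fixed", "mobile", "fixed"],
    "monthly_price_usd": [45.0, 60.0, 25.0, 30.0, 32.0],
})

# Median fixed-broadband price per country, cheapest first
fixed = data[data["service"] == "fixed"]
medians = fixed.groupby("country")["monthly_price_usd"].median().sort_values()
print(medians)
```

With the real file you’d load it via `pd.read_csv` and, per the warning above, inspect outliers before trusting any ranking.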
Tagged with: 2b2k
• big data
• too big to know
Date: August 27th, 2012 dw
As you likely know, Google is in the midst of providing ‘ultra high speed fiber’ access to the residents of Kansas City (MO and KS). (‘Ultra high speed’ means at least 1 Gbps, which is 50-100x faster than your 10-20 Mbps connection.) This has been positioned as an experiment, and as a poke in the eye to the incumbents to “show ‘em how it’s done.” And it has apparently made the incumbents nervous enough to offer residents a bounty for tips about the deployment.
Now Bill St. Arnaud speculates about how Google is going to turn this into a business. I have zero idea if he’s right, simply because I don’t know enough to have an opinion, but it sure is some interesting speculation.
Bill’s post is very readable, so I suggest you not rely on my summary, but here goes. First, Bill wonders how Google could hope to make back its investment in the physical infrastructure, since providers need about 40% of the market to subscribe to drop the per-user cost sufficiently. But (Bill figures), the incumbents will never let Google take 40% of their market. So, Bill figures:
Google will offer a basic free high speed Internet to each and every home, perhaps bundled with Google TV using their new set top box. A variety of premium services will also be offered for additional fees. I would not be surprised that Google decided to offer a basic 1 Gbps service to every home. This would clearly differentiate Google from the cableco or telco and make it almost impossible for them to compete without undertaking a massive investment themselves.
But, Bill guesses that the premium services will still not make the venture profitable. So, he speculates that Google…
…could offer to peak manage the customer’s power usage, by briefly turning off air conditioners and hot water tanks. They could also install smart thermostats and other devices to further reduce energy consumption. The money in the energy savings would be used to pay for the fiber or premium services, rather than being returned to the customer as a piffling amount of energy savings.
So, the deal to users would be: We’ll give you incredibly high speed connectivity (or we’ll give you some great premium services) if you’ll let your energy company install a smart thermostat and manage your peak energy consumption in ways you won’t much notice. The user’s energy bills don’t go down (or don’t go down proportional to their energy consumption decrease), and the energy company shares the money with Google.
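To see why the individual savings could be “piffling” while the pooled savings matter, here is a back-of-the-envelope sketch of Bill’s model. Every number below is invented for illustration; none comes from Bill’s post.

```python
# Toy arithmetic for the pooled-energy-savings idea. All figures are
# hypothetical assumptions, not anything Bill or Google has published.
homes = 100_000                 # homes on the fiber network
savings_per_home = 5.00         # monthly peak-shaving savings per home (USD)
fiber_cost_per_home = 1_200.00  # build cost per home passed (USD)

monthly_pool = homes * savings_per_home            # aggregated "piffling" savings
payback_months = (homes * fiber_cost_per_home) / monthly_pool
print(f"${monthly_pool:,.0f}/month pooled; payback in {payback_months:.0f} months")
```

At these made-up numbers, $5/month is indeed piffling to any one household, but the pool is $500,000/month; even so, the payback runs 240 months, which suggests the energy savings could only subsidize, not fully fund, the fiber.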
I’m not convinced that users would take the deal positioned that way. Maybe I’m positioning it wrong, but it seems like a pretty complex offer. I think I’d rather take a deal with my energy company to lower my usage and my costs, and then decide if I want to pay Google for fiber access or for premium fiber access. I already resent the cablecos for making their “triple play” (telephone, tv, Internet) pragmatically a requirement to get any one of the three. A double play of Internet and energy savings would be even weirder.
But Bill knows approximately 50x more about this than my own poor brain does. The key, I believe, is in the energy company being able to claim that the decrease in energy consumption will be minor, the noticeable impact on the user will be negligible, and the individual monetary savings would in any case be “piffling.” If he’s right, it’ll be fascinating to watch.
This isn’t right, is it?
Tagged with: broadband
Date: July 13th, 2012 dw
A post by Stacey Higginbotham at GigaOm talks about the problems of moving Big Data across the Net so that it can be processed. She draws on an article by Mari Silbey at SmartPlanet. Mari’s example is a telescope being built on Cerro Pachón, a mountain in Chile, that will ship many high-resolution sky photos every day to processing centers in the US.
Stacey discusses several high-speed networks, and the possibility of compressing the data in clever ways. But a person on a mailing list I’m on (who wishes to remain anonymous) pointed to GLIF, the Global Lambda Integrated Facility, which rather surprisingly is not a cover name for a nefarious organization out to slice James Bond in two with a high-energy laser pointer.
The title of its “informational brochure” [pdf] is “Connecting research worldwide with lightpaths,” which helps some. It explains:
GLIF makes use of the cost and capacity advantages offered by optical multiplexing, in order to build an infrastructure that can take advantage of various processing, storage and instrumentation facilities around the world. The aim is to encourage the shared use of resources by eliminating the traditional performance bottlenecks caused by a lack of network capacity.
Multiplexing is the carrying of multiple signals at different wavelengths on a single optical fiber. And these wavelengths are known as … wait for it … lambdas. Boom!
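The arithmetic that makes lambdas exciting is simple multiplication: each wavelength carries its own full-rate signal, so one strand of fiber carries the sum. A quick sketch, with an illustrative channel count (the 40 lambdas below are a common DWDM figure, not GLIF’s actual configuration):

```python
# Rough capacity arithmetic for wavelength-division multiplexing.
# The channel count is illustrative, not GLIF's published specs.
lambdas_per_fiber = 40   # distinct wavelengths multiplexed on one fiber
gbps_per_lambda = 100    # line rate per wavelength (the "100 gigabit waves")

fiber_capacity_gbps = lambdas_per_fiber * gbps_per_lambda
print(f"One fiber: {fiber_capacity_gbps:,} Gbps = {fiber_capacity_gbps / 1000} Tbps")
```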
My mailing list buddy says that GLIF provides “100 gigabit optical waves”, which compares favorably to your pathetic earthling (um, American) 3-20 Mbps broadband connection (maybe 50 Mbps if you have FiOS), and he notes that GLIF is available in Chile.
To sum up: 1. Moving Big Data is an issue. 2. We are not at the end of innovating. 3. The bandwidth we think of as “high” in the US is a miserable joke.
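To put point 3 in concrete terms, here is the unit arithmetic for moving a single terabyte at the speeds mentioned above. Nothing here is assumed beyond the line rates already quoted:

```python
# How long to move 1 TB at various line rates; pure unit arithmetic.
terabyte_bits = 8 * 10**12   # 1 TB (decimal) expressed in bits

for label, bps in [("20 Mbps cable", 20e6),
                   ("50 Mbps FiOS", 50e6),
                   ("100 Gbps lightpath", 100e9)]:
    seconds = terabyte_bits / bps
    print(f"{label}: {seconds / 3600:.1f} hours")
```

At 20 Mbps a terabyte takes about 111 hours; on a 100 Gbps lightpath, about 80 seconds. That gap is the whole story for shipping telescope data.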
By the way, you can hear an uncut interview about Big Data I did a few days ago for Breitband, a German radio program that edited, translated, and broadcast it.
Tagged with: 2b2k
• big data
• too big to know
Date: July 7th, 2012 dw
States are being pushed to pass legislation to prevent cities from offering municipal wifi, in order to preserve the current providers’ de facto monopolies. The latest are Georgia and South Carolina, because it would, like, be, um, terrible and, er, un-American to let localities experiment and maybe enter into private-public partnerships to speed more even distribution of Net access, or maybe even to view minimal Net access as some sort of public good or, well, do anything that doesn’t first of all maximize the profits of some large companies following a policy that has pushed America way down the global list of broadband access in terms of prices and speeds, because you know the Net is just used for porn and games and stuff and we have to PROTECT THE JOB CREATORS, yeah that’s it.
Tagged with: wifi
Date: January 25th, 2012 dw
Benoît Felten and Herman Wagter have published a follow up to their 2009 article “Is the ‘bandwidth hog’ a myth?.” The new article (for sale, but Benoit summarizes it on his blog) analyzes data from a mid-size North American ISP and confirms their original analysis: Data caps are at best a crude tool for targeting the users who most affect the amount of available bandwidth.
Read Benoît’s post for the details (or at least a fairly detailed overview of the details). But here’s the gist:
Benoît and Herman looked at the actual usage data in five minute increments of broadband customers sharing a single aggregation link. They looked both at the total number of megabytes being downloaded (= data consumption) and the number of megabits per second being used (= bandwidth usage).
They found that there is indeed a set of users who download a whole lot: “The top 1% of data consumers…account for 20% of the overall consumption.” But half of these “Very Heavy consumers” are doing so on plans that give them only 3 Mbps, as opposed to the highest tier of this particular ISP, which is 6 Mbps. So, even with their heavy consumption, their bandwidth usage is already limited. Further, if you look at who is using the most bandwidth during peak hours, 85.3% of the bandwidth is being used by those who are not Very Heavy users.
Here’s the point. ISPs assume that Very Heavy users (= “data hogs” = “people who use the bandwidth they’re paying for”) are responsible for clogging the digital arteries. So, the ISPs measure data consumption in order to preserve bandwidth. But, according to Benoît and Herman’s data, the vast bulk of bandwidth during the times when bandwidth is scarce (= peak hours) is not taken up by the Very Heavy users. Thus, punishing people for downloading too much targets the wrong people. Data consumption is not a good measure of critical broadband usage.
Put differently: “42% of all customers (and nearly 48% of active customers) are amongst the top 10% of bandwidth users at one point or another during peak hours.” The problem therefore is not “data hogs.” It’s people going about their normal business of using the Net during the most convenient hours.
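The distinction driving the whole analysis — total megabytes consumed versus megabits-per-second used during peak — can be made concrete with a toy version of their measurement. Everything below is invented sample data, not Benoît and Herman’s dataset; it just shows how the two metrics are computed from the same five-minute samples.

```python
# Toy version of the consumption-vs-bandwidth distinction. All usage
# data here is randomly generated, not from the real ISP dataset.
import random
random.seed(0)

SAMPLES = 288            # five-minute intervals in one day
PEAK = range(228, 276)   # hypothetical peak window, 7pm-11pm

# Each user's Mbps in each five-minute interval (invented traffic)
users = {f"user{i}": [random.uniform(0, 6) for _ in range(SAMPLES)]
         for i in range(200)}

def consumption_mb(rates):
    """Total megabytes downloaded over the day (Mbps * 300 s / 8 bits-per-byte)."""
    return sum(r * 300 / 8 for r in rates)

# "Very Heavy" = top 1% by total data consumption
ranked = sorted(users, key=lambda u: consumption_mb(users[u]), reverse=True)
very_heavy = set(ranked[:len(ranked) // 100])

# Their share of bandwidth during peak hours
peak_total = sum(users[u][t] for u in users for t in PEAK)
peak_heavy = sum(users[u][t] for u in very_heavy for t in PEAK)
print(f"Very Heavy users' share of peak bandwidth: {peak_heavy / peak_total:.1%}")
```

With everyone generating similar traffic, the top consumers’ share of peak bandwidth is tiny — which is exactly the kind of result Benoît and Herman report from real data.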
I asked Benoît (via email) what he thinks would be a more effective and fair way of limiting usage during peak hours, and he replied:
throttling everyone indiscriminately during actual peaks (ie. not predetermined times that could be considered peak) would be a fairer solution, although the cost of implementing that should be weighed against the cost of increasing the capacity in the aggregation, core and transit. The economics don’t necessarily work. And of course, that would affect all users, and might create dissatisfaction. But it would be fair and more effective.
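Benoît’s alternative — throttle only during actual peaks, and throttle everyone alike — can be sketched as a simple allocation rule. This is my illustration of the idea, not anything from his post; the capacity figure is arbitrary.

```python
# Sketch of "throttle everyone during actual peaks": only when aggregate
# demand exceeds link capacity, scale every user's rate by the same factor.
CAPACITY_MBPS = 100.0   # arbitrary illustrative link capacity

def allocate(demands_mbps):
    """Per-user rates: untouched off-peak, proportionally squeezed during a real peak."""
    total = sum(demands_mbps)
    if total <= CAPACITY_MBPS:
        return list(demands_mbps)       # no actual peak: nobody is throttled
    factor = CAPACITY_MBPS / total      # actual peak: everyone scaled alike
    return [d * factor for d in demands_mbps]

print(allocate([10, 20, 30]))    # under capacity: demands returned unchanged
print(allocate([40, 60, 100]))   # over capacity: scaled to fit 100 Mbps
```

The fairness property is that the squeeze applies only when congestion actually exists, rather than at predetermined hours or against predetermined “hogs.”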
In any case, the data suggest that “data hogs” are not the main culprits causing bandwidth scarcity. The real problem is you and me using our bandwidth non-hoggishly.
The plan to provide ultra high speed Internet connectivity to universities (mainly in the heartland) is exciting. And it’s got some serious people behind it, including Lev Gonick and Blair Levin.
The NY Times article, seeking to find something negative to say about it, finds someone who doubts that providing significantly higher speeds will lead to innovative uses of those greased-lightning pipes. Does history count for nothing?
Tagged with: broadband
Date: July 28th, 2011 dw