Joho the Blog: January 2016

January 28, 2016

Keep the Web unbroken, with Amber

When sites go down, they don’t take the links to them with them. So, your posts now point to 404s. That’s not just an inconvenience. It’s Web entropy, and over time it will render the Web less and less useful and even less intelligible.

Amber fights Web entropy. It’s a plugin for WordPress or Drupal that automatically takes a snapshot of whatever you’re linking to. If the linked site goes down — or is taken down by a government that doesn’t like what it’s saying — your readers will still be able to read what was there when you linked to it.

For example, this is a page that I posted and then took down. It was here: http://toobigtoknow.com/amberSample.html. It’s not there now. But if you hover over the link, Amber shows you what you’d otherwise be missing.

Amber’s pedigree literally could not be better. It’s a project from the Berkman Center, from an idea cooked up by Jonathan Zittrain and Tim Berners-Lee. It is a fully distributed system, thus helping to re-decentralize the Web, although you can opt to store the page images at sites like the Internet Archive, Perma.cc, and Amazon AWS.

What are you waiting for?
If you install Amber and it’s not working, make sure that you’ve created a folder called “amber” in your WordPress “uploads” directory: /wp-content/uploads/amber.
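If you're comfortable at the command line, creating that folder is a one-liner. This sketch assumes a default WordPress layout with the blog at the installation root; adjust the path if your uploads directory lives somewhere else, and make sure the web server's user can write to it:

```shell
# From the WordPress installation root: create the cache folder Amber
# expects for storing snapshots. -p creates missing parent folders and
# does nothing if the folder already exists.
mkdir -p wp-content/uploads/amber
```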

8 Comments »

January 26, 2016

Oscars.JSON. Bad, bad JSON

Because I don’t actually enjoy watching football, during the Pats vs. Broncs game on Sunday I transposed the Oscar™ nominations into a JSON file. I did this very badly, but I did it. If you look at it, you’ll see just how badly I misunderstand JSON on some really basic levels.

But I posted it at GitHub™ where you can fix it if you care enough.

Why JSON™? Because it’s an easy format for inputting data into JavaScript™ or many other languages. It’s also human-readable, if you have a good brain for indents. (This is very different from having many indents in your brain, which is one reason I don’t particularly like to watch football™, even with the helmets and rules, etc.)

Anyway, JSON puts data into key:value™ pairs, and lets you nest them into sets. So, you might have a keyword such as “category” that would have values such as “Best Picture™” and “Supporting Actress™.” Within a category you might have a set of keywords such as “film_title” and “person” with the appropriate values.

JSON is such a popular way of packaging up data for transport over the Web™ that many (most? all?) major languages have built-in functions for transmuting it into data that the language can easily navigate.
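To make that concrete, here's a miniature sketch of how nominations might nest, and JavaScript's built-in JSON.parse turning the string into data you can navigate. The key names ("categories", "nominees", "film_title") are illustrative, not necessarily the ones in my actual file:

```javascript
// A tiny Oscar-nominations structure as a JSON string. The key names
// here are illustrative only.
var oscarJson = '{' +
  '"categories": [' +
    '{"category": "Best Picture",' +
    ' "nominees": [ {"film_title": "Spotlight"}, {"film_title": "The Big Short"} ]}' +
  ']}';

// JSON.parse is the built-in that turns the JSON text into real data.
var data = JSON.parse(oscarJson);
console.log(data.categories[0].category);               // "Best Picture"
console.log(data.categories[0].nominees[1].film_title); // "The Big Short"
```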

So, why bother putting the Oscar™ nomination info into JSON? In case someone wants to write an app that uses that info. For example, if you wanted to create your own Oscar™ score sheet or, to be honest, entry tickets for your office pool, you could write a little script and output it exactly as you’d like. (Or you could just google™ for someone else’s Oscar™ pool sheet.) (I also posted a terrible little PHP script™ that does just that.)

So, a pointless™ exercise™ in truly terrible JSON design™. You’re welcome™!

Comments Off on Oscars.JSON. Bad, bad JSON

January 23, 2016

Guns, Sarah Palin, and other hilarious stuff

My brother Andy points to a New Yorker humor post by John Quaintance about the original intent of the Second Amendment. It’s simultaneously hilarious and sad.

Then, in the righthand column there’s a link to an Andy Borowitz post with an Onion-esque title that I enjoyed:

Palin Blames Obama for Her Defeat in 2008 Election

And while we’re on the subject of terribly sad mirth, there’s also Colbert’s hilarious impersonation of the First Hockey Mom’s rhetorical style / way of thinking.

2 Comments »

January 22, 2016

Open Syllabus Project goes live—Yay for open platforms!

The Open Syllabus Project has just gone live with a terrific beta Web site and a front page article about it by two of the main people on the project. (I’m proud to be an advisor to the group.)

The OSP is an open platform that so far has aggregated over a million syllabi. At the beta version of their search site you can do plain old searches, or filter by a number of factors. Want to see the most frequently taught work at Harvard? In the state of Texas? In the field of biology? Lucky you.

The project is computing what it calls a “Teaching Score” for each work, a number from 1 to 100. This is along the same lines as the StackScore I’ve been pushing for, a metric we use in Harvard’s LibraryCloud Project and that will be used in the Linked Data for Libraries project. (The OSP used Harvard’s open catalog metadata as a main source for book metadata and disambiguation; that metadata is available through LibraryCloud’s API. It’s an intertwingly world.)

The OSP plans on making its data available through open APIs, which will multiply the good effect it has. Sites will be able to integrate data from the OSP through the API, developers will be able to create apps that use that data, and researchers will find ways to investigate it that we literally cannot imagine.

Now, you’d think someone would have done something like the OSP years ago. In fact, there have certainly been efforts. For example, Dan Cohen (currently head of the DPLA) scoured the Web and aggregated about a million publicly available syllabi. But the sad truth is that most academic institutions don’t make their syllabi openly available. In fact, many institutions and many professors copyright their syllabi. That makes sense to me if they have written little essays in them. But as a listing of topics and works, I can’t imagine why anyone would insist on asserting copyright. What’s the worst that would happen? Some other teacher copies your syllabus perfectly? That teacher has learned from you, and you’re going to teach your course differently anyway. Meanwhile, the potential good from sharing syllabi is enormous: We can learn from one another. We can see unintended patterns that may express wisdom or bias.

The OSP is here. It’s going to make a real difference.

2 Comments »

Good luck with the snow, my southern friends

I just got off the phone with a friend in DC, where a couple of days ago one inch of snow caused six-hour tie-ups that led some people to abandon their cars. Now the city is expecting a record thirty inches (or what we in Boston call, “Oh, it looks like it may have snowed overnight”). Residents are being told to expect power outages.

Best of luck to you all. You have New England’s sympathies. (And you also have an offer of help from Boston’s mayor.)

At least we can look forward to the Republican snowball fight in Congress to prove that global warming is a myth.

1 Comment »

January 16, 2016

Getting the path from the Dropbox API

Suppose you’re using the Dropbox API to let a user choose a file from her Dropbox folder and open it in your application. Dropbox provides a convenient widget — the Chooser — you can more or less just drop into your Web page. But…suppose you want to find out the path of an item that a user opens. For example, you want to know not only that the user has opened “testfile.txt” but that it’s “Dropbox/testfolder/TestA/testfile.txt”. The Chooser only tells you the link is something like:

https://www.dropbox.com/s/lry43krhdskl0bxeiv/testfile.txt?dl=1

Figuring out how to get that path information took me /for/ev/er. I know it shouldn’t have, but it did. So, here’s how I’m doing it. (As always, please try not to laugh at my efforts at coding. I am an amateur. I suck. Ok?) (I owe thanks to Andrew Twyman at Dropbox who went out of his way to help me. Thanks, Andrew! And none of this is his fault.)

The way to get the path is explained in Dropbox’s API documentation, but that documentation assumes I know more than I do. Dropbox also provides an API Explorer that lets you try out queries and shows you the code behind them. Very helpful, but not quite helpful enough for the likes of me, because I need to know what the actual PHP or JavaScript code is. (It’d be easier if I knew Python. Someday.)

So, here’s roughly how I got it working. I’m going to skip some of the preliminaries because I went through them in a prior post: how to register an app with Dropbox so you can embed the Dropbox Chooser that lets users browse their Dropbox folders and download a file.

That prior post included code that initializes the Chooser. I want to add a single line to it so we can get the pathname of the downloaded document:

 1  var opts = {
 2    success: function(files) {
 3      var filename = files[0].link;
 4      filename = filename.replace("dl=0", "dl=1");
 5      $("#filenamediv").text(files[0].name);
 6      alert(filename);
 7      $.ajax({
 8        url: "./php/downloadDropboxContents2.php",
 9        data: {src: filename},
10        success: function(cont){
11          $("#busy").show();
12          openOpmlFile_File(filename);
13          $("#busy").hide();
14          setCookie("lastfile", "/php/currentFile/" + filename);
15          getDropboxPath(filename);
16        },
17        error: function(e){
18          alert(e.responseText);
19        }
20      });
21    },
22    multiselect: false,
23    extensions: ['.opml'],
24    linkType: "download"
25  };
26  var button = Dropbox.createChooseButton(opts);
27  document.getElementById("uploadDB").appendChild(button);
When a user chooses a file from the Chooser, the “success” function that starts on line 2 is invoked. That function is passed information about the files that have been opened by the user in an array, but since I’m only allowing users to open one file at a time, the information is always going to be in the first and only element of that array. That information includes something called “link,” which is a link to the file that does not include the path information. So, in line 15 — the only new line — we’re going to pass that link to a function that will get that elusive path.

 1  function getDropboxPath(link){
 2    $.ajax({
 3      type: "POST",
 4      beforeSend: function (request)
 5      {
 6        request.setRequestHeader("Content-Type", "application/json");
 7      },
 8      url: "https://api.dropboxapi.com/2/sharing/get_shared_link_metadata?authorization=Bearer [INSERT YOUR AUTHORIZATION CODE]",
 9      data: JSON.stringify({url: link}),
10      success: function(cont){
11        alert(cont.path_lower);
12      },
13      error: function(e){
14        alert(e.responseText);
15      }
16    });
17  }

This is another AJAX call; it too assumes that you’ve included jQuery. (See the prior post.)

Now, how does this work? Well, I’m not entirely sure. But it’s sending a request to the Dropbox API as a standard HTTP web call, which means you have to include the headers that web servers expect. So, in line 6 you set the Content-Type header to tell the server that the data you’re sending is JSON, not a standard form submission. (JSON is a standard way of encoding human-readable, multipart information.)

In line 8 you’re constructing the URL you’re going to send your request to. Everything up to the question mark is simply the URL of the Dropbox API endpoint for getting metadata about a link. After the question mark you’re telling it that you’re authorized to make this request, which requires getting an authorization code from Dropbox. I’m probably cheating by using the one that the API Explorer gives you, but it works for now, so I’ll worry about that when it breaks, which will probably be the next time I use it. Anyway, you need to insert your authorization code where it says “insert your authorization code” in all caps.
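A side note I picked up along the way: instead of tacking the token onto the URL, the more conventional pattern for Dropbox's API v2 is to send it in an Authorization request header. Here's a sketch of the same call restructured that way; the token and link values are placeholders, not real ones:

```javascript
// Same metadata request, but with the bearer token moved out of the
// query string and into an Authorization header ($.ajax has accepted a
// "headers" option since jQuery 1.5).
var token = "YOUR-ACCESS-TOKEN"; // placeholder, not a real token
var link = "https://www.dropbox.com/s/abc123/testfile.txt?dl=1"; // placeholder

var requestOptions = {
  type: "POST",
  url: "https://api.dropboxapi.com/2/sharing/get_shared_link_metadata",
  headers: {
    "Authorization": "Bearer " + token,
    "Content-Type": "application/json"
  },
  data: JSON.stringify({ url: link })
};
// Passing requestOptions to $.ajax() sends the same request as before.
console.log(requestOptions.headers["Authorization"]); // "Bearer YOUR-ACCESS-TOKEN"
```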

Line 9: The data is the internal link that the Chooser gave you as the URL of the file the user downloaded. I use JSON.stringify because it didn’t work until I did.
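The reason it didn't work without JSON.stringify, as best I understand it: jQuery's default is to form-encode a data object, while this endpoint wants the raw JSON text in the request body. You can see the difference by printing both forms (the link here is a made-up placeholder):

```javascript
var payload = { url: "https://www.dropbox.com/s/abc123/testfile.txt?dl=1" };

// What the API wants in the request body: the raw JSON text.
var body = JSON.stringify(payload);
console.log(body); // {"url":"https://www.dropbox.com/s/abc123/testfile.txt?dl=1"}

// What jQuery would send by default: a form-encoded string, roughly
//   url=https%3A%2F%2Fwww.dropbox.com%2Fs%2Fabc123%2Ftestfile.txt%3Fdl%3D1
// ($.param is jQuery's internal form-encoder; shown in a comment so
// this snippet runs without jQuery loaded.)
```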

Line 10 is what happens when your query works. You’ll get an object from Dropbox that contains several different pieces of info. You want the one called “path_lower,” presumably because it gives you the path that is lower on the great Tree of Files that is a Dropbox folder. [LATER THAT DAY: Andrew tells me it’s actually called path_lower because it’s the path in all lower case, which is useful because the Dropbox file system is case insensitive. Frankly, I prefer my explanation on poetic grounds, so we’ll have to agree to disagree :)] Line 11 gets that path (cont.path_lower) and pops it into an alert box, which is almost certainly not what you actually want to do with it. But this is a demo.

That’s it. If you have questions, try to find someone who understands this stuff because I got here through many trials and even more errors.

Good luck.

3 Comments »

January 13, 2016

Perfect Eavesdropping

Suppose a laptop were found at the apartment of one of the perpetrators of last year’s Paris attacks. It’s searched by the authorities pursuant to a warrant, and they find a file on the laptop that’s a set of instructions for carrying out the attacks.

Thus begins Jonathan Zittrain’s consideration of an all-too-plausible hypothetical. Should Google respond to a request to search everyone’s Gmail inboxes to find everyone to whom the to-do list was sent? As JZ says, you can’t get a warrant to search an entire city, much less hundreds of millions of inboxes.

But, while this is a search that sweeps a good portion of the globe, it doesn’t “listen in” on any mail except for that which contains a precise string of words in a precise order. What happens next would depend upon the discretion of the investigators.

JZ points out that Google already does something akin to this when it searches for inboxes that contain known child pornography images.

JZ’s treatment is even-handed and clear. (He’s a renowned law professor. He knows how to do these things.) He discusses the reasons pro and con. He comes to his own personal conclusion. It’s a model of clarity of exposition and reasoning.

I like this article a lot on its own, but I find it especially fascinating because of its implications for the confused feeling of violation many of us have when it’s a computer doing the looking. If a computer scans your emails looking for a terrorist to-do list, has it violated your sense of privacy? If a robot looks at you naked, should you be embarrassed? Our sense of violation is separable from the legal and moral question of our right to privacy, but the two often get mixed up in such discussions. Not in JZ’s, but often enough.

Comments Off on Perfect Eavesdropping

January 10, 2016

Why I should have won Powerball… and why you should be glad you didn’t

There’s a very good reason why I should have won Powerball last night. It has nothing to do with my absolute, desperate need for $250,000,000. (That’s all I need. I’m not greedy.) Nor does it have anything to do with my being worthy of such riches. Also, a hobo fortune teller predicted it when I was but a swaddling.

No, I should have won because of aesthetics. Narrative aesthetics. Allow me to explain.

About ten years ago I wrote a Young Adult novel about a good-hearted boy who wins $100 million in a lottery. The twist is that he has to keep the win a secret from his parents. You can buy My $100 Million Dollar Secret at Lulu or Amazon, or read it online or download it for free.

But that’s not the point.

Had I won Powerball, imagine the fun newspaper story this would be:

Winner of $800M Lottery
Wrote Novel about a Boy Who Won $100M lottery

Or, in more modern terms:

The Powerball winner wrote a self-published novel … And you’ll never guess what it’s about!
PS: Now we’ll see if he lives up to it!

If God were a clickbaiter, I would have won last night.

Cover image

This long comment at Reddit from a year ago will tell you exactly what you should do if you win instead of me.

It will also explain why you should hope that you do not win.

2 Comments »

January 9, 2016

Netflix’s hidden categories

The redditor makeinstall posted to Pastebin an HTML-ized version of a list of hidden categories at Netflix that was the subject of a reddit thread. I’ve posted it as a Web page with links.

As you’ll see, Netflix has thousands of categories it uses internally. These are like the ones it lets you browse, but more specific. Each link in the list will take you to a page at Netflix where you can browse one of these micro-categories.

6 Comments »

January 7, 2016

B12 helped my memory?

I’m 65 so I forget where I put my car keys and then remember that they’re in the ignition. And that I’m driving.

Well, no, it’s not that bad. But you know that thing where you click on your browser to look something up, you see what’s already loaded, and then it takes a minute to remember what you went there for? That was getting worse, and it was annoying.

I asked my doctor about it. It turns out that my B12 levels were scraping right along the minimum acceptable level — 181 pg/ml is the minimum recommended (at least on the test results document I get) and I was at 190, and occasionally a bit lower. So at my doctor’s suggestion I started taking a supplement every day. That was about a year ago. My B12 levels are now 624 pg/ml (914 is the highest recommended).

I have no external measurement of my short-term memory to go by, but it seems to me to be much better. Not perfect. I still won’t remember your name. But much better.

I ain’t no stinkin’ medical professional, but you can get B12 without a prescription. Best of all, if you take too much, you will pee a cheerful yellow.

By the way, my B12 levels might have been low because I’ve been a vegetarian for 35 years. There’s B12 in eggs and dairy, but I probably wasn’t paying enough attention. (See my post about Soylent.)

2 Comments »
