It looks so far like Mint (a Linux distribution) is working on my 2006 MacBook — one of them old white plastic models. I wiped out the entire disk, so there’s no Mac left except what Apple burned into the hardware. As far as I can tell, everything is working, from audio, to trackpad, to wifi.
Here’s how I did it: I tried everything.
Unfortunately, I can’t quite remember what worked, except that I used Mac Linux USB Loader to create the USB stick from which I booted the Mac into Linux. I also used Iso 2 USB EFI Booter to get the Mac to boot into Linux, although I’m not sure I actually needed that since I wasn’t going for a dual boot.
But I do know that the thing that put me over the top was a set of commands listed in a comment on a page about how to manually install a bootloader. I was there because after I eventually got Linux installed, it still wouldn’t boot. The article on that page was helpful, but I was still getting the weird-ass “canonical cow” error message when trying to install grub (the standard Linux bootloader); you’ll know that error message when you see it. The commands in the comment at the end by Zigilin got it working:
instead of running grub-install, run cmd below:
mount --bind /proc /mnt/proc
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
(Replace the # in sd# with the letter and number of the drive and partition you installed Linux into. Better: read the article.)
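For the record, here’s my best reconstruction of the full sequence those commands belong to, run from the live USB stick. The device names are placeholders: /dev/sda1 stands in for wherever your Linux root partition actually lives.

# placeholders: /dev/sda1 = your Linux root partition, /dev/sda = the drive itself
sudo mount /dev/sda1 /mnt
sudo mount --bind /proc /mnt/proc
sudo mount --bind /dev /mnt/dev
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
grub-install /dev/sda    # install grub to the drive, not the partition
update-grub              # regenerate the grub menu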
After you get it working, you might want to check this post about how to add some finishing touches.
Thank you, kind Internet strangers!
Tagged with: linux
Date: September 15th, 2014 dw
Time for another in my series of occasional posts over-explaining simple programming tasks that took me longer to figure out than they should have.
Here’s the simple HTML:
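(A minimal stand-in, assuming all that matters is an element with the ID “fader”:)

<p id="fader">Here is some sample text that will fade.</p>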
With jQuery, you fade an element out by first selecting the particular element, which you can do by putting its ID in quotes and prefixing it with a #: $("#fader"). Then you tell that element what method you want to execute, which in this case is the jQuery “fadeOut” command, with a duration expressed in milliseconds. Put ‘em together and you get the simple-but-powerful jQuery statement: $("#fader").fadeOut(500);. Likewise for the “fadeIn” command.
If you’re me, the first thing you’ll try will be:
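Something like this, a sketch of the obvious first stab:

// Looks reasonable, but fadeOut() just starts the animation and
// returns immediately, so the text gets swapped in the middle of
// the fade-out instead of after it finishes.
$("#fader").fadeOut(500);
$("#fader").text("Here is the new text");
$("#fader").fadeIn(500);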
Click here to give it a try on the following sample text:
So here’s a way that works. (Note that I’m not saying it’s the best or right way. If it’s worse than that, if it’s actually the wrong way, please leave a comment and I’ll link to it at the top of this post. Thanks!)
Click here to try it on the text below:
The difference is that the second way passes a callback function to jQuery’s fadeOut command, a function that is invoked only after the fadeOut is completed. That function changes the text of the element and fades it in.
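A minimal sketch of that; the replacement text is whatever you like:

$("#fader").fadeOut(500, function () {
    // this runs only once the fade-out has completed
    $(this).text("Here is the new text");
    $(this).fadeIn(500);
});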
(Click here to reset both examples.)
(PS: I created the tables for the code by pasting it in here.)
Tagged with: jquery
Date: August 17th, 2014 dw
The basic way this works:
The element that will get a popup help should have an attribute called “help.” The content of that attribute will be the content of the balloon. E.g. <p help="This is some <em>help</em>.">
Declare the elements getting a balloon to be position:relative.
Create the help balloons using the CSS styling you’d like, but make their position absolute. Use margin-top and margin-left to position them relative to the element they’re explaining. Don’t forget about negative margins.
Handle the mouseenter event for elements with the “help” attribute to show the balloon, and the mouseleave event to remove it, as sketched below.
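Here’s a stripped-down sketch of those steps in jQuery. It’s my own reduction, not the full code: the “help-balloon” class name and the margin numbers are invented.

// sketch only: assumes jQuery; class name and margins are made up
$("[help]")
    .css("position", "relative")  // the element anchors the balloon
    .on("mouseenter", function () {
        // build the balloon out of the element's "help" attribute
        $("<span class='help-balloon'></span>")
            .html($(this).attr("help"))
            .css({
                position: "absolute",
                marginTop: "-2.5em",  // negative margin floats it above the element
                marginLeft: "1em"
            })
            .appendTo(this);
    })
    .on("mouseleave", function () {
        $(this).find(".help-balloon").remove();
    });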
Remember, I probably did some things very very wrong. Use Qtip or some other library that works and works reliably. But, having written this for myself, I figure I ought to share it. (And, by the way, the Google Books -> Harvard Library code is here. Have pity on a poor amateur.)
Date: May 19th, 2014 dw
I made the mistake many years ago of creating a Google Accounts email address in addition to my existing Gmail account. Thus I have been plagued (granted, it’s an excellent example of a First World Problem plague) with two out of sync accounts.
Gmail works fine because “firstname.lastname@example.org” is my public-facing email address and has been since about 1994 when first I took the evident.com domain. (Yes, children, there was a time when you could register an existing word with all its vowels just by being the first to claim it.) When you send mail to that address, it shows up in my email@example.com Google Account. It also shows up at my firstname.lastname@example.org account, which now has 12,722 unread messages in it. Nevertheless, the “system” works for me.
But it does not work for me at YouTube.com, where I have two accounts that cannot be merged. I’ve tried.
And I thought it didn’t work at Google Plus. But recently I’ve been getting friend requests (or Circle requests, I guess) at G+ for evident@evident, whereas my social network (such as it is) is at dweinberger@gmail. Since I do very little with G+ anyway, it only bothers me because I hate rejecting friends’ requests, even though they’re trying to join a G+ that I don’t ever check and that currently has a total of 7 people in it. So, I googled for info, and found that Google Takeout promises to move my dweinberger Circles over to evident. Google seems quite serious about it: access is limited during the first 48 hours, the transfer takes up to 7 days, and you can only request one transfer every six months.
We’ll see how it works. In any case, I do appreciate the Google Data Liberation Front commitment.
And perhaps now my Circles will be unbroken.
Tagged with: google
• google plus
Date: September 2nd, 2013 dw
Rachel Plotnick has an article in Technology and Culture called “At the Interface: The Case of the Electric Push Button, 1880-1923” that begins by recounting the early reaction to push buttons. She cites a book by “educator and activist Dorothy Canfield Fisher”:
Fisher recognized how use of push-button interfaces had contributed to making electrical experiences effortless, opaque, and therefore unquestioned by consumers. She and others worried that if button-pressers could not envision the mechanical processes that happened behind buttons, they would lose all ability to navigate in the world. [p. 814]
We obviously didn’t get perpetually lost, but that doesn’t mean that Fisher was wrong to raise the alarm, especially if the alarm was raised by slamming down on a big ol’ button.
The rest of the article is about the educational strategies used by manufacturers, journalists, advertisers and others, starting in 1880, to “make button interfaces intelligible to consumers.”  She notes a split between those who thought customers would be more comfortable with electricity if they understood what happened on the other side of the button and those who thought it best to keep electricity as uncomplicated to the user as possible. Business was at stake here: Rachel notes that by 1915, push buttons were letting us take electricity for granted, which meant “the electrical industry faced an apathetic populace” that didn’t appreciate the complexity or expense of their service.
…And slightly reminiscent of a 1950 instructional video from AT&T on how to dial a rotary phone.
In fact, here’s an earlier public service announcement:
Rachel Plotnick, “At the Interface: The Case of the Electric Push Button, 1880-1923,” Technology and Culture, Volume 53, Number 4, October 2012, pp. 815-845. DOI: 10.1353/tech.2012.0138
Tagged with: techdet
Date: August 7th, 2013 dw
In my continuing series “How to Be an Idiot,” here’s what not to do when installing a new hard drive into your MacBook Pro.
I started off right. I had everything prepared: a new 500 GB hybrid drive, a fresh Time Machine backup, and an 8 GB USB stick with the Mac Mountain Lion installer on it. I still managed to fail maybe 20 times over the course of two days: booting from everything I could find, re-installing Lion onto the stick, backing up from Time Machine, etc. The closest I came was when I installed off the repair partition over a backup drive. The Mac started up its install process, but got stopped with a message that said that Apple was unable to confirm that my computer is authorized for an OS install. At least, that’s what I think it meant; it’s not a very clear message, and, no, I didn’t write it down :(
This made me think that the problem was that I was trying to install the wrong version, although I was pretty durn sure that I had upgraded to Mountain Lion a few weeks earlier, having resisted the blandishments of Lion. Maybe Apple was confused, although I couldn’t see why. I installed the prior version of the OS on my USB drive. Nope.
And now for the answer. And it’s not going to make me look smart, that I promise you.
You see, kids, for Apple to verify my machine, it has to get onto the Internet. It turns out that if during the install process you give your Mac a choice of wifi hotspots to connect to, it picks an open one without asking for your say-so. As a result, it happened to pick a hotspot that requires a login on a web site, but there’s no browser available during the install process. Once I pointed the Mac to another hotspot, it was able to connect and authorize my machine, enabling the installation to proceed.
Sure it was dumb of me. But it’s also dumb of Apple to give us an error message that says that it’s unable to authorize, rather than that it was unable to connect. (I also didn’t see a relevant message in the Installer log, but I may have missed it.)
Fortunately, each of the things I tried took a relatively long time to fail, so I was able to get a lot done while trying. Still, the moment of victory was definitely a forehead-slapper for me.
Tagged with: apple
Date: April 28th, 2013 dw
Let me remind you young whippersnappers what looking for knowledge was like before the Internet (or “hiphop” as I believe you call it).
Cast your mind back to 1982, when your Mommy and Daddy weren’t even gleams in each other’s eyes. I had just bought my first computer, a KayPro II.
I began using WordStar and ran into an issue pretty quickly. For my academic writing, I needed to create end notes. Since the numbering of those notes would change as I took advantage of WordStar’s ability to let me move blocks of text around (^KB and ^KK, I believe, marked the block), I’d have to go back and re-do the numbering both in the text and in the end notes section. What a bother!
I wanted to learn how to program anyway, so I sat down with the included S-Basic manual. S-Basic shared syntax with BASIC, but it assumed you’d write functions, not just lines of code to be executed in numbered order. This made it tougher to learn, but that’s not what stopped me at first. The real problem I had was figuring out how to open a file so that I could read it. (My program was going to look for anything between a "[[" and a "]]", which would designate an in-place end note.) The manual assumed I knew more than I did, what with its file handlers and strange parameters for what type of file I was reading and what types of blocks of data I wanted to read.
I spent hours and hours and hours, mainly trying random permutations. I was so lacking the fundamental concepts that I couldn’t even figure out what to play with. I was well and truly stuck.
“Simple!” you say. “Just go on the Internet…and…oh.” So, it’s 1982 and you have a programming question. Where do you go? The public library? It was awfully short on programming manuals at that time, and S-Basic was an oddball language. To your local bookstore? Nope, no one was publishing about S-Basic. Then, how about to…or…well…no…then?…nope, not for another 30 years.
I was so desperate that I actually called the Boston University switchboard, and got connected to a helpful receptionist in the computer science division (or whatever it was called back then), who suggested a professor who might be able to help me. I left a message along the lines of “I’m a random stranger with a basic question about a programming language you probably never heard of, so would you mind calling me back? kthxbye.” Can you guess who never called me back?
Eventually I did figure it out, if by “figuring out” you mean “guessed.” And by odd coincidence, as I contemplate moving to doing virtually all my writing in a text editor, I’m going to be re-writing that little endnoter pretty soon now.
But that’s not my point. My point is that YOU HAVE NO IDEA HOW LUCKY YOU ARE, YOU LITTLE BASTARDS.
For those of you who don’t know what it’s like to get a programming question answered in 2013, here are some pretty much random examples:
Tagged with: 2b2k
• old fart
Date: March 29th, 2013 dw
Diana Kimball [twitter:dianakimball] is giving a Berkman lunchtime talk on coding as a liberal art. She’s a Berkman Fellow and at the Harvard Business School. (Here are some of her posts on this topic.)
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
She says that she’s loved computers since she was a kid. But when she went to Harvard as an undergrad she decided to study history, in part because there’s a natural specialization that happens in college: the students who come in as coders are fantastic at coding, whereas Diana had greater strengths as a writer of prose. She found HTML and programming intimidating. But in her third year, she got interested in coding and Internet culture. She was one of the founders of ROFLcon [yay!]. She got hired by Microsoft after college, as a technical product manager with the Powerpoint team in Silicon Valley. “This was culture shock in the best possible way.”
When she graduated in 2009, she and some friends started the SnarkMarket blog, which considers what the new liberal arts might be (inspired by Kottke). She wrote an essay that’s a proposal for coding and decoding. She reads it. (It’s short.) An excerpt:
Coding and Decoding is about all modes of communication, and all are in its view. But it is built with particular attention to the future, and what that future will be like. Technological experts can seem like magicians, conjuring effects wordlessly. By approaching that magic as a collection of component parts instead of an indivisible miracle, we can learn to see through these sleights of typing hands. In seeing through, we will learn to perform them ourselves; and think, as magicians, about the worlds we will build.
Language, now, is about more than communication. It is the architecture behind much of what we experience. Understanding that architecture will allow us to experience more.
Her boyfriend taught her how to code. They spent a lot of time on it. “He picked up on something I’d said and took it seriously.” After two years at Microsoft, she was enthusiastic, but still a beginner. It wasn’t until she started at Harvard Business School that coding really took off for her. The entrepreneurial atmosphere encouraged her to just do it. Plus, she was more of a geek than most of the other students. “This was great for my identity, and for my confidence.” She also found it a social refuge. “It takes a lot of time to get over the hump.” She refers to Ellen Ullman’s “Close to the Machine” that talks about the utility of being arrogant enough to obsess over a project, cycling back to humility.
She decided to code up her own site for a project for school, even though the team had been given the money to hire devs for the task. Last fall she took the famous CS50 course [Harry Lewis, who created the course in about 1981, is sitting next to me.] CS50 teaches C, targeted at people who are either taking only that one class, or are going to take many more. For her final project, she did a project that used multiple APIs that she was very proud of. She’s also proud of her Ruby projects folder. Each project is something she was trying to teach herself. She’s more proud of the list than the finished products.
“Learning to code means reclaiming patience and persistence and making them your stubborn own.” [nice]
Ideally, everyone should be exposed to programming, starting at 5 yrs old, or even earlier, Diana says. Seymour Papert’s “Mindstorms” has greatly influenced her thinking about how coding fits into education and citizenship. At a university, it ought to be taken as a liberal art. She quotes Wikipedia’s definition. And if “grammar, rhetoric, and logic were the core of the liberal arts,” then that sounds like coding. [Hmm.] What the law was to the liberal arts, programming ought to be, i.e., that which you try if you don’t know what else to do with your liberal arts degree.
Why isn’t it seen that way? When computer scientists teach you, they teach the way they learned: at school. But many of the best programmers are self-taught. CS50 does give a variety of assignments, but it’d be better if students solved their own problems much earlier.
But the number one problem is the academic attitude, she says. Students get fixated on the grade, even when it doesn’t matter. Coding is critical for children because debugging is part of it, as Papert says. But grades are based on the endpoints. Coding is much more like life: You’re never done, you can always make it better.
Diana has a proposal. Suppose coding classes were taught like creative writing workshops. Take it whenever you’re ready. Taught by hackers, especially autodidacts. It’d vary in substance — algorithms, APIs, etc. — and you’d get to choose. You’d get to see something on screen that you’d never seen before. And you’d be evaluated on ingenuity and persistence, rather than only on how well your code runs.
She says what her syllabus would look like:
“Coding should be taught in the same breath as expository writing… Everyone deserves to be exposed to it.” She’s not sure if it should be required.
She quotes Papert: “…the most powerful idea of all is the idea of powerful ideas.” There’s no better example of this, she says, than open source software. And David Foster Wallace’s commencement address: “Learning how to think really means learning to exercise some control over how and what you think…If you cannot exercise this sort of choice in adult life, you will be totally hosed.” Diana says that’s her. She was wrapped up in writing from an early age. She has a running internal commentary. [Join the club!] Coding swaps in a different monologue, one in which she’s inventing things. That’s the greatest gift: her internal monologue is much more useful and interesting. “If you wanted to be a novelist in 1900, you’d want to be a programmer today.” The experience of creating something that people use is so motivating.
Q: Would you be willing to webcast yourself programming and let people join in? I do this all the time when at hackathons. I think, OMG, there must be 10,000 kids in India who want to be here. And so here they are. “Hackers at Berkeley” does this really well.
A: That’s awesome. I want more people to have more access to that experience of sharing.
Q: Are you familiar with RailsBridge — non-computer scientists who are teaching themselves how to code via weekend workshops.
A: RailsBridge is extraordinary. It’s great to see this happening outside of the university context.
Q: [me] Great talk, and I’m a humanities major who spends most of his hobby time programming. But aren’t you recommending the thing that you happen to love? And programming as opposed to traditional logic is an arbitrary set of rules…
A: Yes, but it would be really useful if more people loved it. We could frame it in a way that is exciting for humanities majors. I’m proposing an idea rather than making an airtight argument. “You’re basically right but I don’t really care” (she says laughing :).
Q: I like your idea of teaching it like a writers workshop so that it doesn’t turn into just another course. But I’m not sure that colleges are the best at doing that.
A: Not everyone loves programming.
Q: [harry lewis] I take responsibility for eliminating the Harvard requirement for a programming course. Also, take a look at code.org. Third, the academic world treats computer science the way it does because of our disciplinary specialization. That label — computer science — came about in the context of fields like political science, and arose when computers were used not for posting web sites but for putting people on the Moon where a bug could kill someone. The fact that CompSci exists in academic departments will make it very difficult for your vision of computing to exist, just as creative writing is often an uneasy fit into English curricula.
A: That’s very fair. I know it’d be hard. RIT has separate depts for CompSci and coding.
Q: There’s an emergent exploration of coding in Arts schools, with a much more nimble, plug and play approach, very similar to the one you describe. My question: What do the liberal arts have to offer coding? Much of coding is quite new, e.g., open source. These could be understood within a historical context. Maybe these need to be nurtured, explored, broken. Does seeing coding as a liberal art have something to offer sw development?
A: ITP is maybe the best example of artists working with coders. Liberal Arts can teach programmers so much!
Q: Can we celebrate failure? That’d be a crucial part of any coding workshop.
A: Yes! Maybe “find the most interesting bug” and reward introspection about where you’ve gone wrong. But it’s hard in a class like CS50 where you’re evaluating results.
Q: This is known as egoless programming. It’s 40 years old, from Weinberger [no relation].
Q: You’re making a deeper point, which is not just about coding. The important thing is not the knowledge you get, but the way you get there. Being self-reflective about how you came to learn what you learn. You can do this with code, but with anything, really.
A: You’re so right. Introspection about the meta-level of learning is not naturally part of a course. But Ruby is an introspective language: you can ask any object what it is, and it will tell you. This is a great mirror for trying to know yourself better.
Q: What would you pick to teach?
A: I love Ruby. It would be a good choice because there’s a supportive community so students can learn on their own afterwards, and it’s an introspective language. And the lack of ornament in Ruby (no curly braces and little punctuation) makes it much more like English. The logic is much more visible. (My preference is Sinatra, not Rails.)
Q: What sort of time commitment would the average person have to put in to get a basic grasp of a programming language? Adults vs. children learning it?
A: I’d love to see research on this. [Audience: Rottmeyers, CMU (?)] A friend of mine reported he spent 20 hours. The learning curve is very halting at first. It’s hard to teach yourself. It helps to have a supportive in-person environment. CS50 is a 10-20 hour/week commitment, and who has that sort of time except for fulltime students? To teach yourself, start out a few hours at a time.
Q: How about where the MOOCs are going? Can you do a massively online course in compSci that would capture some of what you’re talking about?
A: The field is so focused on efficiency that MOOCs seem like an obvious idea. I think that a small workshop is the right way to start. CS50 requires so much fear of failure and resilience that it wouldn’t have been a good way for me to start. At CS50, you can’t let others read your code.
Q: We shouldn’t put together Computer Science and programming. Programming is just a layer of expression on top of computer science. You don’t need compSci to become a programmer. And the Net is the new computer; we’re gluing together services from across the Net. That will change how people think about programming because everyone will be able to do it. The first language everyone should learn is ifttt.com.
Q: I’m a NY Times journalist. I love languages. And I love the analogy you draw. I’m 30. Do you think coding is really essential? Would it open my eyes as a journalist?
A: It’s never too late. If you keep asking the question, you should probably do it. You don’t have to be good at it to get a lot out of it. It’s so cool that your children are learning multiple languages including coding. Learn alongside them.
Tagged with: berkman
Date: February 5th, 2013 dw
I get lost in my browser tabs all the time. The place I most often want to go is back to the tab I was just in. On Firefox, there are a few utilities that let me do that. My nephew Joel Weinberger has written one that does that and nothing but that for Chrome. You can grab it (free, of course) here. (The source code is on github.)
Joel wrote this, as the result of my whining, during our annual post-Thanksgiving-dinner viewing of Jurassic Park, although he did some clean up of the code afterwards. I should add that, among other things, Joel is a certified computer genius working in deep areas of computer security.
Tagged with: utilities
Date: December 3rd, 2012 dw
This is really basic, but it drove me crazy. Suppose you want to add a row of data to a MySQL database, but not if there’s already an entry in it with the essence of that data. For example, suppose you have a table of notes and the titles they’re clustered under, and don’t want to allow the same note to appear more than once for any particular title. Assume also that the table is recording an identifier of which book the note is about.
Here’s a SQL statement that works (line breaks don’t matter):
INSERT IGNORE into playlistentries
(title, noteid, bookid)
VALUES ('fiction', '1234', '5678')
With the exception of the mysterious “IGNORE” (see below), this is a straightforward command that inserts a row into the table “playlistentries” with a series of values ('fiction', '1234', '5678') mapped to the series of fields (title, noteid, bookid). If there’s already a row with those title and noteid values, the table will be left unchanged. Otherwise, a new row will be added.
But, this will not work unless you set up your MySQL table so that it has a unique key based on the fields you’re testing for (title and noteid in this example). That way when you go to insert a row that has an already-existing title and noteid, it will be automatically rejected. The “IGNORE” in the SQL statement means it will be rejected without creating an error message that just gets in the way.
To set up your table so that it has the right unique key, use a SQL statement like this:
ALTER TABLE playlistentries
ADD UNIQUE KEY noteidtitle (noteid,title(100))
This tells MySQL to create a unique key (named “noteidtitle”) based on the fields noteid and title. The “(100)” is there to tell MySQL that it should only look at the first 100 characters of the title field; if you’ve set up your table with that field as “text,” you’ll get an error unless you put a limit on it. A hundred characters is probably 75 more than I need.
Note also that you only run the “alter table” command once in the lifetime of your database.
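To see the effect, run the same insert twice; the second one is silently skipped:

INSERT IGNORE into playlistentries (title, noteid, bookid)
VALUES ('fiction', '1234', '5678');  -- adds the row
INSERT IGNORE into playlistentries (title, noteid, bookid)
VALUES ('fiction', '1234', '5678');  -- skipped: duplicate (noteid, title)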
Finally, please note that there is a high probability that what I’m telling you is wildly inefficient, non-robust, and suboptimal. On a scale of 1-100, I am about 3 points past raw beginner. But these commands work for me, and I am assuming as always that if you’re reading this, you are an amateur like me engaged in some small project designed primarily for your own use. Improvements and do-overs will be gladly accepted.
Tagged with: mysql
Date: October 30th, 2012 dw