Joho the Blog: tech archives

November 25, 2018

Using the Perma.cc API to check links

My new book (Everyday Chaos, HBR Press, May 2019) has a few hundred footnotes with links to online sources. Because Web sites change and links rot, I decided to link to Perma.cc's pages instead. Perma.cc is a product of the Harvard Library Innovation Lab, which I used to co-direct with Kim Dulin, but Perma is a Jonathan Zittrain project from after I left.

When you give Perma.cc a link to a page on the Web, it comes back with a link to a page on the Perma.cc site. That page has an archive copy of the original page exactly as it was when you supplied the link. It also makes a screen capture of that original page. And of course it includes a link to the original. It also promises to maintain the Perma.cc copy and screen capture in perpetuity — a promise backed by the Harvard Law Library and dozens of other libraries. So, when you give a reader a Perma link, they are taken to the Perma.cc page where they’ll always find the archived copy and the screen capture, no matter what happens to the original site. Also, the service is free for everyone, for real. Plus, the site doesn’t require users to supply any information about themselves. Also, there are no ads.

So that’s why my book’s references are to Perma.cc.

But, over the course of the six years I spent writing this book, my references suffered some link rot on my side. Before I got around to creating the Perma links, I managed to make all the obvious errors and some not so obvious. As a result, now that I’m at the copyediting stage, I wanted to check all the Perma links.

I had already compiled a bibliography as a spreadsheet. (The book will point to the Perma.cc page for that spreadsheet.) So, I selected the Title and Perma Link columns, copied the content, and stuck it into a text document. Each line contains the page’s headline and then the Perma link.

Perma.cc has an API that made it simple to write a script that looks up each Perma link and prints the title Perma has recorded next to the title of the page I intended to link. If there's a problem with a Perma link, such as a doubled "https://https://" (a mistake I managed to introduce about a dozen times), or if the Perma link is private and not accessible to the public, the script notes the problem. The human brain is good at scanning this sort of info, looking for inconsistencies.

Here’s the script. I used PHP because I happen to know it better than a less embarrassing choice such as Python and because I have no shame.

<?php
// This is a basic program for checking a list of page titles and perma.cc links.
// It's done badly because I am a terrible hobbyist programmer.
// I offer it under whatever open source license is most permissive. I'm really not
// going to care about anything you do with it. Except please note I'm a
// terrible hobbyist programmer who makes no claims about how well this works.
//
// David Weinberger
// [email protected]
// Nov. 23, 2018

// Perma.cc API documentation is here: https://perma.cc/docs/developer

// This program assumes there's a file with the page title and one perma link per line.
// E.g.: The Rand Corporation: The Think Tank That Controls America https://perma.cc/B5LR-88CF

// Read that text file into an array
$lines = file('links-and-titles.txt');

for ($i = 0; $i < count($lines); $i++){
    $line = $lines[$i];
    // divide into title and permalink
    $p1 = strpos($line, "https"); // find the beginning of the perma link
    $fullperma = substr($line, $p1); // get the full perma link
    $origtitle = substr($line, 0, $p1); // get the title
    $origtitle = rtrim($origtitle); // trim the spaces from the end of the title

    // get the distinctive part of the perma link: the stuff after https://perma.cc/
    $permacode = strrchr($fullperma, "/"); // find the last forward slash
    $permacode = substr($permacode, 1); // get what's after that slash
    $permacode = rtrim($permacode); // trim any spaces from the end

    // create the url that will fetch this perma link
    $apiurl = "https://api.perma.cc/v1/public/archives/" . $permacode . "/";

    // fetch the data about this perma link
    $onelink = file_get_contents($apiurl);
    // echo $onelink; // this would print the full json

    // decode the json
    $j = json_decode($onelink, true);
    // Did you get any json, or just null?
    if ($j == null){
        // Hmm. This might be a private perma link. Or some other error.
        echo "<p>-- $permacode failed. Private?</p>";
    }
    // otherwise, you got something, so write some of the data into the page
    else {
        echo "<b>" . $j["guid"] . "</b><blockquote>" . $j["title"] . "<br>" . $origtitle . "<br>" . $j["url"] . "</blockquote>";
    }
}

// finish by noting how many lines have been read
echo "<h2>Read " . count($lines) . "</h2>";
?>

Run this script on your server and load it in a browser, and it will create a page with the results. (The script is available at GitHub.)
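If you want to poke at the endpoint yourself, it's a plain unauthenticated GET. For example, using the perma code from the sample line in the script's comments:

curl https://api.perma.cc/v1/public/archives/B5LR-88CF/

The JSON that comes back includes, among other fields, the "guid", "title", and "url" (the original URL) that the script prints.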

Thanks, Perma.cc!



By the way, and mainly because I keep losing track of this info, the table of code was created by a little service cleverly called Convert JS to Table.


October 12, 2018

How browsers learned to support your system's favorite font

Operating systems play favorites when it comes to fonts: each picks one as its default. And the OS's don't agree with one another: macOS uses San Francisco, Windows uses Segoe UI, Android uses Roboto, and so on.

But now when you're designing a page you can tell CSS to use the system font of whatever operating system the browser is running on. This is thanks to Craig Hockenberry, who proposed the idea in an article three years ago. Apple picked up on it, and now it's worked its way into the standard CSS font module and is supported by Chrome and Safari; Windows and Mozilla are lagging. Here's Craig's write-up of the process.
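In CSS it looks something like this (a minimal sketch: system-ui is the standardized keyword, and the prefixed -apple-system and BlinkMacSystemFont names are the earlier vendor spellings you may still want as fallbacks for older browsers):

body {
  font-family: -apple-system, BlinkMacSystemFont, system-ui, sans-serif;
}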

Here’s a quick test of whether it’s working in the browser you’re reading this post with:

This sentence should be in this blog’s designated font: Georgia. Or maybe one of its serif-y fall-backs.

This one should be in your operating system’s standard font, at least if you’re using Chrome or Safari at the moment.

We now return you to this blog’s regular font, already in progress.


August 4, 2018

Unlocking picture frames in Keynote 8

In 2015, I posted about a way to unlock the picture frames that for some reason ship with Keynote but are not accessible within Keynote. They're there, but they don't show up in the pull-down menu, so you can't use them.

In 2018, with Keynote at version 8-and-change, the same technique works, with the same warnings. So, if you want to give yourself considerably more than the 14 frames the menu shows you, go here, follow the instructions carefully, and most important: do not blame me. (The replacement file I link to still seems to work.)


June 18, 2018

Google Docs named versions

Google Docs’ version history functionality is getting to be really powerful and useful. Named versions help tame that power.

Google Docs automatically saves versions as you type so you can roll back to a prior state of your document at any point. In fact, you can roll back, copy a piece of it, roll forward, and paste in text from your past.

But Google Docs makes so many versions, and makes them without asking you, that finding the right one can be hard. Suppose you want to go back to a version from earlier in the day, before you cut that paragraph about secretly enjoying Paw Patrol. Google labels each automatically created version with a time stamp, but you probably haven't memorized the precise time you made the change.

Now you can give a friendly name to a version. So let's say you're about to cut the Paw Patrol paragraph, but you're not sure that you should. Before you make the cut, go to File > Version history > Name current version and give it a name such as "With Paw Patrol". (If you want to be perverse, use the current hour and minute as the name. That'll get you nowhere fast.) That name will show up in the list of versions under File > Version history > See version history.

Now when you cut the paragraph or make other changes, you’ll always be able to go back.

Meanwhile, Google will continue to automatically create new versions, capturing quite small increments of change. If you want to step back through the changes you’ve made since you named a version, click on the triangle to the left of the current version at the top of the version history.

Also, note that when you click on a version in the version history, it highlights the difference between the prior version and this one.

Note that comments are not saved with versions. Let me put this differently: When you restore a prior version, it will not have any of its comments. This is unfortunate.

There are some big things not to like about Google Docs, but versioning definitely is not one of them.


January 7, 2018

[javascript] Displaying entries’ initial letter when scrolling

I'm more surprised than proud that I got this to work, but here's some JavaScript that slides down a box when the user scrolls an alphabetized table and slides that box back up once the user stops. While the user continues to scroll up or down the page, the box displays the first letter of the row at the top. When the user stops scrolling for about 0.15 seconds, the box goes away.

Note that when I say that "I got this to work," what I really mean is that I successfully copy-and-pasted code from StackOverflow into the part of my script that runs when the script is first loaded. And when I say "JavaScript" I really mean "JavaScript using the jQuery library, along with the Visible plugin that I think I actually don't need but I couldn't get jQuery's is(":visible") to work the way I thought it should."

So here’s an annotated walkthrough of the embarrassing code.

The first part notices the scrolling, shows the box, and fills it with the first letter of the relevant column of the table the page is displaying. (Thank you, Stackoverflow!)


The second part comes from another StackOverflow question. It notices when someone has stopped scrolling for 0.15 seconds and hides the block displaying the letter. And, yes, it could probably be combined with the first bit.

This is amateurish hackery. I understand that. But I’m an amateur. I’m not writing production code. I don’t have to worry about performance: this code works fine for scrolling 350 rows of a text-only table, but might crap out with 1,000 lines or 5,000 lines. At least it works fine so far. On the current versions of Chrome and Firefox. Under a waxing moon. I understand that I can get this far only because millions of real developers have posted their own code, and answered questions from fools like me. My hat is off to you.

 


 

For your copying-and-pasting convenience, here's the code in copy-able form.

 

var mywindow = $(window); // get the window within which the page is being displayed
var mypos = mywindow.scrollTop();
var newscroll;

// add a function that's called whenever the window is scrolled
mywindow.scroll(function () {
    newscroll = mywindow.scrollTop(); // the scroll bar indicator's vertical position
    // Go through the rows of the table to find the one currently at the top.
    // I am undoubtedly doing this embarrassingly inefficiently.
    var letter = "", done = false, i = 0;
    // loop until we find the row at the top or we've looked at all rows
    while (!done){
        var title = $("#title" + i); // id of the cell with the phrase the table is sorted on
        if ($(title).visible() == true){ // unnecessary use of the Visible plugin
            var currentTopRow = i;
            done = true;
            // Get the first letter of the relevant cell
            letter = $(title).text().substr(0, 1).toUpperCase();
            // put the letter into the box that will display it
            $("#lettercontent").text(letter);
        }
        i++;
        // if we've checked all the rows and none is visible
        if (i >= gData.length){ // gData is the array the table is built from
            done = true;
            letter = "?";
        }
    }
    // display the box with the letter
    $("#bigletter").slideDown();
    mypos = newscroll;
});

// hide the letter block when scrolling stops
var scrollpausetimer = null; // a timer to note when scrolling has stopped
$(window).scroll(function () {
    if (scrollpausetimer !== null) {
        clearTimeout(scrollpausetimer);
    }
    scrollpausetimer = setTimeout(function () {
        // hide the letter block
        $("#bigletter").slideUp();
    }, 150); // 150 is the pause to be noticed, in milliseconds
});
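The markup for the letter box isn't shown above, so here's my guess at a minimal version, reconstructed from the two ids the script uses (the real page's styling surely differs): a fixed-position box, hidden by default, holding a span for the letter.

<div id="bigletter" style="display: none; position: fixed; top: 40%; left: 45%;
     font-size: 48px; padding: 12px; background: #eee; border: 1px solid #999;">
  <span id="lettercontent"></span>
</div>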

 

JavaScript converted into HTML by this.


October 29, 2017

Restoring photos’ dates from Google Photos download

Google Photos lets you download your photos, which is good since they're your own damn photos. But when you do, every photo's file will be stamped as having been created on the day you downloaded it. This is pretty much a disaster, especially since the photos have names like "IMG_20170619_153745.jpg."

Ok, so maybe you noticed that the file name Google Photos supplies contains the date the photo was taken. So maybe you want to just munge the file name to make it more readable, as in “2017-06-19.” If you do it that way, you’ll be able to sort chronologically just by sorting alphabetically. But the files are still all going to be dated with the day you did the download, and that’s going to mean they won’t sort chronologically with any photos that don’t follow that exact naming convention.

So, you should adjust the file dates to reflect the day the photos were taken.

It turns out to be easy. JPGs come with a header of info (called EXIF) that you can't see but your computer can. There's lots of metadata about your photo in that header, including the date it was taken. So all you need to do is extract that date and reset your file's date to match it.

Fortunately, the good folks on the Net have done the heavy lifting for us.

Go to http://www.sentex.net/~mwandel/jhead/ and download the right version of jhead for your computer. Put it wherever you keep utilities. On my Mac I put it in /Applications/Utilities/, but it really doesn’t matter.

Open up a terminal. Log in as a superuser:

sudo -i

Enter the password you use to log into your computer and press Enter.

Change to the directory that contains the photos you want to update. You do this with the "cd" command, as in:

cd /Applications/Users/david/Downloads/GooglePhotos/

That’s a Mac-ish path. I’m going to assume you know enough about paths to figure out your own, how to handle spaces in directory names, etc. If not, my dear friend Google can probably help you.

You can confirm that you’ve successfully changed to the right directory by typing this into your terminal:

pwd

That will show you your current directory. Fix it if it’s wrong because the next command will change the file dates of jpgs in whatever directory you’re currently in.

Now for the brutal finishing move:

/Applications/Utilities/jpg-batch-file-jhead/jhead -ft *.jpg

Everything before the final forward slash is the path to wherever you put the jhead file. After that final slash the command is telling the terminal to run the jhead program, with a particular set of options (-ft) and to apply it to all the files in that directory that end with the extension “.jpg.”

That’s it.

If you want to run the program not just on the directory that you’re in but in all of its subdirectories, this post at StackExchange tells you how: https://photo.stackexchange.com/questions/27245/is-there-a-free-program-to-batch-change-photo-files-date-to-match-exif
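The gist of it is a find command along these lines (a sketch, assuming a Unix-ish shell; adjust the jhead path as above, and see the linked answer for variations):

find . -type f -name '*.jpg' -exec /Applications/Utilities/jpg-batch-file-jhead/jhead -ft {} +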

Many thanks to Matthias Wandel for jhead and his other contributions to making life with bits better for us all.


May 18, 2017

Indistinguishable from prejudice

“Any sufficiently advanced technology is indistinguishable from magic,” said Arthur C. Clarke famously.

It is also the case that any sufficiently advanced technology is indistinguishable from prejudice.

Especially if that technology is machine learning. ML creates algorithms to categorize stuff based upon data sets that we feed it. Say "These million messages are spam, and these million are not," and ML will take a stab at figuring out what are the distinguishing characteristics of spam and not spam, perhaps assigning particular words particular weights as indicators, or finding relationships among particular IP addresses, times of day, lengths of messages, etc.

Now complicate the data and the request, run this through an artificial neural network, and you have Deep Learning that will come up with models that may be beyond human understanding. Ask DL why it made a particular move in a game of Go or why it recommended increasing police patrols on the corner of Elm and Maple, and it may not be able to give an answer that human brains can comprehend.

We know from experience that machine learning can re-express human biases built into the data we feed it. Cathy O'Neil's Weapons of Math Destruction contains plenty of evidence of this. We know it can happen not only inadvertently but subtly. With Deep Learning, we can be left entirely uncertain about whether and how this is happening. We can certainly adjust DL so that it gives fairer results when we can tell that it's going astray, as when it only recommends white men for jobs or produces a freshman class with 1% African Americans. But when the results aren't that measurable, we can be using results based on bias and not know it. For example, is anyone running the metrics on how many books by people of color Amazon recommends? And if we use DL to evaluate complex tax law changes, can we tell if it's based on data that reflects racial prejudices?[1]

So this is not to say that we shouldn’t use machine learning or deep learning. That would remove hugely powerful tools. And of course we should and will do everything we can to keep our own prejudices from seeping into our machines’ algorithms. But it does mean that when we are dealing with literally inexplicable results, we may well not be able to tell if those results are based on biases.

In short: Any sufficiently advanced technology is indistinguishable from prejudice.[2]

[1] We may not care, if the result is a law that achieves the social goals we want, including equal and fair treatment of taxpayers regardless of race.

[2] Please note that that does not mean that advanced technology is prejudiced. We just may not be able to tell.


May 15, 2017

[liveblog][AI] AI and education lightning talks

Sara Watson, a BKC affiliate and a technology critic, is moderating a discussion at the Berkman Klein/Media Lab AI Advance.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Karthik Dinakar at the Media Lab points out that what we see in the night sky is in fact distorted by the way gravity bends light, which Einstein called a "gravity lens." Same for AI: the distortion is often in the data itself. Karthik works on how to help researchers recognize that distortion. He gives an example of capturing both the cardiologist's and the patient's lenses to better diagnose women's heart disease.

Chris Bavitz is the head of BKC's Cyberlaw Clinic. To help law students understand AI and tech, the Clinic encourages interdisciplinarity. They also help students think critically about the roles of the lawyer and the technologist. The clinic prefers early relationships among them, although thinking too hard about law early on can diminish innovation.

He points to two problems that represent two poles. First, IP and AI: running AI against protected data. Second, issues of fairness, rights, etc.

Leah Plunkett is a professor at Univ. of New Hampshire Law School and a BKC affiliate. Her topic: how can we use AI to teach? She points out that if Tom Sawyer were real and alive today, he'd be arrested for what he does just in the first chapter. Yet we teach the book as a classic. We think we love a little mischief in our lives, but we apparently don't like it in our kids: we kick them out of schools. E.g., of 49M students in public schools in 2011, 3.45M were suspended, and 130,000 students were expelled. These punishments disproportionately affect children from marginalized segments.

Get rid of the BS safety justification and the govt ought to be teaching all our children without exception. So, maybe have AI teach them?

Sara: So, what can we do?

Chris: We’re thinking about how we can educate state attorneys general, for example.

Karthik: We are so far from getting users, experts, and machine learning folks together.

Leah: Some of it comes down to buy-in and translation across vocabularies and normative frameworks. It helps to build trust to make these translations better.

[I missed the QA from this point on.]


[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I'm at a day-long conference/meet-up put on by the Berkman Klein Center's and MIT Media Lab's "AI for the Common Good" project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

"Should I insist on being misjudged by a human judge because that's somehow artisanal?" when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also to control her weight, or other outcomes? Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal "right to explanation" mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines, and hints that help us solve problems that neither we nor the system could solve alone. The need for these systems is most obvious in large-scale human interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is "augmented intelligence for public interest data science."

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn't get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project) and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and government” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching "machines that make machines." She points to the first computer-controlled machine ("Teaching Power Tools to Run Themselves"), where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That's still the case, but it looks different. Now the old jobs are being done by far fewer people. But the spaces in between don't always work so well. E.g., Apple can define an automatable workflow for milling components, but if you're a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn't much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, MIT grad student with a newly-minted Ph.D. (congrats, Nathan!) and BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. And, what are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they "think" they're doing? Who can do what, where, which raises questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, how to answer them, and how to organize people, institutions, and automated systems? Scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are "generative" in JZ's sense: systems that we can all contribute to on relatively equal terms and share with others.

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don't work on people of color. In part this is because the test sets used to train CV systems are 70% white male faces. So she's generating new sets of facial data that we can retest on. Overall, it'd be good to use test data that represents the real world, and to make sure a representation of humanity is working on these systems. So here's my question: we find co-design works well; should we bring the affected populations in to talk with the system designers?

[Damn, I missed Yochai Benkler‘s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


February 1, 2017

How to fix the WordFence wordfence-waf.php problem

My site has been down while I've tried to figure out (i.e., google someone else's solution to) a crash caused by WordFence, an excellent utility that, ironically, protects your WordPress blog from various maladies.

The problem is severe: Users of your blog see naught but an error message of this form:

Fatal error: Unknown: Failed opening required '/home/dezi3014/public_html/wordfence-waf.php' (include_path='…/usr/lib/php /usr/local/lib/php') in Unknown on line 0

The exact path will vary, but the meaning is the same: PHP is looking for a file that doesn't exist. You'll see the same message when you try to open your WordPress site as administrator. You'll see it even when you manually uninstall WordFence by logging into your host and deleting the wordfence folder from the wp-content/plugins folder.

If you look inside the wordfence-waf.php file (which is in whatever folder you’ve installed WordPress into), it warns you that “Before removing this file, please verify the PHP ini setting `auto_prepend_file` does not point to this.”

Helpful, except my php.ini file doesn’t have any reference to this. (I use MediaTemple.com as my host.) Some easy googling disclosed that the command to look for the file may not be in php.ini, but may be in .htaccess or .user.ini instead. And now you have to find those files.
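One way to hunt for the setting is to check all three candidate files at once from the folder WordPress is installed in (a sketch; the -s flag just silences complaints about files that don't exist):

grep -s "auto_prepend_file" php.ini .htaccess .user.ini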

At least for me, the .user.ini file is in the main folder into which you’ve installed WordPress. In fact, the only line in that file was the one that has the “auto_prepend_file” command. Remove that line and you have your site back.

I assume all of this is too obvious to write about for technically competent people. This post is for the rest of us.

