
December 12, 2022

The Social Construction of Facts

To say that facts are social constructions doesn’t mean everything put forward as a fact is a fact. Nor does it mean that facts don’t express truths, or that facts are not to be trusted. Nor does it mean that there’s some unconstructed fact behind facts. Social constructionists don’t want to leave us in a world in which it’s ok to say “No, it’s not raining” in the middle of a storm or claim “Water boiled at 40C for me this morning under normal circumstances.”

Rather the critique, as I understand it, is that the fact-based disciplines we choose to pursue, the roles they play, who gets to participate, the forms of discourse and of proof, the equipment invented and the ways the materials are handled (the late Bruno Latour was brilliant on this point, among others), the commitment to an objective and consistent methodology (see Paul Feyerabend), all are the result of history, culture, economics, and social forces. Science itself is a social construct (as per Thomas Kuhn’s The Structure of Scientific Revolutions [me on that book]). (Added bonus: Here’s Richard Rorty’s review of Ian Hacking’s excellent book, The Social Construction of What?)

Facts as facts pretty clearly seem (to me) to be social constructions. As such, they have a history…

Facts as we understand them became a thing in Western culture when Francis Bacon, early in the 17th century, started explicitly using them to ground theories, which was a different way of constructing scientific truths; prior to this, science was built on deductions, not facts. (Pardon my generalizations.)

You can see the movement from deductive truth to fact-based empirical evidence across the many editions of Thomas Malthus’ 1798 book, An Essay on the Principle of Population, which predicted global famine based on a mathematical formula but then became filled with facts and research from around the world. It went from a slim deductive volume to six volumes thick with facts and stats. Social construction added pounds to Malthus’ work.

This happened because statistics arrived in Britain, by way of Germany, in the early 19th century. Statistical facts became important at that time not only because they enabled the inductive grounding of theories (as per Bacon and Malthus), but because they could rebut people’s personal interests. In particular, they became an important way to break the sort of class-based assumptions that made it seem to be ok to clean rich people’s chimneys by shoving little boys up them. Against this were posed facts that showed that it was in fact bad for them.

Compiling “blue books” of fact-based research became a standard part of the legislative process in England in the first half of the 19th century. By mid-century, the use of facts was so prevalent that in 1854 Dickens bemoaned society’s reliance on them in Hard Times on the grounds that facts kill imagination…yet another opposite to facts, and another social construction.

As the 19th century ended, we got our first fact-finding commissions, established in order to peacefully resolve international disputes. (Narrator: They rarely did.) This was again using facts as the boulder that stubs your toe of self-interest (please forget I ever wrote that phrase), but now those interests were cross-national and not as easily resolvable as when you pit the interests of lace-cuffed lords against the interests of children crawling through upper-class chimneys.

In the following century we got (i.e., we constructed) an idea of science and human knowledge that focused on assembling facts as if they were bricks out of which one could build a firm foundation. This led to some moaning (in a famous 1963 letter to the editor) that science was turning into a mere “brickyard” of unassembled facts.

I’m not a historian, and this is the best I can recall from a rabbit hole of specific curiosity I fell into about ten years ago when writing Too Big to Know. But the point is that the idea of the social construction of science and facts doesn’t mean that all facts — including “alternative facts” — are equal. Water really does boil at 100C. Rather, it’s the idea, role, use, importance, and control of facts that’s socially constructed.


Categories: philosophy, science, too big to know Tagged with: 2b2k • science Date: December 12th, 2022 dw


July 11, 2021

Agnostic Belief, Believer’s Experience

Although I am an agnostic, I used to think of myself as a functional atheist: I saw no compelling reason to believe in God (and thus am an agnostic), but I lived my life as if there were certainly no God.

Now I see that I got that backwards. I firmly remain an agnostic, but it turns out there are ways in which I have always experienced the world as if it were a divine creation. I don’t believe my experience is actually evidence either way, but I find it interesting that my agnostic belief has long masked my belief-like experience…

— Continued at Psychology Today


Categories: personal, philosophy, science Tagged with: agnosticism • atheism • phenomenology • religion Date: July 11th, 2021 dw


March 24, 2020

Hydroxychloroquine use for rheumatoid arthritis — but little research says it helps with COVID

NOTE: I edited the title of this post on March 29, 2020 to reflect the increasing evidence that HCQ is not useful in the prevention or treatment of COVID-19. I also removed a few paragraphs from the Wall Street Journal reporting on a French study, since the body of research since then runs contrary to its hopeful findings. As this post states, the rheumatologist I asked about this was not stating an opinion about whether HCQ works against COVID-19, and is worried that the supply needed by their patients and by people with lupus might be diminished by a pointless run on the market. The information in this post about HCQ as a commonly used drug remains.

A highly reputable rheumatologist responded to my request for comment about a column by Jeff Colyer and Daniel Hinthorn in the WSJ that holds out hope for using hydroxychloroquine to fight the coronavirus.

The rheumatologist asked me not to use their name because they don’t want to be perceived as giving out medical advice — which this is not — and don’t have the time to go through their email message carefully enough to present it as a polished response. But they gave me permission to run it anonymously with those caveats. Here it is:

I give hydroxychloroquine to almost everybody who has rheumatoid arthritis and some of my patients have been on it for 20 years or more.  Of course, if patients have side effects from it I stop it and if they have improved to the point of appearing to be in full remission, I taper it down and may stop it.  There are people for whom it is not helpful by itself and is often used by me and others in conjunction with our other medicines for rheumatoid arthritis.  It is used similarly in psoriatic arthritis. I have a number of patients who have no swelling and no symptoms after treatment with hydroxychloroquine as the only “disease modifying drug.”

It is recommended to be given to virtually every patient with systemic lupus erythematosus, as it has been found to improve their course, even when other medications are needed to get better control. We also use it in other rheumatic diseases, sometimes with less evidence than for RA and SLE.

I have not used chloroquine, which is a closely related compound but one that is more powerful and has somewhat more side effects.

The side effects of hydroxychloroquine in the short term, which is what would be contemplated in treating COVID-19, are minimal to nonexistent, other than nausea and related problems, which I have almost never had patients report. Ulcers are not caused by this. There is a fear that people who are deficient in G6PD, an enzyme, will get hemolysis from this medication shortly after starting it; people deficient in G6PD could have a bad reaction to chloroquine, but that is not reported now with hydroxychloroquine. Hemolysis is destruction of red blood cells in the bloodstream and organs, which could be a source of illness; however, rumors that hydroxychloroquine causes this appear to be unfounded. Five to ten years ago, I emailed a rheumatologist who is a world expert on hydroxychloroquine and asked him this question, and he said that he has never seen this happen; most of us do not test for the presence of this enzyme anymore before starting hydroxychloroquine, as we feel it is not an issue. This may not be true of chloroquine, but I have a feeling it is also not a problem. Having to test everyone before getting this drug for COVID-19 would be a logistical difficulty given the time constraints and cost of the testing.

The rare side effects of hydroxychloroquine that might occur in the short term have, in my experience, been so rare as to be negligible. I have had one patient that I recall in recent years who had more vivid dreams while on this, and she found that disturbing.

The vision problems that people refer to occur only after long-term use, and the dangerous one is exceedingly rare. The latter is some permanent loss of visual acuity due to retinal damage. There was a recent study by ophthalmologists reporting that the upper dose level we used was too high: on new and specific testing, they found evidence of retinal damage at doses lower than we recommended, but only in people taking it for a long period of time, not a few weeks. Most of us in the rheumatology field have never seen damage at the frequency they report and are very disturbed by those findings. We have been forced to lower our recommended dosages, which undoubtedly has worsened some people’s disease. In my recollection, which could be very faulty, I have had two or three people in over forty years who have had permanent visual changes after many years on the medication. My associates have had similar experiences.

There are two other ocular problems both of which are reversible and rarely occur. One is a change in the eyeglass prescription (or requiring glasses) and the other is sparkling of lights at night. I have rarely seen either one and they are theoretically reversible by stopping the medication.  They also occur only after long term use, not a few weeks.

There is the possibility of skin pigment changing with long-term use but I do not believe I have ever had this happen to a patient.

I am sure when you review the possible side effects you will find many other side effects, however these are not common and are usually typical of any medication given to anybody for any reason.

The question of whether it is useful in COVID-19 is a separate issue about which I claim little or no expertise. The initial trial was very small in number, but encouraging. A real trial will be helpful, though by the time it is completed, analyzed, and available, we may be well past the pandemic phase; still, it would be useful for the future.

An important study that I have thought of probably will not be done for logistical reasons.  That would be to study our patients with rheumatoid arthritis and systemic lupus who are on hydroxychloroquine to see their incidence of COVID-19 compared to a similar group of patients who are not on hydroxychloroquine.  The logistics are timing, finding a large enough sample size of patients on the drug and off the drug who are comparable, being sure the doses used are appropriate and knowing the exposures of the patient populations.

There is some concern that overuse of hydroxychloroquine by people who do not need it will deplete the supply of this important drug for our patients who are already on it and depending on it.  In fact, today I had a call from a patient who has been taking it for years and could not get it as her pharmacy was out of it.

Another medicine that rheumatologists use to treat rheumatoid arthritis and other conditions has been found in a small study to be successful in treating COVID-19.  That medicine is tocilizumab with the brand name of Actemra.  It interferes with IL-6.

The study the rheumatologist is proposing sounds ultra-interesting and possibly consequential.


Categories: science Tagged with: corona • coronavirus • covid19 • medicine Date: March 24th, 2020 dw


April 29, 2018

Live, from a comet!

This is the comet 67P/Churyumov-Gerasimenko as seen from the European Space Agency’s Rosetta mission.

The video was put together via crowd-sourcing.

A reliable source tells me that it is not snowing on the comet. Rather, what you’re seeing is dust and star streaks.

Not so very long ago, can you imagine telling someone that this would be possible?


Categories: science Tagged with: comet • crowdsourcing Date: April 29th, 2018 dw


February 11, 2018

The story of lead and crime, told in tweets

Patrick Sharkey [twitter: patrick_sharkey] uses a Twitter thread to evaluate the evidence about a possible relationship between exposure to lead and crime. The thread is a bit hard to get unspooled correctly, but it’s worth it as an example of:

1. Thinking carefully about complex evidence and data.

2. How Twitter affects the reasoning and its expression.

3. The complexity of data, which will only get worse (= better) as machine learning scales up its size and complexity.

Note: I lack the skills and knowledge to evaluate Patrick’s reasoning. And, hat tip to David Lazer for the retweet of the thread.


Categories: ai, science Tagged with: 2b2k • ai • complexity • machine learning Date: February 11th, 2018 dw


May 15, 2017

[liveblog][AI] AI and education lightning talks

Sara Watson, a BKC affiliate and a technology critic, is moderating a discussion at the Berkman Klein/Media Lab AI Advance.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Karthik Dinakar at the Media Lab points out that what we see in the night sky is in fact distorted by the way gravity bends light, which Einstein called a “gravity lens.” Same for AI: the distortion is often in the data itself. Karthik works on how to help researchers recognize that distortion. He gives an example of how to capture both cardiologist and patient lenses to better diagnose women’s heart disease.

Chris Bavitz is the head of BKC’s Cyberlaw Clinic. To help law students understand AI and tech, the Clinic encourages interdisciplinarity. They also help students think critically about the roles of the lawyer and the technologist. The clinic prefers early relationships among them, although thinking too hard about law early on can diminish innovation.

He points to two problems that represent two poles. First, IP and AI: running AI against protected data. Second, issues of fairness, rights, etc.

Leah Plunkett is a professor at Univ. New Hampshire Law School and a BKC affiliate. Her topic: How can we use AI to teach? She points out that if Tom Sawyer were real and alive today, he’d be arrested for what he does just in the first chapter. Yet we teach the book as a classic. We think we love a little mischief in our lives, but we apparently don’t like it in our kids. We kick them out of schools. E.g., of 49M students in public schools in 2011, 3.45M were suspended and 130,000 were expelled. These punishments disproportionately affect children from marginalized segments.

Get rid of the BS safety justifications: the government ought to be teaching all our children without exception. So, maybe have AI teach them?

Sara: So, what can we do?

Chris: We’re thinking about how we can educate state attorneys general, for example.

Karthik: We are so far from getting users, experts, and machine learning folks together.

Leah: Some of it comes down to buy-in and translation across vocabularies and normative frameworks. It helps to build trust to make these translations better.

[I missed the QA from this point on.]


Categories: berkman, culture, education, liveblog, philosophy, science, tech Tagged with: ai • education • machine learning Date: May 15th, 2017 dw


[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center’s and MIT Media Lab’s “AI for the Common Good” project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable one, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also her weight, or other outcomes. Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines, and hints that help us solve problems that neither partner could solve alone. The need for these systems is most obvious in large-scale human interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project) and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and government” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves”), where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case, but it looks different. Now the old jobs are being done by far fewer people. But the spaces between don’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, an MIT grad student with a newly-minted Ph.D. (congrats, Nathan!) and a BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. What are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they “think” they’re doing? Who can do what, and where does that raise questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, how to answer them, and how to organize people, institutions, and automated systems? Scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share with others.

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the datasets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use test data that represents the real world, and to make sure a representation of humanity is working on these systems. So here’s my question: we find co-design works well; would bringing in the affected populations to talk with the system designers help here?

[Damn, I missed Yochai Benkler‘s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


Categories: berkman, culture, liveblog, philosophy, science, tech Tagged with: ai • ethics • machine learning • philosophy Date: May 15th, 2017 dw


May 11, 2017

[liveblog] St. Goodall

I’m in Rome at the National Geographic Science Festival, co-produced by Codice Edizioni, which, not entirely coincidentally, published the Italian version of my book Too Big to Know. Jane Goodall is giving the opening talk to a large audience full of students. I won’t try to capture what she is saying because she is talking without notes, telling her personal story.

She embodies an inquiring mind capable of radically re-framing our ideas simply by looking at the phenomena. We may want to dispute her anthropomorphizing of chimps, but it is a truth that needed to be uncovered. For example, she says that when she went to Cambridge to get a graduate degree — even though she had never been to college — she was told that she shouldn’t have given the chimps names. But this, she says, was because at the time science believed humans were unique. Since then genetics has shown how close we are to them, but even before that her field work had shown the psychological and behavioral similarities. So, her re-framing was fecund and, yes, true.

At a conference in America in 1986, every report from Africa was about the decimation of the chimpanzee population and the abuse of chimpanzees in laboratories. “I went to this conference as a scientist, ready to continue my wonderful life, and I left as an activist.” Her Tacare Institute works with and for Africans. For example, local people are equipped with tablets and phones and mark chimp nests, downed trees, and the occasional leopard. (Tacare provides scholarships to keep girls in school, “and some boys too.”)

She makes a totally Dad joke about “the cloud.”

It is a dangerous world, she says. “Our intellects have developed tremendously.” “Isn’t it strange that this most intellectual creature ever is destroying its home?” She calls out the damage done to our climate by our farming of animals. “There are a lot of reasons to avoid eating a lot of meat or any, but that’s one of them.”

There is a disconnect between our beautiful brains and our hearts, she says. Violence, domestic violence, greed… “we don’t think ‘Are we having a happy life?’” She started “Roots and Shoots” in 1991 in Tanzania, and now it’s in 99 countries, from kindergartens through universities. It’s a program for young people. “We do not tell the young people what to do.” They decide what matters to them.

Her reasons for hope: 1. The reaction to Roots and Shoots. 2. Our amazing brains. 3. The resilience of nature. 4. Social media, which, if used right, can be a “tremendous tool for change.” 5. “The indomitable human spirit.” She uses Nelson Mandela as an example, but also refugees making lives in new lands.

“It’s not only humans that have an indomitable spirit.” She shows a brief video of the release of a chimp that left at least some hardened adults in tears.

She stresses making the right ethical choices, a phrase not heard often enough.

If in this audience of 500 students she has not made five new scientists, I’ll be surprised.


Categories: science Tagged with: climate • hope • nature • science • vegetarian Date: May 11th, 2017 dw


October 18, 2015

The Martian

My wife and I just saw The Martian. Loved it. It was as good a movie as could possibly be made out of a book that’s about sciencing the shit out of problems.

The book was the most fun I’ve had in a long time. So I was ready to be disappointed by the movie. Nope.

Compared to, say, Gravity? Gravity’s choreography was awesome, and the very ending of it worked for me. (No spoilers here!) But it had irksome moments and themes, especially Sandra Bullock’s backstory. (No spoilers!)

The Martian was much less pretentious, IMO. It’s about science as problem-solving. Eng Fi, if you will. And the theme that emerges from this is: Let’s go the fuck to Mars!


(I still think Interstellar is a better movie, although it’s nowhere near as much fun. But I’m not entirely reasonable about Interstellar.)


Categories: culture, reviews, science Tagged with: movies Date: October 18th, 2015 dw


October 14, 2015

Science!

“Scientists Successfully Used a Mind-Machine Interface to Help a Man With Paralysis Walk”


Categories: philosophy, science Tagged with: descartes Date: October 14th, 2015 dw




Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
TL;DR: Share this post freely, but attribute it to me (name (David Weinberger) and link to it), and don't use it commercially without my permission.

Joho the Blog uses WordPress blogging software.
Thank you, WordPress!