
May 26, 2019

Fake news, harassment, and the robot threat: Some numbers

Lee Rainie, of Pew Research, is giving a talk at a small pre-conference event I’m at. I’m a Lee Rainie and Pew Lifetime Fan. [DISCLAIMERS: These are just some points Lee made. I undoubtedly left out important qualifiers. I’m sure I got things wrong, too. If I were on a connection that got more than 0.4 Mbps (thanks, Verizon /s), I’d find the report to link to.]

He reports that 23% of people say they have forwarded fake news, although most did so in order to warn other people about it. 26% of American adults, and 46% of those between 18 and 29 years old, have had fake news posted about them. The major harm reported was reputational.

He says that 41% of American adults have been harassed; the list of types of harassment is broad. About a fifth of Americans have been harassed in severe ways: stalked, sexually harassed, physically threatened, etc. Two thirds have seen someone else be harassed.

The study analyzed the Facebook posts of all the members of Congress. The angrier the contents, the more often they were shared, liked, or commented on. Online discussions are reported to be far less likely to be respectful, civil, etc. Seventy-one percent of Facebook users did not know what data about them FB shares, as listed on the FB privacy management page.

A majority of Americans favor free speech even if it allows bad ideas to proliferate. [I wonder how they’d answer if you gave them examples, or if you differentiated free speech from unmoderated speech on private platforms such as Facebook.]

Two thirds of Americans expect that robots and computers will do much of the work currently done by humans within 50 years. But we think it’ll mainly be other people who are put out of work; people think they personally will not be replaced. Seventy-two percent are worried about a future in which robots do so much. But 63% of experts (a hand-crafted, non-representative list, Lee points out) think AI will make life better. These experts worry first of all about the loss of human agency. They also worry about data abuse, job loss, dependence lock-in (i.e., losing skills as robots take over those tasks), and mayhem (e.g., robots going nuts on the battlefield).

Q: In Europe, the fear of job displacement is the opposite: people worry about their own jobs being displaced.


Categories: internet Tagged with: fake news • harassment Date: May 26th, 2019 dw


September 20, 2018

Coming to belief

I’ve written before about the need to teach The Kids (also: all of us) not only how to think critically so we can see what we should not believe, but also how to come to belief. That piece, which I now cannot locate, was prompted by danah boyd’s excellent post on the problem with media literacy. Robert Berkman, Outreach Business Librarian at the University of Rochester and Editor of The Information Advisor’s Guide to Internet Research, asked me how one can go about teaching people how to come to belief. Here’s an edited version of my reply:

I’m afraid I don’t have a good answer. I actually haven’t thought much about how to teach people how to come to belief, beyond arguing for doing this as a social process (the ol’ “knowledge is a network” argument :) I have a pretty good sense of how *not* to do it: the way philosophy teachers relentlessly show how every proposed position can be torn down.

I wonder what we’d learn by taking a literature course as a model — not one that is concerned primarily with critical method, but one that is trying to teach students how to appreciate literature. Or art. The teacher tries to get the students to engage with one another to find what’s worthwhile in a work. Formally, you implicitly teach the value of consistency, elegance of explanation, internal coherence, how well a work clarifies one’s own experience, etc. Those are useful touchstones for coming to belief.

I wouldn’t want to leave students feeling that it’s up to them to come up with an understanding on their own. I’d want them to value the history of interpretation, bringing their critical skills to it. The last thing we need is to make people feel yet more unmoored.

I’m also fond of the orthodox Jewish way of coming to belief, as I, as a non-observant Jew, understand it. You have an unchanging and inerrant text that means nothing until humans interpret it. To interpret it means to be conversant with the scholarly opinions of the great Rabbis, who disagree with one another, often diametrically. Formulating a belief in this context means bringing contemporary intelligence to a question while finding support in the old Rabbis…and always always talking respectfully about those other old Rabbis who disagree with your interpretation. No interpretations are final. Learned contradiction is embraced.

That process has the elements I personally like (being moored to a tradition, respecting those with whom one disagrees, acceptance of the finitude of beliefs, acceptance that they result from a social process), but it’s not going to be very practical outside of Jewish communities if only because it rests on the acceptance of a sacred document, even though it’s one that literally cannot be taken literally; it always requires interpretation.

My point: We do have traditions that aim at enabling us to come to belief. Science is one of them. But there are others. We should learn from them.

TL;DR: I dunno.


Categories: philosophy, too big to know Tagged with: 2b2k • fake news • logic • philosophy Date: September 20th, 2018 dw


November 27, 2016

Fake news sucks but isn't the end of civilization

Because fake news only works if it captures our attention, and because presenting ideas that are outside the normal range is a very effective way to capture our attention, fake news will, with some inevitability, tend to present extreme positions.

Real news often uses the same technique these days: serious news stories often have clickbait headlines. Clickbait, whether fake or real, thus tends to make us think that the world is full of extremes. The normal doesn’t seem very normal any more.

Of course, clickbait is nothing new. Tabloids have been using it forever. For the past thirty years, in the US, local TV stations have featured the latest stabbing or fire as the lead story on the news. (This is usually said to have begun in Miami, and is characterized as “If it bleeds, it leads,” i.e., it is the first item in the news broadcast.)

At the same time, however, the Internet makes it easier than ever to find news that doesn’t simply try to set our nerves on fire. Fact checking abounds, at sites dedicated to the task and as one of the most common of distributed Internet activities. Even while we form echo chambers that reinforce our beliefs, we are also more likely than ever before to come across contrary views. Indeed, I suspect (= I have no evidence) that one reason we seem so polarized is that we can now see the extremities of belief that have always been present in our culture — extremities that in the age of mass communication were hidden from us.

Now that there are economic reasons to promulgate fake news — you can make a good living at it — we need new mechanisms to help us identify it, just as the rise of “native advertising” (= ads that pose as news stories) has led to new norms about letting the reader know that they’re ads. The debate we’re currently having is the discussion that leads to new techniques and norms.

Some of the most important techniques can best be applied by the platforms through which fake news propagates. We need to press those platforms to do the right thing, even if it means a marginal loss of revenues for them. The first step is to stop them from thinking, as I believe some of them genuinely do, that they are mere open platforms that cannot interfere with what people say and share on them. Baloney. As Zeynep Tufekci, among others, has repeatedly pointed out, these platforms already use algorithms to decide which items to show us from the torrent of possibilities. Because the major Western platforms genuinely hold to democratic ideals, they may well adjust their algorithms to achieve better social ends. I have some hope about this.

Just as with spam, “native advertising,” and popup ads, we are going to have to learn to live with fake news both by creating techniques that prevent it from being as effective as it would like to be and by accepting its inevitability. If part of this is that we learn to be more “meta” — not accepting all content at its face value — then fake news will be part of our moral and intellectual evolution.


Categories: journalism, politics Tagged with: clickbait • facebook • fake news • platforms • twitter Date: November 27th, 2016 dw


