Joho the Blog: liveblog Archives - Page 2 of 14

September 26, 2017

[liveblog][PAIR] Karrie Karahalios

At the Google PAIR conference, Karrie Karahalios is going to talk about how people make sense of their world and lives online. (This is an information-rich talk, and Karrie talks quickly, so this post is extra special unreliable. Sorry. But she’s great. Google her work.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Today, she says, people want to understand how the information they see comes to them. Why does it vary? “Why do you get different answers depending on your wifi network?” These algorithms also affect our personal feeds, e.g., Instagram and Twitter; Twitter acknowledges that it curates your feed, but doesn’t tell you how it decides what you will see.

In 2012, Christian Sandvig and [missed first name] Holbrook were wondering why they were getting odd personalized ads in their feeds. Most people were unaware that their feeds are curated: only 38% were aware of this in 2012. Those who were aware explained the curation through “folk theories”: non-authoritative explanations that let them make sense of their feed. Four theories:

1. Personal engagement theory: The more you like and click on someone’s posts, the more of that person you’ll see in your feed. Some people had been liking their friends’ baby photos but grew tired of it.

2. Global population theory: If lots of people like something, it will show up in more people’s feeds.

3. Narcissist theory: You’ll see more from people who are like you.

4. Format theory: Some types of things get shared more, e.g., photos or movies. But people didn’t get…

Kempton studied thermostats in the 1980s. People thought of the thermostat either as a feedback device (a switch that turns the heat on and off to hold a set temperature) or as a valve (the higher the setting, the more heat). He looked at their usage patterns. Whichever theory they held, they made it work for them.

She shows an Orbitz page that spits out flights. You see nothing under the hood. But someone found out that if you used a Mac, your prices were higher. People started designing interfaces that show the seams. So, Karrie’s group created a view that showed users their feed alongside all the content from their network, which was three times bigger than what they actually saw. For many, this was like awakening from the Matrix. More important, they realized that their friends weren’t “liking” or commenting because the algorithm had kept their friends from seeing what they posted.

Another tool shows who you are seeing posts from and who you are not. This was upsetting for many people.

After going through this process, people came up with new folk theories. E.g., they assumed it must be FB’s wisdom in stripping out material that’s uninteresting one way or another. [paraphrasing]

Karrie’s group then let people configure whose posts they saw, which led many to say that FB’s algorithm is actually pretty good; there was little they wanted to change.

Are these folk theories useful? Only two of them, personal engagement and the control panel, because those let you do something. But the tools for tweaking are poor.

How to embrace folk theories: 1. Algorithm probes, to poke and prod. “It would be great,” Karrie says, “to have open APIs so people could create tools.” (FB deprecated its API.) 2. Seamful interfaces to generate actionable folk theories. Tuning to revert or borrow?

Another control panel UI, built by Eric Gilbert, uses design to expose the algorithms.

She ends with a quote from Richard Dyer: “All technologies are at once technical and also always social…”


[liveblog][PAIR] Jess Holbrook

I’m at the PAIR conference at Google. Jess Holbrook is UX lead for AI. He’s talking about human-centered machine learning.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

“We want to put AI into the maker toolkit, to help you solve real problems.” One of the goals of this: “How do we democratize AI and change what it means to be an expert in this space?” He refers to a blog post he did with Josh Lovejoy about human-centered ML. He emphasizes that we are right at the beginning of figuring this stuff out.

Today, someone finds a data set and then finds a problem that the set could solve. You train a model, look at its performance, and decide if it’s good enough. And then you launch: “The world’s first smart X. Next step: profit.” But what if you could do this in a human-centered way?

Human-centered design means: 1. Staying proximate. Know your users. 2. Inclusive divergence: reach out and bring in the right people. 3. Shared definition of success: what does it mean to be done? 4. Make early and often: lots of prototyping. 5. Iterate, test, throw it away.

So, what would a human-centered approach to ML look like? He gives some examples.

Instead of trying to find an application for data, human-centered ML finds a problem and then finds a data set appropriate to it. E.g., to diagnose plant diseases, assemble tagged photos of plants. Or, use ML to personalize a “balancing spoon” for people with Parkinson’s.

Today, we find bias in data sets only after a problem is discovered, e.g., ProPublica’s article exposing the bias in ML recidivism predictions. Instead, proactively inspect for bias, as per JG’s prior talk.

Today, models personalize experiences, e.g., keyboards that adapt to you. With human-centered ML, people can personalize their models. E.g., someone here created a raccoon detector that uses images he himself took and uploaded, personalized to his particular pet raccoon.

Today, we have to centralize data to get results. With human-centered ML we’d also have decentralized, federated learning, getting the benefits while maintaining privacy.

Today there’s a small group of ML experts. [The photo he shows is of all white men, pointedly.] With human-centered ML, you get experts who have non-ML domain expertise, which leads to more makers. You can create more diverse, inclusive data sets.

Today, we have narrow training and testing. With human-centered ML, we’ll judge instead by how systems change people’s lives. E.g., ML for the blind to help them recognize things in their environment. Or real-time translation of signs.

Today, we do ML once. E.g., PicDescBot tweets out amusing misfires of image recognition. With human-centered ML we’ll combine ML and teaching. E.g., a human draws an example and the neural net generates alternatives. In another example, ML improved on landscapes taken by StreetView, having learned what counts as an improvement from a data set of professional photos. Google’s auto-suggest ML also learns from human input. He also shows a video by Simone Giertz, “Queen of the Shitty Robots.”

He references Amber Case: “Expanding people’s definition of normal” is almost always a gradual process.

[The photo of his team is awesomely diverse.]


[liveblog] Google AI Conference

I am, surprisingly, at the first PAIR (People + AI Research) conference at Google, in Cambridge. There are about 100 people here, maybe half from Google. The official topic is: “How do humans and AI work together? How can AI benefit everyone?” I’ve already had three eye-opening conversations and the conference hasn’t even begun yet. (The conference seems admirably gender-balanced in audience and speakers.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The great Martin Wattenberg (the other half being Fernanda Viégas) kicks it off, introducing John Giannandrea, a VP at Google in charge of AI, search, and more.

John says that every vertical will be affected by this. “It’s important to get the humanistic side of this right.” He says there are 1,300 languages spoken worldwide, so if you want to reach everyone with tech, machine learning can help. Likewise with health care, e.g., diagnosing retinal problems caused by diabetes. Likewise with social media.

PAIR intends to use engineering and analysis to augment expert intelligence, i.e., professionals in their jobs, creative people, etc. And “how do we remain inclusive? How do we make sure this tech is available to everyone and isn’t used just by an elite?”

He’s going to talk about interpretability, controllability, and accessibility.

Interpretability. Google has replaced all of its language translation software with neural network-based AI. He shows an example of Hemingway translated into Japanese and then back into English. It’s excellent but still partially wrong. A visualization tool shows a cluster of three strings in three languages, showing that the system has clustered them together because they are translations of the same sentence. [I hope I’m getting this right.] Another example: an integrated-gradients overlay on a photo shows that the system has identified it as a fire boat because of the streams of water coming from it. “We’re just getting started on this.” “We need to invest in tools to understand the models.”

Controllability. These systems learn from labeled data provided by humans. “We’ve been putting a lot of effort into using inclusive data sets.” He shows a tool that lets you visually inspect the data to see the facets present in them. He shows another example of identifying differences to build more robust models. “We had people worldwide draw sketches. E.g., draw a sketch of a chair.” In different cultures people draw different stick-figures of a chair. [See Eleanor Rosch on prototypes.] And you can build constraints into models, e.g., male and female. [I didn’t get this.]

Accessibility. Internal research from YouTube built a model for recommending videos. Initially it looked just at how many users watched a video. You get better results if you look not just at the clicks but at users’ lifetime usage. [Again, I didn’t get that accurately.]

Google open-sourced TensorFlow, its AI tool. “People have been using it for everything from sorting cucumbers to tracking the husbandry of cows.” Google would never have thought of these applications.

AutoML: learning to learn. Can we figure out how to enable ML to learn automatically? In one case, it looks at models to see if it can create more efficient ones. Google’s AIY lets DIY-ers build AI in a cardboard box, using Raspberry Pi. John also points to an Android app that composes music. Also, Google has worked with Geena Davis to create software that can identify male and female characters in movies and track how long each speaks. It discovered that movies that have a strong female lead or co-lead do better financially.

He ends by emphasizing Google’s commitment to open sourcing its tools and research.
Fernanda and Martin talk about the importance of visualization. (If you are not familiar with their work, you are leading deprived lives.) When F&M got interested in ML, they talked with engineers. “ML is very different. Maybe not as different as software is from hardware. But maybe. We’re just finding out.”

M&F also talked with artists at Google. He shows Mike Tyka’s ML-generated portraits of imaginary people.

This tells us that AI is also about optimizing subjective factors. ML for everyone: Engineers, experts, lay users.

Fernanda says ML spreads across all of Google, and even across Alphabet. What does PAIR do? It publishes. It’s interdisciplinary. It does education. E.g., TensorFlow Playground, a visualization of a simple neural net used as an intro to ML. They open sourced it, and the Net has taken it up. Also, a journal called Distill (distill.pub) aimed at explaining ML and visualization.

She “shamelessly” plugs deeplearn.js, tools for bringing AI to the browser. “Can we turn ML development into a fluid experience, available to everyone?”
What experiences might this unleash, she asks.

They are giving out faculty grants. And expanding the Brain residency for people interested in HCI and design…even in Cambridge (!).


May 15, 2017

[liveblog][AI] AI and education lightning talks

Sara Watson, a BKC affiliate and a technology critic, is moderating a discussion at the Berkman Klein/Media Lab AI Advance.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Karthik Dinakar at the Media Lab points out that what we see in the night sky is in fact distorted by the way gravity bends light, a “gravitational lens,” as Einstein called it. Same for AI: the distortion is often in the data itself. Karthik works on how to help researchers recognize that distortion. He gives an example of how to capture both the cardiologist’s and the patient’s lenses to better diagnose women’s heart disease.

Chris Bavitz is the head of BKC’s Cyberlaw Clinic. To help law students understand AI and tech, the Clinic encourages interdisciplinarity. They also help students think critically about the roles of the lawyer and the technologist. The clinic prefers that these relationships form early, although thinking too hard about law early on can diminish innovation.

He points to two problems that represent two poles. First, IP and AI: running AI against protected data. Second, issues of fairness, rights, etc.

Leah Plunkett is a professor at the University of New Hampshire Law School and a BKC affiliate. Her topic: How can we use AI to teach? She points out that if Tom Sawyer were real and alive today, he’d be arrested for what he does just in the first chapter. Yet we teach the book as a classic. We think we love a little mischief in our lives, but we apparently don’t like it in our kids: we kick them out of schools. E.g., of the 49M students in public schools in 2011, 3.45M were suspended and 130,000 were expelled. These punishments disproportionately affect children from marginalized groups.

Get rid of the BS safety justification: the government ought to be teaching all our children, without exception. So, maybe have AI teach them?

Sara: So, what can we do?

Chris: We’re thinking about how we can educate state attorneys general, for example.

Karthik: We are so far from getting users, experts, and machine learning folks together.

Leah: Some of it comes down to buy-in and translation across vocabularies and normative frameworks. It helps to build trust to make these translations better.

[I missed the QA from this point on.]


[liveblog][AI] Perspectives on community and AI

Chelsea Barabas is moderating a set of lightning talks at the AI Advance, at the Berkman Klein Center and MIT Media Lab.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Lionel Brossi recounts growing up in Argentina and the assumption that all boys care about football. He moved to Chile, which is split between people who do and do not watch football. “Humans are inherently biased.” So our AI systems are likely to be biased. Cognitive science has shown that the participants in its studies tend to be WEIRD: Western, educated, industrialized, rich, and democratic. Also straight and white. He references Kate Crawford’s “AI’s White Guy Problem.” We need not only diverse teams of developers but also to think about how data can be more representative. We also need to think about the users. One approach is to work on goal-centered design.

If we ever get to unbiased AI, Borges’ statement that “the original is unfaithful to the translation” may apply.

Chelsea: What is an inclusive way to think of cross-border countries?

Lionel: We need to co-design with more people.

Madeline Elish is at Data & Society and an anthropology-of-technology grad student at Columbia. She’s met designers who thought it might be a good idea to make a phone run faster if you yell at it. But this would train children to yell at things. What’s the context in which such designers work? She and Tim Hwang set out to build bridges between academics and businesses. They asked what designers see as their responsibility for the social implications of their work. They found four core challenges:

1. Assuring users perceive good intentions
2. Protecting privacy
3. Long term adoption
4. Accuracy and reliability

She and Tim wrote An AI Pattern Language [pdf] about the frameworks that guide design. She notes that none of the designers were thinking about social justice. The book argues that there’s a way to translate between the social justice framework and, for example, the accuracy framework.

Ethan Zuckerman: How much of the language you’re seeing feels familiar from other hype cycles?

Madeline: Tim and I looked at the history of autopilot litigation to see what might happen with autonomous cars. We should be looking at Big Data as the prior hype cycle.

Yarden Katz is at BKC and at the Dept. of Systems Biology at Harvard Medical School. He talks about the history of AI, starting with a 1958 claim about a translation machine and Minsky in 1966. Then came an AI funding winter, but now it’s big again. “Until recently, AI was a dirty word.”

Today we use it schizophrenically: for Deep Learning or in a totally diluted sense as something done by a computer. “AI” now seems to be a branding strategy used by Silicon Valley.

“AI’s history is diverse, messy, and philosophical.” If that complexity is embraced, “AI” might not be a useful category for policy. So we should go back to the politics of technology:

1. Who controls the code/frameworks/data?
2. Is the system inspectable/open?
3. Who sets the metrics? Who benefits from them?

The media are not going to be the watchdogs because they’re caught up in the hype. So who will be?

Q: There’s a qualitative difference in the sort of tasks now being turned over to computers. We’re entrusting machines with tasks we used to only trust to humans with good judgment.

Yarden: We already do that with systems that are not labeled AI, like “risk assessment” programs used by insurance companies.

Madeline: Before AI got popular again, there were expert systems. We are reconfiguring our understanding, moving it from a cognition frame to a behavioral one.

Chelsea: I’ve been involved in co-design projects that have backfired. These projects have sometimes been somewhat extractive: going in, getting lots of data, etc. How do we do co-design projects that are not extractive but also aren’t prohibitively expensive?

Nathan: To what degree does AI change the dimensions of questions about explanation, inspectability, etc.?

Yarden: The promoters of the Deep Learning narrative want us to believe you just need to feed in lots and lots of data. DL is less inspectable than other methods. DL is not learning from nothing. There are open questions about their inductive power.


Amy Zhang and Ryan Budish give a pre-alpha demo of the AI Compass being built at BKC. It’s designed to help people find resources exploring topics related to the ethics and governance of AI.


[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center’s and MIT Media Lab’s “AI for the Common Good” project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable one, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better. But in most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also her weight, or other outcomes. Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. All of this means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to ground AI interpretability more formally in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines, and hints that help us solve problems that neither we nor the system could solve alone. The need for these systems is most obvious in large-scale human-interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project), and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and governance,” but we don’t yet have well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves”), where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case, but it looks different. Now the old jobs are being done by far fewer people. But the spaces in between don’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, an MIT grad student with a newly-minted Ph.D. (congrats, Nathan!) and a BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. What are the tools we need to create, and what are the social processes behind them? How can we communicate what we want to machines and understand what they “think” they’re doing? Who can do what, and where does that raise questions of literacy, policy, and law? Finally, how can we get to the questions we need to ask, answer them, and organize people, institutions, and automated systems: scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share with others.

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use test data that represents the real world, and to make sure a representative cross-section of humanity is working on these systems. So here’s my question: we find co-design works well; should we bring the affected populations in to talk with the system designers?

[Damn, I missed Yochai Benkler’s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


March 1, 2017

[liveblog] Five global challenges and the role of the university

Juan Carlos De Martin is giving a lunchtime talk called “Five global challenges and the role of the university,” with Charles Nesson. These are two of my favorite people. Juan Carlos is here to talk about his new book (in Italian), Università Futura – Tra Democrazia e Bit.

Charlie introduces Juan Carlos by describing their first meeting, at a conference in Torino where the idea for the Nexa Center of Internet and Society, now a reality, was born.

Juan Carlos begins by tracing the book’s trajectory. In the book, and here, he will talk about five global challenges. Why five? Because that’s how he sees it, but it’s subjective.

  1. Democracy. It’s in crisis.

  2. Environment. For example, you may have heard about this global warming thing. It’s hard for us to think about such large systems.

  3. Technology. E.g., bio tech, AI, nanotech, neuro-cognition. The benefits of these are important, but the problems they raise are very difficult.

  4. Economy. Growth is slowing. Trade is slowing. How do we ensure a decent livelihood to all?

  5. Geopolitics. The world order seems to be undergoing constant change. How do we preserve the peace?

We are in uncharted waters, he says: high risk and high unpredictability. “I don’t want to sound apocalyptic, because I’m not, but we have to face the dangers.”
Juan Carlos makes three observations:

First, we are going to need lots of knowledge, more than ever before.

Second, we’ll need people capable of interpreting, using, and producing such knowledge, more than ever before.

Third, in democracies we need the knowledge to get to as many people as possible, and as many people as possible have to become better critical thinkers. “There’s a clear rejection of experts which we, as people in universities, need to take seriously…What did we do wrong to lose the trust of people?”

These three observations lead to the idea that universities should play an important role. So, what is the current state of the university?

First, for the past forty years, universities have pursued knowledge useful to the economy.

Second, there has been an emphasis on training workers, which makes sense, but has meant less emphasis on educating people as full humans and citizens.

Third, the university has been a normative organization (like non-profits and churches) that has been pushed to become more of a utilitarian organization (like businesses). This shows itself in, for example, the excessive use of quantitative metrics for promotion, an insane emphasis on publishing for its own sake, and a hyper-disciplinarity because it’s easier to publish within a smaller slice.

These mean that the historically multi-dimensional mission of the university has been flattened, and the spirit has gone from normative to utilitarian. “All of this represents a problem if we want the university to help society face … 21st century problems.” (Juan Carlos says that he wrote the book in Italian [his English is perfect] because when he began in 2008, Italian universities were beginning a seven year contraction of 20%.)

We need all kinds of knowledge — not just what looks useful right now — because we don’t know what will be useful. We need interdisciplinarity because so many societal challenges — including all the ones he began the talk with — are interdisciplinary. But the incentives are not currently in that direction. And we need “effective interaction with the general public.” This is not just about communicating or transferring knowledge; it has to be genuinely interactive.

We need, he says, the university to speak the truth.

His proposal is that we “rediscover the roots of the university” and update them to present times. There is a solution in those roots, he says.

At the root, education is a personal relationship among human beings. “Education is not mere information transfer.” This means educating human beings and citizens, not just workers.

Everyone agrees we need critical thinking, but we need to work on how to teach it and what it means. We need critical thinkers because we need people who can handle unexpected situations.

We need universities to be institutions that can take the long view, go slowly, value silence, and enable concentration. These were characteristics of universities for a thousand years.
What universities can do:

1. To achieve inter-disciplinarity, we cannot abolish disciplines; they play an important role. But we need to avoid walls between them. “Maybe a little short fence” that people can easily cross.

2. We need to strongly encourage heterodox thinking. Some disciplines need this urgently; Juan Carlos calls out economics as an example.

3. The university should itself be a “trustee of the unborn,” i.e., of the generation to come. “The university has always had the role of bridging the dead and the unborn.” In Europe this has been a role of the state, but they’re doing it less and less.

A side effect is that the university should be the conscience and critic of society. He quotes Pres. Drew Faust on whether universities are doing this enough.

4. Universities need to engage with the public, listening to their concerns. That doesn’t mean pandering to them. Only dialogue will help people learn.

5. Universities need to actively employ the Internet to achieve their objectives. Juan Carlos’ research on this topic began with the Internet, but it flipped, focusing first on the university.

Overall, he says, we need new ideas, critical thinking, and character. By that last he means moral commitment. Universities can move in that direction by rediscovering their roots and updating them.

Charlie now leads a session in which we begin by posting questions to http://cyber.harvard.edu/questions/list.php. I cannot keep up with the conversation. The session is being webcast and the recording will be posted. (Charlie is a celebrated teacher with a special skill in engaging groups like this.)


I agree with everything Juan Carlos says, and especially am heartened by the idea that the university as an institution can help to re-moor us. But I then find myself thinking that it took enormous forces to knock universities off their 1,000 year mission. Those same forces are implacable. Can universities deny the fusion of powers that put them in this position in the first place?


December 3, 2016

[liveblog] Stephanie Mendoza: Web VR

Stephanie Mendoza [twitter:@_liooil] [Github: SAM-liooil] is giving a talk at the Web 1.0 conference. She’s a Unity developer.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

WebVR — a 3D in-browser standard — is at 1.0 these days, she says. It’s cross-platform, which is amazing because it’s hard to build for the Web, Android, and Vive. It’s “uncharted territory” where “everything is an experiment.” You need an experimental build of Chromium, Chrome’s open-source sibling, to run it. She uses A-Frame to create in-browser 3D environments.

“We’re trying to figure out the limit of things we can simulate.” It’s going to follow us out into the real world. E.g., she’s found that simulating fearful situations (e.g., heights) can lessen fear of those situations in the real world.

This crosses into Meinong’s jungle: a repository of non-existent entities in Alexius Meinong’s philosophy.

The tool they’re using is A-Frame, an abstraction layer on top of WebGL and Three.js, with roots in VRML. (VRML was an early Web standard for 3D that didn’t get taken up much because the browsers didn’t run it very well. [I was once on the board of a VRML company, which also didn’t do very well.]) WebVR works on Vive, High Fidelity, Janus, the Unity Web player, and YouTube 360, under different definitions of “works.” A-Frame is open source.

Now she takes us through how to build a VR Web page. You can scavenge for 3D assets or create your own. E.g., you can go to Thingiverse and convert the files to the appropriate format for A-Frame.

Then you begin a “scene” in A-Frame, which lives between <a-scene> tags in HTML. You can create graphic objects (spheres, planes, etc.) and interactively work on the 3D elements within your browser. [This link will take you to a page that displays the 3D scene Stephanie is working with, but you need Chromium to get to the interactive menus.]

She goes a bit deeper into the A-Frame HTML for assets: light maps, height maps, specular maps, all of which are mapped back onto much lower polygon-count models. Entities consist of geometry, light, mesh, material, position, and raycaster components, plus your own extensions. [I am not attempting to record the details, which Stephanie is spelling out clearly.]
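To give a flavor of the markup: here is a minimal A-Frame scene, a generic starter sketch rather than Stephanie’s actual demo. The primitives (<a-sphere>, <a-box>, <a-plane>, <a-sky>) are from A-Frame’s standard set; the release number in the script URL is just an example of pinning a version.

    <!DOCTYPE html>
    <html>
      <head>
        <!-- The A-Frame library; pin whatever release you're targeting -->
        <script src="https://aframe.io/releases/0.3.2/aframe.min.js"></script>
      </head>
      <body>
        <!-- Everything between the a-scene tags is the 3D world -->
        <a-scene>
          <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
          <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
          <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
          <a-sky color="#ECECEC"></a-sky>
        </a-scene>
      </body>
    </html>

Each tag is an entity, and its attributes (position, rotation, color, etc.) are components: the entity-component pattern she describes above.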

She talks about the HTC Vive. “The controllers are really cool. They’re like claws. I use them to climb virtual trees and then jump out, because it’s fun.” Your brain simulates gravity when there is none, she observes. She shows the A-Frame tags for configuring the controls, including grabbing, colliding, and teleporting.
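I didn’t capture her exact tags, but the wiring looks roughly like this. Note that component names such as teleport-controls, sphere-collider, and grab come from community add-on libraries (e.g., aframe-teleport-controls, aframe-extras) rather than core A-Frame, so treat this as illustrative only.

    <!-- Illustrative sketch: component names from community add-ons, not core A-Frame -->
    <a-entity vive-controls="hand: left" teleport-controls></a-entity>
    <a-entity vive-controls="hand: right"
              sphere-collider="objects: .grabbable" grab></a-entity>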

She recommends some sites, including NormalMap, which generates normal maps from your images and lets you download the results.

QA

Q: Platforms are making their own non-interoperable VR frameworks, which is concerning.

A: It went from art to industry very quickly.


[liveblog] Paul Frazee on the Beaker Browser

At the Web 1.0 conference, Paul Frazee is talking about a browser — a Chrome fork — he’s been writing to browse the distributed Web.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The distributed Web is the Web with ideas from BitTorrent integrated into it. Beaker uses IPFS and Dat.

This means:

  1. Anyone can be a server at any time.

  2. There’s no binding between a specific computer and a site; the content lives independently.

  3. There’s no back end.

This lets Beaker provide some unusual features:

  1. A “fork button” is built into the browser itself so you can modify the site you’re browsing. “People can hack socially” by forking a site and sharing those changes.

  2. Independent publishing: The site owner can’t change your stuff. You can allocate new domains cheaply.

  3. With Beaker, you can write your site locally first, and then post into the distributed Web.

  4. Secure distribution

  5. Versioned URLs

He takes us through a demo. Beaker’s directory looks a bit like GitHub in terms of style. He shows how to create a new site using an integrated terminal tool. The init command creates a dat.json file with some core metadata. Then he creates an index.html file and publishes it. Anyone using the browser can then see the site and ask to see the files behind it…and fork them. As with GitHub, you can see the path of forks. If you own the site, you can write to it from within the browser. [This fulfills Tim Berners-Lee’s original vision of Web browsers.]
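He didn’t show the file’s contents, but as a rough sketch, the core Dat metadata fields are a title and a description. Something like this, with hypothetical values:

    {
      "title": "my-site",
      "description": "A peer-to-peer site published from the Beaker browser"
    }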

QA

Q: Any DNS support?

A: Yes.


[liveblog] Amber Case on making the Web fun again

I’m at the Web 1.0 conference, at the MIT Media Lab, organized by Amber Case [@caseorganic]. It’s a celebration of sites that can be built by a single person, she explains.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The subtitle of Amber’s opening talk is “Where did my data go?” She talks about hosting sites that folded and took all their users’ home pages with them. After AOL Hometown got angry comments about this, it “solved” the problem by turning off comments. Other bad things can happen to sites you build on other people’s platforms: they can change your UI. And things other than Web sites can be shut down, including household items in the Internet of Things.

She shows the Maslow Hierarchy for Social Network Supermarkets from Chris Messina. So, what happened to owning your identity? At early Web conferences, you’d write your domain name on your ID tag. Your domain was your identity. RSS and Atom allowed for distributed reading. But then in the early 2000s social networks took over.

We started writing on third party platforms such as Medium and Wikia, but their terms of service make it difficult to own and transfer one’s own content.

The people who could have created the tools that would let us share our blogs went to work for the social networking sites. In 2010 a Federated Web movement pushed back in this direction. E.g., it came up with Publish on your Own Site, Syndicate Elsewhere (POSSE).

Why do we need an independent Web? To avoid losing our content, so businesses can’t fold and take it with them, for a friendlier UX, and for freedom. “Independent Websites can help provide the future of the Web.”

If we don’t do this, the Web gets serious, she says: people go to a tiny handful of sites and aren’t building as many quirky, niche, weird Web sites. “We need a weird Web because it allows us to play at the edges and to meet others.” But if you know how to build and archive your own things, you have a home for your data and self-expression, with links out to the rest of the Web.

Make static websites, she urges…possibly with the conference sponsor, Neocities.

QA

Bob Frankston: How can you own a domain name?

Amber: You can’t, not really.

Bob: And that’s a big, big problem.

