
February 10, 2020

Brink has just posted a piece of mine that suggests that the Internet and machine learning have been teaching companies that our assumptions about the predictability of the future — based in turn on assumptions about the law-like and knowable nature of change — don’t hold. But those are the assumptions that have led to the relatively recent belief in the efficacy of strategy.

My article outlines some of the ways organizations are facing the future differently. And, arguably, more realistically.


Categories: business, everyday chaos, future, too big to know Tagged with: business • everydaychaos • future Date: February 10th, 2020 dw


October 28, 2017

Making medical devices interoperable

The screen next to a patient’s hospital bed that displays the heart rate, oxygen level, and other moving charts is the definition of a dumb display. How dumb is it, you ask? If the clip on a patient’s finger falls off, the display thinks the patient is no longer breathing and will sound an alarm…even though it’s displaying outputs from other sensors that show that, no, the patient isn’t about to die.

The problem, as explained by David Arney at an open house for MD PnP, is that medical devices do not share their data in open ways. That is, they don’t interoperate. MD PnP wants to fix that.

The small group was founded in 2004 as part of MIT’s CIMIT (Consortia for Improving Medicine with Innovation and Technology). Funded by grants, including from the NIH and CRICO Insurance, it currently has 6-8 people working on ways to improve health care by getting machines talking with one another.

The one aspect of hospital devices that manufacturers have generally agreed on is that they connect via serial ports. The FDA encourages this, at least in part because serial ports are electrically safe. So, David pointed to a small connector box with serial ports in and out and a small computer in between. The computer converts the incoming information into an open industry standard (ISO 11073). And now the devices can play together. (The “PnP” in the group’s name stands for “plug ‘n’ play,” as we used to say in the personal computing world.)
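
To make the conversion concrete, here is a minimal sketch of what such a connector box might do, assuming a line-oriented proprietary serial protocol and using a simplified JSON record as a stand-in for the real ISO/IEEE 11073 encoding. The port name, field layout, and device protocol are all hypothetical; only the shape of the idea (proprietary serial in, open standard out) comes from the talk.

```python
import json
import serial  # pyserial

# Hypothetical: a monitor that emits lines like "HR=72;SPO2=98" over RS-232.
PORT = "/dev/ttyUSB0"

def to_open_record(raw_line: str) -> str:
    """Convert one proprietary reading into a simplified open record.

    A real MD PnP adapter would emit ISO/IEEE 11073 structures; a JSON
    dict keyed by 11073-style nomenclature names stands in for that here.
    """
    fields = dict(pair.split("=") for pair in raw_line.strip().split(";"))
    record = {
        "MDC_ECG_HEART_RATE": int(fields["HR"]),
        "MDC_PULS_OXIM_SAT_O2": int(fields["SPO2"]),
    }
    return json.dumps(record)

def run() -> None:
    # Serial in, converted data out: the small computer between the ports.
    with serial.Serial(PORT, baudrate=9600, timeout=1) as dev:
        while True:
            line = dev.readline().decode("ascii", errors="replace")
            if line.strip():
                print(to_open_record(line))

if __name__ == "__main__":
    run()
```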

David then demonstrated what can be done once the data from multiple devices interoperate.

  • You can put some logic behind the multiple signals so that a patient’s actual condition can be assessed far more accurately: no more sirens when an oxygen sensor falls off a finger. (A toy version of this logic is sketched after this list.)

  • You can create displays that are more informative and easier to read — and easier to spot anomalies on — than the standard bedside monitor.

  • You can transform data into other standards, such as the HL7 format for entry into electronic medical records.

  • If there is more than one sensor monitoring a factor, you can do automatic validation of signals.

  • You can record and perhaps share alarm histories.

  • You can create what is functionally an API for the data your medical center is generating: a database that makes the information available to programs that need it via publish and subscribe.

  • You can aggregate tons of data (while following privacy protocols, of course) and use machine learning to look for unexpected correlations.
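
To illustrate the first bullet, here is a toy version of that cross-checking logic. The thresholds are made up for illustration, not clinical; the point is only that once the signals interoperate, a missing oxygen reading can be interpreted in the context of the other vitals.

```python
def should_alarm(spo2, heart_rate, resp_rate):
    """Toy sensor-fusion check; all thresholds are illustrative only.

    An absent or absurdly low SpO2 reading alongside a normal heart rate
    and normal respiration suggests a detached finger clip, not a patient
    in distress, so no siren is raised.
    """
    spo2_looks_dire = spo2 is None or spo2 < 50
    other_vitals_normal = (
        heart_rate is not None and 40 <= heart_rate <= 120
        and resp_rate is not None and 8 <= resp_rate <= 25
    )
    if spo2_looks_dire and other_vitals_normal:
        return False  # probable sensor dropout: flag for a nurse, don't siren
    return spo2_looks_dire

# The clip falls off while ECG and respiration stay normal: no alarm.
assert should_alarm(spo2=None, heart_rate=72, resp_rate=14) is False
# SpO2 collapses along with the other vitals: alarm.
assert should_alarm(spo2=42, heart_rate=30, resp_rate=5) is True
```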

MD PnP makes its stuff available under an open BSD license and publishes its projects on GitHub. This means, for example, that while PnP has created interfaces for 20-25 protocols and data standards used by device makers, you could program its connector to support another device if you need to.

Presumably not all the device manufacturers are thrilled about this. The big ones like to sell entire suites of devices to hospitals on the grounds that all those devices interoperate amongst themselves — what I like to call intraoperating. But beyond corporate greed, it’s hard to find a downside to enabling more market choice and more data integration.


Categories: future, interop Tagged with: 2b2k • interop • medical Date: October 28th, 2017 dw


May 7, 2017

Predicting the tides based on purposefully false models

Newton showed that the tides are produced by the gravitational pull of the moon and the Sun. But, as a 1914 article in Scientific American pointed out, if you want any degree of accuracy, you have to deal with the fact that “the earth is not a perfect sphere, it isn’t covered with water to a uniform depth, it has many continents and islands and sea passages of peculiar shapes and depths, the earth does not travel about the sun in a circular path, and earth, sun and moon are not always in line. The result is that two tides are rarely the same for the same place twice running, and that tides differ from each other enormously both in time and in amplitude.”

So, we instead built a machine of brass, steel and mahogany. And instead of trying to understand each of the variables, Lord Kelvin postulated “a very respectable number” of fictitious suns and moons in various positions over the earth, moving in unrealistically perfect circular orbits, to account for the known risings and fallings of the tide, averaging readings to remove unpredictable variations caused by weather and “freshets.” Knowing the outcomes, he would nudge a sun or moon’s position, or add a new sun or moon, in order to get the results to conform to what we know to be the actual tidal measurements. If adding sea serpents would have helped, presumably Lord Kelvin would have included them as well.
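
Each fictitious sun or moon corresponds to what tidal science calls a harmonic constituent: a perfect sinusoid at a known astronomical frequency whose amplitude and phase get tuned to fit the observations. Here is a minimal modern sketch of the same move, using least squares in place of Kelvin’s hand-nudging. The two constituent speeds are real (M2 and S2, the principal lunar and solar semidiurnal constituents), but the “observations” are fabricated for the demo:

```python
import numpy as np

# Angular speeds (degrees per hour) of two real tidal constituents:
# M2 (principal lunar semidiurnal) and S2 (principal solar semidiurnal).
SPEEDS = np.deg2rad(np.array([28.984104, 30.0]))  # radians per hour

def design_matrix(t_hours):
    """One column of 1s plus a cos/sin pair per constituent."""
    cols = [np.ones_like(t_hours)]
    for w in SPEEDS:
        cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
    return np.column_stack(cols)

def fit_constituents(t_hours, heights):
    """Least-squares fit of h(t) ~ mean + sum_i A_i*cos(w_i*t) + B_i*sin(w_i*t).

    This is the Kelvin move: postulate idealized periodic causes and tune
    their amplitudes and phases until the curve matches the observed tide.
    """
    coeffs, *_ = np.linalg.lstsq(design_matrix(t_hours), heights, rcond=None)
    return coeffs

# Fabricated "observations": two constituents plus weather-like noise,
# which is exactly the part the model deliberately gives up on.
rng = np.random.default_rng(0)
t = np.arange(0.0, 24 * 30)  # hourly readings for a month
true_tide = (2.0 + 1.1 * np.cos(SPEEDS[0] * t - 0.4)
                 + 0.4 * np.cos(SPEEDS[1] * t - 1.2))
observed = true_tide + rng.normal(0.0, 0.15, t.size)

coeffs = fit_constituents(t, observed)
tomorrow = np.arange(24 * 30, 24 * 31, dtype=float)
predicted = design_matrix(tomorrow) @ coeffs  # average tide, weather excluded
print(predicted[:6])
```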

The first mechanical tide-predicting machines using these heuristics were made in England. In 1881, one was created in the United States that was used by the Coast and Geodetic Survey for twenty-seven years.

Then, in 1914, it was replaced by a 15,000-piece machine that took “account of thirty-seven factors or components of a tide” (I wish I knew what that means) and predicted the tide at any hour. It also printed out the information rather than requiring a human to transcribe it from dials. “Unlike the human brain, this one cannot make a mistake.”

This new model was more accurate, with greater temporal resolution. But it got that way by giving up on predicting the actual tide, which might vary because of the weather. We simply accept the unpredictability of what we shall for the moment call “reality.” That’s how we manage in a world governed by uniform laws operating on unpredictably complex systems.

It is also a model that uses the known major causes of average tides — the gravitational effects of the sun and moon — but that feels fine about fictionalizing the model until it provides realistic results. This makes the model incapable of being interrogated about the actual causes of the tide, although we can tinker with it to correct inaccuracies. In this there is a very rough analogy — and some disanalogies — with some instances of machine learning.


Categories: future Tagged with: machine learning • predictions • science Date: May 7th, 2017 dw


April 19, 2017

Alien knowledge

Medium has published my long post about how our idea of knowledge is being rewritten, as machine learning proves itself, in some situations, to be more accurate than we can be, yet achieves that accuracy by “thinking” in ways that we can’t follow.

This is from the opening section:

We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.

But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.


Categories: future, philosophy Tagged with: ai • machine learning Date: April 19th, 2017 dw


January 16, 2017

The maximalist approach to removing Trump

The list of ways Trump’s term might be cut short ranges from impeachment, to the invocation of the 25th Amendment, to personal blackmail, to a Fact Ex Machina that is so awful and indisputable that it picks him up by his ill-fitting suit and kicks him into the Loser’s Suite of his new DC hotel.

But if this past year has taught us anything — and I’m open to the possibility that it has not — it’s that we are very bad at making predictions about specific events that result from complex circumstances. We can’t know if and how Trump’s term might come to an early end. For all we know, he might exit, pursued by a bear. (Hint: The bear is Russia.)

Which suggests that the most effective action ordinary janes and joes like us can take is to create the conditions that make several of those paths easier to tread.

For example:

  • Demonstrate the depth and breadth of the opposition by loyal, patriotic US citizens, to embolden Congress to oppose and remove him.

  • Extend and deepen the bonds among his opponents — emotional as well as political bonds

  • Expose as many of his lies as we can

  • Call him on his bullshit and attacks on the Constitution

  • Make heroes of his opponents, no matter what party they’re in

  • Frame him as an outsider to the American tradition and to both political parties

  • Do what we can as citizens, techies, parents, businesspeople, creators, activists, mimes — whatever is our excellence and our joy — to pursue a particular path towards Trump’s removal…and, not incidentally, to repair the damage his administration causes to our neighbors and communities.

When the future is so unknowable, we have no choice but to make it more possible.


Categories: future, politics Tagged with: trump Date: January 16th, 2017 dw


October 12, 2016

[liveblog] Perception of Moral Judgment Made by Machines

I’m at the PAPIs conference, where Edmond Awad of the MIT Media Lab is giving a talk about “Moral Machine: Perception of Moral Judgment Made by Machines.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He begins with a hypothetical in which you can swerve a car to kill one person instead of staying on course and killing five. The audience chooses to swerve, and Edmond points out that we’re utilitarians. Second hypothetical: swerve into a barrier that will kill you but save the pedestrians. Most of us say we’d want the car to swerve. Edmond points out that this is a variation of the trolley problem, except now a machine is making the decision for us.

Autonomous cars are predicted to reduce fatalities from accidents by 90%. He says his advisor’s research found that most people think a car should swerve and sacrifice the passenger, but they don’t want to buy such a car. They want everyone else to.

He connects this to the Tragedy of the Commons in which if everyone acts to maximize their good, the commons fails. In such cases, governments sometimes issue regulations. Research shows that people don’t want the government to regulate the behavior of autonomous cars, although the US Dept of Transportation is requiring manufacturers to address this question.

Edmond’s group has created the Moral Machine, a website that creates moral dilemmas for autonomous cars. There have been about two million users and 14 million responses.

Some national trends are emerging. E.g., Eastern countries tend to prefer to save passengers more than Western countries do. Now the MIT group is looking for correlations with other factors, e.g., religiousness, economics, etc. Also, what are the factors most crucial in making decisions?

They are also looking at the effect of automation levels on the assignment of blame. Toyota’s “Guardian Angel” mode results in humans being judged less harshly: that mode has a human driver but lets the car override human decisions.

Q&A

In response to a question, Edmond says that Mercedes has said that its cars will always save the passenger. He raises the possibility of the owner of such a car being held responsible for plowing into a bus full of children.

Q: The solutions in the Moral Machine seem contrived. The cars should just drive slower.

A: Yes, the point is to stimulate discussion. E.g., it doesn’t raise the possibility of swerving to avoid hitting someone who is in some way considered to be more worthy of life. [I’m rephrasing his response badly. My fault!]

Q: Have you analyzed chains of events? Does the responsibility decay the further you are from the event?

A: This very quickly gets game theoretical.


Categories: big data, future, liveblog, philosophy Tagged with: autonomous cars • morality • philosophy Date: October 12th, 2016 dw


October 11, 2016

[liveblog] Bas Nieland, Datatrics, on predicting customer behavior

At the PAPIs conference, Bas Nieland, CEO and co-founder of Datatrics, is talking about how to predict the color of shoes your customer is going to buy. The company tries to “make data science marketeer-proof for marketing teams of all sizes.” It tries to create 360-degree customer profiles by bringing together info from all the data silos.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

They use some machine learning to create these profiles. The profile includes the buying phase, the best time to present choices to a user, and the type of persuasion that will get them to take the desired action. [Yes, this makes me uncomfortable.]

It is structured around a core API that talks to MongoDB and MySQL. They provide “workbenches” that work with the customer’s data systems. They use BigML to operate on this data.

The outcomes are models that can be used to make recommendations. They use visualizations so that marketeers can understand them. But the marketeers couldn’t figure out how to use even the simplified visualizations. So they created visual decision trees. But still the marketeers couldn’t figure them out. So they turned the data into simple declarative phrases: which audience to contact, in which channel, with what content, and when. E.g.:

“To increase sales, contact your customers in the buying phase with high engagement through FB with content about jeans on sale on Thursday, around 10 o’clock.”
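
The mechanics of that last translation step are presumably just templating over the model’s output. A trivial sketch, with every field name and value invented for illustration:

```python
# Hypothetical model output for one customer segment; the field names
# and values are made up, not Datatrics's actual schema.
recommendation = {
    "goal": "increase sales",
    "audience": "customers in the buying phase with high engagement",
    "channel": "FB",
    "content": "jeans on sale",
    "day": "Thursday",
    "hour": "10 o'clock",
}

def to_marketeer_sentence(rec: dict) -> str:
    """Collapse one prediction into a single plain instruction."""
    return (
        f"To {rec['goal']}, contact your {rec['audience']} "
        f"through {rec['channel']} with content about {rec['content']} "
        f"on {rec['day']}, around {rec['hour']}."
    )

print(to_marketeer_sentence(recommendation))
# -> To increase sales, contact your customers in the buying phase with
#    high engagement through FB with content about jeans on sale on
#    Thursday, around 10 o'clock.
```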

They predict the increase in sales for each action, and quantify in dollars the size of the opportunity. They also classify responses by customer type and phase.

For a hotel chain, they connected 16,000 variables and 21M data points; BigML reduced these to 75 variables and created a predictive model that ended up getting the chain more customer conversions. E.g., if the model says someone is in the orientation phase, the Web site shows photos of recommended hotels. If in the decision phase, the user sees persuasive messages, e.g., “18 people have looked at this room today.” The messages themselves are chosen based on the customer’s profile.

Coming up: chatbot integration. It’s a “real conversation” [with a bot whose photo shows an attractive white woman who is supposedly doing the chatting].

Take-aways: Start simple. Make ML very easy to understand. Make it actionable.

Q&A

Me: Is there a way built in for a customer to let your model know that it’s gotten her wrong? E.g., stop sending me pregnancy ads because I lost the baby.

Bas: No.

Me: Is that on the roadmap?

Bas: Yes. But not on a schedule. [I’m not proud of myself for this hostile question. I have been turning into an asshole over the past few years.]


Categories: big data, business, cluetrain, future, liveblog, marketing Tagged with: ethics • personalization Date: October 11th, 2016 dw


[liveblog] First panel: Building intelligent applications with machine learning

I’m at the PAPIs conference. The opening panel is about building intelligent apps with machine learning. The panelists are all representing companies. It’s Q&A with the audience; I will not be able to keep up well.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The moderator asks one of the panelists (Snejina Zacharia from Insurify) how AI can change a heavily regulated industry such as insurance. She replies that the insurance industry gets low marks for customer satisfaction, which is an opportunity. Also, they can leverage the existing platforms and build modern APIs on top of them. Also, they can explore how to use AI in existing functions, e.g., chatbots, or systems that let users simply confirm their identification rather than enter all the data. They also let users pick from an AI-filtered list of carriers that are right for them. Also, personalization: predicting risk and adjusting the questionnaire based on the user’s responses.

Another panelist is working on mapping for a company that is not Google and that is owned by three car companies. So, when an Audi goes over a bump and then a Mercedes goes over it, both will record the same data. On personalization: it’s ripe for change. People are talking about 100B devices being connected by 2020. People think that RFID tags didn’t live up to their early hype, but 10 billion RFID tags will be sold this year. These can provide highly personalized, highly relevant data. This will be the base for the next wave of apps. We need a standards-body effort, and governments addressing privacy and security. Some standards bodies are working on it, e.g., GS1 (Global Standards 1), which manages the barcode standard.

Another panelist: Why is marketing such a good opportunity for AI and ML? Marketers used to have a specific skill set. It’s an art: writing, presenting, etc. Now they’re being challenged by tech and have to understand data. In fact, now they have to think like scientists: hypothesize, experiment, redo the hypothesis… And now marketers are responsible for revenue. Being a scientist responsible for predictable revenue is driving interest in AI and ML. This panelist’s company uses data about companies and people to segment leads, follow up on them, etc. [Wrong place for a product pitch, IMO, which is a tad ironic, isn’t it?]

Another panelist: The question is: how can we use predictive intelligence to make our applications better? Layer intelligence on top of input-programming-output. For this we need a platform that provides services and is easy to attach to existing processes.

Q: Should we develop cutting edge tech or use what Google, IBM, etc. offer?

A: It depends on whether you’re an early adopter or straggler. Regulated industries have to wait for more mature tech. But if your bread and butter is based on providing the latest and greatest, then you should use the latest tech.

A: It also depends on whether you’re doing a vertically integrated solution or something broader.

Q: What makes an app “smart”? Is it being dynamic, with rapidly changing data?

A: Marketers use personas, e.g., a handful of types. They used to be written in stone, just about. Smart apps update the personas after every campaign, every time you get new info about what’s going on in the market, etc.

Q: In B-to-C marketing, many companies have built the AI piece for advertising. Are you seeing any standardization or platforms on top of the advertising channels to manage the ads going out on them?

A: Yes, some companies focus on omni-channel marketing.

A: Companies are becoming service companies, not product companies. They no longer hand off to retailers.

A: It’s generally harder to automate non-digital channels. It’s harder to put a revenue number on, say, TV ads.


Categories: big data, future, marketing Tagged with: business • machine learning • marketing Date: October 11th, 2016 dw


[liveblog] PAPIs: Cynthia Rudin on Regulating Greed

I’m at the PAPIs (Predictive Applications and APIs) [twitter: papistotio] conference at the NERD Center in Cambridge.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The first speaker is Cynthia Rudin, Director of the Prediction Analysis Lab at MIT. Her topic is “Regulating Greed over Time: An Important Lesson for Practical Recommender Systems.” It’s about her lab’s entry in a data-mining competition. (The entry did not win.) The competition was to design a better algorithm for Yahoo’s recommendation of articles. To create an unbiased data set, Yahoo showed people random articles for two weeks. Your algorithm had to choose which article from the pool to show a user. To evaluate a recommender system, they’d check whether your algorithm recommended the same thing that was shown to the user; if the user clicked on it, that counted as an evaluation. [I don’t think I got this right.] If so, you sent your algorithm to Yahoo, and they evaluated its clickthrough rate; you never got access to Yahoo’s data.

This is, she says, a form of the multi-arm bandit problem: one arm is better (more likely to lead to a payout), but you don’t know which one. So you spend your time figuring out which arm is the best, and then you only pull that one. Yahoo and Microsoft are among the companies using multi-arm bandit systems for recommendation systems. “They’re a great alternative to massive A/B testing.” [Alternative view] [No, I don’t understand this. Not Cynthia’s fault!]

Because the team didn’t have access to Yahoo’s data, they couldn’t tune their algorithms to it. Nevertheless, they achieved a 9% clickthrough rate … and still lost (albeit by a tiny margin). Cynthia explains how they increased the efficiency of their algorithms, but it’s math, so all I can do here is play the sound of a muted trumpet. It involves “decay exploration on the old articles” and a “peak grabber”: if an article gets more than 9 clicks out of the last 100 times it’s shown, keep displaying it: if you have a good article, grab it. The dynamic version of the Peak Grabber had them continue showing a peak article as long as its clickthrough rate was 14% above the global clickthrough rate.

“We were adjusting the exploration-exploitation tradeoff based on trends.” Is this a phenomenon worth exploring? The phenomenon: you shouldn’t always explore. There are times when you should just stop and exploit the flowers.

Some data supports this. E.g., in England on Boxing Day, you should be done exploring and just put your best prices on things — not too high, not too low. When the clicks on your site are low, you should be exploring. When they’re high, maybe not. “Life has patterns.” Standard multi-arm bandit techniques don’t know about these patterns.

Her group came up with a formal way of putting this. At each time there is a known reward multiplier: G(t). G is like the number of people in the store. When G is high, you want to exploit, not explore. In the lower zones you want to balance exploration and exploitation.

So they came up with two theorems, each leading to an algorithm. [She shows the algorithms. I can’t type math notation that fast.]
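
Since the actual algorithms didn’t make it into my notes, here is only a toy sketch of the two ingredients described above: a Peak Grabber that keeps showing a recently hot article, and an exploration rate that shrinks as the reward multiplier G(t) grows. The 9-of-100 threshold comes from the talk; everything else is guesswork.

```python
import random
from collections import defaultdict, deque

class RegulatedGreedRecommender:
    """Toy epsilon-greedy bandit with a Peak Grabber and G(t)-scaled exploration."""

    def __init__(self, articles, base_epsilon=0.2):
        self.articles = list(articles)
        self.base_epsilon = base_epsilon
        self.clicks = defaultdict(int)
        self.shows = defaultdict(int)
        # Outcomes (0/1) of the last 100 displays of each article.
        self.recent = defaultdict(lambda: deque(maxlen=100))

    def _peak_article(self):
        # The talk's heuristic: more than 9 clicks in an article's last
        # 100 displays marks it as hot, so keep grabbing it.
        for a in self.articles:
            if sum(self.recent[a]) > 9:
                return a
        return None

    def choose(self, g_t):
        """Pick an article; g_t is the current reward multiplier (e.g. traffic)."""
        peak = self._peak_article()
        if peak is not None:
            return peak
        # High G(t): mostly exploit. Low G(t): explore more freely.
        epsilon = self.base_epsilon / max(g_t, 1.0)
        if random.random() < epsilon:
            return random.choice(self.articles)
        return max(self.articles,
                   key=lambda a: self.clicks[a] / max(self.shows[a], 1))

    def record(self, article, clicked):
        self.shows[article] += 1
        self.clicks[article] += int(clicked)
        self.recent[article].append(int(clicked))

# Usage: during a busy period (high G), the recommender mostly exploits.
rec = RegulatedGreedRecommender(["a1", "a2", "a3"])
choice = rec.choose(g_t=5.0)
rec.record(choice, clicked=True)
```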


Categories: big data, future, marketing Tagged with: future • marketing • math Date: October 11th, 2016 dw


September 21, 2016

[iab] Robert Scoble

I’m at an IAB conference in Toronto. The first speaker is Robert Scoble, whom I haven’t seen since the early 2000s. He’s working at Upload VR, which gives him “a front row seat on what’s coming.”

WARNING: Live blogging. Not spellpchecking before posting. Not even re-reading it. Getting things wrong, including emphasis.

The title of his talk is “The Fourth Transformation: How AR and AI change everything.”

First: The PC.

Second: Mac and GUI. Important companies in the first went away.

Third: Mobile and touch. Companies from the second went away.

We’re now getting a taste of the fourth: Virtual Reality and Augmented Reality. Kids take to VR naturally and with enthusiasm, he notes.

“Most people in the world are going to experience VR with a mobile phone because the cost advantages of doing that are immense.” This Christmas, Google will launch its Tango sensors that map the world in 3D. Early games for the Tango phone will give a taste of AR: mapping the physical space and putting virtual things into it. Robert shows what’s possible with the Tango phone. Retail 411 is working on bringing you straight to the product you want in a physical store. This tech will let us build new games, but also, for example, put a virtual blue line on a floor to show you where your meeting is. Or, in a furniture store, it can show you the items in a vision of your home.

Robert calls AR “Mixed Reality” because he thinks AR refers to the prior generation.

Vuforia was designed for mobile phones, placing virtual objects in real space. But soon we’ll be doing this with glasses, Robert says. Genesis [?] puts a virtual window on your wall. Click on it, and zombies crawl through it and come toward you.

Magic Leap got huge investments because the optics of the glasses they’re building are so good. He points out that the system knows to occlude images behind intervening real-world objects, e.g., the couch between you and the zombie.

He shows a HoloLens app preview: Dokodemo Teleportation Door, made in Unity. You place a door on the ground. Open it. There’s a polygonal world inside it. Walk through the door and you’re in it.

Robert says Apple ditched the headphone jack in order to put advanced audio computing in your head, replacing ambient sound with processed sound that may include virtual audio.

Eyefluence builds sensors for eyes. Robert shows video of someone navigating complex screens of icons solely with his eyes. “Advertisers will be able to build a new kind of billboard in the street and know who looked at it.” [Oh great.]

ActionGram puts holograms into VR. [If you need a tiny George Takei in your living room — and who doesn’t? — this is for you.]

Snapchat bought a company that puts a camera in glasses. Snapchat is going to bring out a connected camera. It could be the size of a sugar cube.

Sephora has an app that shows you what their makeup looks like on your face, color-matched.

Robert talks about the effect on sports. E.g., NASCAR already has 100+ sensors in cars. Researchers are putting sensors in NFL players’ tags for “next gen stats.”

“We’re in the Apple II stage” of this. It wasn’t great, but it kicked off trillion-dollar industries. Robert’s been told that we’re two years away, but says maybe it’s four years. “The new Ford cars are all built in virtual reality…If you don’t have a team thinking about working in this new world, you’ll be at a disadvantage soon.”

“This is the best educational technology humans have ever invented.”

This is intensely social tech, he says. You can play basketball or ski jumping with your friends over the Internet. He shows a Facebook demo. You can share things with others, things with media inside of them. E.g., go to a physical space and see it together. [Very cool demo. I think this is it:]


Categories: future, marketing Tagged with: vr Date: September 21st, 2016 dw

