Joho the Blog: interop Archives

October 28, 2017

Making medical devices interoperable

The screen next to a patient’s hospital bed that displays the heart rate, oxygen level, and other moving charts is the definition of a dumb display. How dumb is it, you ask? If the clip on a patient’s finger falls off, the display thinks the patient is no longer breathing and will sound an alarm…even though it’s displaying outputs from other sensors that show that, no, the patient isn’t about to die.

The problem, as explained by David Arney at an open house for MD PnP, is that medical devices do not share their data in open ways. That is, they don’t interoperate. MD PnP wants to fix that.

The small group was founded in 2004 as part of MIT’s CIMIT (Consortia for Improving Medicine with Innovation and Technology). Funded by grants, including from the NIH and CRICO Insurance, it currently has 6-8 people working on ways to improve health care by getting machines talking with one another.

The one aspect of hospital devices that manufacturers have generally agreed on is that they connect via serial ports. The FDA encourages this, at least in part because serial ports are electrically safe. So, David pointed to a small connector box with serial ports in and out and a small computer in between. The computer converts the incoming information into an open industry standard (ISO 11073). And now the devices can play together. (The “PnP” in the group’s name stands for “plug ‘n’ play,” as we used to say in the personal computing world.)
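The connector-box flow (a device's proprietary serial output in, a shared representation out) can be sketched in miniature. Everything below, from the field names to the vendor line format, is invented for illustration; real ISO 11073 messages are far more elaborate:

```python
# Toy sketch of the connector-box idea: read a device's proprietary
# serial-line format and normalize it into a common record that any
# downstream consumer can understand. The input format and output
# field names are invented; ISO 11073 defines the real schema.

def normalize(raw_line: str) -> dict:
    """Translate 'HR=72;SPO2=98' style vendor output into a shared schema."""
    fields = dict(pair.split("=") for pair in raw_line.strip().split(";"))
    return {
        "heart_rate_bpm": int(fields["HR"]),
        "spo2_percent": int(fields["SPO2"]),
    }

record = normalize("HR=72;SPO2=98")
```

Once every device's output passes through a translator like this, anything downstream only ever has to speak the one shared format.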

David then demonstrated what can be done once the data from multiple devices interoperate.

  • You can put some logic behind the multiple signals so that a patient’s actual condition can be assessed far more accurately: no more sirens when an oxygen sensor falls off a finger.

  • You can create displays that are more informative and easier to read — and easier to spot anomalies on — than the standard bedside monitor.

  • You can transform data into other standards, such as the HL7 format for entry into electronic medical records.

  • If there is more than one sensor monitoring a factor, you can do automatic validation of signals.

  • You can record and perhaps share alarm histories.

  • You can create what is functionally an API for the data your medical center is generating: a database that makes the information available to programs that need it via publish and subscribe.

  • You can aggregate tons of data (while following privacy protocols, of course) and use machine learning to look for unexpected correlations.
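As a rough illustration of the first bullet above, here is a toy cross-validation rule. The thresholds, parameter names, and logic are assumptions made for this sketch, not clinical guidance:

```python
# Sketch of the "logic behind multiple signals" idea: before sounding a
# desaturation alarm, check whether the other sensors corroborate it.
# All thresholds and names here are illustrative only.

def should_alarm(spo2, heart_rate, resp_rate):
    """Alarm only if a low/missing SpO2 reading is corroborated.

    A reading of None models a sensor that has fallen off or unplugged;
    a lone bad SpO2 signal with normal heart and respiratory rates is
    treated as a probable sensor fault, not a crisis.
    """
    spo2_bad = spo2 is None or spo2 < 90
    others_normal = (
        heart_rate is not None and 50 <= heart_rate <= 110
        and resp_rate is not None and 8 <= resp_rate <= 25
    )
    return spo2_bad and not others_normal

# Clip falls off the finger, but heart and breathing look fine: no siren.
assert should_alarm(spo2=None, heart_rate=72, resp_rate=14) is False
```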

MD PnP makes its stuff available under an open BSD license and publishes its projects on GitHub. This means, for example, that while PnP has created interfaces for 20-25 protocols and data standards used by device makers, you could program its connector to support another device if you need to.

Presumably not all the device manufacturers are thrilled about this. The big ones like to sell entire suites of devices to hospitals on the grounds that all those devices interoperate amongst themselves — what I like to call intraoperating. But beyond corporate greed, it’s hard to find a down side to enabling more market choice and more data integration.


March 1, 2016

[berkman] Dries Buytaert

I’m at a Berkman [twitter: BerkmanCenter] lunchtime talk (I’m moderating, actually) where Dries Buytaert is giving a talk about some important changes in the Web.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He begins by recounting his early days as the inventor of Drupal, in 2001. He’s also the founder of Acquia, one of the fastest growing tech companies in the US. It currently has 750 people working on products and services for Drupal. Drupal is used by about 3% of the billion web sites in the world.

When Drupal started, he felt he “could wrap his arms” around everything going on on the Web. Now that’s impossible, he says. E.g., Google AdWords was just starting, but now AdWords is a $65B business. The mobile Web didn’t exist. Social media didn’t yet exist. Drupal was (and is) Open Source, a concept that most people didn’t understand. “Drupal survived all of these changes in the market because we thought ahead” and then worked with the community.

“The Internet has changed dramatically” in the past decade. Big platforms have emerged. They’re starting to squeeze smaller sites out of the picture. There’s research showing that many people think Facebook is the Internet. “How can we save the open Web?” Dries asks.

What do we mean by the open or closed Web? The closed Web consists of walled gardens. But these walled gardens also do some important good things: bringing millions of people online, helping human rights and liberties, and democratizing the sharing of information. But their scale is scary. FB has 1.6B active users every month; Apple has over a billion iOS devices. Such behemoths can shape the news. They record data about our behavior, and they won’t stop until they know everything about us.

Dries shows a table of what the different big platforms know about us. “Google probably knows the most about us” because of Gmail.

The closed web is winning “because it’s easier to use.” E.g., after Dries moved from Belgium to the US, Facebook and the like made it much easier to stay in touch with his friends and family.

The open web is characterized by:

  1. Creative freedom — you could create any site you wanted and style it any way you pleased

  2. Serendipity. That’s still there, but it’s less used. “We just scroll our FB feed and that’s it.”

  3. Control — you owned your own data.

  4. Decentralized — open standards connected the pieces

Closed Web:

  1. Templates dictate your creative license

  2. Algorithms determine what you see

  3. Privacy is in question

  4. Information is siloed

The big platforms are exerting control. E.g., Twitter closed down its open API so it could control the clients that access it. FB launched “Free Basics” that controls which sites you can access. Google lets people purchase results.

There are three major trends we can’t ignore, he says.

First, there’s the “Big Reverse of the Web,” about which Dries has been blogging. “We’re in a transformational stage of the Web,” flipping it on its head. We used to go to sites and get the information we want. Now information is coming to us. “Info, products, and services will come to us at the right time on the right device.”

Second, “Data is eating the world.”

Third, “Rise of the machines.”

For example, “content will find us,” AKA “mobile or contextual information.” If your flight is cancelled, the info available to you at the airport will provide the relevant info, not offer you car rentals for when you arrive. This creates a better user experience, and “user experience always wins.”

Will the Web be open or closed? “It could go either way.” So we should be thinking about how we can build data-driven, user-centric algorithms. “How can we take back control over our data?” “How can we break the silos” and decentralize them while still offering the best user experience? “How do we compete with Google in a decentralized way? Not exactly easy.”

For this, we need more transparency about how data is captured and used, but also how the algorithms work. “We need an FDA for data and algorithms.” (He says he’s not sure about this.) “It would be good if someone could audit these algorithms,” because, for example, Google’s can affect an election. But how to do this? Maybe we need algorithms to audit the algorithms?

Second, we need to protect our data. Perhaps we should “build personal information brokers.” You unbundle FB and Google, put the data in one place, and through APIs give apps access to them. “Some organizations are experimenting with this.”

Third, decentralization and a better user experience. “For the open web to win, we need to be much easier to use.” This is where Open Source and open standards come in, for they allow us to build a “layer of tech that enables different apps to communicate, and that makes them very easy to use.” This is very tricky. E.g., how do you make it easy to leave a comment on many different sites without requiring people to log in to each?

It may look almost impossible, but global projects like Drupal can have an impact, Dries says. “We have to try. Today the Web is used by billions of people. Tomorrow by more people.” The Internet of Things will accelerate the Net’s effect. “The Net will change everything, every country, every business, every life.” So, “we have a huge responsibility to build the web that is a great foundation for all these people for decades to come.”

[Because I was moderating the discussion, I couldn’t capture it here. Sorry.]


March 28, 2013

[annotation][2b2k] Critique^It

Ashley Bradford of Critique-It describes his company’s way of keeping review and feedback engaging.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

To what extent can and should we allow classroom feedback to be available in the public sphere? The classroom is a type of Habermasian civic society. Owning one’s discourse in that environment is critical. It has to feel human if students are to learn.

So, you can embed text, audio, and video feedback in documents, video, and images. It translates docs into HTML. To make the feedback feel human, it uses informal-feeling stamps. You can also type in comments, marking them as neutral, positive, or critique. A “critique panel” follows you through the doc as you read it, so you don’t have to scroll around. It rolls up comments and stats for the student or the faculty.
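The roll-up of comments and stats could work something like the following sketch; the data shapes and tone labels here are assumptions for illustration, not Critique-It’s actual format:

```python
# Sketch of the "roll up comments and stats" idea: each comment carries
# a tone (neutral / positive / critique), and a summary view tallies
# them per document. All names and shapes here are invented.

from collections import Counter

comments = [
    {"tone": "positive", "text": "Strong opening."},
    {"tone": "critique", "text": "Cite a source here."},
    {"tone": "critique", "text": "Run-on sentence."},
]

def roll_up(comments):
    """Tally comments by tone for a student/faculty summary view."""
    return Counter(c["tone"] for c in comments)

stats = roll_up(comments)
```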

It works the same in different doc types, including Powerpoint, images, and video.

Critiques can be shared among groups. Groups can be arbitrarily defined.

It uses HTML 5. It’s written in JavaScript and PHP, and uses MySQL.

“We’re starting with an environment. We’re building out tools.” Ashley aims for Critique^It to feel very human.



[annotation][2b2k] Opencast-Matterhorn

Andy Wasklewicz and Jeff Austin from Entwine [twitter:entwinemedia] describe a multi-institutional project to build a platform-agnostic tool for enriching video through note-taking, structured annotations, and sharing. It uses HTML 5, and allows for structured tagging, time-based annotation, and more.

[annotation][2b2k] Mediathread

Jonah Bossewich and Mark Philipson from Columbia University talk about Mediathread, an open source project that makes it easy to annotate various digital sources. It’s used in many courses at Columbia, as well as around the world.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

It comes from Columbia’s Center for New Media Teaching and Learning. It began with Vital, a video library tool. It let students clip and save portions of videos, and comment on them. Mediathread connects annotations to sources by bookmarking, via a bookmarklet that interoperates with a variety of collections. The bookmarklet scrapes the metadata because “We couldn’t wait for the standards to be developed.” Once an item is in Mediathread, it embeds the metadata as well.

It has always been conceived of as a “small-group sharing and collaboration space.” It’s designed for classes. You can only see the annotations by people in your class. It does item-level annotation, as well as regions.

Mediathread connects assignments and responses, as well as other workflows. [He’s talking quickly :)]

Mediathread’s bookmarklet approach requires it to accommodate the particularities of individual sites. They are aiming at making the annotations interoperable in standard forms.


[annotation][2b2k] Phil Desenne on Harvard annotation tools

Phil Desenne begins with a brief history of annotation tools at Harvard. There are a lot, for annotating everything from texts to scrolls to music scores to video. Most of them are collaborative tools. The collaborative tooling has gone from Adobe AIR to Harvard iSites to open source HTML 5. “It’s been a wonderful experience.” It’s been picked up by groups in Mexico, South America, and Europe.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Phil works on edX. “We’re beginning to introduce annotation into edX.” It’s being used to encourage close reading. “It’s the beginning of a new way of thinking about teaching and assessing students.” Students tag the text, which “is the beginning of a semantic tagging system…Eventually we want to create a semantic ontology.”

What are the implications for the “MOOC Generation”? MOOC students are out finding information anywhere they can. They stick within a single learning management system (LMS). LMSs usually have commentary tools, “but none of them talk with one another. Even within the same LMS you don’t have cross-referencing of the content.” We should have an interoperable layer that rides on top of the LMSs.

Within edX, there are discussions within classes, courses, tutorials, etc. These should be aggregated so that the conversations can reach across the entire space and, of course, outside of it. edX is now working on annotation systems that will do this. E.g., imagine being able to discuss a particular image or fragment of a video, and being able to insert images into streams of commentary. Plus analytics of these interactions. Heatmaps of activity. And a student should be able to aggregate all her notes, journal-like, so they can be exported, saved, and commented on. “We’re talking about a persistent annotation layer with API access.” “We want to go there.”

For this we need stable repositories. They’ll use URNs.


[annotation][2b2k] Paolo Ciccarese on the Domeo annotation platform

Paolo Ciccarese begins by reminding us just how vast the scientific literature is. We can’t possibly read everything we should. But “science is social” so we rely on each other, and build on each other’s work. “Everything we do now is connected.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Today’s media do provide links, but not enough. Things are so deeply linked. “How do we keep track of it?” How do we communicate with others so that when they read the same paper they get a little bit of our mental model, and see why we found the article interesting?

Paolo’s project — Domeo [twitter:DomeoTool] — is a web app for “producing, browsing, and sharing manual and semi-automatic (structured and unstructured) annotations, using open standards.” Domeo shows you an article and lets you annotate fragments. You can attach a tag or an unstructured comment. The tag can be defined by the user or drawn from a defined ontology. Domeo doesn’t care which ontologies you use, which means you could use it for annotating recipes as well as science articles.

Domeo also enables discussions; it has a threaded messaging facility. You can also run text mining and entity recognition systems (Calais, etc.) that automatically annotate the work with the terms they find, which helps with search, understanding, and curation. This too can be a social process. Domeo lets you keep an annotation private or share it with colleagues, groups, communities, or the Web. Also, Domeo can be extended. In one example, it produces information about experiments that can be put into a database where it can be searched and linked up with other experiments and articles. Another example: “hypothesis management” lets readers add metadata to pick out the assertions and the evidence. (It uses RDF.) You can visualize the network of knowledge.
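A minimal sketch of the annotation-as-triples idea follows. The URIs and predicate names are invented for illustration; Domeo’s actual RDF vocabulary is much richer:

```python
# Rough sketch: an annotation is just a set of triples linking a
# fragment of a document to a tag and an author. All URIs and
# predicate names below are made up for illustration.

annotation = [
    ("ex:annot1", "ex:hasTarget", "ex:article42#paragraph-3"),
    ("ex:annot1", "ex:hasTag", "ex:neuron-differentiation"),  # ontology term
    ("ex:annot1", "ex:createdBy", "ex:reader7"),
]

def targets_of(triples, subject):
    """All objects of ex:hasTarget triples for a given annotation."""
    return [o for s, p, o in triples if s == subject and p == "ex:hasTarget"]
```

Because everything is triples, the same query machinery works whether the tags come from a scientific ontology or a recipe vocabulary.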

It supports open APIs for integrating with other systems, including the Neuroscience Information Framework and Drupal. “Domeo is a platform.” It aims at supporting rich sources, and will add the ability to follow authors and topics, etc., and to enable mashups.


[annotation][2b2k] Neel Smith: Scholarly annotation + Homer

Neel Smith of Holy Cross is talking about the Homer Multitext project, a “long term project to represent the transmission of the Iliad in digital form.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He shows the oldest extant manuscript of the Iliad, which includes 10th-century notes. “The medieval scribes create a wonderful hypermedia” work.

“Scholarly annotation starts with citation.” He says we have a good standard: URNs, which can point to, for example, an ISBN number. His project uses URNs to refer to texts in a FRBR-like hierarchy [works at various levels of abstraction]. These are semantically rich and machine-actionable. You can google a URN and get the object. You can put a URN into a URL for direct Web access. You can embed an image into a Web page via its URN [using a service, I believe].
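To see why such URNs are machine-actionable, here is a toy parser for a CTS-style URN. The URN below follows the general CTS shape used for citing classical texts, but treat the specific value and the parser as illustrative rather than as the project’s actual code:

```python
# Sketch: a CTS-style URN packs namespace, work, and passage into one
# citable string that software can pull apart. Illustrative only.

def parse_cts_urn(urn: str) -> dict:
    """Split a CTS-style URN into namespace, work, and passage parts."""
    parts = urn.split(":")
    assert parts[0] == "urn" and parts[1] == "cts"
    return {"namespace": parts[2], "work": parts[3], "passage": parts[4]}

ref = parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001:1.1")
```

Because the passage reference is part of the identifier itself, a service can resolve the same URN to a text fragment, an image region, or a commentary entry.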

An annotation is an association. In a scholarly annotation, it’s associated with a citable entity. [He shows some great examples of the possibilities of cross-linking and associating.]

The metadata is expressed as RDF triples. Within the Homer project, they’re inductively building up a schema of the complete graph [network of connections]. For end users, this means you can see everything associated with a particular URN. Building a facsimile browser, for example, becomes straightforward, mainly requiring the application of XSL and CSS to style it.

Another example: Mise en page: automated layout analysis. This in-progress project analyzes the layout of annotation info on the Homeric pages.


[annotations][2b2k] Rob Sanderson on annotating digitized medieval manuscripts

Rob Sanderson [twitter:@azaroth42] of Los Alamos is talking about annotating Medieval manuscripts.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He says many Medieval manuscripts are being digitized. The Mellon Foundation is funding many such projects. But these have tended to reinvent the same tech, and have not been designed for interoperability with other projects. So the Digital Medieval Initiative was founded, with a long list of prestigious partners. They thought about what they’d like: distributed, linked data, interoperable, etc. For this they need a shared description format.

The traditional approach is to annotate an image of a page. But it can be very difficult to know which images to annotate; he gives as an example a page that has fold-outs. “The naive assumption is that an image equals a page.” But there may be fragments, or only portions of the page may have been digitized (e.g., the illuminations), etc. There may be multiple images of a page, revealed by multi-spectral imaging. There may be multiple orientations of the page, etc.

The solution? The canvas paradigm. A canvas is an empty space corresponding to the rectangle (or whatever) of the page. You allow rich resources to be associated with it, and allow users to comment. For this, they use Open Annotation. You can specify a choice of images. You can associate text with an area of the canvas. There are lots of different ways to visualize those comments: overlays, side-by-side, etc.

You can build hybrid pages. For example, an old scan might have a new color scan of its illustrations pointing at it. Or you could have a recorded performance of a piece of music pointing at the musical notation.
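A minimal sketch of the canvas paradigm just described, with invented field names (SharedCanvas and Open Annotation define the real vocabulary): the canvas is an empty coordinate space, and images and comments are simply resources associated with regions of it.

```python
# Toy model of the canvas paradigm: the canvas is an abstract rectangle
# for the page; images, comments, etc. are resources attached to
# regions of it. Names and shapes here are illustrative only.

class Canvas:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.associations = []  # list of (region, kind, resource)

    def associate(self, region, kind, resource):
        """Attach a resource to an (x, y, w, h) region of the canvas."""
        self.associations.append((region, kind, resource))

    def resources_of_kind(self, kind):
        return [r for _, k, r in self.associations if k == kind]

# A hybrid page: an old grayscale scan plus a newer color scan of one
# illumination, with a comment on the same region.
page = Canvas(3000, 4000)
page.associate((0, 0, 3000, 4000), "image", "old-grayscale-scan.jpg")
page.associate((200, 350, 800, 600), "image", "color-scan-of-illumination.jpg")
page.associate((200, 350, 800, 600), "comment", "Note the gold leaf.")
```

A viewer can then choose among the associated images, or render comments as overlays or side-by-side, without ever assuming that one image equals one page.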

In summary, the SharedCanvas model uses open standards (HTML 5, Open Annotation, TEI, etc.) and can be implemented distributed across repositories, encouraging engagement by domain experts.


June 15, 2012

Interop: The podcast

My Radio Berkman interview of John Palfrey and Urs Gasser about their surprisingly wide-ranging book Interop is now up, as is the video of their Berkman book talk…

