I’m in Oslo for Kunnskapsorganisasjonsdagene, which my dear friend Google Translate tells me is Knowledge Organization Days. I have been in Oslo a few times before — yes, once in winter, which was as cold as Boston but far more usable — and am always re-delighted by it.
Alex Wright is keynoting this morning. The last time I saw him was … in Oslo. So apparently Fate has chosen this city as our Kismet. Also coincidence. Nevertheless, I always enjoy talking with Alex, as we did last night, because he is always thinking about, and doing, interesting things. He’s currently at Etsy, which is a fascinating and inspiring place to work, and is a professor of interaction design. He continues to think about the possibilities for design and organization that led him to write about Paul Otlet, who created what Alex has called an “analog search engine”: a catalog of facts expressed in millions of index cards.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
Alex begins by telling us that he began as a librarian, working as a cataloguer for six years. He has a library degree. As he works in the Net, he finds himself always drawn back to libraries. The Net’s fascination with the new brings technologists to look into the future rather than to history. Alex asks, “How do we understand the evolution of the Web and the Net in an historical context?” We tend to think of the history of the Net in terms of computer science. But that’s only part of the story.
A big part of the story takes us into the history of libraries, especially in Europe. He begins his history of hypertext with the 16th century Swiss naturalist Conrad Gessner who created a “universal bibliography” by writing each entry on a slip of paper. Leibniz used the same technique, writing notes on slips of paper and putting them in an index cabinet he had built to order.
In the 18th century, the French started using playing cards to record information. At the beginning of the 19th, the Jacquard loom used cards to guide weaving patterns, inspiring Charles Babbage to create what many [but not me] consider to be the first computer.
In 1836, Isaac Adams created the steam-powered printing press. This, along with economic and social changes, enabled the mass production of books, newspapers, and magazines. “This is when the information explosion truly started.”
To make sense of this, cataloging systems were invented. They were viewed as regimented systems that could bring efficiencies … a very industrial concept, Alex says.
“The mid-19th century was also a period of networking”: telegraph systems, telephones, internationally integrated postal systems. “Goods, people, and ideas were flowing across national borders in a way they never had before.” International journals. International political movements, such as Marxism. International congresses (conferences). People were optimistic about new political structures emerging.
Alex lists tech from the time that spread information: a daily reading of the news over copper wires, pneumatic tubes under cities (he references Molly Wright Steenson‘s great work on this), etc.
Alex now tells us about Paul Otlet, a Belgian who at the age of 15 started designing his own cataloging system. He and a partner, Henri La Fontaine, started creating bibliographies of disciplines, starting with the law. Then they began a project to create a universal bibliography.
Otlet thought libraries were focused on the wrong problem. Getting readers to the right book isn’t enough. People also need access to the information in the books. At the 1900 [?] world’s fair in Paris, Otlet and La Fontaine demonstrated their new system. They wanted to provide a universal language for expressing the connections among topics. It was not a top-down system like Dewey’s.
Within a few years, with a small staff (mainly women) they had 15 million cards in their catalog. You could buy a copy of the catalog. You could send a query by telegraphy, and get a response telegraphed back to you, for a fee.
Otlet saw this in a bigger context. He and La Fontaine created the Union of International Associations, an association of associations, as the governing body for the universal classification system. The various associations would be responsible for their discipline’s information.
Otlet met a Scotsman named Patrick Geddes who worked against specialization and the fracturing of academic disciplines. He created a camera obscura in Edinburgh so that people could see all of the city, from the royal areas and the slums, all at once. He wanted to stitch all this information together in a way that would have a social effect. [I’ve been there as a tourist and had no idea!] He also used visual forms to show the connections between topics.
Geddes created a museum, the Palais Mondial, that was organized like hypertext, bringing together topics in visually rich, engaging displays. The displays are forerunners of today’s tablet-based displays.
Another collaborator, Hendrik Christian Andersen, wanted to create a world city. He went deep into designing it. He and Otlet looked into getting land in Belgium for this. World War I put a crimp in the idea of the world joining in peace. Otlet and Andersen were early supporters of the idea of a League of Nations.
After the War, Otlet became a progressive activist, including for women’s rights. As his real-world projects lost momentum, in the 1930s he turned inward, thinking about the future. How could the new technologies of radio, television, telephone, etc., come together? (Alex shows a minute from the documentary “The Man Who Wanted to Classify the World.”) Otlet imagines a screen and television instead of books. All the books and info are in a separate facility, feeding the screen. “The radiated library and the televised book.” 1934.
So, why has no one ever heard of Otlet? In part because he worked in Belgium in the 1930s. In the 1940s, the Nazis destroyed his work. They replaced his building, destroying 70 tons of materials, with an exhibit of Nazi art.
Although there are similarities to the Web, how Otlet’s system worked was very different. His system was a much more controlled environment, with a classification system, subject experts, etc. … much more a publishing system than a bottom-up system. Linked Data and the Semantic Web are very Otlet-ish ideas. RDF triples and Otlet’s “auxiliary tables” are very similar.
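The parallel between Otlet’s system and RDF is easy to see in miniature. As a toy illustration (the subjects, predicates, and facts below are mine, chosen for the example, not drawn from Otlet’s actual tables), an RDF-style store is just a set of (subject, predicate, object) statements you can query by pattern:

```python
# A minimal triple store: every fact is a (subject, predicate, object)
# tuple, the same shape as an RDF statement. Names are illustrative only.
triples = {
    ("Mundaneum", "locatedIn", "Brussels"),
    ("Mundaneum", "foundedBy", "Paul Otlet"),
    ("Mundaneum", "foundedBy", "Henri La Fontaine"),
    ("Paul Otlet", "collaboratedWith", "Patrick Geddes"),
}

def query(s=None, p=None, o=None):
    """Return every triple matching the pattern; None is a wildcard."""
    return {(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)}

# Who founded the Mundaneum?
founders = {o for (_, _, o) in query(s="Mundaneum", p="foundedBy")}
print(sorted(founders))  # ['Henri La Fontaine', 'Paul Otlet']
```

The point of the sketch is only that a classification scheme expressed as uniform statements, whether index cards or triples, can be recombined and queried in ways a fixed hierarchy like Dewey’s cannot.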
Alex now talks about post-Otlet hypertext pioneers.
H.G. Wells’ “World Brain” essay from 1938. “The whole human memory can be, and probably in a short time will be, made accessible to every individual.” He foresaw a complete and freely available encyclopedia. He and Otlet met at a conference.
Emanuel Goldberg wanted to encode punchcard-style information on microfilm for rapid searching.
Then there’s Vannevar Bush‘s Memex that would let users create public trails between documents.
And Licklider’s idea that different types of computers should be able to share information. And Engelbart, who in 1968’s “Mother of All Demos” had a functioning hypertext system.
Ted Nelson thought computer scientists were focused on data computation rather than seeing computers as tools of connection. He invented the term “hypertext,” the Xanadu web, and “transclusion” (embedding a doc in another doc). Nelson thought that links should always be two-way. Xanadu had “intellectual property” controls built into it.
The Internet is very flat, with no central point of control. It’s self-organizing. Private corporations are much bigger on the Net than Otlet, Engelbart, and Nelson envisioned. “Our access to information is very mediated.” We don’t see the classification system. But at sites like Facebook you see transclusion, two-way linking, identity management — needs that Otlet and others identified. The Semantic Web takes an Otlet-like approach to classification, albeit perhaps by algorithms rather than experts. Likewise, Google’s “Knowledge Vault” project tries to raise the ranking of results that come from expert sources.
It’s good to look back at ideas that were left by the wayside, he concludes, having just decisively demonstrated the truth of that conclusion :)
Q: Henry James?
A: James had something of a crush on Andersen, but when he saw the plan for the World City he told him that it was a crazy idea.
I got to spend yesterday with an awesome group of about twenty people at the United Nations, brainstorming what a UN museum might look like. This was under the auspices of the UN Live project which (I believe) last week was endorsed by UN Secretary General Ban Ki-moon.
Although it was a free-ranging discussion from many points of view, there seemed to be general implicit agreement about a few points. (What the UN Live group does with this discussion is up to them, of course.)
First, there was no apparent interest in constructing a museum that takes telling the UN’s story as its focus. Rather, the discussion was entirely about ways in which the values of the UN could be furthered by enabling people to connect with one another around the world.
Second, no one even considered the possibility that it might be only a physical museum. Physical elements were part of many of the ideas, but primarily to enable online services.
Here are some of the ideas that I particularly liked, starting (how rude!) with mine.
I stole it directly from a Knight Foundation proposal by my friend Nate Hill at Chattanooga Public Library. He proposed setting up 4K displays in a few libraries that have gigabit connections, to enable local residents to interact with one another. At the meeting yesterday I suggested (crediting Nate, but probably too fast for anyone to hear me, so I’m clear, right?) that the Museum be distributed via “magic mirrors” – Net-connected video monitors – that connect citizens globally. These would go into libraries and other safe spaces where there can be facilitators. (We’re all local people, so we need help talking globally.) Where possible, there might be two screens so that people can see themselves and the group they’re talking with. (For some reason, I like the idea of the monitors being circular. More like portals.)
These magic mirrors would be a platform for activities to be invented. For example:
Kids could play together. Virtual Jenga? Keep a virtual ball afloat? (Assume Kinect-like sensors.) Collaborative virtual jigsaw puzzle of a photo of one of their home towns? Or maybe each group is working collaboratively on one puzzle, but each team’s pieces are part of the image of the other team’s home. A simple mirror imitation game where each kid mimics the other’s movements? It’s a platform, so it’d be open to far better ideas than these.
Kids could create together. Collaborative drawing? Collaborative crazy machines a la Rube Goldberg?
Real-time, video AMAs: “We’re Iranian parents. AUA [ask us anything] at 10am EDT.”
Listings for other activities, including those proposed below.
Someone suggested that the UN create pop-up museums by bringing in a shipping container stocked with media tools. (Technically, a plop-down museum, it seems to me.) The local community would be invited to tell its story, perhaps in 100 images (borrowing the British Museum’s “A History of the World in 100 Objects”), or perhaps by providing a StoryCorps-style recording booth. Or send the kids out with video cameras. (There might have to be someone who could help with the media.) The community would be able to tell its story to the world. The world could react and interact. (These containers could contain magic mirrors.)
Another idea: Facilitate local people coming together virtually to share solutions to common problems, building on the multiple and admirable efforts to do this already.
Another idea: One group pointed out that museums typically face backwards in time. So suppose the UN museum instead constructed itself in real time as significant events occurred. E.g., as an earthquake disaster unfolds, the UN Museum would track it live, presenting its consequences intimately to the world, recording it for posterity, and facilitating relief efforts.
There was general agreement, I believe, that all of the UN Museum’s content should be openly available through APIs.
There were many, many more ideas, many of which I find exciting. I don’t know if any of the ideas discussed are going to make it past the cool-way-to-spend-an-afternoon phase, but I am thrilled by the general prospect of a UN Museum that takes as its mission not just the curation of artifacts that tell a story but advancing the UN’s mission by connecting people globally around common concerns, shared interests, and a desire to help and delight one another.
Well, it’s snowing in Boston and I’m in Florence. Italy. (I’m SO sorry, Ann!) I’m here to keynote an OCLC EMEA (Europe, Middle East, Africa) conference about libraries.
After three major revisions, I believe that on Tuesday I’m going to propose thinking about libraries as community centers. But not the usual sort where local people gather, work, socialize, play, learn … all good things, for sure. In addition, I’m going to suggest that they view themselves as community centers of meaning.
I know it sounds silly, and I’m open to better phrases, but I think it’s not entirely pointless. (The idea arose in a conversation with Robert Fleming, executive director of the Emerson College library. I’m teaching a course at Emerson this semester.)
The idea is simple. It used to be that once a user checked a book out of the library, the library was out of the loop. The user read it at home, talked about it with friends or a Significant Other, maybe spent an evening with a book club discussing it. The library might be slightly in the loop if they enabled users to review or rate books, or if they have an awesome Awesome Box. But even so, the pickings were pretty slim.
Now, of course, users are likely to talk online about what they’re reading. At least as important, the library has tons of metadata that it can use to gauge how relevant an item is to its community, and even get a glimmer of what makes it relevant. Of course, much of this information is private, but there are ways to use it without violating anyone’s privacy.
If a community can be made more aware of what it’s finding meaningful and relevant, it can learn from itself, push its own boundaries, unearth new ideas, and find ever-better disagreements.
Note that I am not suggesting that libraries curate community meaning. Rather, libraries can provide services to facilitate the development of community meaning, making the community aware of itself. And this is of course an additional opportunity for librarians to contribute their own expertise at contextualizing and expanding our understanding.
Who is currently the custodian of community meaning? No one. Who is in the best position to be that custodian and facilitator? Your local library.
Alex Hodgson of ReadCube is leading a panel called “Accessing Content: New Thinking and New Business Models for Accessing Research Literature” at the Shaking It Up conference.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
Robert McGrath is from ReadCube, a platform for managing references. You import your pdfs, read them with their enhanced reader, and can annotate them and discover new content. You can click on references in the PDF and go directly to the sources. If you hit a pay wall, they provide a set of options, including a temporary “checkout” of the article for $6. Academic libraries can set up a fund to pay for such access.
Eric Hellman talks about Unglue.it. Everyone in the book supply chain wants a percentage. But free e-books break the system because there are no percentages to take. “Even libraries hate free ebooks.” So, how do you give access to Oral Literature in Africa in Africa? Unglue.it ran a campaign, raised money, and liberated it. How do you get free textbooks into circulation? Teachers don’t know what’s out there. Unglue.it is creating MARC records for these free books to make it easy for libraries to include them. The novel Zero Sum Game is a great book that the author put out under a Creative Commons license, but how do you find out that it’s available? Likewise for Barbie: A Computer Engineer, which is a legal derivative of a much worse book. Unglue.it has over 1,000 Creative Commons-licensed books in its collection. One of Unglue.it’s projects: an author pledges to make the book available for free after a revenue target has been met. [Great! A bit like the Library License project from the Harvard Library Innovation Lab.] They’re now doing Thanks for Ungluing, which aggregates free ebooks and lets you download them for free or pay the author. [Plug: John Sundman’s Biodigital is available there. You definitely should pay him for it. It’s worth it.]
Marge Avery, ex of MIT Press and now at MIT Library, says the traditional barriers to access are price, time, and format. There are projects pushing on each of these. But she mainly wants to talk about format. “What does content want to be?” Academic authors often have research that won’t fit in the book. University presses are experimenting with shorter formats (MIT Press Bits), new content (Stanford Briefs), and publishing developing, unfinished content that will become a book (U of Minnesota). Cambridge Univ Press published The History Manifesto, created start to finish in four months and available as Open Access as well as for a reasonable price; they’ve sold as many copies as free copies have been downloaded, which is great.
Jenn Farthing talks about JSTOR’s “Register and Read” program. JSTOR has 150M content accesses per year, 9,000 institutions, 2,000 archival journals, 27,000 books. Register and Read: Free limited access for everyone. Piloted with 76 journals. Up to 3 free reads over a two week period. Now there are about 1,600 journals, and 2M users who have checked out 3.5M articles. (The journals are opted in to the program by their publishers.)
Q: What have you learned in the course of these projects?
ReadCube: UI counts. Tracking onsite behavior is therefore important. Iterate and track.
Marge: It’d be good to have more metrics outside of sales. The circ of the article is what’s really of importance to the scholar.
Mendeley: Even more attention to the social relationships among the contributors and readers.
JSTOR: You can’t search for only content that’s available to you through Register and Read. We’re adding that.
Unglue.it started out as a crowdfunding platform for free books. We didn’t realize how broken the supply chain is. Putting a book on a Web site isn’t enough. If we were doing it again, we’d begin with what we’re doing now, Thanks for Ungluing, gathering all the free books we can find.
Q: How to make it easier for beginners?
Unglue.it: The publishing process is designed to prevent people from doing stuff with ebooks. That’s a big barrier to the adoption of ebooks.
ReadCube: Not every reader needs a reference manager, etc.
Q: Even beginning students need articles to interoperate.
Q: When ReadCube negotiates prices with publishers, how does it go?
ReadCube: In our pilots, we haven’t seen any decline in the PDF sales. Also, the cost per download in a site license is a different sort of thing than a $6/day cost. A site license remains the most cost-effective way of acquiring access, so what we’re doing doesn’t compete with those licenses.
Q: The problem with the pay model is that you can’t appraise the value of the article until you’ve paid. Many pay models don’t recognize that barrier.
ReadCube: All the publishers have agreed to first-page previews, often to seeing the diagrams. We also show a blurred out version of the pages that gives you a sense of the structure of the article. It remains a risk, of course.
Q: What’s your advice for large legacy publishers?
ReadCube: There’s a lot of room to explore different ways of brokering access — different potential payers, doing quick pilots, etc.
Mendeley: Make sure your revenue model is in line with your mission, as Geoff said in the opening session.
Marge: Distinguish the content from the container. People will pay for the container for convenience. People will pay for a book in Kindle format, while the content can be left open.
Mendeley: Reading a PDF is of human value, but computing across multiple articles is of emerging value. So we should be getting past the single reader business model.
JSTOR: Single article sales have not gone down because of Register and Read. They’re different users.
Unglue.it: Traditional publishers should cut their cost basis. They have fancy offices in expensive locations. They need to start thinking about how they can cut the cost of what they do.
Last night I got to give a talk at a public meeting of the Gloucester Education Foundation and the Gloucester Public School District. We talked about learning commons and libraries. It was awesome to see the way that community comports itself towards its teachers, students and librarians, and how engaged they are. Truly exceptional.
Afterwards there were comments by Richard Safier (superintendent), Deborah Kelsey (director of the Sawyer Free Library), and Samantha Whitney (librarian and teacher at the high school), and then a brief workshop at the attendees tables. The attendees included about a dozen of Samantha’s students; you can see in the liveliness of her students and the great questions they asked that Samantha is an inspiring teacher.
I came out of these conversations thinking that if my charter were to establish a “learning commons” in a school library, I’d ask what sort of learning I want to be modeled in that space. I think I’d be looking for four characteristics:
1. Students need to learn the basics (and beyond!) of online literacy: not just how to use the tools, but, more important, how to think critically in the networked age. Many schools are recognizing that, thankfully. But it’s something that probably will be done socially as often as not: “Can I trust a site?” is a question probably best asked of a network.
2. Old-school critical thinking was often thought of as learning how to sift claims so that only that which is worth believing makes it through. Those skills are of course still valuable, but on a network we are almost always left with contradictory piles of sifted beliefs. Sometimes we need to dispute those other beliefs because they are simply wrong. But on a network we also need to learn to live with difference — and to appreciate difference — more than ever. So, I would take learning to love difference to be an essential skill.
3. It kills me that most people have never clicked on a Wikipedia “Talk” page to see the discussion that resulted in the article they’re reading. If we’re going to get through this thing — life together on this planet — we’re really going to have to learn to be more meta-aware about what we read and encounter online. The old trick of authority was to erase any signs of what produced the authoritative declaration. We can’t afford that any more. We need always to be aware that what we come across resulted from humans and human processes.
4. We can’t rely on individual brains. We need brains that are networked with other brains. Those networks can be smarter than any of their individual members, but only if the participants learn how to let the group make them all smarter instead of stupider.
I am not sure how these skills can be taught — excellent educators and the communities that support them, like those I met last night, are in a better position to figure it out — but they are four skills that seem highly congruent with a networked learning commons.
A new report on Ithaka S+R‘s annual survey of libraries suggests that library directors are committed to libraries being the starting place for their users’ research, but that the users are not in agreement. This calls into question the expenditures libraries make to achieve that goal. (Hat tip to Carl Straumsheim and Peter Suber.)
The question is good. My own opinion is that libraries should let Google do what it’s good at, while they focus on what they’re good at. And libraries are very good indeed at particular ways of discovery. The goal should be to get the mix right, not to make sure that libraries are the starting point for their communities’ research.
The Ithaka S+R survey found that “The vast majority of the academic library directors…continued to agree strongly with the statement: ‘It is strategically important that my library be seen by its users as the first place they go to discover scholarly content.'” But the survey showed that only about half think that that’s happening. This gap can be taken as room for improvement, or as a sign that the aspiration is wrongheaded.
The survey confirms that many libraries have responded to this by moving to a single-search-box strategy, mimicking Google. You just type in a couple of words about what you’re looking for and it searches across every type of item and every type of system for managing those items: images, archival files, books, maps, museum artifacts, faculty biographies, syllabi, databases, biological specimens… Just like Google. That’s the dream, anyway.
Lorcan Dempsey has been outspoken in emphasizing that much of “discovery happens elsewhere” relative to the academic library, and that libraries should assume a more “inside-out” posture in which they attempt to reveal more effectively their distinctive institutional assets.
Yes. There’s no reason to think that libraries are going to be as good at indexing diverse materials as Google et al. are. So, libraries should make it easier for the search engines to do their job. Library platforms can help. So can Schema.org as a way of enriching HTML pages about library items so that the search engines can easily recognize the library item metadata.
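For instance, a catalog page could embed a schema.org Book description as JSON-LD, which the major search engines parse for structured data. The record below is entirely hypothetical (title, author, and ISBN are made up for the example); the sketch just shows the shape of the markup a library platform could generate:

```python
import json

# Build a schema.org "Book" description for a hypothetical catalog record,
# then wrap it in the JSON-LD <script> tag a catalog page would embed.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "An Example Title",              # hypothetical
    "author": {"@type": "Person", "name": "Jane Author"},  # hypothetical
    "isbn": "978-0-00-000000-0",             # placeholder, not a real ISBN
}

script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(record, indent=2)
    + "\n</script>"
)
print(script_tag)
```

A crawler that understands schema.org can now read the item’s metadata directly from the page, without the library having to build or maintain its own web-scale index.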
But assuming that libraries shouldn’t outsource all of their users’ searches, what would best serve their communities? This is especially complicated since the survey reveals that preference for the library web site vs. the open Web varies based on just about everything: institution, discipline, role, experience, and whether you’re exploring something new or keeping up with your field. This leads Roger Schonfeld, the report’s author, to provocatively ask:
While academic communities are understood as institutionally affiliated, what would it entail to think about the discovery needs of users throughout their lifecycle? And what would it mean to think about all the different search boxes and user login screens across publishes [sic] and platforms as somehow connected, rather than as now almost entirely fragmented? …Libraries might find that a less institutionally-driven approach to their discovery role would counterintuitively make their contributions more relevant.
I’m not sure I agree, in part because I’m not entirely sure what Roger is suggesting. If it’s that libraries should offer an experience that integrates all the sources scholars consult throughout the lifecycle of their projects or themselves, then, I’d be happy to see experiments, but I’m skeptical. Libraries generally have not shown themselves to be particularly adept at creating grand, innovative online user experiences. And why should they be? It’s a skill rarely exhibited anywhere on the Web.
If designing great Web experiences is not a traditional strength of research libraries, the networked expertise of their communities is. So is the library’s uncompromised commitment to serving its community’s interests. A discovery system that learns from its community can do something that Google cannot: it can find connections that the community has discerned, and it can return results that are particularly relevant to that community. (It can make those connections available to the search engines also.)
This is one of the principles behind the Stacklife project that came out of the Harvard Library Innovation Lab that until recently I co-directed. It’s one of the principles of the Harvard LibraryCloud platform that makes Stacklife possible. It’s one of the reasons I’ve been touting a technically dumb cross-library measure of usage. These are all straightforward ways to start to record and use information about the items the community has voted for with its library cards.
That is just the start. Anonymization and opt-in could provide rich sets of connections and patterns of usage. Imagine if we could know what works librarians recommend in response to questions. Imagine if we knew which works were being clustered around which topics in lib guides and syllabi. (Support the Open Syllabus Project!) Imagine if we knew which books were being put on lists by faculty and students. Imagine if we knew what books were on participating faculty members’ shelves. Imagine if we could learn which works the community thinks are awesome. Imagine if we could do this across institutions so that communities could learn from one another. Imagine if we could do this with data structures that support wildly, messily linked sources, many of them within the library but many of them outside of it. (Support Linked Data!)
Let the Googles and Bings do what they do better than any sane person could have imagined twenty years ago. Let libraries do what they have been doing better than anyone else for centuries: supporting and learning from networked communities of scholars, librarians, and students who together are a profound source of wisdom and working insight.
The idea is that libraries that want to share data about how relevant items are to their communities could algorithmically assign a number between 1 and 100 to those items. This number would present a very low risk of re-identification, would be easily compared across libraries, and would give local libraries control over how they interpret relevance.
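One minimal way such a score could work (my sketch of the idea, not a spec from any of these projects): percentile-rank each item’s raw usage within the library’s own collection, then scale the rank to 1–100. Each library decides for itself what counts as “usage”; only the scaled rank, never the raw counts, would be shared:

```python
def relevance_scores(usage):
    """Map raw per-item usage counts to a 1-100 percentile-style score.

    `usage` is {item_id: count}. Ties get the same score, and only the
    item's rank within this library's collection is exposed -- the raw
    counts, which carry re-identification risk, never leave the library.
    """
    if not usage:
        return {}
    counts = sorted(usage.values())
    n = len(counts)
    scores = {}
    for item, c in usage.items():
        # Fraction of items with usage <= this item's usage, scaled to 1-100.
        rank = sum(1 for x in counts if x <= c) / n
        scores[item] = max(1, round(rank * 100))
    return scores

checkouts = {"moby-dick": 42, "middlemarch": 17, "ulysses": 17, "rare-folio": 1}
print(relevance_scores(checkouts))
# moby-dick scores 100, the two tied titles share 75, rare-folio gets 25.
```

Because every library’s scores land on the same 1–100 scale regardless of its size or circulation volume, the numbers are directly comparable across institutions, which is the point of the proposal.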
I finally got to see the Chattanooga Library. It was even better than I’d expected. In fact, you can see the future of libraries emerging there.
That’s not to say that you can simply list what it’s doing and do the same things and declare yourself the Library of the Future. Rather, Chattanooga Library has turned itself into a platform. That’s where the future is, not in the particular programs and practices that happen to emerge from that platform.
I got to visit, albeit all too briefly, because my friend Nate Hill, assistant director of the Library, invited me to speak at the kickoff of Chattanooga Startup Week. Nate runs the fourth floor space. It had been the Library’s attic, but now has been turned into an open space lab that works in both software and hardware. The place is a pleasing shambles (still neater than my office), open to the public every afternoon. It is the sort of place that invites you to try something out — a laser cutter, the inevitable 3D printer, an arduino board … or to talk with one of the people at work there creating apps or liberating data.
The Library has a remarkable open data platform, but that’s not what makes this Library itself into a platform. It goes deeper than that.
Go down to the second floor and you’ll see the youth area under the direction/inspiration of Justin Hoenke. It’s got lots of things that kids like to do, including reading books, of course. But also playing video games, building things with Legos, trying out some cool homebrew tech (e.g., this augmented reality sandbox by 17-year-old Library innovator, Jake Brown (github)), and soon recording in audio studios. But what makes this space a platform is its visible openness to new ideas that invites the community to participate in the perpetual construction of the Library’s future.
This is physically manifested in the presence of unfinished structures, including some built by a team of high school students. What will they be used for? No one is sure yet. The presence of lumber assembled by users for purposes to be devised by users and librarians together makes clear that this is a library that one way or another is always under construction, and that that construction is a collaborative, inventive, and playful process put in place by the Library, but not entirely owned by the Library.
As conversations with the Library Director, Corinne Hill (LibraryJournal’s Librarian of the Year, 2014), and Mike Bradshaw of Colab — sort of a Chattanooga entrepreneurial ecosystem incubator — made clear, this is all about culture, not tech. Open space without a culture of innovation and collaboration is just an attic. Chattanooga has a strong community dedicated to establishing this culture. It is further along than most cities. But it’s lots of work: lots of networking, lots of patient explanations, and lots and lots of walking the walk.
The Library itself is one outstanding example. It is serving its community’s needs in part by anticipating those needs (of course), but also by letting the community discover and develop its own interests. That’s what a platform is about.