The true promise of Interactive Computing: Leveraging our Collective IQ


Interactive computing pioneer Doug Engelbart coined the term Collective IQ to inform the IT research agenda

Doug Engelbart was recognized as a great pioneer of interactive computing. He received more than a dozen awards citing his role in human-computer interaction or the invention of the mouse. But in his mind, the true promise of interactive computing and the digital revolution was a larger strategic vision of using the technology to facilitate society’s evolution. His research agenda focused on augmenting human intellect, boosting our Collective IQ, enhancing human effectiveness at addressing our toughest challenges in business and society – a strategy I have come to call Bootstrapping Brilliance.

In his mind, interactive computing was about interacting with computers and, more importantly, interacting with the people and the knowledge needed to quickly and intelligently identify problems and opportunities, do the research, develop and implement responses or solutions, integrate learnings, and iterate rapidly. It’s the interaction of people, knowledge, and tools – and how intelligently we leverage it – that produces the brilliant outcomes we all seek, the kind of outcomes that make for more brilliant organizations, and a brilliant world.

One of Doug Engelbart’s greatest disappointments was how oppositional the paradigms coming out of the personal computing, office automation, information technology, artificial intelligence, and world wide web movements became, and the resulting opportunity cost to society of not pursuing the rapid advancement of our Collective IQ at the level of a Grand Challenge.

Call for a world wide ‘open hyperdocument system’

After three decades of experience pioneering technologies specifically designed to boost Collective IQ – through intensive pilot experimentation and evolution, drawing on the experience of hundreds of end user organizations – in 1990 he published a call for a world wide Open Hyperdocument System (OHS), which he outlined in OHS Framework: Technology Template and detailed in several seminal papers and a 1995 commemorative booklet (Ref). What he outlined for the tools in 1990 is not only still relevant, but more crucial to business and society than ever, and shockingly still largely missing from prevailing technology!

Toward boosting Collective IQ

At the same time, Doug Engelbart evolved a model of Collective IQ in action – depicting the basic processes by which people work and interact, and advance their knowledge, within and across knowledge domains and organizational boundaries, individually and collectively. If we understand that, we understand how to dramatically boost our Collective IQ. It all boils down to how effectively people concurrently develop, integrate, and apply their knowledge (CoDIAK), and how effectively they capture, interact with, and manage the ongoing swirl of tangible stuff that arises – all the messages, dialog, scratch notes, commentary, meeting records, research intelligence, drafts, think pieces, memos, work products, design documents, proposals, etc. – within dynamic knowledge repositories (DKRs). Fundamentally, it’s the quality of the interactions among people and their knowledge, and of the tools employed to facilitate those interactions, that will either augment or diminish their Collective IQ. For more see DKRs and CoDIAK. This remains the best depiction of its kind, and an important reference to inform requirements.

While the OHS Framework expressly addresses the tool requirements, touching on the user system, here I will attempt to describe a bit more of what is still largely missing from the end user’s experience of working with their knowledge in the prevailing IT environment.

A New Paradigm

Designing for tomorrow’s dramatically more effective ways of thinking and working, in a climate of accelerating change, means that how we organize ourselves – our teams, networks, and initiatives, along with our knowledge and our tools – is rapidly shifting from a compartmentalized stovepipe paradigm to one that is agile, cross-functional, dynamic, interactive, networked, user-centered, open, collaborative, evolvable, and scalable, within and across organizations. See also Doug’s description of Networked Improvement Communities as a model for how tomorrow’s more agile organizations will be organizing for innovation and transformation.

A big part of the change involves dissolving boundaries, barriers, roadblocks, speed bumps, and ‘knowledge islands’, while giving the end user a seamless free flow of connectivity and interactivity throughout. See also Doug’s concise Paradigm Shift Summary in the OHS Framework: Technology Template section on End User Systems. For the big picture vision, see Doug’s Bootstrap “Paradigm Map”.

Dynamic Knowledge Ecosystems

To begin, picture the user immersed in a virtual ecosystem of interactions with people, knowledge, and tools, within and across many knowledge domains, beginning with their personal work environment, nested in the knowledge domains of their teams and networks, associations, departments, collaborations, partnerships, and so on, scaling up to the entire world wide web. We need to interact with the whole shebang through Dynamic Knowledge Ecosystems.

A Bird's-Eye View


We need visual cues to navigate our knowledge landscapes

We need more facile ways to traverse our knowledge ecosystem, as if we were flying around in the information space. We need to be able to quickly skim across the landscapes, and dive down into whatever detail suits our needs in the moment, zooming in and out as desired. In a landscape of interrelated documents, commentary, scratch notes, articles, dialog records, research intelligence, todo lists, videos, manuscripts, reports, images, calendars, source code, etc., we need to be able to glance across with a variety of visual cues about what’s there and how it all relates. We need a GPS.

When zeroing in on a particular document or subset of documents, visual cues should reveal more detail about what it is and what’s in it, with the ability to zoom in and out of its contents and/or structure at will.

Some apps do provide an outline or table of contents view of the file contents in a sidebar – such as MS Word, Google Docs outlining, and our website – or a thumbnail view, as in PowerPoint or Adobe Acrobat, but they give the end user no sense of ‘flying around’ or zooming in and out of the file, let alone of the greater knowledge ecosystem in which it resides. The Video Digest tool does give a sense of fluid navigation, by turning a video with transcripts into browsable, skimmable chapter headings with annotated thumbnails, all clickable, in a web file, which then invites other open source tools, such as for granular annotation, tagging, and direct links – watch the Intro to Video Digest.

Fine-Grained Interactivity

Once inside a document, users need to be able to point with precision to any chunk of text, video, imagery, etc., to study, highlight, underscore, comment, tag, or manipulate, as well as jump to, copy link to, bookmark, or ‘transclude’ that piece of the file directly.

Spreadsheets have always provided fine-grained addressability for goto and for specifying cells in a formula; YouTube now allows a right-click on a frame to copy a link to that place in the video; our website offers Purple Numbers to support direct linking and jumping by number. The prize, however, goes to the open source tool that lets you comment, highlight, tag, and/or create a link to any part of most files anywhere on the web. Second prize to both the open source Purple, which provides purple numbers on your webpages to enable direct linking and jumping, and Video Digest (above).
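To make fine-grained addressability a bit more concrete, here is a minimal sketch in Python (all names are hypothetical – this is not any actual Purple implementation) of assigning stable, purple-number-style identifiers to paragraphs, so that each paragraph can be linked to directly via a URL fragment:

```python
import hashlib

def add_purple_numbers(paragraphs, base_url):
    """Assign a stable id to each paragraph; return (id, link, text) triples.

    Ids are derived from a hash of the paragraph's content, so a link keeps
    pointing at the same chunk even if paragraphs are reordered -- the spirit
    of purple numbers and granular addressability.
    """
    out = []
    for text in paragraphs:
        pid = "p-" + hashlib.sha1(text.encode("utf-8")).hexdigest()[:8]
        out.append((pid, f"{base_url}#{pid}", text))
    return out

doc = ["Collective IQ in action.", "Dynamic knowledge repositories."]
for pid, link, text in add_purple_numbers(doc, "https://example.org/post"):
    print(pid, link)
```

A page generator could then emit each paragraph wrapped in an element carrying that id, making every chunk a direct link target.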


Most files out there have a static quality for the user, meaning the content is carefully crafted, formatted, arranged, and “set in stone”, severely limiting the user’s options for viewing or interacting. Sincere effort has gone into “fluid” or “responsive” styling, which is lovely, but it’s all about preserving the publisher’s carefully crafted view across various screen or window sizes; it in no way addresses the user’s need to dynamically interact with content in different ways depending on their needs in the moment. Much of our knowledge today is virtually trapped inside apps, layouts, linear formats, and knowledge islands. What users need is dynamic interaction with the media, including text, that essentially “comes alive” to enable, for example, different views of the content at the whim of the user – what we call “on the fly” view control, where the user can ask for any number of views of the content from various vantage points, in the moment. Or figures and formulae that can be interactively animated, played out, or manipulated on the spot. Dynamic media has a malleable or fluid quality, more closely mapping to how our minds think.

For example, see Bret Victor’s excellent talk on this subject, Media for Thinking the Unthinkable.
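As a rough illustration of “on the fly” view control (a toy sketch, not any shipping tool – the data shape and function names are invented here), the same structured content can be rendered in different views on demand, such as an outline skim truncated at a chosen depth, or a tag-filtered reading view:

```python
# One body of structured content, many views at the user's whim.
sections = [
    {"heading": "Vision", "level": 1, "tags": ["strategy"], "body": "Boost Collective IQ."},
    {"heading": "Tools",  "level": 2, "tags": ["ohs"],      "body": "Open Hyperdocument System."},
    {"heading": "Pilots", "level": 2, "tags": ["strategy"], "body": "Experimental expeditions."},
]

def outline_view(secs, max_level=99):
    """Headings only, indented by level and cut off at max_level -- a quick skim."""
    return [("  " * (s["level"] - 1)) + s["heading"]
            for s in secs if s["level"] <= max_level]

def tag_view(secs, tag):
    """Only the sections carrying a given tag, headings plus bodies."""
    return [f'{s["heading"]}: {s["body"]}' for s in secs if tag in s["tags"]]

print(outline_view(sections, max_level=1))  # top-level skim
print(tag_view(sections, "strategy"))       # filtered reading view
```

The point is that neither view is baked into the file; each is computed from the content on request, which is the essence of view control.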

When we scale up, a more dynamic knowledge ecosystem provides for all of the above on a meta level, and includes easy capture, organization, tagging and cataloging of the evolving knowledge repository, plus a variety of ways to easily and accurately pinpoint and access what’s in there without getting overwhelmed by the sheer volume of information.


Users need all documents – whether text, video, or images – to be porous and permeable, so we can see in from the outside with a bird’s-eye view, and jump or link directly to any arbitrary place within the document, without having to enter at the top of the file and then find or scroll our way down. (Note that by definition, ‘porous’ implies concrete, while ‘permeable’ implies membranes.)

Google does a terrific job of searching out the specific text and images we’ve asked for, but when we click on a textual hit we are invariably taken to the top of its file, which is not at all what we want or need; we must then search within the file to reach the intended hit. Video Digest, by contrast, creates a completely permeable and traversable view of the video, which you can skim over, watch as a whole, or watch selectively in no particular order, without ever ‘opening’ the video file in its native venue.

Further, the user needs to be able to reach in from outside a file, without ever actually entering it, to insert, copy, manipulate, annotate, or ‘transclude’ some part of that file. We do some of that now with Siri as go-between: “remind me at 6 to do x”, and the reminder gets entered without our having to open the app and look at the right part of our reminders. We need to be able to take similar shortcuts on our own, with or without a smart agent. Given a link to a point in a file, I should be able to ask for a copy of what’s at the other end, or ask to insert something at the other end, and so forth. I call this “flying blind”, and it’s a powerful capability for increasing efficiency once you know how to use it.
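A toy sketch of “flying blind” (the link format, store, and helper names are all hypothetical): given a granular link into a file, fetch or insert content at the far end without ever opening the file in an editor:

```python
# A store mapping file paths to lists of addressable chunks.
store = {"notes.txt": ["intro", "todo: call Ann", "summary"]}

def parse_link(link):
    """Split a granular link like 'notes.txt#1' into (path, chunk index)."""
    path, _, frag = link.partition("#")
    return path, int(frag)

def copy_from(link):
    """Fetch a copy of the chunk at the far end of the link."""
    path, i = parse_link(link)
    return store[path][i]

def insert_at(link, text):
    """Insert a new chunk just after the link target, sight unseen."""
    path, i = parse_link(link)
    store[path].insert(i + 1, text)

print(copy_from("notes.txt#1"))
```

With helpers like these, a user (or a smart agent acting for them) operates on the far end of a link directly, rather than opening the file and scrolling to the spot.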

Seamless and Ubiquitous

As previously stated, the end user needs to fly around unencumbered by boundaries or speed bumps between apps, platforms, static files, and so on, to have the right information at the right time. Not being able to locate information because it was never properly captured is unacceptable. If it happened, it should be there; our ecosystem needs to make it as easy as possible to capture, store, and tag information, and offer many avenues for later access.

Hand in hand with seamless is the secure feeling that everywhere I go, these capabilities will be there to support me ubiquitously.

Real Language

The point and click user interface has its place, but it’s akin to visiting a foreign country without learning the language – you can get by with sign language, pointing, menus and phrase books, but you’re terribly limited compared with someone who speaks the language. Similarly our depth and reach of interactivity throughout our knowledge ecosystem will be greatly enriched and expanded as we add more real language options to the UI.

Watch the Demos

You can watch demos of early prototype systems that satisfied most of the features and capabilities outlined above in a seamless, integrated environment. It is important to note that the OHS Framework specifies a variety of user interfaces to suit different users and style preferences.

Experiment through collaborative pilot expeditions

Where do you get these capabilities? I would start with the above, especially the OHS Framework: Technology Template, and begin looking at the enabling tools out there that most closely map to the desired features and user experience. Start piecing together the best tools and practices you can find that support working this way, start working this way experimentally, learn, and evolve. For the fastest progress with the least downtime for your ongoing projects, consider joining forces with your peers to set up an experimental Dynamic Knowledge Ecosystem pilot off to the side, where you can build, try, make mistakes, and learn together as a separate initiative.

I strongly recommend a Lean Startup approach: brainstorm, assemble a minimum viable product, try it out, and iterate. Start by pooling your collective intelligence to identify the best tools and practices that pass muster, and integrate them into a simple, bare bones working prototype. That’s now your expedition’s Dynamic Knowledge Ecosystem, in which you advance your knowledge collectively while facilitating your collaboration, pilot experimentation, and rapid iteration. You can each then integrate those refined learnings into enabling Dynamic Knowledge Ecosystems in your own projects and initiatives.

This seems to be the most direct and efficient path toward learning how to dramatically improve our capability for producing more brilliant outcomes, resulting in more brilliant businesses and societies, and a brilliant world.

