the dangers of philoso-gineering

Like many others, I was, and remain, inspired by the visions set out in the original Semantic Web paper, and by the nearly continual stream of research that the Semantic Web / Linked Data communities have generated, in relative isolation.  Still, from these communities there seem to be two kinds of contributions: first, ideas about what can be achieved with better, more descriptive, and, most importantly, available data; and second, specific concrete instantiations of technology meant to serve as exemplars of these ideas.

The lack of separation between the ideas of the Semantic Web / Linked Data (which I will call philosophy) and example instantiations of practice / praxis / proof of concept (which I will call, somewhat controversially, ‘engineering’) has had, and continues to have, problematic effects on the field.  The first is that people equate the engineering with the vision.  Thinking that the great ideas of the “Semantic Web” can all be realised with a simple application of RDF, a triple store, and magic sauce (a reasoner?) leads to inevitable disappointment.

Second, it dissuades exploration of alternative, potentially better solutions, of which many might exist.  When someone asks me, “is your system a linked data platform?”, they usually mean: does it store data natively in RDF (even internally), use a triple store, provide a SPARQL endpoint, use WebID for authentication, and so on?  Since when did it become a good idea for philosophy to micromanage the design of system platforms?
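To make that expectation concrete, here is a minimal sketch of the stack people usually have in mind: a handful of RDF triples held in a triple store and queried with SPARQL.  It uses Python and rdflib, and the URIs and data are invented purely for illustration; the point is what the ‘canonical’ setup looks like, not that it is the only way to realise the underlying ideas.

```python
# A tiny version of the "canonical" Linked Data stack, using rdflib.
# The URIs and data below are invented for illustration only.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()  # an in-memory triple store
g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.knows, EX.bob))
g.add((EX.bob, FOAF.name, Literal("Bob")))

# The kind of query a SPARQL endpoint would answer over this data
results = g.query("""
    PREFIX ex: <http://example.org/>
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE {
        ex:alice foaf:knows ?friend .
        ?friend foaf:name ?name .
    }
""")
for row in results:
    print(row.name)  # -> "Bob"
```

Nothing about the underlying vision requires this particular stack; the same data and queries could be carried by a property graph, a document store, or a plain relational database, which is exactly the kind of exploration that equating the philosophy with one engineering choice tends to shut down.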

The answer is: never.  Engineers become great through experience.  Like artisans in many other fields, good software designers become good by building lots of systems and failing frequently.  Much of the knowledge that goes into constructing good tools and architectures is felt and learned rather than written down, encoded as ‘implicit knowledge’: the biases and ‘feelings’ engineers develop towards particular techniques and tools.

If we treat engineers as equals, we can have conversations with them about how best to realise these visions instead of essentially telling them how to do it.  That can open the floodgates to the many kinds of representations that could be used, and to the weaknesses of each.

The difficult job of standards bodies such as the W3C

This is partly why the work of standards bodies such as the W3C is so difficult: they have to dictate, down to the protocol or language level, how things must work or be specified, often before any implementations or applications have been built.  I think the W3C is very self-aware of this tension between philosophy and practice, and has changed the way it works.  Since everything has to start with ideas, initial proposals are grounded heavily in philosophy; but as a recommendation matures, working groups actively attract practitioners to shape the spec.  The clearest example is HTML5, where the W3C stepped outside its traditional cycle and embraced the substantial industry involvement of the WHATWG.

One commonly cited problem with “letting the dogs out”, that is, unleashing engineers on a philosophical challenge, is that many competing implementations will be generated: all with similar (but different) functionality, and all incompatible with one another.  Standards bodies such as the W3C exist to make systems interoperable, so that great systems like the Web can be built.

Nevertheless, such mass-generation phases are great ways to explore large design spaces, and they are how great ideas emerge.  Without such a random-generation phase, it is unlikely that the single way of doing something proposed at the outset will be anywhere near adequate to the ambitious goals that motivated it.

What the Semantic Web / Linked Data need

What we need, with the evolution of “Semantic Web 2.0” / Linked Data 2.0 / whatever you want to call it, is a re-collection (deliberate hyphenation) of the great ideas the SW community has proposed over time, together with the high-order bits (only) of the lessons learned about how these capabilities might be implemented.  These should then be handed over to the world’s best hackers, implementors, and systems designers, to get them excited and thinking about alternate futures in which these ideas could be realised.  With those insights, we can more effectively evolve the tools and languages that have been proposed, and perhaps develop a huge suite of new ideas that nobody has thought of yet.

Second, we need more papers and research about the first kind of contribution above, the big ideas and applications, ultimately culminating in new user experiences.  In my opinion the Linked Data / Semantic Web community has produced embarrassingly few user-experience innovations, given that the whole point of the Semantic Web is a better user experience.  So: more big ideas, cool interfaces, and apps, please!

(Image courtesy of http://www.confound.com/forum/read.php?2,415)
