Saturday, March 28, 2009

"The Social Semantic Web – Where Web 2.0 Meets Web 3.0"

I attended the Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposia, 2009, at Stanford University from March 23 through March 25, 2009, and participated in the symposium “The Social Semantic Web – Where Web 2.0 Meets Web 3.0.” “Web 2.0” refers to the applications and technologies that have emerged on the Web in the last few years to enable social networking, collaboration and user-provided content. This includes sites such as Facebook and Twitter, as well as Web logs and wikis. “Web 3.0” is more or less synonymous with the notion of the Semantic Web, where structured metadata associated with Web content can be used for reasoning and inference. The idea of the Semantic Web goes back to a 2001 paper in Scientific American by Tim Berners-Lee, Jim Hendler and Ora Lassila. They described a world where agent-based applications use semantics-based metadata on the Web to reason, infer and present choices for people as they go about their daily activities. Much of the technology for enabling this vision is based on the principles of logic programming paired with Web-centric technology such as XML-based metadata.


The symposium was organized by Li Ding and Jie Bao of Rensselaer Polytechnic Institute and Mark Greaves of Vulcan, Inc. Li Ding opened the discussion by describing how Semantic Web technologies may be poised to increase the range and effectiveness of Web 2.0 tools for information retrieval, social networking and collaboration. We spent the next two and a half days discussing examples of this technology and the issues its use introduces into how people interact with the Web.


A number of applications were described that bridge the gap between collaborative technology and semantics. Twine, developed by Radar Networks, is a site that allows users to group links into what are called twines. A twine is a group of sites that are topically related. Tags are generated when a site is added to a twine, and domain ontologies are used to link different twines together and recommend other twines that may interest a user. Radar Networks' CEO, Nova Spivack, gave the first presentation. Twine looks like a very useful application; it is somewhat similar to del.icio.us in concept, but with explicit semantics.


Denny Vrandecic of Institut AIFB, Karlsruhe, Germany described ongoing work on Semantic MediaWiki (SMW), an extension of MediaWiki that allows for semantic annotation of wiki data. Vrandecic is one of the original developers of Semantic MediaWiki and spoke about adding automated quality checks to the application.
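To give a feel for what semantic annotation of wiki data means, here is a minimal sketch of how Semantic MediaWiki's inline `[[property::value]]` annotations reduce to subject-property-value triples. The page text and property names below are invented for the example; only the annotation syntax is SMW's.

```python
import re

# A wiki page with Semantic MediaWiki-style inline annotations.
# The [[property::value]] syntax is SMW's; this page text is made up.
page_title = "Karlsruhe"
page_text = (
    "Karlsruhe is located in [[located in::Germany]] "
    "and has a population of [[population::300000]]."
)

# Each annotation becomes a (subject, property, value) triple,
# with the page title as the subject.
ANNOTATION = re.compile(r"\[\[([^:\]]+)::([^\]]+)\]\]")

triples = [(page_title, prop, value)
           for prop, value in ANNOTATION.findall(page_text)]
print(triples)
# [('Karlsruhe', 'located in', 'Germany'), ('Karlsruhe', 'population', '300000')]
```

Once the annotations are triples, the wiki's content can be queried and reasoned over like any other structured data, which is what the applications below build on.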

Semantic MediaWiki was the basis for a number of other applications discussed at the symposium. One was Metavid.org, an “open video archive of the US Congress.” Metavid.org captures video and closed captioning of Congressional proceedings, and Semantic MediaWiki's extensions allow for categorical searches of recorded speeches.


The Halo Project, funded by Paul Allen’s Vulcan Inc. and sponsored by Mark Greaves, has developed extensions to Semantic MediaWiki that go a long way toward showing the power of embedding semantics in applications. The work was done by Ontoprise and they have produced a video of its features that is worth viewing.


Some of the applications discussed provide collaborative, distributed environments for authoring ontologies. Tania Tudorache of the Stanford Center for Biomedical Informatics Research described Collaborative Protégé, which extends the Protégé ontology development environment to support “collaborative ontology editing as well as annotation of both ontology components and ontology changes.” Natasha Noy, also one of the prime movers behind Protégé, presented BioPortal, a repository of biomedical ontologies that allows users to critique posted ontologies, collaborate on ontology development, and submit mappings between ontologies. The codebase behind BioPortal also supports the Open Ontology Repository (OOR), a domain-independent repository of ontologies. Nova Spivack of Radar Networks also mentioned a new site they plan to stand up called Tripleforge, which, like SourceForge, will support open source development of ontologies.


In regard to architecting systems that use semantics to leverage Web 2.0 features, a number of approaches kept coming up. Ontologies for describing users' tagging behavior were mentioned by several presenters. These capture the relationships between taggers (two users who tag the same site with the same or similar tags) and the temporal dimension of tagging (“who tagged what tag when?”). Another common thread was defining a semantic layer to describe the syntactic or functional layers of a system. Hans-Georg Fill of the University of Vienna described a model-based approach for developing “Semantic Information Systems” using tools that define just such a layered architecture.
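The “who tagged what tag when” idea can be sketched concretely. Below, tagging events are simple tuples, and two users who apply the same tag to the same resource are treated as related taggers. The event data and URLs are invented for the illustration; real tagging ontologies model this as RDF.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical tagging events: (tagger, resource, tag, timestamp).
# This mirrors the "who tagged what tag when" structure discussed above.
events = [
    ("alice", "http://example.org/a", "semweb", datetime(2009, 3, 1)),
    ("bob",   "http://example.org/a", "semweb", datetime(2009, 3, 2)),
    ("carol", "http://example.org/b", "music",  datetime(2009, 3, 2)),
]

# Group taggers by (resource, tag); users who share a group are
# related through their tagging behavior.
groups = defaultdict(set)
for tagger, resource, tag, when in events:
    groups[(resource, tag)].add(tagger)

related = {pair: users for pair, users in groups.items() if len(users) > 1}
print(related)  # alice and bob both tagged the same resource "semweb"
```

Keeping the timestamp on each event is what makes the temporal questions answerable, e.g. who first applied a tag that later became popular.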


Some other applications described at the symposium use existing collaborative technology, such as Wikipedia, to jumpstart Semantic Web applications. Tim Finin described an approach that he and his colleagues at the University of Maryland, Baltimore County developed that treats Wikipedia as an ontology; they call it Wikitology. They assert that Wikipedia represents a “consensus view” of topics arrived at via a “social process.” They use the categories already defined in Wikipedia, along with the links between articles, to discover the concepts, and the relationships between concepts, that describe article topics. A similar approach was described by Maria Grineva, Maxim Grinev and Dmitry Lizorkin of the Russian Academy of Sciences, who used Wikipedia as a knowledge base to discover semantically related key terms for documents. In another paper, Jeremy Witmer and Jugal Kalita of the University of Colorado, Colorado Springs used a named entity recognizer to tag locations in Wikipedia articles and machine learning techniques to extract geospatial relations from the articles. They posit that the disambiguated locations and extracted relations could then be used to add semantic, geospatial annotations to the articles to aid search or to create map-based mashups of Wikipedia data.
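A toy version of the Wikipedia-as-ontology idea: treat each article's category assignments as class memberships, and treat articles that share a category as related concepts. The article and category names below are made up for the illustration; this is a simplification of what systems like Wikitology do, not their actual method.

```python
# Invented article-to-category data, standing in for Wikipedia's
# community-maintained category assignments.
categories = {
    "Semantic Web": {"World Wide Web", "Knowledge representation"},
    "RDF": {"World Wide Web", "Metadata"},
    "Jazz": {"Music genres"},
}

def related_articles(article, catalog):
    """Return articles sharing at least one category with `article`."""
    mine = catalog[article]
    return {other for other, cats in catalog.items()
            if other != article and mine & cats}

print(related_articles("Semantic Web", categories))  # {'RDF'}
```

Adding inter-article links alongside shared categories, as the UMBC work does, gives a much richer relation structure than this category overlap alone.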


Our team presented a paper describing how the location of bloggers can be inferred from location entity mentions in their blog posts. We described an experiment in which we correctly geolocated 61% of blogs in a test set of ~800 blogs with known locations. While our work was somewhat tangential to the Semantic Web, it is a demonstration of the “inference problem,” where information not stated directly can be inferred from other available information. This raises privacy issues, given the explosion in the use of social networking sites such as Facebook and the proliferation of personal Web logs. Three other papers presented at the symposium addressed privacy and access control. Mary-Anne Williams of the University of Technology, Sydney, Australia gave an excellent overview of privacy as it relates to Web-based business. Paul Groth of the University of Southern California discussed privacy obligation policies and described how the users of a social networking site might use them to control access to their personal data from outside the site. Ching-man Au Yeung, Lalana Kagal, Nicholas Gibbins, and Nigel Shadbolt of the University of Southampton and MIT described a method for controlling access to photos on Flickr based on how the photos are tagged, using a tagging ontology, FOAF, OpenID authentication and the AIR policy language.
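To make the inference problem concrete, here is a deliberately naive sketch of location inference: collect place-name mentions across a blogger's posts and take the most frequent one as the inferred home location. This illustrates the idea only; it is not the method from our paper, and the gazetteer and posts are made up.

```python
from collections import Counter

# A tiny made-up gazetteer of recognizable place names.
GAZETTEER = {"Baltimore", "Stanford", "Boston"}

# Invented blog posts from one hypothetical blogger.
posts = [
    "Flew out of Baltimore for the symposium at Stanford.",
    "Back home in Baltimore after a week in California.",
]

# Count gazetteer hits across all posts; the most frequent place
# becomes the inferred location.
mentions = Counter(
    word.strip(".,") for post in posts for word in post.split()
    if word.strip(".,") in GAZETTEER
)
print(mentions.most_common(1))  # [('Baltimore', 2)]
```

Even something this crude shows why aggregated public posts can reveal facts a user never stated outright, which is exactly what makes the privacy discussion below necessary.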


Panels presented during the symposium addressed some cross-cutting issues for Web 2.0 and Semantic Web applications: usability, scale and privacy. On the 25th, the panel included Steve White of Radar Networks; Denny Vrandecic; Natasha Noy; Jamie Taylor, Minister of Information for Metaweb, the home of Freebase (an excellent open collaborative database); and Jeff Pollock of Oracle, author of the recently published “The Semantic Web for Dummies.” This panel was dedicated to the topic of usability, but also addressed the issue of scale. All agreed that usability issues on the Semantic Web are the same as with Web 1.0 and 2.0: simple is better, hide confusing bits like RDF and OWL tags, and so on. Noy made the point, however, that there are different classes of users for semantic applications on the Web, such as the users of BioPortal versus those actually involved in ontology development. A lot of time was spent talking about users of applications such as Excel, and how even a killer application like the Semantic Web can be overtaken by simple, inelegant solutions.

The issue of scale came down to how Semantic Web applications will handle billions of triples, and the difficulty of doing anything more than simple reasoning over such large amounts of data. Taylor described the power law phenomenon where some entities are overloaded with properties while most have only a few; this suggests the need for smart partitioning of resources based on their semantics. As for the scalability of reasoning, full RDFS or OWL reasoning is probably too expensive, at least for large amounts of data. Though, as one participant put it, “a little bit of semantics” goes a long way, so basic relations such as subsumption and transitivity may be all that is required for most reasoning.
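The “little bit of semantics” point can be illustrated with the kind of cheap inference the panel had in mind: computing the transitive closure of subclass (subsumption) links, which is far less expensive than full OWL reasoning. The class hierarchy below is a made-up example.

```python
# A tiny made-up class hierarchy, expressed as subClassOf pairs.
sub_class_of = {
    ("Dachshund", "Dog"),
    ("Dog", "Mammal"),
    ("Mammal", "Animal"),
}

def transitive_closure(pairs):
    """Repeatedly join (a, b) and (b, c) into (a, c) until fixpoint."""
    closure = set(pairs)
    while True:
        new = {(a, d) for a, b in closure for c, d in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

closure = transitive_closure(sub_class_of)
print(("Dachshund", "Animal") in closure)  # True
```

This naive fixpoint is quadratic per pass, so at billions of triples even this needs the smart partitioning the panel discussed, but the operation itself stays simple: no description-logic classifier required.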


The next day’s panel included Paul Groth, Denny Vrandecic, Tim Finin and Rajesh Balakrishnan, and touched on issues of privacy and trust. One conclusion of this discussion was that the structured metadata that comes with the Semantic Web, along with the ability to reason over the data – albeit, probably in small bites – will only multiply the inference problem. There was no real consensus on what can be done about that.


This symposium did a great job of framing how social computing and semantics are quickly coming together. There was quite a bit of excitement about Twine and the success of Semantic MediaWiki. There was no clear consensus on whether this technology will revolutionize the user experience or just provide enabling technology to intelligently link applications and make current functionality such as search more effective. For developers, however, there is a whole new universe of challenges here.

3 comments:

Unknown said...

Wow, thanks for the great summary Clay!

Kelley said...

Thought food - I will definitely explore the links in the weeks ahead. I kept hoping to see mention of hardware integration into some of the ideas. Which reminded me of this TED Talk by Pattie Maes of MIT, have you seen it? http://www.ted.com/index.php/talks/pattie_maes_demos_the_sixth_sense.html

Tim Finin said...

very comprehensive, thanks! It helps fill in some of the gaps I missed when I dropped in on another symposium.

 
Creative Commons License
finegameofnil by Clay Fink is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.