finegameofnil, by Clay Fink<br /><br />Hadoopification - Phase 1 (November 13, 2009)<br /><br />We are interested in processing large amounts of text and doing non-trivial things with it. For example, we want to do part-of-speech tagging, entity tagging, parsing and extensive featurization of text from large text corpora. These sorts of operations can be both memory and CPU intensive. We can't wait a week for a job running on a standalone machine to finish processing multiple gigabytes of text. We also don't have easy access to a dedicated cluster of our own. The best option for us is to push the data up to a <a href="http://en.wikipedia.org/wiki/Cloud_computing">cloud</a> and to use the <a href="http://labs.google.com/papers/mapreduce.html">map/reduce paradigm</a> to parallelize the processing. The Apache Hadoop project provides the framework for doing this, and <a href="http://aws.amazon.com/elasticmapreduce/">Amazon's Elastic MapReduce</a> makes it easy and cheap. We have some preliminary benchmarks, based on a small set of sample text data, that demonstrate the value of this approach.<br /><br />Our task was to run a part-of-speech tagger on sentences extracted from blogs. We implemented our map/reduce in Java. The mapper class used the <a href="http://nlp.stanford.edu/software/tagger.shtml">Stanford POS tagger</a> to get parts of speech for each word in a sentence, with each sentence assigned a key consisting of a unique blog post id with the sentence's relative position in the document as a suffix.
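The composite-key scheme just described can be sketched in plain Java. This is only an illustration of the keying convention; the class and method names are ours, not from the actual job:

```java
// Sketch of the composite-key scheme: each sentence is keyed by its
// blog post id plus its zero-based position in the document, joined
// with an underscore (e.g. "2272096_0" for the first sentence).
public class SentenceKeyer {
    /** Builds a key like "2272096_0" for sentence 0 of post 2272096. */
    public static String makeKey(String postId, int sentenceIndex) {
        return postId + "_" + sentenceIndex;
    }

    public static void main(String[] args) {
        String[] sentences = {
            "For the most part, the traditional news outlets lead and the blogs follow.",
            "Researchers at Cornell studied the news cycle."
        };
        for (int i = 0; i < sentences.length; i++) {
            // In the real job, the mapper would emit this key/value pair.
            System.out.println(makeKey("2272096", i) + "\t" + sentences[i]);
        }
    }
}
```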
The reducer just wrote the results of the tagging of each sentence to a file with the key.<br /><br /><span style="font-weight: bold;">Example Input - Raw sentences </span><br /><br /><div style="border: 2px solid black; overflow: scroll; height: 150px; width: 400px; font-family: Georgia,Garamond,Serif; font-style: normal; font-variant: normal; font-weight: normal; font-size: 12px; line-height: normal; font-size-adjust: none; font-stretch: normal;"><br /><pre>2272096_0 For the most part, the traditional news outlets lead and the blogs follow, typically by 2.<br />2272096_1 5 hours, according to a new computer analysis of news articles and commentary on the Web during the last three months of the 2008 presidential campaign.<br />2272096_2 Skip to next paragraph Multimedia Graphic Picturing the News Cycle The finding was one of several in a study that Internet experts say is the first time the Web has been used to track — and try to measure — the news cycle, the process by which information becomes news, competes for attention and fades.<br />2272096_3 Researchers at Cornell, using powerful computers and clever algorithms, studied the news cycle by looking for repeated phrases and tracking their appearances on 1.<br />2272096_4 6 million mainstream media sites and blogs.<br />2272096_5 Some 90 million articles and blog posts, which appeared from August through October, were scrutinized with their phrase-finding software.<br />2272096_6 Frequently repeated short phrases, according to the researchers, are the equivalent of “genetic signatures??<br />2272096_7 for ideas, or memes, and story lines.<br />2272096_8 The biggest text-snippet surge in the study was generated by “lipstick on a pig.</pre><br /></div><br /><br /><span style="font-weight: bold;">Example Output - Tagged Sentences</span><br /><br /><div style="border: 2px solid black; overflow: scroll; height: 150px; width: 400px; font-family: Georgia,Garamond,Serif; font-style: normal; font-variant: normal; 
font-weight: normal; font-size: 12px; line-height: normal; font-size-adjust: none; font-stretch: normal;"><br /><pre>2272096_0 For/IN the/DT most/JJS part,/VBP the/DT traditional/JJ news/NN outlets/NNS lead/VBP and/CC the/DT blogs/NNS follow,/VBP typically/RB by/IN 2./CD<br />2272096_1 5/CD hours,/NN according/VBG to/TO a/DT new/JJ computer/NN analysis/NN of/IN news/NN articles/NNS and/CC commentary/NN on/IN the/DT Web/NNP during/IN the/DT last/JJ three/CD months/NNS of/IN the/DT 2008/CD presidential/JJ campaign./NN<br />2272096_2 Skip/VB to/TO next/JJ paragraph/NN Multimedia/NNP Graphic/NNP Picturing/VBG the/DT News/NN Cycle/NN The/DT finding/NN was/VBD one/CD of/IN several/JJ in/IN a/DT study/NN that/IN Internet/NNP experts/NNS say/VBP is/VBZ the/DT first/JJ time/NN the/DT Web/NNP has/VBZ been/VBN used/VBN to/TO track/VB �/NN and/CC try/VB to/TO measure/VB �/SYM the/DT news/NN cycle,/VBD the/DT process/NN by/IN which/WDT information/NN becomes/VBZ news,/NN competes/VBZ for/IN attention/NN and/CC fades./NN<br />2272096_3 Researchers/NNS at/IN Cornell,/NNP using/VBG powerful/JJ computers/NNS and/CC clever/JJ algorithms,/NN studied/VBD the/DT news/NN cycle/NN by/IN looking/VBG for/IN repeated/VBN phrases/NNS and/CC tracking/VBG their/PRP$ appearances/NNS on/IN 1./CD<br />2272096_4 6/CD million/CD mainstream/NN media/NNS sites/NNS and/CC blogs./VB<br />2272096_5 Some/DT 90/CD million/CD articles/NNS and/CC blog/NN posts,/VBP which/WDT appeared/VBD from/IN August/NNP through/IN October,/NNP were/VBD scrutinized/VBN with/IN their/PRP$ phrase-finding/JJ software./NN<br />2272096_6 Frequently/RB repeated/VBN short/JJ phrases,/NN according/VBG to/TO the/DT researchers,/NN are/VBP the/DT equivalent/NN of/IN �genetic/JJ signatures??/NN<br />2272096_7 for/IN ideas,/NN or/CC memes,/NN and/CC story/NN lines./NN<br />2272096_8 The/DT biggest/JJS text-snippet/NN surge/NN in/IN the/DT study/NN was/VBD generated/VBN by/IN �lipstick/CD on/IN a/DT pig./NN</pre><br /></div><br /><br 
/>The example text was from the post <a href="http://desirableroastedcoffee.com/2009/08/study-measures-the-chatter-of-the-news-cycle.html">"Study Measures the Chatter of the News Cycle"</a> by <a href="http://desirableroastedcoffee.com/allan-jenkins">Allan Jenkins</a>.<br /><br />The results here are based on 40K sentences. This is not a lot of data. The blog corpus these sentences were taken from, as of October 6, 2008, had 450K posts and 4.9M sentences. In truth, that's not even a lot of data. All that said, the results are shown below. Note that an Elastic MapReduce small instance is $0.015 an hour and a medium instance is $0.03 an hour. Charges are rounded up, so you pay by the hour.<br /><br />1 Small Instance - 97 minutes. Cost, $0.03<br />4 Small Instances - 39 minutes. Cost, $0.06<br />10 Small Instances - 16 minutes. Cost, $0.15<br />20 Small Instances - 9 minutes. Cost, $0.30<br />20 Medium Instances - 4 minutes. Cost, $1.20<br /><br />Assuming this scales linearly - and I'm not sure that's a totally safe assumption - 5M sentences on 20 medium instances will take roughly 8 hours, 33 minutes and cost $150.00. Given how expensive time is, being able to get processing results in a day that would otherwise take multiple days or weeks is a real plus.<br /><br />We will keep you up to date as to how this all works out. So far, the results are encouraging.<br /><br />"Meme-tracking" (October 14, 2009)<br /><br />Jure Leskovec, Lars Backstrom and Jon Kleinberg published the paper <a href="http://memetracker.org/quotes-kdd09.pdf">"Meme-tracking and the Dynamics of the News Cycle"</a> earlier this year. Their work revolves around how <a href="http://en.wikipedia.org/wiki/Meme">memes</a> flow through online news sources and social media.
The "meme" is a notion suggested by biologist Richard Dawkins that describes a basic unit of thought - an idea - as it is transmitted through human culture and is modified in ways similar to how genes change over time in biological systems. Leskovec et al. base their work on 90 million news stories and blog posts collected during the three months prior to the 2008 US presidential election. They extract memes by looking at short text phrases, and variations of those phrases, that appear with significant volume across news stories and blog posts. The most significant phrases from August through October, 2008 are shown in this visualization using the <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.39.2977&rep=rep1&type=pdf">"ThemeRiver"</a> technique:<br /><br /><br /><div style="border: 2px solid black; overflow: scroll; height: 350px; width: 400px; font-family: Georgia,Garamond,Serif; font-style: normal; font-variant: normal; font-weight: normal; font-size: 12px; line-height: normal; font-size-adjust: none; font-stretch: normal;"><br /><a href="http://memetracker.org/images/news-cycle-image2.png"><img style="border: 0px none ; margin: 0pt 10px 10px 0pt; float: left; cursor: pointer; width: 640px; height: 302px;" src="http://memetracker.org/images/news-cycle-image2.png" alt="" /></a><br /></div><br /><br /><b>Most mentioned phrases during the 2008 U.S. presidential campaign</b><br /><br /><br />The most significant meme found during this period was associated with then-candidate Barack Obama, when he compared John McCain's policies to those of President George W. Bush by saying, <a href="http://www.huffingtonpost.com/2008/09/09/obama-lipstick-on-a-pig-v_n_125253.html">"But you know, you can -- you know, you can put lipstick on a pig; it's still a pig."</a> They found some characteristic behavior for this and other memes.
In the eight hours around the peak volume of a meme, the volume increases and decreases exponentially with time, with the decrease being somewhat slower. They also found that the peak volume in the blogosphere lags behind that of online media by about 2.5 hours. Another interesting phenomenon was how the media volume shows an additional peak after the blogs get hold of a story, and then another peak in blog interest as the story bounces back into the blogosphere. They were also able to detect a small number of memes - 3.5% - that originated in the blogosphere and then spread to the online media.<br /><br />There's much that's important about this paper. Their methodology is impressive and they show how you can work with this large volume of messy data and use scalable approaches - in this case, a relatively simple graph partitioning heuristic - to get interesting results. They also know the right questions to ask of the data. Based on what they saw in the ThemeRiver visualization, they developed a simple model of the news cycle, based on the volume and recency of news stories, that basically duplicates the phenomenon captured in the visualization.<br /><br />Beyond these sorts of geeky accomplishments, the fact is that they demonstrated how you can extract open source, online data and do quantitative analysis on cultural phenomena such as the news cycle. Analysis at this scale was not possible before the advent of the Web. The tools we have today, such as the <a href="http://labs.google.com/papers/mapreduce.html">map/reduce methodology</a> and cloud computing, along with being able to build on years of work in NLP, graph theory and machine learning, make working with enormous amounts of text possible. What was previously the province of more qualitative disciplines such as Political Science, Sociology - not dissing you quantitative Soc guys - and Journalism, can now be integrated with more quantitative disciplines.
They say it well in the paper:<br /><br /><blockquote><div>Moving beyond qualitative analysis has proven difficult here, and<br />the intriguing assertions in the social-science work on this topic<br />form a significant part of the motivation for our current approach.<br />Specifically, the discussions in this area had largely left open the<br />question of whether the “news cycle” is primarily a metaphorical<br />construct that describes our perceptions of the news, or whether<br />it is something that one could actually observe and measure. We<br />show that by tracking essentially all news stories at the right level<br />of granularity, it is indeed possible to build structures that closely<br />match our intuitive picture of the news cycle, making it possible to<br />begin a more formal and quantitative study of its basic properties.<br /></div></blockquote><br /><br />Check out the <a href="http://memetracker.org/">Meme-tracker site</a> when you get a chance.<br /><br />Hadoop World 2009 (October 6, 2009)<br /><br /><p><a href="http://hadoop.apache.org/">Hadoop</a> is a Java framework for implementing the Map/Reduce programming model described in the <a href="http://www.usenix.org/publications/library/proceedings/osdi04/tech/full_papers/dean/dean_html/">2004 paper</a> by Jeffrey Dean and Sanjay Ghemawat of Google. Map/Reduce allows you to parallelize a large task by breaking it into small <i style="">map</i> functions that perform a transformation on a key/value pair, and combining the results of the mappings using a <i style="">reduce</i> function.
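The map/shuffle/reduce flow just described can be illustrated with the classic word-count example in plain, single-machine Java (no Hadoop involved; the class and method names here are our own illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy, in-memory illustration of the map/reduce flow: map each line to
// (word, 1) pairs, group the pairs by key, then reduce each group by
// summing its counts. Hadoop does the same thing across many machines.
public class WordCountSketch {
    public static Map<String, Integer> wordCount(List<String> lines) {
        // "Map" phase: emit a (word, 1) pair for every word in every line.
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\W+")) {
                if (!word.isEmpty()) {
                    pairs.add(Map.entry(word, 1));
                }
            }
        }
        // "Shuffle" + "reduce" phase: group the pairs by key and sum values.
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(wordCount(List.of("the blogs follow", "the news leads")));
    }
}
```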
Using a cluster of nodes - either your own or those in a computing cloud like Amazon’s EC2 - you can do large scale parallel processing.</p><br /><p>I attended the Hadoop World conference in NYC on October 1, 2009 and came back very excited about this technology. We have had an interest in exploring Map/Reduce to support our work analyzing blog text, but have not until now had the resources to devote to it. After getting a feel for what this technology makes possible, we are ready to dive in. A copy of Tom White’s <a href="http://oreilly.com/catalog/9780596521981">“Hadoop: The Definitive Guide”</a> was provided, so I am on my way.</p><br /><p>The message I took away from this conference is that with the vast amount of data available today, from genomic and other biological data to data in online social networks, we can use new computational tools to help us better understand people at the micro and macro levels. We need processing capabilities that scale to the petabyte level to do this, however, and technologies like Hadoop, coupled with cloud computing, are one way to approach this problem. The notion that “more data beats better algorithms” may or may not hold in all cases, but “big data” is here and we now have usable and relatively cheap ways to process it.</p><br /><p>Hadoop grew out of the Lucene/Nutch community and became an official Apache project in 2006. Yahoo! soon afterward adopted it for their Web crawls. Since then a number of subprojects have started up under Hadoop, allowing for querying, analysis and data management.</p><br /><p><a href="http://www.cloudera.com/">Cloudera</a> (“Cloud-era”, get it?) was the major organizer of Hadoop World, and Christophe Bisciglia, one of the principals of Cloudera, started off by giving us a review of the history of Hadoop and describing a number of the subprojects it has spawned.
Cloudera’s business is based on providing Linux packages for deploying Hadoop on private servers, and maintaining Amazon EC2 instances for use in the cloud. Cloudera has also introduced a browser-based GUI for managing Hadoop clusters: the <a href="http://www.cloudera.com/desktop">Cloudera Desktop</a>.</p><br /><p>Peter Sirota, the manager of Amazon’s Elastic MapReduce, was up next and discussed <a href="http://hadoop.apache.org/pig/">Pig</a>, an Apache/Hadoop subproject that provides a high-level data analysis layer for Map/Reduce, and <a href="http://www.karmasphere.com/">Karmasphere</a>, a NetBeans-based GUI for managing EC2 instances and Map/Reduce jobs.</p><br /><p>Eric Baldeschwieler of Yahoo! described their use of Hadoop. Yahoo!, he said, is the “engine room” of Hadoop development and is the largest contributor to the open source project. They use Hadoop to <a href="http://developer.yahoo.net/blogs/hadoop/2008/02/yahoo-worlds-largest-production-hadoop.html">process the data for the Yahoo! Web search</a>, using a 10,000-core Linux cluster with 5 petabytes of raw disk space. They also used Hadoop to <a href="http://developer.yahoo.net/blogs/hadoop/2009/05/hadoop_sorts_a_petabyte_in_162.html">win</a> the <a href="http://sortbenchmark.org/">Jim Gray Sort Benchmark competition</a>, sorting one terabyte in 62 seconds and one petabyte in 16 hours.</p><br /><p>Other presenters included Ashish Thusoo of Facebook. The amount of new data added to Facebook on a daily basis is astounding. In March of 2008 it was a modest 200 gigabytes a day. In April of 2009 it was over two terabytes a day, and in October of ’09 it is over four terabytes per day. They have made Hadoop an integral part of their processing pipeline in order to deal with this rate of growth. One new Apache/Hadoop subproject that they have been using is <a href="http://hadoop.apache.org/hive/">Hive</a>.
Hive is a data warehouse infrastructure that allows for data analysis and querying of data in Hadoop files.</p><br /><p>There were a number of talks in the afternoon portion of the conference. Two that interested me in particular were Jake Hofman’s talk on Yahoo! Research’s use of Hadoop for <a href="http://bit.ly/hadoopworldjmh">social network analysis</a>, and Charles Ward’s talk on a joint effort by Stony Brook University and General Sentiment (a Stony Brook commercial spinoff) on <a href="http://www.cs.sunysb.edu/%7Embautin/pdf/cewit_2008_poster.pdf">analyzing entity references and sentiment in blogs and online media</a>. Another, from Paul Brown of Booz Allen, discussed how they used Hadoop for calculating protein alignments. He also demonstrated a visualization of a Hadoop cluster in action that needs to be seen – I’ll see if I can find a link...</p><br /><p>There is a rich set of <a href="http://twitter.com/#search?q=hadoopworld">tweets from the conference</a> on Twitter. These give a detailed, minute-by-minute picture of what went on there. The one oddity about the conference – the yellow elephant in the room, if you will – was the absence of Google.
There’s an interesting angle to that, I’m sure, but I have no idea what it might be.</p><br /><p>This was a great conference and it looks like we are just at the start of a revolution in the way we deal with large volumes of data.</p><br /><br />"The Social Semantic Web – Where Web 2.0 Meets Web 3.0" (March 28, 2009)<br /><br /><p class="MsoNormal">I attended the Association for the Advancement of Artificial Intelligence Spring Symposia, 2009, at Stanford University, from March 23 through March 25, 2009, and participated in the symposium <a href="http://tw.rpi.edu/portal/AAAI-SSS-09:_Social_Semantic_Web:_Where_Web_2.0_Meets_Web_3.0">“The Social Semantic Web – Where Web 2.0 Meets Web 3.0.”</a> “Web 2.0” refers to applications and technologies that have emerged in the last few years on the Web that enable social networking, collaboration and user-provided content. This includes sites such as Facebook and Twitter, as well as Web logs and wikis. “Web 3.0” is more or less synonymous with the notion of the <a href="http://en.wikipedia.org/wiki/Semantic_Web">Semantic Web</a>, where structured metadata associated with Web content can be used for reasoning and inference. The idea of the Semantic Web goes back to <a href="http://www.sciam.com/article.cfm?id=the-semantic-web">a paper in Scientific American in 2001 by Tim Berners-Lee, Jim Hendler and Ora Lassila</a>.
They described a world where agent-based applications can use semantics-based metadata on the Web to reason and infer and present choices for people as they go through their daily activity. Much of the technology for enabling this vision is based on the principles of logic programming paired with Web-centric technology such as XML-based metadata.</p><p class="MsoNormal"><br /></p> <p class="MsoNormal">The Symposium was organized by <a href="http://www.cs.rpi.edu/%7Edingl/">Li Ding</a> and Jie Bao from Rensselaer Polytechnic Institute and <a href="http://ontolog.cim3.net/cgi-bin/wiki.pl?MarkGreaves">Mark Greaves</a> from Vulcan, Inc. Li Ding opened the discussion and described a situation where Semantic Web technologies may be poised to increase the range and effectiveness of Web 2.0 tools for information retrieval, social networking and collaboration. We spent the next two and a half days discussing examples of this technology and the issues their use introduces into how people interact with the Web.</p><p class="MsoNormal"><br /></p> <p class="MsoNormal">A number of applications were described that bridge the gap between collaborative technology and semantics. <a href="http://www.twine.com/home">Twine</a> is a site that allows users to group links into what are called twines. A twine is a group of sites that are topically related. Tags are generated when a site is added to a twine, and domain ontologies are used to link different twines together and recommend to a user other twines that may interest them. Radar Networks Inc. developed Twine and their CEO Nova Spivack gave the first presentation. Twine looks like a very useful application.
It is somewhat similar to del.icio.us in concept, but with explicit semantics.</p><p class="MsoNormal"><br /></p> <p class="MsoNormal"><a href="http://www.aifb.uni-karlsruhe.de/Personen/viewPersonenglish?id_db=2097">Denny Vrandecic</a> from Institut AIFB, Karlsruhe, Germany described ongoing work on <a href="http://semantic-mediawiki.org/wiki/Semantic_MediaWiki">Semantic MediaWiki</a>. SMW is an extension of MediaWiki that allows for semantic annotation of wiki data. Vrandecic is one of the original developers of Semantic MediaWiki and spoke about adding automated quality checks to the application.</p> <p class="MsoNormal">Semantic MediaWiki was the basis for a number of other applications discussed at the symposium. One was <a href="http://metavid.org/wiki/">Metavid.org</a>, an “open video archive of the US Congress.” Metavid.org captures video and closed captioning of Congressional proceedings. Semantic MediaWiki’s extensions allow for categorical searches of recorded speeches.</p><p class="MsoNormal"><br /></p> <p class="MsoNormal"><a href="http://www.projecthalo.com/">The Halo Project</a>, funded by Paul Allen’s Vulcan Inc. and sponsored by Mark Greaves, has developed <a href="http://www.mediawiki.org/wiki/Extension:Halo_Extension">extensions to Semantic MediaWiki</a> that go a long way toward showing the power of embedding semantics in applications. The work was done by Ontoprise and they have produced a <a href="http://www.ontoprise.de/smwdemo/">video</a> of its features that is worth viewing.</p><p class="MsoNormal"><br /></p> <p class="MsoNormal">Some of the applications discussed provide collaborative, distributed development environments for authoring ontologies.
<a href="http://bmir.stanford.edu/people/view.php/tania_tudorache">Tania Tudorache</a> of the Stanford Center for Biomedical Informatics Research described <a href="http://protegewiki.stanford.edu/index.php/Collaborative_Protege">Collaborative Protégé</a>. Collaborative Protégé extends the <a href="http://protege.stanford.edu/">Protégé</a> ontology development environment to support “collaborative ontology editing as well as annotation of both ontology components and ontology changes.” <a href="http://www.stanford.edu/%7Enatalya/">Natasha Noy</a>, who is also one of the prime movers behind Protégé, presented <a href="http://bioportal.bioontology.org/">BioPortal</a>, a repository of biomedical ontologies that allows users to critique posted ontologies, collaborate on ontology development, and submit mappings between ontologies. The same codebase that is behind BioPortal also supports the <a href="http://oor-01.cim3.net/">OOR Open Ontology Repository</a>, which is a domain-independent repository of ontologies. Nova Spivack of Radar Networks also mentioned a new site that they plan on standing up called Tripleforge, which, like Sourceforge, will support open source development of ontologies.</p><p class="MsoNormal"><br /></p> <p class="MsoNormal">In regard to architecting systems that use semantics to leverage Web 2.0 features, a number of approaches kept coming up. Ontologies for describing tagging behavior by users were mentioned by a few of the presenters. This is a way to capture the relationships between taggers (two users who tag the same site with the same or similar tags) and the temporal dimension of tagging (“who tagged what tag when?”).
Another common thread was defining a semantic layer to describe the syntactic or functional layers of a system. Hans-Georg Fill of the University of Vienna described a model-based approach for developing “Semantic Information Systems” using tools that define just such a layered architecture.<br /></p><p class="MsoNormal"><br /></p> <p class="MsoNormal">Some other applications described at the conference use existing collaborative technology, such as Wikipedia, to jumpstart Semantic Web applications. <a href="http://ebiquity.umbc.edu/blogger/author/tim-finin/">Tim Finin</a> described an approach that he and his colleagues at the University of Maryland, Baltimore County developed that treats Wikipedia as an ontology. They call it Wikitology. They assert that Wikipedia represents a “consensus view” of topics arrived at via a “social process.” They use the existing categories defined in Wikipedia, along with links between articles, to discover the concepts, and the relationships between concepts, that describe article topics. A similar approach was described by Maria Grineva, Maxim Grinev and Dmitry Lizorkin from the Russian Academy of Sciences, where Wikipedia was used as a knowledge base to discover semantically related key terms for documents.
In another paper, Jeremy Witmer and Jugal Kalita of the University of Colorado, Colorado Springs, used a named entity recognizer to tag locations in Wikipedia articles and also used machine learning techniques to extract geospatial relations from the articles. They posit that disambiguated locations and extracted relations could then be used to add semantic, geospatial annotations to the articles to aid search or create map-based mashups of Wikipedia data.</p><p class="MsoNormal"><br /></p> <p class="MsoNormal">Our team presented a paper that described how the location of bloggers could be inferred from location entity mentions in their blog posts. We described an experiment where we were able to correctly geolocate 61% of blogs based on a test set of ~800 blogs with known locations. While our work was somewhat tangential to the Semantic Web, it is a demonstration of the “inference problem,” where information that is not stated directly can be inferred from other available information. This raises issues of privacy, given the explosion of the use of social networking sites such as Facebook and the proliferation of personal Web logs. Three other papers presented at the symposium addressed privacy and access control issues. <a href="http://datasearch2.uts.edu.au/qcis/members/detail.cfm?StaffID=6879">Mary-Anne Williams</a> of the University of Technology, Sydney, Australia, gave an excellent overview of privacy as it relates to Web-based business.
Paul Groth of the University of Southern California discussed privacy obligation policies and described how the users of a social networking site might use them to control access to their personal data from outside of the site. Ching-man Au Yeung, Lalana Kagal, Nicholas Gibbins, and Nigel Shadbolt of the University of Southampton and MIT described a method for controlling access to photos on Flickr based on how photos are tagged, using a tagging ontology, FOAF, OpenID authentication and the AIR policy language.</p><p class="MsoNormal"><br /></p> <p class="MsoNormal">Panels presented during the symposium addressed some cross-cutting issues for Web 2.0 and Semantic Web applications: usability, scale and privacy. On the 25<sup>th</sup>, the panel included Steve White of Radar Networks, Denny Vrandecic, Natasha Noy, <a href="http://www.freebase.com/view/en/jamie_taylor">Jamie Taylor</a>, Minister of Information for Metaweb, the home of <a href="http://www.freebase.com/">Freebase</a> (an excellent open collaborative database), and Jeff Pollock of Oracle, author of the recently published <a href="http://www.amazon.com/Semantic-Web-Dummies-Computer-Tech/dp/0470396792/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1238259042&sr=8-1">“The Semantic Web for Dummies.”</a> This panel was dedicated to the topic of usability, but also addressed the issue of scale. All agreed that usability issues on the Semantic Web are the same as with Webs 1.0 and 2.0: simple is better, hide confusing bits like RDF and OWL tags, and so on. Noy made the point, however, that there are different classes of users for semantic applications on the Web, such as the users of BioPortal and those actually involved in ontology development.
A lot of time was spent talking about users of applications such as Excel and how even a killer application like the Semantic Web can be overtaken by simple, inelegant solutions. The issue of scale came down to how Semantic Web applications will handle billions of triples, and to the difficulty of doing anything more than simple reasoning over such large amounts of data. Taylor described the power-law phenomenon where some entities are overloaded with properties while most have only a few, which suggests the need for smart partitioning of resources based on their semantics. As far as the scalability of reasoning is concerned, full RDFS or OWL reasoning is probably too expensive, at least for large amounts of data. As one participant said, though, &#8220;a little bit of semantics&#8221; goes a long way, so basic relations such as subsumption and transitivity may be all that is required for most reasoning.</p><p class="MsoNormal"><br /></p> <p class="MsoNormal">The next day&#8217;s panel included Paul Groth, Denny Vrandecic, Tim Finin, and Rajesh Balakrishnan and touched on issues of privacy and trust. One conclusion of this discussion was that the structured metadata that comes with the Semantic Web, along with the ability to reason over the data (albeit probably in small bites), will only multiply the inference problem. There was no real consensus on what can be done about that.</p><p class="MsoNormal"><br /></p> <p class="MsoNormal">This symposium did a great job of framing how social computing and semantics are quickly coming together. There was quite a bit of excitement about Twine and the success of Semantic MediaWiki. There was no clear consensus on whether this technology will revolutionize the user experience or just provide enabling technology to intelligently link applications and make current functionality such as search more effective. 
For developers, however, there is a whole new universe of challenges here. </p>Clay Finkhttp://www.blogger.com/profile/03588769556501828590noreply@blogger.com3tag:blogger.com,1999:blog-8637593316831322225.post-33806079595608120222008-12-29T18:47:00.000-08:002008-12-30T12:19:21.089-08:00MediaWiki Search Configuration IssuesMediaWiki is easy to set up, and the search capability out of the box is OK. Some of the tweaks described here, however, may help make searches more useful.<br /><br />The default database back end for MediaWiki is MySQL, and the search capability is based on MySQL's <a href="http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html">full-text search</a> support. Specifically, the MediaWiki <a href="http://www.mediawiki.org/wiki/Manual:Database_layout">database schema</a> contains the <code>searchindex</code> table, which defines FULLTEXT indices on the <code>si_title</code> and <code>si_text</code> columns:<br /><br /><code>CREATE TABLE /*$wgDBprefix*/searchindex (<br />-- Key to page_id<br />si_page int unsigned NOT NULL,<br /><br />-- Munged version of title<br />si_title varchar(255) NOT NULL default '',<br /><br />-- Munged version of body text<br />si_text mediumtext NOT NULL,<br /><br />UNIQUE KEY (si_page),<br />FULLTEXT si_title (si_title),<br />FULLTEXT si_text (si_text)<br /><br />) TYPE=MyISAM;</code><br /><br />See the link above for more details about FULLTEXT indices. The important point is that the article text (in the <code>text</code> table) is not searched. The <em>munged</em> text in the <code>searchindex</code> table is searched instead. Munged, in this case, means that Wiki tags, URLs, and some language-specific characters are removed to facilitate searches. See the <code>includes/UpdateSearch.php</code> script in your MediaWiki distribution to see exactly what's done.<br /><br /><u>If you are loading pages programmatically into your Wiki, make sure the <code>searchindex</code> table is updated appropriately</u>. 
It's best to take advantage of the <code>includes/Article.php</code> script here, since it takes care of all the necessary bookkeeping. I've not done this myself, so do some homework on your own before proceeding.<br /><br />By default, MySQL will only index words in a FULLTEXT index that are at least 4 characters long. That minimum can be a problem if you have a lot of three-letter abbreviations. MySQL also uses a large list of <a href="http://dev.mysql.com/doc/refman/5.0/en/fulltext-stopwords.html">stop words</a>. Stop words are very common words that are ignored by indexing programs. MySQL's default stop word list may be too restrictive for you, so a shorter list might improve search results.<br /><br />The minimum indexed word length and the stop word list are configurable under MySQL. Changing these system settings requires a restart of the server, as well as a rebuild of the <code>searchindex</code> table's indices. Rebuilding the index can take a long time if you have a lot of data in your Wiki, so consider making these changes before you load the data.<br /><br />Making these changes is easy. I'm using MediaWiki on a Windows box, just so you know. <br /><br />First, edit the <code>my.ini</code> file (<code>my.cnf</code> on a Unix box) in the MySQL installation directory. Add the following options and then save the file:<br /><code><br />ft_min_word_len=3<br />ft_stopword_file="<span style="font-style:italic;">mysqlhome</span>/stop-words.txt" </code><br /><br />In this case, <code>ft_stopword_file</code> points to a file in the MySQL installation directory, <code>stop-words.txt</code>. 
For stop words I used the default set of English stop words used by <a href="http://lucene.apache.org/java/docs/">Lucene</a>:<br /><br /><code>a, an, and, are, as, at, be, but, by, for, if, in, into, is, it, no, not, of, on, or, such, that, the, their, then, there, these, they, this, to, was, will, with</code><br /><br />This is a compact and reasonable set of stop words and should improve upon the default MySQL list. Because fewer words are filtered out, however, it will increase the time required to index the <code>searchindex</code> table.<br /><br />Next, restart the MySQL server. Do this via the <code>mysqladmin</code> command line tool, or just open Services under Windows Control Panel/Administrative Tools and restart the MySQL service.<br /><br />Finally, reindex the <code>searchindex</code> table. The easiest way to do this is from the MySQL command line:<br /><code><br />mysql> REPAIR TABLE searchindex QUICK;</code><br /><br />Additional information about tweaking MySQL for full-text search can be found <a href="http://dev.mysql.com/doc/refman/5.0/en/fulltext-fine-tuning.html">here</a>, though changing the minimum indexed word size and the stop word list should improve your search capability well enough.Clay Finkhttp://www.blogger.com/profile/03588769556501828590noreply@blogger.com2tag:blogger.com,1999:blog-8637593316831322225.post-74580261625391025882008-12-04T10:40:00.000-08:002008-12-04T10:41:57.229-08:00Using Conditional Random Fields for Sentiment ExtractionI found <a href="http://www-connex.lip6.fr/~amini/RelatedWorks/EMNLP05-ChoiCardie.pdf">this paper</a> to be very helpful in understanding how to use <a href="http://www.seas.upenn.edu/~strctlrn/bib/PDF/crf.pdf">Conditional Random Fields</a>. Have a look. They are trying to extract the source of sentiment from sentences. Their approach also uses extraction patterns in addition to CRFs, but I'm not entirely convinced that the extraction patterns help all that much in increasing precision and recall. 
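A linear-chain CRF like theirs operates over per-token feature vectors. As a concrete illustration (my own sketch with made-up feature names and labels, not the paper's feature set), here is how you might emit per-token feature lines in the whitespace-separated format that Mallet's SimpleTagger reads, with the gold label as the last item on each line:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of simple token-level features for a linear-chain CRF.
// Illustrative only: feature names and labels here are invented,
// not the ones used in the paper.
public class CrfFeatureSketch {
    // Build a space-separated feature string for one token.
    static String features(String token) {
        List<String> feats = new ArrayList<String>();
        feats.add("WORD=" + token.toLowerCase());
        if (Character.isUpperCase(token.charAt(0))) {
            feats.add("INITCAP"); // capitalization feature
        }
        if (token.chars().allMatch(Character::isDigit)) {
            feats.add("ALLDIGITS"); // numeric-token feature
        }
        int n = token.length();
        feats.add("SUFFIX3=" + token.substring(Math.max(0, n - 3)).toLowerCase());
        return String.join(" ", feats);
    }

    public static void main(String[] args) {
        // One line per token: features first, gold label last
        // (the training-data layout Mallet's SimpleTagger expects).
        String[] tokens = {"Critics", "say", "the", "plan", "failed"};
        String[] labels = {"B-SOURCE", "O", "O", "O", "O"};
        for (int i = 0; i < tokens.length; i++) {
            System.out.println(features(tokens[i]) + " " + labels[i]);
        }
    }
}
```

Each sentence becomes a block of such lines separated by a blank line; a real feature set would also include context features (neighboring words, part-of-speech tags) and the lexicon-based features the paper describes.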
Especially helpful here is the good, detailed description of the features they use for the CRF. They used the <a href="http://mallet.cs.umass.edu/">Mallet toolkit</a> for the CRFs, too.Clay Finkhttp://www.blogger.com/profile/03588769556501828590noreply@blogger.com0tag:blogger.com,1999:blog-8637593316831322225.post-5182972421464194072008-12-03T07:40:00.001-08:002008-12-03T07:40:53.305-08:00Fun with ReificationConverting from one graph representation to another can be problematic when properties are allowed on edges in one representation but not in the other. I had to implement a service that queried a graph store that allowed edge properties and serialized the result to <a href="http://www.w3.org/TR/owl-features/">OWL</a>. The client, in turn, had to convert the returned OWL to another native graph representation that also allows edge properties. OWL does not allow edge properties, so I had to deal with the problem of preserving them somehow. <br /><br />Enter <em>reification</em>. What's reification? Basically, it's making statements about a statement. RDF-wise, it is turning a triple into the subject of another triple. If you have a triple <a,knows,b> you can reify the triple as S and say <S,isAbout,a>. I use <a href="http://jena.sourceforge.net/">Jena</a>, a Java API for processing RDF and OWL, and have used its <a href="http://jena.sourceforge.net/how-to/reification.html">reification support</a> to implement named graphs. There were some performance issues there with large numbers of reified statements, but reifying a single statement, as long as it doesn't have a large number of properties, will probably not incur much of a performance hit. 
That assertion hasn't been tested, though, so take it with a grain of salt.<br /><br />To preserve edge properties in OWL, you need to reify the triple that represents the edge in the RDF graph and then add triples, with that reified statement as the subject, representing the edge properties. When I came across an edge in the source graph, I created a triple, or Statement in Jena parlance, describing the edge, <s,p,o>, where s is the source node resource, p is a property, and o is the target node resource (I'm implicitly assuming a directed graph):<br /><br /><code>Statement stmt = model.createStatement(s, p, o);<br />// createStatement doesn't add the statement to the<br />// model, so add it explicitly.<br />model.add(stmt);<br /></code><br /><br />I then reified the statement and, for each edge property, added a statement that had the reified statement as its subject:<br /><br /><code><br />// reify the statement<br />ReifiedStatement reifiedStmt =<br /> stmt.createReifiedStatement();<br />// Add "edge" properties<br />Statement edgePropStmt = model.createStatement(reifiedStmt,<br /> someEdgeProperty, "foo");<br />model.add(edgePropStmt);<br />...</code><br /><br />On the client side, I checked any statement that had an object property as its predicate for reification. 
If it was a reified statement, I knew I was looking at an edge, so I extracted the property values and added them to the edge in the target representation:<br /><br /><code>// Check for a reified statement<br />if (stmt.isReified()) {<br /> RSIterator i = stmt.listReifiedStatements();<br /> while (i.hasNext()) {<br /> ReifiedStatement rs = i.nextRS();<br /> // Walk the properties of the reified statement<br /> StmtIterator j = rs.listProperties();<br /> while (j.hasNext()) {<br /> Statement statement2 = j.nextStatement();<br /> if (!statement2.getPredicate().equals(RDF.subject)<br /> && !statement2.getPredicate().equals(RDF.predicate)<br /> && !statement2.getPredicate().equals(RDF.object)<br /> && !statement2.getPredicate().equals(RDF.type)) {<br /> // Add edge property to native graph representation<br /> }<br /> }<br /> }<br />}</code><br /><br />The one thing to note here is that when you reify a triple <s,p,o> as S, it implies the triples <S,rdf:type,rdf:Statement>, <S,rdf:subject,s>, <S,rdf:predicate,p>, and <S,rdf:object,o>. You need to filter these properties out when processing the reified statement.Clay Finkhttp://www.blogger.com/profile/03588769556501828590noreply@blogger.com0tag:blogger.com,1999:blog-8637593316831322225.post-51669037115212871472008-12-01T13:03:00.001-08:002008-12-01T13:03:46.990-08:00SAAJ Performance IssuesJava 1.6 comes with the <a href="https://saaj.dev.java.net/source/browse/*checkout*/saaj/saaj-ri/docs/index.html">SOAP with Attachments API for Java</a> (SAAJ). It's really easy to set up a standalone web service endpoint using SAAJ, and this <a href="http://www.ibm.com/developerworks/xml/library/x-jaxmsoap/">tutorial</a> (free registration required) tells you how to get one up and running.<br /><br />For one of my projects I wanted a quick and dirty demo I could run from the command line, using Ant, that started a service and demonstrated a client call. What I ran into, though, was that it was taking <em>forever</em> for the client to access the response message body after the call. 
The result value was about 80K, but it was still taking about three minutes of wall time for the call to SOAPMessage::getSOAPBody to complete! It turns out that this is a bug in Java 1.6 (I'm running 1.6.0_06, but I believe I saw the same problem under release 1.6.0_10 as well). The fix <a href="http://forums.java.net/jive/thread.jspa?threadID=36208">posted here</a> works. I now get my data back in milliseconds rather than minutes.Clay Finkhttp://www.blogger.com/profile/03588769556501828590noreply@blogger.com0tag:blogger.com,1999:blog-8637593316831322225.post-3262530104442774052008-06-17T13:07:00.000-07:002008-06-17T14:26:13.335-07:00Unwanted "xmlns=" Attribute in Elements After TransformationI was doing a simple XSLT transformation that involved renaming elements. Starting with a basic example like:<br /><br /><code><br /><?xml version="1.0" encoding="UTF-8"?><br /><mydoc xmlns="http://finegameofnil.blogger.com/xml-examples/rename"><br /> <foo someattr1="a" someattr2="b"><br /> </foo><br /></mydoc><br /></code><br /><br />I want to change "foo" to "foobar".<br /><br />I run the stylesheet:<br /><br /><code><br /><?xml version="1.0" encoding="UTF-8"?><br /><xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:fgon="http://finegameofnil.blogger.com/xml-examples/rename" version="2.0"><br /><xsl:import href="copy.xsl"/><br /><xsl:output method="xml" version="1.0" standalone="yes" indent="yes" encoding="UTF-8"/><br /><xsl:template match="fgon:foo"><br /> <xsl:element name="foobar"><br /> <xsl:apply-templates select="@* | node()"/><br /> </xsl:element><br /></xsl:template><br /></xsl:stylesheet><br /></code><br /><br />I get:<br /><br /><code><br /><?xml version="1.0" encoding="UTF-8"?><br /><mydoc xmlns="http://finegameofnil.blogger.com/xml-examples/rename"><br /> <foobar xmlns="" someattr1="a" someattr2="b"><br /> </foobar><br /></mydoc><br /></code><br /><br />So what's with the xmlns=""? 
I'm not sure what the semantics of an empty namespace are. It seems to mean that the element and its children are not in any namespace. If you are renaming elements to conform to a schema change, this will cause a schema validation error.<br /><br />To prevent this, I tried the following stylesheet, which explicitly sets the namespace of the target element to the default namespace:<br /><br /><code><br /><?xml version="1.0" encoding="UTF-8"?><br /><xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:fgon="http://finegameofnil.blogger.com/xml-examples/rename" version="2.0"><br /><xsl:import href="copy.xsl"/><br /><xsl:output method="xml" version="1.0" standalone="yes" indent="yes" encoding="UTF-8"/><br /><xsl:template match="fgon:foo"><br /> <xsl:element name="foobar" namespace="{namespace-uri()}"><br /> <xsl:apply-templates select="@* | node()"/><br /> </xsl:element><br /></xsl:template><br /></xsl:stylesheet><br /></code><br /><br />Finally, this gives me what I wanted:<br /><code><br /><?xml version="1.0" encoding="UTF-8"?><br /><mydoc xmlns="http://finegameofnil.blogger.com/xml-examples/rename"><br /> <foobar someattr1="a" someattr2="b"><br /> </foobar> <br /></mydoc><br /></code><br /><br />The copy.xsl imported in the examples above is from Sal Mangano's "XSLT Cookbook - 2nd Edition".<br /><br /><code><br /><?xml version="1.0" encoding="UTF-8"?><br /><xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0"><br /> <!-- General purpose copy translation stylesheet. <br />Taken from XSLT Cookbook, 2nd Edition, page 275. --><br /> <xsl:template match="node() | @*"><br /> <xsl:copy><br /> <xsl:apply-templates select="@* | node()"/><br /> </xsl:copy><br /> </xsl:template><br /></xsl:stylesheet><br /></code>Clay Finkhttp://www.blogger.com/profile/03588769556501828590noreply@blogger.com0
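If you want to try the rename transformation yourself, here is a minimal driver using the JAXP/TrAX API that ships with the JDK. The class and variable names are mine, and the stylesheet is inlined (with copy.xsl's identity template merged in) so the example is self-contained; since the JDK's built-in processor is XSLT 1.0, the inlined stylesheet declares version 1.0.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Minimal JAXP/TrAX driver for the element-renaming stylesheet.
// The namespace-uri() trick keeps the renamed element in the source
// document's default namespace, so no stray xmlns="" appears.
public class RunRename {
    static final String XSLT =
        "<xsl:stylesheet xmlns:xsl='http://www.w3.org/1999/XSL/Transform'"
      + " xmlns:fgon='http://finegameofnil.blogger.com/xml-examples/rename'"
      + " version='1.0'>"
      + "<xsl:template match='node() | @*'>"          // identity copy (copy.xsl)
      + "<xsl:copy><xsl:apply-templates select='@* | node()'/></xsl:copy>"
      + "</xsl:template>"
      + "<xsl:template match='fgon:foo'>"             // rename foo -> foobar
      + "<xsl:element name='foobar' namespace='{namespace-uri()}'>"
      + "<xsl:apply-templates select='@* | node()'/>"
      + "</xsl:element>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    static final String XML =
        "<mydoc xmlns='http://finegameofnil.blogger.com/xml-examples/rename'>"
      + "<foo someattr1='a' someattr2='b'/></mydoc>";

    // Apply an XSLT stylesheet (as a string) to an XML document (as a string).
    static String transform(String xslt, String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform(XSLT, XML));
    }
}
```

Running it serializes `mydoc` with `foo` renamed to `foobar`, and because the new element is created in the matched node's namespace, no `xmlns=""` declaration shows up on it.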